Scaling RT to an enterprise tool

Dear RT Users,

We have a very successful implementation of RT 3.4.1.
We reached ticket number 1000 within two months.

Since the tool is to be expanded to a much broader user base, we are looking for people with experience running large installations.
This is mainly because we have already found performance shortcomings and tremendous growth in certain database tables (Transactions).

In the future we expect something like 400 users (agents), 5,000-10,000 customers (requestors), and more than 1,000 tickets a month.

Does anyone have a comparable installation?
How is a distributed installation handled (we are thinking of one database and several web servers hosting RT)?
What maintenance must be done to control database size and keep performance up?

We also want to open RT to partners. Therefore we need an extended self-service, embedded into a portal.
Single sign-on would be very good. How can this be done with RT?
As we use open source intensively, we suggest Typo3 (www.typo3.org) as the portal.
Has anyone linked Typo3 and RT (for single sign-on)?

It would be great to exchange experiences with other large installations.
Thanks for any answers

Joerg

In the future we expect something like 400 users (agents), 5,000-10,000 customers (requestors), and more than 1,000 tickets a month.

Does anyone have a comparable installation?
That sounds familiar to me :wink:

How is a distributed installation handled (we are thinking of one database and several web servers hosting RT)?
What maintenance must be done to control database size and keep performance up?
After two years we will soon reach the 200,000-ticket mark. We run
on just one server (2x Xeon 3.0 GHz, 2 GB RAM, SCSI mirror). Distributing an
application is always problematic, and it never scales linearly. So my tip:
buy big iron with a lot of memory, make regular backups, keep a cold
standby machine, or ask Best Practical for consultancy.

We also want to open RT to partners. Therefore we need an extended self-service, embedded into a portal.
Single sign-on would be very good. How can this be done with RT?
You can authenticate through the web server's mechanism, for example with
mod_auth_pam; in PAM you can configure whatever you like (Kerberos, LDAP,
and so on). Be aware, though, that this doesn't work very well with the CLI.
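A minimal sketch of that setup, assuming an RT 3.4.x install served under /rt with mod_auth_pam loaded into Apache (the location, realm name, and option names should be checked against your own install):

```perl
# RT_SiteConfig.pm -- external-auth sketch (assumes RT 3.4.x).
# With $WebExternalAuth set, RT trusts the REMOTE_USER that the web
# server hands over after Apache/mod_auth_pam has authenticated the
# request, e.g. with an httpd.conf section like:
#
#   <Location /rt>
#       AuthType Basic
#       AuthName "RT"            # realm name is a placeholder
#       AuthPAM_Enabled on       # mod_auth_pam performs the PAM check
#       Require valid-user
#   </Location>

Set($WebExternalAuth, 1);   # trust the web server's authentication
Set($WebExternalAuto, 1);   # auto-create RT accounts on first login
1;
```

PAM then decides where the credentials are actually checked (Kerberos, LDAP, local files), which is what makes this route attractive for single sign-on against an existing directory.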

As we use open source intensively, we suggest Typo3 (www.typo3.org) as the portal.
Has anyone linked Typo3 and RT (for single sign-on)?
That does not sound like the best fit, because RT is written in Perl. It
provides a Perl API to access the application from the outside.
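For reference, using that Perl API looks roughly like this. This is a hedged sketch: it assumes RT 3.4.x installed under /opt/rt3, a queue named 'General', and it must run on the RT host itself with access to RT's database; the requestor address is a placeholder.

```perl
#!/usr/bin/perl
# Sketch: creating a ticket through RT's Perl API (RT 3.4.x assumed;
# the install path and queue name are placeholders).
use strict;
use warnings;
use lib "/opt/rt3/lib";

use RT;
RT::LoadConfig();   # read RT_SiteConfig.pm
RT::Init();         # connect to the database, set up logging

my $ticket = RT::Ticket->new($RT::SystemUser);
my ($id, $txn, $msg) = $ticket->Create(
    Queue     => 'General',
    Subject   => 'Created via the Perl API',
    Requestor => ['partner@example.com'],
);
print $id ? "Created ticket #$id\n" : "Failed: $msg\n";
```

A portal written in another language (such as Typo3's PHP) would have to shell out to a script like this or talk to RT's web/REST interface instead, which is the extra glue the reply is warning about.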

best regards!

Jörg Ungermann wrote:

In the future we expect something like 400 users (agents),
5,000-10,000 customers (requestors), and more than
1,000 tickets a month.

We’ve got about 150 agents, 10,000 active customers, and about 3,000
tickets a month. The help desk alone processes 1,000 tickets a month.
It works great.

Does anyone have a comparable installation?
How is a distributed installation handled (we are thinking of one database and several web servers hosting RT)?

We’re currently running the web & e-mail front ends and other stuff on
one server, and the database on another server. Works fine. In a pinch,
one server could probably do both.

What maintenance must be done to control database size and keep performance up?

With about 60,000 tickets in the system, we’re doing OK. We’ve taken no
special measures to reduce database size.

Rick Russell
For computer help, call xHELP (x4357 or 713-348-4357)
OpenPGP/GnuPG Public Key at ldap://certificate.rice.edu
761D 1C20 6428 580F BD98 F5E5 5C8C 56CA C7CB B669
Helpdesk Supervisor, Client Services
IT/Academic & Research Computing
Rice University
Voice: 713.348.5267 Fax: 713.348.6099

Could you share what your hardware/software setup is? What version of RT, what database and version, etc. Thanks.

Hello!

On Wed, 2006-02-08 at 13:21 +0100, Jörg Ungermann wrote:

Anyone having a comparable installation?

You should search the archives for the word “performance” … I asked
similar questions roughly a year ago and got answers from folks with
500K to 2M tickets in the database.

We also did some in-house testing to punish RT with several hundred
thousand tickets and found that, while there might be some useful indexes
to add as your installation grows (and no, Jesse, we didn’t pin them down,
as it was speculative at best =), by and large RT seems to scale
very well.

Cheers!

–j
Jim Meyer, Geek at Large purp@acm.org

We’re getting there. We started production use (just by my group) in
mid September, and then started slowly rolling out use to teams within
C&C. We’ll probably cross 12,000 tickets today – with ~5600 of those
just in 2006. Our use will probably go up by a factor of 2x to 4x in
the next year.

Currently we have 2 web/email frontends and a pgsql backend, but we’ve
just put in a request for 2 servers that will exclusively handle
incoming email, reporting, and other automated processes, leaving our
current web frontends for only user web sessions. We’re not doing this
for load, but for isolation. We’ve had two outages with RT, both were
related to incoming email.

  1. RT got in a fight with Mailman – a battle RT will always lose. But
    RT did pretty well. By the time we stopped the fight, RT had a ticket
    with 200,000 attachments (and 40K messages in the sendmail queue), which
    I had to purge manually since RTx::Shredder can’t handle a ticket that big.

  2. Due to a configuration error by another sysadmin, a mail loop was
    created; when it was corrected, RT got hit with 800 emails that it could
    not process, and Gateway was exiting with Temp_Fail (a bug in some of our
    custom code), so they never went away.

In both cases rt-mailgate was hogging all of the web sessions so no
users were able to connect via the production hardware – though we
could redirect users to our eval server (which connects to the same
database) and they were able to work just fine.

We’re using rt-3.4.4, with a lot of custom code.

The web/email frontends are Dell 1850s, ~3 GHz CPU, 2 GB RAM.
The database server and its failover are Dell 2850s, 2x ~3 GHz CPU, 4 GB RAM,
200 GB of disk on RAID 5.

Joby Walker
C&C SSG, University of Washington
