Performance (was: Kill spam from mail)


#1

In 1.3.70…

The RT system is currently rather slow. Many of the pages take 10
seconds to build. The database has around 150 tickets. /proc/cpuinfo
says 76.8 bogomips. We will move to a faster machine. I just
wanted to say that there is a reason for shortcuts…

Jesse wrote:

I probably wouldn’t run it in production with less than a 500MHz CPU and 128
MB of RAM in the box. As I said though, there’s ongoing performance work,
though it’s not the highest-priority thing going on. Buying faster hardware
and sponsoring work on performance are both valid ways to deal with this.

I’m running RT on a dedicated dual 450MHz P3 with 512 MB RAM, and performance
seems to be slowing down considerably, especially when displaying long tickets.
We only have 36 in the database with a total of 121 attachments (just starting
to use it in our backbone team) and building some of them can take ten seconds
before the page starts displaying. I’m sure it’s not the MySQL server, as if I
issue the SQL commands manually it’s all sub 1 second.

(And no, it’s not the network either. It’s on a 100Mb full-duplex feed to an
unloaded switch on its own VLAN, and FTP from here to there runs at about 97
Mbit/s…!)

Anything which updates the database takes even longer.


#2

Most of RT’s dismal performance comes from redundant database queries. ACL
checks are particularly bad: the same records will sometimes be queried from
the database hundreds of times.

In my usage, there were also a couple of sorely missing indices; I’m not sure
whether those have been fixed in the official tree or not.
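As a sketch of the kind of fix involved (the column choice below is illustrative, not a patch from RT's tree), adding a covering index for a hot WHERE clause is a one-liner in MySQL:

```sql
-- Hypothetical example: index the columns the slow ticket-history
-- queries filter on, so MySQL can avoid a full table scan.
CREATE INDEX Transactions1 ON Transactions (Ticket, Created);
```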

If you want to hack on RT to tune it, I suggest using something like this
for webrt/Elements/Footer:

% if ( $DBIx::SearchBuilder::Handle::DBIHandle->can('sprintProfile') ) {
<% $DBIx::SearchBuilder::Handle::DBIHandle->sprintProfile() |h %>
% $DBIx::SearchBuilder::Handle::DBIHandle->{'private_profile'} = {};
% }

then, to turn on profiling, (install DBIx::Profile and) put:

PerlModule DBIx::Profile

in your Apache configuration. If you’re using Apache::DBI, you should
comment that out while using DBIx::Profile.
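In a mod_perl 1.x-style httpd.conf that would look roughly like this (a sketch; adjust to your own configuration):

```apacheconf
# Profile every DBI query instead of keeping persistent connections.
PerlModule DBIx::Profile
# PerlModule Apache::DBI   # comment out while profiling
```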

You’ll then get a count of how many times each database query was
executed, the wall-clock time, etc. This helped me immensely in figuring
out which particular things I needed to optimize, rather than doing so
speculatively.

(BTW Jesse, I forget: are you using placeholders in DBIx::SearchBuilder
now?)

I have some patches to eliminate (some of) the redundant database queries
and cache ACL decisions; this sped things up significantly in my
installation. I’ll dig them up and send them when I get a chance.

On Fri, May 11, 2001 at 11:17:37AM +0100, Mark Vevers wrote:


From the CPU usage it all seems to be disappearing into Apache, so I suspect
it is all the Perl/mod_perl code. I haven’t looked at the source yet, but
(I’m probably teaching you to suck eggs here) are you using one connection
for all the SQL queries, or opening and closing one for each? Also, are you
using placeholders for getting the ticket contents, as that means you can
reuse your prepared SELECT statements?

I’ve not used Mason before, so I need to get to grips with what it’s up to
before I can really go poking into the source.

Regards
Mark


Mark Vevers. mark@ifl.net / mvevers@rm.com
Internet Backbone Engineering Team
Internet for Learning, Research Machines Plc
Tel: +44 1235 823380, Fax: +44 1235 823424


Rt-devel mailing list
Rt-devel@lists.fsck.com
http://lists.fsck.com/mailman/listinfo/rt-devel

meow
_ivan


#3

I probably wouldn’t run it in production with less than a 500MHz CPU and 128
MB of RAM in the box. As I said though, there’s ongoing performance work,
though it’s not the highest-priority thing going on. Buying faster hardware
and sponsoring work on performance are both valid ways to deal with this.

I’m running RT on a dedicated dual 450MHz P3 with 512 MB RAM, and performance
seems to be slowing down considerably, especially when displaying long tickets.

There’s a bunch of stuff that’s untuned as of 1.3.70. I’ve put some time into
it in the past couple of days. What I’m currently hacking on has several
new indices that help considerably. The next version of DBIx::SearchBuilder
also has a couple of improvements that drastically improve performance.

The two biggies are a much better Count method (when the search that’s been
described hasn’t yet been performed) and Matt Knopp’s amazing new
DBIx::SearchBuilder::Record::Cachable, which caches Record objects pretty much
completely transparently.

On my stress-test database (25,000 tickets) this took queries from > 5 minutes
to ~15 seconds. That’s not as fixed as I’d like it, but it’s definitely a start.
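The Count improvement boils down to asking the database to count matches instead of fetching every row when the search hasn't been executed yet; roughly (table and column names illustrative):

```sql
-- Before: fetch every matching row just to count them client-side
SELECT main.* FROM Tickets main WHERE main.Status = 'open';

-- After: let the database do the counting
SELECT COUNT(main.id) FROM Tickets main WHERE main.Status = 'open';
```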

We only have 36 in the database with a total of 121 attachments (just starting
to use it in our backbone team)

How big are the attachments you’re talking about? I’ve not yet done extensive
testing with large attachments.

and building some of them can take ten seconds
before the page starts displaying. I’m sure it’s not the MySQL server, as if I
issue the SQL commands manually it’s all sub 1 second.

As in you’re copying all the SQL commands out of the mysql logs and manually
pasting them into the mysql monitor?

From the cpu usage it all seems to be disappearing into apache, so I suspect
it is all the perl/mod-perl code.

That’s kind of odd and runs counter to all my experiences.

I haven’t looked at the source yet, but
(I’m probably teaching you to suck eggs here) are you using one connection
for all the SQL queries, or opening and closing one for each?

We keep a connection open. Additionally, you should make sure that you have
a ‘PerlModule Apache::DBI’ in your httpd.conf.

Also, are you using
placeholders for getting the ticket contents, as that means you can reuse your
prepared SELECT statements?

We use placeholders (thanks to ivan) in the lookup stuff within Record.pm,
but SearchBuilder objects are not currently using placeholders… then again,
the claim I’ve read is that pre-prepared SELECTs are not a speed issue with
MySQL (though they certainly help with Pg and Oracle)
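For anyone unfamiliar with the pattern, a placeholder lets the driver prepare one statement and re-execute it with different bound values. A minimal sketch, using Python's sqlite3 as a stand-in for Perl's DBI (the Tickets schema here is invented, and parameter syntax varies by driver):

```python
import sqlite3

# Sketch of placeholder use; sqlite3 stands in for DBI, and the
# Tickets schema is invented for the example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Tickets (id INTEGER PRIMARY KEY, Subject TEXT)")
conn.executemany("INSERT INTO Tickets (id, Subject) VALUES (?, ?)",
                 [(1, "printer on fire"), (2, "mail loop")])

# One parameterized SELECT, re-executed per lookup; the driver binds
# the values rather than interpolating them into the SQL string.
lookup = "SELECT Subject FROM Tickets WHERE id = ?"
for ticket_id in (1, 2):
    (subject,) = conn.execute(lookup, (ticket_id,)).fetchone()
    print(subject)
```

Beyond any speed difference, binding values this way also sidesteps quoting and escaping bugs, which is a reason to use placeholders even where the server ignores statement precaching.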

-j

jesse reed vincent – root@eruditorum.org / jesse@fsck.com
70EBAC90: 2A07 FC22 7DB4 42C1 9D71 0108 41A3 3FB3 70EB AC90

Pelcgb-serrqbz abj!


#4

Jesse wrote

There’s a bunch of stuff that’s untuned as of 1.3.70. I’ve put some time into
it in the past couple of days. What I’m currently hacking on has several
new indices that help considerably. The next version of DBIx::SearchBuilder
also has a couple of improvements that drastically improve performance.

OK, just got your last CVS update for that.

The two biggies are a much better Count method (when the search that’s been
described hasn’t yet been performed) and Matt Knopp’s amazing new
DBIx::SearchBuilder::Record::Cachable, which caches Record objects pretty much
completely transparently.

I’ve updated my CVS working directory and also patched the DB to match the
latest schema. BTW, why do you create two indices each time? Schema.pm
defines one, but you translate that into two: the INDEX (column1, column2,
column3) line creates an index called column1, and then you create a new
named index with exactly the same parameters:

PRIMARY KEY (id),
INDEX (Scope, Value, Type, Owner)
);
CREATE INDEX Watchers1 ON Watchers (Scope, Value, Type, Owner);
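You can confirm the duplication and drop the redundant copy by hand; in MySQL that would be something like:

```sql
SHOW INDEX FROM Watchers;           -- lists both the implicit index and Watchers1
DROP INDEX Watchers1 ON Watchers;   -- remove the redundant named copy
```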

On my stress-test database (25,000 tickets) this took queries from > 5 minutes
to ~15 seconds. That’s not as fixed as I’d like it, but it’s definitely a start.

It certainly seems to have helped. Much snappier.

As in you’re copying all the SQL commands out of the mysql logs and manually
pasting them into the mysql monitor?

Well, at least most of them. That’s what pipes are for, isn’t it? :wink:

From the cpu usage it all seems to be disappearing into apache, so I suspect
it is all the perl/mod-perl code.
That’s kind of odd and runs counter to all my experience.

I’ll look at this in more detail if I get the time.

Additionally, you should make sure that you have
a ‘PerlModule Apache::DBI’ in your httpd.conf.

Already in there.

We use placeholders (thanks to ivan) in the lookup stuff within Record.pm,
but SearchBuilder objects are not currently using placeholders… then again,
the claim I’ve read is that pre-prepared SELECTs are not a speed issue with
MySQL (though they certainly help with Pg and Oracle)

MySQL AB seem to think placeholders help.

Also, something else I spotted: make upgrade splats config.pm. Good job I
kept a backup copy… :wink:

Cheers
Mark

Mark Vevers. mark@ifl.net / mvevers@rm.com
Internet Backbone Engineering Team
Internet for Learning, Research Machines Plc
Tel: +44 1235 823380, Fax: +44 1235 823424


#5

Jesse wrote

There’s a bunch of stuff that’s untuned as of 1.3.70. I’ve put some time into
it in the past couple of days. What I’m currently hacking on has several
new indices that help considerably. The next version of DBIx::SearchBuilder
also has a couple of improvements that drastically improve performance.

OK, just got your last CVS update for that.

RT doesn’t yet use Record::Cachable. You’ll need to modify RT::Record.

The two biggies are a much better Count method (when the search that’s been
described hasn’t yet been performed) and Matt Knopp’s amazing new
DBIx::SearchBuilder::Record::Cachable, which caches Record objects pretty much
completely transparently.

I’ve updated my CVS working directory and also patched the DB to match the
latest Schema. BTW Why do you create two indices each time? The schema.pm
defines one, but you translate that to create two as the INDEX (column1,
column2

It’s an issue with older versions of DBIx::DBSchema related to how MySQL
used to be broken. It’s fixed in ivan’s CVS; I haven’t had time to
suck it down and rerun the schema generation.

It certainly seems to have helped. Much snappier.

*grin* I’m glad. Most of what you’re seeing is likely the Count() fix.
That alone should help drastically with CPU usage.

From the cpu usage it all seems to be disappearing into apache,
so I suspect it is all the perl/mod-perl code.
That’s kind of odd and runs counter to all my experience.

I’ll look at this in more detail if I get the time.

I’m definitely seeing some of this that I didn’t before.

We use placeholders (thanks to ivan) in the lookup stuff within Record.pm,
but SearchBuilder objects are not currently using placeholders… then again,
the claim I’ve read is that pre-prepared SELECTs are not a speed issue with
MySQL (though they certainly help with Pg and Oracle)

MySQL AB seem to think placeholders help.

Inneresting. That’s a change, then; it used to be “MySQL ignores statement
precaching”.

Also, something else I spotted: make upgrade splats config.pm. Good job I
kept a backup copy… :wink:

It should move the old one to config.pm.bak.

I’m actually right in the middle of another of what I hope will be a biggish
performance tweak with ACLs. Long term, I’m going to need to drastically
rewrite the ACL check routines, but for now I think I can get some fairly
significant performance gains by flipping around the way we do ACL checks:
cutting out three SQL queries per check in the common case, and adding an
additional check only in the case of ‘metagroup’ rights.
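One way to picture the caching half of that work: memoize each decision per (user, right) for the life of a request, so repeated checks cost no further queries. An illustrative sketch in Python (the rights table and the SuperUser fallback are invented stand-ins, not RT's actual ACL model):

```python
# Illustrative per-request ACL cache; the rights table and the SuperUser
# fallback are invented stand-ins, not RT's actual ACL model.
queries = 0
RIGHTS = {("mark", "ModifyTicket"), ("jesse", "SuperUser")}

def check_db(user, right):
    """Stands in for the SQL queries one uncached ACL check would issue."""
    global queries
    queries += 1
    return (user, right) in RIGHTS or (user, "SuperUser") in RIGHTS

def make_has_right():
    cache = {}                      # lives for the duration of one request
    def has_right(user, right):
        key = (user, right)
        if key not in cache:
            cache[key] = check_db(user, right)
        return cache[key]
    return has_right

has_right = make_has_right()
assert has_right("mark", "ModifyTicket")
assert has_right("mark", "ModifyTicket")    # cache hit: no extra queries
assert has_right("jesse", "DeleteTicket")   # granted via the SuperUser fallback
print(queries)  # 2
```

The cache must be scoped to a request (or invalidated on ACL changes), since rights granted mid-session would otherwise be invisible until the process restarts.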

-j

Cheers
Mark

jesse reed vincent – root@eruditorum.org / jesse@fsck.com
70EBAC90: 2A07 FC22 7DB4 42C1 9D71 0108 41A3 3FB3 70EB AC90

‘"As the company that brought users the Internet, Netscape is now inviting
the more than 60 million people who have used our client software to
’tune up’ and upgrade to Netscape Communicator," said Mike Homer,
senior vice president of marketing at Netscape.’ Sometimes I wonder.


#6

If anyone’s interested, I just checked in some more fixes to DBIx::SearchBuilder
and RT itself, which make my 40,000-ticket stress-test instance positively
snappy. You need the new DBIx::SearchBuilder for the new RT… I’m going to
play some more this afternoon and will probably roll a release tonight or
tomorrow.

    -j

jesse reed vincent – root@eruditorum.org / jesse@fsck.com
70EBAC90: 2A07 FC22 7DB4 42C1 9D71 0108 41A3 3FB3 70EB AC90

“It’s buried in the desert, got sand in it, melts Nazis. You know,
the Ark of the Covenant” – siva