Memory leaks with mod_fastcgi

Hey folks,

Is there no clean way around memory leaks in RT when using mod_fastcgi
short of HUPing the individual rt-server.fcgi processes when they reach a
certain size?

Running 4.0.13 right now. With a load balancer in front hitting a simple
HTML page at /rt/NoAuth/LoadBalancer.html every couple of seconds, we see
memory usage increase by ~100MB per process (5 rt-server processes) over
the course of about 12 hours.

Is there any simple way to track down why memory is increasing so much?
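
For reference, by "HUPing processes when they reach a certain size" I mean
something along the lines of the sketch below, e.g. run from cron; the 300MB
threshold and the ps parsing are just illustrative:

    #!/usr/bin/perl
    # Sketch: HUP any rt-server.fcgi child whose resident set size (RSS)
    # has grown past a threshold, so mod_fastcgi spawns a fresh one.
    use strict;
    use warnings;

    my $max_rss_kb = 300 * 1024;    # example threshold: ~300MB (ps reports RSS in kB)

    for my $line (`ps -eo pid=,rss=,args=`) {
        my ($pid, $rss, $args) = $line =~ /^\s*(\d+)\s+(\d+)\s+(.*)$/ or next;
        next unless $args =~ /rt-server\.fcgi/;
        next if $rss <= $max_rss_kb;
        # Per the workaround described above: signal the child; mod_fastcgi
        # starts a replacement once it exits.
        kill 'HUP', $pid;
    }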

–andy

Is that 100M with no RT activity other than the requests for
LoadBalancer.html, or are you mixing test requests in with
production RT requests?

rt-server.fcgi processes will grow as large as is needed to process
the current task, so if RT sends a huge email or processes a huge
attachment, I would expect individual processes to grow.

Your perl build can also affect the size of child processes.

You can try attaching something like Devel::SizeMe (Devel-SizeMe-0.16 on
metacpan.org) to a single rt-server.fcgi child and seeing what changes
between requests. A simple, repeatable action that causes consistent growth
is the easiest way to solve something like this. I'm used to seeing fcgi
processes stable at around 70M.
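
As a rough sketch of the "see what changes between requests" idea:
Devel::SizeMe provides the Devel::Size-style total_size(), so you could log
the process RSS plus the deep size of whatever structure you suspect after
each request. Where you hook this into rt-server.fcgi, and which structure
you point it at, are up to you:

    # Sketch: call at the end of each request to watch for growth.
    use strict;
    use warnings;
    use Devel::SizeMe qw(total_size);

    sub log_memory_growth {
        my ($label, $suspect_ref) = @_;

        # Resident set size from /proc/self/status (Linux, reported in kB).
        my $rss = 'unknown';
        if (open my $fh, '<', '/proc/self/status') {
            while (my $l = <$fh>) {
                $rss = $1 if $l =~ /^VmRSS:\s+(\d+)\s+kB/;
            }
        }

        # Deep size (in bytes) of the structure you suspect is accumulating.
        my $deep = defined $suspect_ref ? total_size($suspect_ref) : 0;

        warn sprintf "[pid %d] %s: VmRSS=%s kB, suspect=%d bytes\n",
            $$, $label, $rss, $deep;
    }

    # e.g. log_memory_growth('after request', \%some_cache);
    # (%some_cache is hypothetical; substitute whatever you suspect)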

-kevin

Is that 100M with no RT activity other than the requests for
LoadBalancer.html, or are you mixing test requests in with
production RT requests?

That was just LoadBalancer.html. No other page hits.

rt-server.fcgi processes will grow as large as is needed to process
the current task, so if RT sends a huge email or processes a huge
attachment, I would expect individual processes to grow.

Your perl build can also affect the size of child processes.

We’re actually using rpmbuild+shipwright to create packaged,
redistributable RPMs containing everything necessary to run RT.

You can find the package we’re using here:
http://yum.ait.psu.edu/ait/rhel/6/base/x86_64/rt413-vessel-opt-4.0.13-1.el6.ait.x86_64.rpm

We are currently using Perl 5.14.1, and here’s the output of perl -V:

You can try attaching something like Devel::SizeMe (Devel-SizeMe-0.16 on
metacpan.org) to a single rt-server.fcgi child and seeing what changes
between requests. A simple, repeatable action that causes consistent growth
is the easiest way to solve something like this. I’m used to seeing fcgi
processes stable at around 70M.

I started playing around with Devel::Gladiator a little bit, but got into
unfamiliar territory rather quickly. I’ll take a look at Devel::SizeMe.
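
For the archives, the Devel::Gladiator approach is basically to diff
arena_ref_counts() between requests and watch which classes keep climbing.
Roughly like this (hooking it into the request loop is the part I haven't
figured out):

    # Sketch: snapshot the arena after each request and report what grew.
    use strict;
    use warnings;
    use Devel::Gladiator qw(arena_ref_counts);

    my %previous;

    sub report_arena_growth {
        my $counts = arena_ref_counts();   # hashref: ref type / class => live count

        for my $type (sort keys %$counts) {
            my $delta = $counts->{$type} - ($previous{$type} || 0);
            warn sprintf "%-40s +%d (now %d)\n", $type, $delta, $counts->{$type}
                if $delta > 0;
        }
        %previous = %$counts;
    }

    # Classes whose counts rise on every hit are the first leak suspects.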

–andy