I’m experiencing some performance issues with my Request Tracker (RT) setup and would appreciate any advice on how to optimize it.
Recently, I encountered the following error message:
Your query exceeded the maximum run time and was stopped. Try modifying your query to improve the performance or contact your RT admin.
Current Configuration:
RT Version: 5.0.6
Operating System: Red Hat Enterprise Linux 8.9 (Ootpa)
Database: MariaDB 10.3.39 (MySQL-compatible), Linux x86_64
Hardware:
2 vCPUs
8 GB RAM
4 GB Swap
Configuration Adjustments:
In an attempt to resolve the issue, I modified the fcgid.conf file, increasing the FcgidConnectTimeout value from 20 to 60 seconds. This stopped the warning from appearing. The relevant change looks roughly like this:
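(Minimal sketch only: the file path and the <IfModule> wrapper are the stock RHEL mod_fcgid layout, and only the directive I changed is shown.)

# /etc/httpd/conf.d/fcgid.conf
<IfModule mod_fcgid.c>
    # Raised from 20 to 60 seconds to stop the timeout warning
    FcgidConnectTimeout 60
</IfModule>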
When externalising attachments, you need to choose the ExternalStorageCutoffSize carefully. The default is 10 MB, which may not be low enough to make a sizeable difference to your database. In our Postgres database, I used the following query to analyse the Attachments table:
-- Total size of not-yet-externalised content, grouped by order of magnitude
select
    pg_size_pretty(sum(length(content))) as TotalSize,
    length(cast(length(content) as text)) as OrderOfMag
from attachments
where contentencoding != 'external'
group by OrderOfMag
order by OrderOfMag desc;
OrderOfMag is the order of magnitude (number of digits) of the content length, e.g. 1234 bytes would be 4 and 7865431 bytes would be 7. With the default setting of 10 MB (10,485,760 bytes), only records of magnitude 8 or higher will be externalised.
I found that I needed to set ExternalStorageCutoffSize to 10000 to make any significant reduction in the table size.
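For reference, the RT_SiteConfig.pm settings involved look roughly like this (the Disk backend and the paths are only examples; adjust them for your own external storage setup and install prefix):

# In /opt/rt5/etc/RT_SiteConfig.pm (assumes the default install prefix)
Set($ExternalStorageCutoffSize, 10000);    # externalise anything over ~10 KB instead of the 10 MB default
Set(%ExternalStorage,
    Type => 'Disk',                        # example backend; others such as S3 are also supported
    Path => '/opt/rt5/var/attachments',    # illustrative path on local disk
);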
Note that the rt-externalize-attachments script maintains a “high-water mark” record so that subsequent runs can skip attachments it has already examined. If you modify ExternalStorageCutoffSize, you will need to delete that high-water mark to force it to reconsider older records:
delete from Attributes where Attributes.Name='ExternalStorage';
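Then re-run the externalise script so it picks up the older records (path assumes the default /opt/rt5 install prefix):

/opt/rt5/sbin/rt-externalize-attachments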
SimonW, I’ll just run rt-clean-sessions for now. Since you say your 19 GB database works fine, I won’t bother with moving it to external storage.
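For anyone doing the same, a nightly cron entry along these lines is enough (the path assumes the default /opt/rt5 install prefix; run it as root or as the user RT runs under):

# /etc/cron.d/rt-clean-sessions
# Clear stale web sessions from the Sessions table every night at 03:15
15 3 * * * root /opt/rt5/sbin/rt-clean-sessions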