Attachments.ibd in MySQL just hit 100GB

I just got into work and was greeted by an alert that the disk on the MySQL DB server I use for my RT instances was full. On examination, it looks like the most heavily used database (our service department's) has an attachments.ibd file that has hit 100GB. Is this normal? Is there anything I can do to reduce its size? Should I bring up pruning the database data at all? I've added some space to the drive for now, but I can see us blowing through it rather quickly.

Well I discovered why…

A mail loop/flood got created and ran over the weekend. I now have to get rid of the ticket completely. If anyone has a good way to delete a ticket and all assets/attachments/etc. related to it, I'd love to hear about it.

Shredder keeps running out of memory when trying to clean up the ticket. I may have to write something procedural in Perl to overcome the memory limitations & make sure everything gets deleted. Will update.

OK, I've managed to work out a solution. It's not pretty at all, but it's working:
rt-shredder can't handle the volume of data it has to delete, and since it never completes a run, nothing actually gets removed from the database.

So, I'm using PuTTY to log the output of what rt-shredder claims it's doing. Once it dies, or I stop it, I upload the log file to my RT server. I then run a second Perl script that parses the log line by line and executes rt-shredder on a single object at a time. I went from 9.7 million attachments down to 9.56 million. So now I just have to keep this up until the counts are low enough to run the rest of the repair tools on the database (rt-validator, etc.).
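
For anyone hitting the same wall, here's a minimal sketch of the idea. The log pattern and the rt-shredder plugin arguments below are assumptions on my part (look at your own PuTTY log and your version's rt-shredder docs for what they actually produce/expect); the point is just to pull one object ID per line out of the log and shell out to rt-shredder for that single object so memory stays bounded.

```perl
#!/usr/bin/perl
# Rough sketch of the log-replay approach. Assumptions: the PuTTY log
# mentions objects as "RT::Attachment-<id>", and your rt-shredder accepts
# the Objects plugin syntax shown here -- adjust both to match your setup.
use strict;
use warnings;

my $log = shift @ARGV or die "usage: $0 shredder-session.log\n";
open my $fh, '<', $log or die "can't open $log: $!";

while (my $line = <$fh>) {
    # Hypothetical pattern; inspect your own log to find the real one.
    next unless $line =~ /RT::Attachment-(\d+)/;
    my $id = $1;

    # One object per invocation keeps rt-shredder's memory use small.
    my @cmd = ('rt-shredder', '--force', '--plugin', "Objects=Attachment,$id");
    system(@cmd) == 0
        or warn "rt-shredder failed for attachment $id (exit $?)\n";
}
close $fh;
```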

Hi,
try this extension: GitHub - bestpractical/rt-extension-externalstorage
It moves the attachments out of the DB into the filesystem.
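
Roughly, the setup in RT_SiteConfig.pm looks like this (the path is just an example; check the extension's README for your RT version and for the step that migrates existing attachments out of the database):

```perl
# RT_SiteConfig.pm -- example only; adjust the path to suit your install.
Plugin('RT::Extension::ExternalStorage');

Set(%ExternalStorage,
    Type => 'Disk',
    Path => '/opt/rt4/var/attachments',
);
```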

tiorsten

I thought about that, but I don't think it will solve my problem. Even if it moves the attachments out of the DB, the DB still holds all the references to them. Would rt-shredder run faster or more reliably that way?

The script I wrote is moving along well, but it's still taking a while. To help expedite this, I've split the log file I generate into pieces and am running multiple instances of the script to shred faster. I'm running 6 copies of rt-shredder with different inputs, and the speed has drastically improved.
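
The parallel part is nothing fancy; roughly this, where the chunk file names and the replay script name are placeholders for whatever you split the log into and whatever you called the script above:

```perl
#!/usr/bin/perl
# Rough sketch: fork one worker per log chunk and let each replay its
# chunk with the single-object script. File names here are placeholders.
use strict;
use warnings;

my @chunks = glob 'shredder-log.part.*';   # pieces of the PuTTY log
my @pids;

for my $chunk (@chunks) {
    my $pid = fork;
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {
        # Child process: replay this chunk, one object at a time.
        exec 'perl', 'replay-shredder-log.pl', $chunk
            or die "exec failed: $!";
    }
    push @pids, $pid;
}

# Wait for all workers to finish.
waitpid $_, 0 for @pids;
```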