Attachments table constantly fragmenting

Hello,

I’ve been using the mysqltuner script to check up on things, tune memory
use to fit our resources, and so on. One thing remains consistent: no
matter what I have things set to, within a few hours of a restart or a full
stop/start (at most), the Attachments table becomes fragmented. Here’s the
most recent report; I realize it’s only been a few hours since the last
restart:

-------- General Statistics
[--] Skipped version check for MySQLTuner script
[OK] Currently running supported MySQL version 5.0.51a-3ubuntu5.1-log
[OK] Operating on 64-bit architecture

-------- Storage Engine Statistics
[--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
[--] Data in MyISAM tables: 6M (Tables: 15)
[--] Data in InnoDB tables: 453M (Tables: 86)
[!!] Total fragmented tables: 1

-------- Performance Metrics
[--] Up for: 3h 9m 37s (29K q [2.583 qps], 157 conn, TX: 56M, RX: 5M)
[--] Reads / Writes: 64% / 36%
[--] Total buffers: 1.8G global + 2.6M per thread (100 max threads)
[OK] Maximum possible memory usage: 2.0G (25% of installed RAM)
[OK] Slow queries: 0% (275/29K)
[OK] Highest usage of available connections: 14% (14/100)
[OK] Key buffer size / total MyISAM indexes: 512.0M/489.4M
[!!] Key buffer hit rate: 37.8% (386 cached / 240 reads)
[OK] Query cache efficiency: 59.0% (13K cached / 23K selects)
[OK] Query cache prunes per day: 0
[OK] Sorts requiring temporary tables: 0% (2 temp sorts / 695 sorts)
[!!] Temporary tables created on disk: 38% (366 on disk / 960 total)
[OK] Thread cache hit rate: 91% (14 created / 157 connections)
[OK] Table cache hit rate: 95% (120 open / 126 opened)
[OK] Open file limit used: 3% (67/2K)
[OK] Table locks acquired immediately: 100% (17K immediate / 17K locks)
[OK] InnoDB data size / buffer pool: 453.2M/512.0M

-------- Recommendations
General recommendations:
Run OPTIMIZE TABLE to defragment tables for better performance
MySQL started within last 24 hours - recommendations may be inaccurate
Temporary table size is already large - reduce result set size
Reduce your SELECT DISTINCT queries without LIMIT clauses

The CPU is a quad-core Xeon, there’s 8 GB of physical RAM, and the disks
are 10k SAS in hardware RAID-1.
Even when mysqltuner is run after the 24-hour minimum (or 3 days, or 8
days), three things remain: a fragmented table that always turns out to be
Attachments; the key buffer hit rate; and the temporary-tables-on-disk
stat. I don’t fully understand the latter two and haven’t figured them out
from the online docs and commentary, but I get the impression they come
down to the nature of our DB structure and the queries we run.

An important note about that: the MySQL server on this box is hosting two
DBs. One is rt3, but an internal DB we use for mumble mumble is also on the
box. I’m not sure whether that custom-written DB, and the UI for it used
across the office by all employees, is the reason for the key buffer / temp
table alerts; its programmer is working on the two recommendations of
reducing result set sizes (which may not be possible) and making sure his
queries have LIMIT clauses wherever possible.

Should I be worried about the constantly fragmenting Attachments table?
Can I get more fine-grained info about it from within MySQL?
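
For reference, mysqltuner’s fragmentation flag comes from per-table
statistics you can query directly. A sketch, assuming the RT database is
named rt3 (Data_free is the unused space allocated to the table; mysqltuner
treats a nonzero value as fragmentation):

```sql
-- Quick per-table view, including Data_free:
SHOW TABLE STATUS FROM rt3 LIKE 'Attachments';

-- The same figures via information_schema (present since MySQL 5.0):
SELECT TABLE_NAME, ENGINE, TABLE_ROWS,
       DATA_LENGTH, INDEX_LENGTH, DATA_FREE
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'rt3'
  AND TABLE_NAME = 'Attachments';
```

One caveat: for InnoDB tables living in the shared tablespace (the default
in 5.0), Data_free reflects free space in the whole tablespace rather than
waste inside that one table, so treat the number as a hint rather than a
precise measurement.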

Thanks,
Rob

/chown -R us:us /yourbase

[!!] Total fragmented tables: 1
[--] Reads / Writes: 64% / 36%

Fragmentation is normal, especially with that read/write ratio, and it
doesn’t necessarily indicate a problem. You can periodically optimize the
fragmented tables to improve performance if performance becomes an issue,
which should only happen if RT does large range scans on primary keys or
full table scans (naughty!). Otherwise, I’d ignore it.
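
If performance does become an issue, the periodic optimize is a one-liner.
A sketch, assuming the RT database is named rt3; note that on InnoDB in
MySQL 5.0, OPTIMIZE TABLE is implemented as a full table rebuild (it maps
to ALTER TABLE ... ENGINE=InnoDB) and locks the table while it runs:

```sql
-- Rebuilds the table and refreshes index statistics; expect the
-- table to be locked for the duration of the rebuild.
OPTIMIZE TABLE rt3.Attachments;
```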

-- ============================
Tom Lahti
BIT Statement LLC

(425)251-0833 x 117
http://www.bitstatement.net/
-- ============================

I am incapable of clicking reply-all today.
I blame the weather.

All good points. I guess what I need to balance against this is: will the
fragmentation continue to worsen? Is it inevitable that performance will,
at some point, deteriorate?

I have another database (not RT) in mysql InnoDB tables that is about 2
years old and has 15 million records. I have never optimized it.
Fragmentation worsens mainly when you DELETE and then INSERT; this
particular database, much like RT, almost never sees DELETE operations.

Makes sense. The idea is to keep all of this for stats and history anyway;
there are no plans to delete anything.

There’s a daily 4 AM backup of the DB, so running OPTIMIZE once a month
after that finishes (barring quicker performance deterioration) seems like
a good idea. FYI, the reason I’m asking all these hypotheticals is that the
DB is not very large now but will grow exponentially over the next year.
Data will increase, usage will increase, and the rate of increase will
increase… I want some best practices in place, however general, before
things really ramp up. Since I have a window of a few hours a month when no
one will be using the DB, I see no contraindication to doing it - yah?
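
As a sketch of that schedule: a crontab fragment, assuming the 4 AM backup
is long finished by 6:00 on the first of the month, that the RT database is
named rt3, and that credentials live in the invoking user’s ~/.my.cnf
(mysqlcheck --optimize runs OPTIMIZE TABLE from the shell; the log path is
a placeholder):

```
# m h dom mon dow  command
# First of each month at 06:00, after the 4 AM backup window:
0 6 1 * * /usr/bin/mysqlcheck --optimize rt3 Attachments >> /var/log/rt-optimize.log 2>&1
```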

If I had an installation where I was adding 1,000 tickets a day and almost
all with attachments, I would definitely consider it. I don’t know what
size you’re dealing with.

Not quite that volume, but getting there, with a strong likelihood of
increased growth over the next few years. An attachment-less ticket will be
the exception. Thanks for all the info; a noninvasive monthly OPTIMIZE
seems like it won’t hurt anything, and putting a schedule in place now
means it’ll already be there when it matters later, and I won’t have to get
users used to a new maintenance window :)

/chown -R us:us /yourbase