On Jul 29, 2009, at 7:10 AM, Robert Nesius wrote:
> On Tue, Jul 28, 2009 at 3:56 PM, Kenneth Marshall firstname.lastname@example.org wrote:
>> The main advantage is gained by avoiding I/O through the virtual
>> disk. The layout of the virtual disk tends to turn most I/O into
>> random I/O, even I/O that starts as sequential. The factor of
>> 10 performance difference between random and sequential I/O causes
>> the majority of the performance problem. I have not had personal
>> experience with using an NFS mount point to run a database, so I
>> cannot really comment on that. Good luck with your evaluation.
> You’re trading head-seeking latencies for network latencies,
If this were a standard host environment, that would be true. But in a
virtual environment, there is the additional overhead of the disk
create/maintenance/update processes of the virtualization engine, which
multiplies the cost of every disk operation. Just run a disk benchmarking
utility inside a VE running under any platform that uses disk images (VMware,
Xen, Parallels, etc…) and then run those same tests on the host node. The
performance inside the VE is often an order of magnitude slower.
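The inside-vs-outside comparison can be approximated with nothing fancier than dd (a rough sketch only; the file path, size, and block size below are arbitrary, and a real benchmarking tool like fio or bonnie++ would give better numbers, especially for random I/O):

```shell
# Rough sequential-write test: run once inside the VE and once on the
# host node, then compare the MB/s figure dd reports at the end.
# conv=fdatasync forces the data to disk before dd reports a rate,
# so page-cache effects don't flatter the result.
dd if=/dev/zero of=./ddtest.bin bs=1M count=256 conv=fdatasync
rm -f ./ddtest.bin
```

Random-I/O numbers (the case this thread is really about) need a tool that can issue non-sequential reads and writes, such as fio.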
Contrast that with NFS performance, which has a small fixed overhead
imposed by the network (even smaller if you use jumbo frames). If you were
using a platform with a robust NFS implementation (Solaris, FreeBSD), I’d
put money on the database performing better on NFS than inside most virtual
machines. If you’re using NFS with Linux, you will certainly have
performance issues that you won’t be able to get past.
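For reference, a Linux client mount for this kind of setup might look like the sketch below (the server name, export path, and transfer sizes are placeholders; the option names come from nfs(5)). Note that jumbo frames are an MTU setting on the NIC and switch, not a mount option:

```
# /etc/fstab -- sketch of an NFS mount for database use (hypothetical
# server and paths). "hard" makes the client block and retry if the
# server goes away, instead of returning I/O errors to the database.
nfsserver:/export/db  /var/db  nfs  rw,hard,tcp,rsize=32768,wsize=32768  0 0
```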
If the virtualization environment provides raw disk access to the VE, my
bet is off. Examples of virtualization platforms that [can] do this are
FreeBSD jails and Linux OpenVZ. On several occasions, I have built VEs for
MySQL and mounted a dedicated partition in the VE. Assuming you’ve given
adequate resources to the DB VE, that works as well as a dedicated machine.
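Under OpenVZ, one way to wire a dedicated partition into a container is a per-container mount action script; the sketch below assumes container ID 101, a hypothetical device /dev/sdb1, and a MySQL data directory, all of which are placeholders:

```
#!/bin/bash
# /etc/vz/conf/101.mount -- OpenVZ runs this when container 101 starts.
# Source the global and per-container configs to get VE_ROOT (the
# container's root filesystem on the host), then mount the dedicated
# partition over the container's MySQL data directory.
source /etc/vz/vz.conf
source ${VE_CONFFILE}
mount /dev/sdb1 ${VE_ROOT}/var/lib/mysql
```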
When I arrived at my current position, the SA team had put the databases
into the VEs that needed them, along with the apps that accessed them.
Despite having 6 servers to spread the load across, they had recurring
database performance issues (a few times a week), particularly with RT. I
resolved all the DB issues by building a dedicated machine with 4 disks (two
battery-backed RAID-1 mirrors) and moving all the databases to it. The
databases have dedicated spindles, as do the OS and logging. Despite the
resistance to the "all our DB eggs in one basket" approach, the wisdom of
that choice is now plainly evident. All the performance problems went away
and haven’t come back.
> and those are almost certainly higher. Hosting your database server
> binaries and so forth in NFS is possible, though again, not optimal from
> both a performance and a risk standpoint (NFS server drops, your DB
> binaries vanish, your DB server drops even though the machine hosting it
> was fine).
That’s not how NFS works. If the NFS server vanishes, the NFS client hangs
and waits for it to return. That is a design feature of NFS. The consistency
of the databases is entirely dependent on the disk subsystem of the file
server.
> I think hosting databases in NFS can cause serious problems - I seem to
> remember older versions of MySQL wouldn’t support that. I don’t know if
> newer ones do… but I do know in the very large IT environment I worked
> in, all database servers hosted the DBs on their local disks or in
> filesystems hosted on disks (SANs?) attached via fibre-channel.
I would never host a database server on anything but RAID-protected disks
with block-level access (i.e., local disks, iSCSI, etc.). Database engines
have been explicitly designed and optimized for this type of disk backend.
That is starting to change, with a few new DB engines designed for network
storage (like SimpleDB), but none I know of are production-ready.
> Could solid-state drives side-step the random-access issue with
> virtualization, or at least make it suck less?
Haven’t tried it yet, but my guess is no. However, I have put databases on
SSD disks with excellent results.
> Based on how many people I know who have said “Wow, my SSD died. I thought
> those were supposed to be more reliable?” … I wouldn’t bet my service
> uptime on it.
There’s this thing called RAID that protects against disk failures… It
works quite well with SSD disks, and for a couple thousand bucks it delivers
performance numbers that would otherwise take a $150,000+ SAN.
Community help: http://wiki.bestpractical.com
Commercial support: email@example.com
Discover RT’s hidden secrets with RT Essentials from O’Reilly Media.
Buy a copy at http://rtbook.bestpractical.com