Today I was doing some work on one of our database servers (each of them has 4 SAS disks in RAID10 on an Adaptec controller) and it involved a huge multi-threaded, I/O-bound read load. Basically, it was a set of parallel full-scan reads from a 300GB compressed InnoDB table (yes, we use the InnoDB plugin). Looking at iostat I saw pretty much the expected picture: 90-100% disk utilization and lots of read operations per second. Then I decided to play around with the Linux I/O schedulers (switching them at runtime via sysfs; see the sketch below the results) and try to increase disk subsystem throughput. Here are the results:
| Scheduler | Reads per second |
|---|---|
| cfq | 20000-25000 |
| noop | 35000-60000 |
| deadline | 33000-45000 |
| anticipatory | 22000-29000 |
Note: the box couldn't be restarted to re-test with clean caches, but I was doing full reads from this huge table on a machine with 16GB of RAM, so all caches were washed out by this load anyway.
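For reference, this is roughly how the schedulers were switched between runs. On the box itself it is just an echo into `/sys/block/<dev>/queue/scheduler` as root; the small standalone script below is only my own illustration of the same sysfs trick, and the device name `sda` is an example, not our actual layout. Reads per second were watched with `iostat -x 1` while the load was running.

```python
#!/usr/bin/env python
# Minimal sketch: show and switch the I/O scheduler for one block device
# through sysfs. Needs root; the change takes effect immediately and is
# lost on reboot. The device name 'sda' is just an example.

import sys

def get_scheduler(device):
    # Returns e.g. "noop anticipatory deadline [cfq]"; brackets mark the active one
    with open('/sys/block/%s/queue/scheduler' % device) as f:
        return f.read().strip()

def set_scheduler(device, scheduler):
    # Writing the scheduler name into the sysfs file switches it on the fly
    with open('/sys/block/%s/queue/scheduler' % device, 'w') as f:
        f.write(scheduler)

if __name__ == '__main__':
    dev = sys.argv[1] if len(sys.argv) > 1 else 'sda'
    print('before: %s' % get_scheduler(dev))
    set_scheduler(dev, 'noop')
    print('after:  %s' % get_scheduler(dev))
```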
As you can see, the less work Linux does on its side to optimize disk I/O, the faster it goes 🙂 Actually, this was pretty much expected, but it is still a surprising result. The problem (as the guys from YouTube explained at last year's MySQL Conference) is that Linux knows nothing about the RAID controller's internals and the individual drives' queues, so when it tries to re-arrange requests in its own queue it wastes CPU resources and can prevent the RAID controller from doing its own queue optimizations.
After this test I tried it on a few other I/O-bound servers (both read-bound and write-bound) and the result was the same: the noop ("do nothing") I/O scheduler gave me the best results. Long story short, I've decided to try this scheduler on all our boxes for a week and watch the Cacti graphs to see how it works out.
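If noop does stick, it will have to be re-applied after every reboot, since the sysfs setting is not persistent. Below is a hedged sketch of how that could be done for all sd* devices from a boot script such as rc.local; the loop is my own illustration, and another common option is the `elevator=noop` kernel boot parameter, which changes the default scheduler for all devices at once.

```python
#!/usr/bin/env python
# Sketch for a boot script (e.g. rc.local): force 'noop' on every sd* device.
# Assumes the usual /sys/block/<dev>/queue/scheduler layout; run as root.

import glob

for path in glob.glob('/sys/block/sd*/queue/scheduler'):
    dev = path.split('/')[3]          # e.g. 'sda'
    with open(path) as f:
        before = f.read().strip()     # remember what the scheduler was
    with open(path, 'w') as f:
        f.write('noop')               # switch this device to noop
    print('%s: %s -> noop' % (dev, before))
```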