This is a good day for benchmarking galore!
I’m trying to collect performance data for my controllers so that I can fine-tune the monitoring based on real measurements, not educated guesses. I want to know the actual IOPS and MB-per-second limits and set the alert levels accordingly.
Today’s victim is an
“Intel(R) RAID Controller SROMBSAS18E”
as found in the SR1550 servers on the active SAS backplane.
It is also very well known as the Dell PERC5…
With Intel servers you need add-ons for the 512MB RAM and the BBU. These came included with my server.
Right now we’re only doing read-only tests here. For one, the BBU is utterly discharged.
3x73GB 15K SAS drives in RAID0 config (IO Policy WB, Cached)
4x60GB OCZ Vertex2 in RAID10 config (IO Policy Direct)
Linux settings: cfq scheduler, readahead set to 1MB for both LUNs.
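For reference, these settings can be applied at runtime via sysfs and blockdev. This is a sketch assuming the two LUNs show up as sdb and sdc — adjust the device names to your box:

```shell
# Hypothetical device names (sdb, sdc) -- substitute your own LUNs.
for dev in sdb sdc; do
    # select the cfq elevator for the device
    echo cfq > /sys/block/$dev/queue/scheduler
    # readahead is given in 512-byte sectors: 2048 sectors = 1MB
    blockdev --setra 2048 /dev/$dev
done

# verify: --getra prints the readahead in sectors,
# and the sysfs file shows the active scheduler in brackets
blockdev --getra /dev/sdb
cat /sys/block/sdb/queue/scheduler
```

Note these settings don’t survive a reboot; put them in rc.local (or a udev rule) if you want them to stick.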
Test scenario: pull data off the disks as fast as we can.
Write down the numbers from sar afterwards.
[root@localhost ~]# dd if=/dev/sdc of=/dev/null bs=1024k count=10240 & dd if=/dev/sdb of=/dev/null bs=1024k count=10240
[1] 4198
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 37.0208 seconds, 290 MB/s
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 37.0224 seconds, 290 MB/s

Average:  DEV      tps   rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz  await  svctm  %util
Average:  sdb  1677.58  553577.58      0.00    329.99      1.32   0.79   0.46  77.04
Average:  sdc  1867.27  578368.77      0.00    309.74      1.44   0.77   0.45  83.96
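As a sanity check: sar reports reads in 512-byte sectors per second, so the rd_sec/s column can be converted back to throughput and compared with what dd printed. Using the averages from the run above:

```shell
# Convert sar's rd_sec/s (512-byte sectors) to MiB/s for sdb and sdc.
# dd's "290 MB/s" is decimal megabytes, so it comes out slightly higher.
echo "553577.58 578368.77" | awk '{
    printf "sdb: %.0f MB/s  sdc: %.0f MB/s\n", $1*512/1048576, $2*512/1048576
}'
```

That lands around 270 and 282 MiB/s respectively, which lines up nicely with the two dd readings.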
Other fun things to test now…
- Switch to SATA SSD RAID0 instead of RAID10
- Look at IO Overhead in Xen domU*
- See how much faster the SR1625 will perform 🙂
- Update the outdated firmware 🙂
- Switch to deadline scheduler
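The scheduler switch from the last point doesn’t even need a reboot; again assuming the LUNs are sdb and sdc:

```shell
# Hypothetical: switch both LUNs to the deadline elevator at runtime
for dev in sdb sdc; do
    echo deadline > /sys/block/$dev/queue/scheduler
done

# the active scheduler is shown in brackets,
# e.g. "noop anticipatory [deadline] cfq"
cat /sys/block/sdb/queue/scheduler
```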