On a recommendation from a friend, I ran Iometer to do some benchmarks, to see how well the server would handle server-style loads instead of just sequential reads/writes.

While I realize IOps is a rather arbitrary benchmark, I think if you keep the block sizes the same, and the test conditions as close to the same as you can, then it at least has relative performance implications. For more details on this subject, read “What Is Good Performance” – a brief by Imperial Technologies.

Test conditions: 32KB block size, 1 worker, on an otherwise idle server with 2 processors and hyper-threading enabled on both.

Updated: 2007-03-24 05:52 EDT – Added benchmark results of 5×146.8GB 15k RPM HDs in RAID 5EE (striped + hot spare striped in)

Updated: 2007-03-29 07:45 EDT – Added benchmark results of 6×146.8GB 15k RPM HDs in RAID 5EE (striped + hot spare striped in)

Updated: 2007-03-29 11:27 EDT – Added benchmark results of 6×146.8GB 15k RPM HDs in RAID 0 (yes, I checked whether something was borked; I used the same install for all my recent benchmarks, just did a restore). Also added 6×146.8GB 15k RPM HDs in RAID 10 (3×RAID 1, then RAID 0 them together)

Updated: 2007-04-31 10:48 EDT – Added many benchmarks, and revised the naming conventions

Naming convention, starting with the boot array:

<array type>-<RAID level>-<NumDisksxDiskSize@DiskSpeed>[, ...] - <number of partitions in test>[S if synchronized[? if I don't remember]][B if includes boot partition] [additional notes]

Assumptions: each partition is given its own worker, and all tests are against the ServeRAID 6i unless otherwise noted. “…” is used when the array configuration is the same as the last one mentioned. I found out some interesting things:

- Make sure your high-performance loads are NOT reliant on the boot partition for reads/writes.
- Make sure you synchronize any array mode that needs it before you start benchmarking, or it’ll perform in degraded mode (AKA bad performance). For the ServeRAID 6i, just load up the ServeRaid Manager software, and it’ll say when it’s synchronized.
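The naming convention above is regular enough to parse mechanically. As a sketch (the regex, function name, and field names here are my own, not part of the convention itself, and it doesn’t try to handle the “…” shorthand rows):

```python
import re

# Hypothetical parser for setup names like "RAID-5EE-6x146.8GB@15kRPM - 4S".
# Accepts either "x" or "×" between disk count and size, and "-" or "–"
# before the partition count, since both appear in the tables.
PATTERN = re.compile(
    r"(?P<type>\w+)-(?P<level>\w+)-"                      # <array type>-<RAID level>-
    r"(?P<disks>\d+)[x×](?P<size>[\d.]+GB)@(?P<speed>\w+)"  # <NumDisks>x<Size>@<Speed>
    r".*?[-–]\s*(?P<parts>\d+)(?P<flags>[SB?]*)"          # - <partitions>[S][?][B]
)

def parse_setup(name: str) -> dict:
    m = PATTERN.search(name)
    if not m:
        raise ValueError(f"unrecognized setup: {name}")
    d = m.groupdict()
    d["synchronized"] = "S" in d["flags"]
    d["includes_boot"] = "B" in d["flags"]
    return d
```

So `parse_setup("RAID-5EE-6x146.8GB@15kRPM - 4S")` yields level `5EE`, 6 disks, 4 partitions, synchronized, not including the boot partition.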

| Setup | Total IOps | Read IOps | Write IOps |
|---|---:|---:|---:|
| RAID-0-2×146.8GB@15kRPM, … – 2 | 6636.96 | 4905.30 | 1731.66 |
| RAID-0-2×146.8GB@15kRPM – 1B | 2927.29 | 2161.56 | 765.73 |
| RAID-0-2×146.8GB@15kRPM – 1 | 5287.23 | 3911.03 | 1376.20 |
| RAID-0-2×146.8GB@15kRPM, …, … – 4B | 5366.80 | 3972.18 | 1394.62 |
| RAID-0-2×146.8GB@15kRPM, …, … – 3 | 6967.94 | 5153.94 | 1814.00 |
| RAID-0-2×146.8GB@15kRPM, RAID-0-4×146.8GB@15kRPM – 1 | 5205.37 | 3852.88 | 1352.49 |
| RAID-0-2×146.8GB@15kRPM, RAID-5EE-4×146.8GB@15kRPM – 1 | 1007.43 | 746.18 | 261.26 |
| RAID-0-2×146.8GB@15kRPM, RAID-5EE-4×146.8GB@15kRPM – 1S | 5311.35 | 3933.50 | 1377.86 |
| RAID-5EE-6×146.8GB@15kRPM – 1SB | 3532.33 | 2612.30 | 920.03 |
| RAID-5EE-6×146.8GB@15kRPM – 1S | 5354.84 | 3966.74 | 1388.09 |
| RAID-5EE-6×146.8GB@15kRPM – 2S | 6993.74 | 5173.16 | 1820.58 |
| RAID-5EE-6×146.8GB@15kRPM – 3S | 7214.27 | 5338.27 | 1876.01 |
| RAID-5EE-6×146.8GB@15kRPM – 4S | 7360.53 | 5444.32 | 1916.21 |
| RAID-5EE-6×146.8GB@15kRPM – 4S, 1024-sector test size | 7352.68 | 5437.56 | 1915.12 |
| RAID-5EE-6×146.8GB@15kRPM – 5S | 7061.73 | 5225.83 | 1835.90 |
| RAID-10-6×146.8GB@15kRPM – 1S?B | 3119.32 | 2307.67 | 811.65 |
| RAID-0-6×146.8GB@15kRPM – 1B | 288.84 | 213.99 | 74.84 |
| RAID-5EE-6×146.8GB@15kRPM – 1S?B | 2923.77 | 2160.64 | 763.13 |
| RAID-5EE-5×146.8GB@15kRPM – 1S?B | 2920.75 | 2161.85 | 758.90 |
| RAID-0-2x36GB@10kRPM – 1B | 2756.12 | 2039.76 | 716.36 |
| JBOD-0-1x36GB@10kRPM – 1B, Onboard LSI | 140.94 | 104.65 | 36.29 |

| Setup | Total MBps | Read MBps | Write MBps |
|---|---:|---:|---:|
| RAID-0-2×146.8GB@15kRPM, … – 2 | 207.40 | 153.29 | 54.11 |
| RAID-0-2×146.8GB@15kRPM – 1B | 91.48 | 67.55 | 23.93 |
| RAID-0-2×146.8GB@15kRPM – 1 | 165.23 | 122.22 | 43.01 |
| RAID-0-2×146.8GB@15kRPM, …, … – 4B | 167.71 | 124.13 | 43.58 |
| RAID-0-2×146.8GB@15kRPM, …, … – 3 | 217.75 | 161.06 | 56.69 |
| RAID-0-2×146.8GB@15kRPM, RAID-0-4×146.8GB@15kRPM – 1 | 162.67 | 120.40 | 42.27 |
| RAID-0-2×146.8GB@15kRPM, RAID-5EE-4×146.8GB@15kRPM – 1 | 31.48 | 23.32 | 8.16 |
| RAID-0-2×146.8GB@15kRPM, RAID-5EE-4×146.8GB@15kRPM – 1S | 165.98 | 122.92 | 43.06 |
| RAID-5EE-6×146.8GB@15kRPM – 1SB | 110.39 | 81.63 | 28.75 |
| RAID-5EE-6×146.8GB@15kRPM – 1S | 167.34 | 123.96 | 43.38 |
| RAID-5EE-6×146.8GB@15kRPM – 2S | 218.55 | 161.66 | 56.89 |
| RAID-5EE-6×146.8GB@15kRPM – 3S | 225.45 | 166.82 | 58.63 |
| RAID-5EE-6×146.8GB@15kRPM – 4S | 230.02 | 170.13 | 59.88 |
| RAID-5EE-6×146.8GB@15kRPM – 4S, 1024-sector test size | 229.77 | 169.92 | 59.85 |
| RAID-5EE-6×146.8GB@15kRPM – 5S | 220.68 | 163.31 | 57.37 |
| RAID-10-6×146.8GB@15kRPM – 1S?B | 97.48 | 72.11 | 25.36 |
| RAID-0-6×146.8GB@15kRPM – 1B | 9.03 | 6.69 | 2.34 |
| RAID-5EE-6×146.8GB@15kRPM – 1S?B | 91.37 | 67.52 | 23.85 |
| RAID-5EE-5×146.8GB@15kRPM – 1S?B | 91.27 | 67.56 | 23.72 |
| RAID-0-2x36GB@10kRPM – 1B | 86.13 | 63.74 | 22.39 |
| JBOD-0-1x36GB@10kRPM – 1B, Onboard LSI | 4.40 | 3.27 | 1.13 |

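Since every test uses the same 32KB block size, the MBps figures follow directly from the IOps figures: throughput is IOps times the block size (the numbers here line up if you treat MBps as MiB/s, i.e. divide by 1024). A quick sanity check against the first row:

```python
# With a fixed block size, throughput is just IOps * block size.
# The tables above report MBps as KB/s divided by 1024 (MiB/s).
def mbps_from_iops(iops: float, block_kb: int = 32) -> float:
    return iops * block_kb / 1024

# First row above: 6636.96 total IOps at 32KB blocks.
print(round(mbps_from_iops(6636.96), 1))  # → 207.4, matching the table
```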
| Setup | Average Response Time (ms) | Maximum Response Time (ms) |
|---|---:|---:|
| RAID-0-2×146.8GB@15kRPM, … – 2 | 0.30 | 18.37 |
| RAID-0-2×146.8GB@15kRPM – 1B | 0.34 | 18.42 |
| RAID-0-2×146.8GB@15kRPM – 1 | 0.19 | 18.24 |
| RAID-0-2×146.8GB@15kRPM, …, … – 4B | 0.74 | 19.22 |
| RAID-0-2×146.8GB@15kRPM, …, … – 3 | 0.43 | 18.62 |
| RAID-0-2×146.8GB@15kRPM, RAID-0-4×146.8GB@15kRPM – 1 | 0.19 | 18.47 |
| RAID-0-2×146.8GB@15kRPM, RAID-5EE-4×146.8GB@15kRPM – 1 | 0.99 | 30.16 |
| RAID-0-2×146.8GB@15kRPM, RAID-5EE-4×146.8GB@15kRPM – 1S | 0.19 | 18.40 |
| RAID-5EE-6×146.8GB@15kRPM – 1SB | 0.28 | 18.49 |
| RAID-5EE-6×146.8GB@15kRPM – 1S | 0.19 | 18.09 |
| RAID-5EE-6×146.8GB@15kRPM – 2S | 0.29 | 18.31 |
| RAID-5EE-6×146.8GB@15kRPM – 3S | 0.41 | 18.45 |
| RAID-5EE-6×146.8GB@15kRPM – 4S | 0.54 | 18.70 |
| RAID-5EE-6×146.8GB@15kRPM – 4S, 1024-sector test size | 0.54 | 18.70 |
| RAID-5EE-6×146.8GB@15kRPM – 5S | 0.71 | 18.71 |
| RAID-10-6×146.8GB@15kRPM – 1S?B | 0.32 | 18.50 |
| RAID-0-6×146.8GB@15kRPM – 1B | 3.46 | 44.39 |
| RAID-5EE-6×146.8GB@15kRPM – 1S?B | 0.34 | 18.59 |
| RAID-5EE-5×146.8GB@15kRPM – 1S?B | 0.34 | 18.37 |
| RAID-0-2x36GB@10kRPM – 1B | 0.36 | 20.59 |
| JBOD-0-1x36GB@10kRPM – 1B, Onboard LSI | 7.09 | 26.09 |

| Setup | % CPU Utilization | % User Time | % Privileged Time | % DPC Time | % Interrupt Time |
|---|---:|---:|---:|---:|---:|
| RAID-0-2×146.8GB@15kRPM, … – 2 | 5.14 | 0.37 | 4.79 | 0.75 | 1.93 |
| RAID-0-2×146.8GB@15kRPM – 1B | 3.84 | 0.19 | 3.65 | 0.68 | 1.02 |
| RAID-0-2×146.8GB@15kRPM – 1 | 3.49 | 0.44 | 3.10 | 0.50 | 1.10 |
| RAID-0-2×146.8GB@15kRPM, …, … – 4B | 6.16 | 0.46 | 5.71 | 0.89 | 1.21 |
| RAID-0-2×146.8GB@15kRPM, …, … – 3 | 5.81 | 0.39 | 5.43 | 0.85 | 1.61 |
| RAID-0-2×146.8GB@15kRPM, RAID-0-4×146.8GB@15kRPM – 1 | 3.49 | 0.14 | 3.38 | 0.67 | 1.29 |
| RAID-0-2×146.8GB@15kRPM, RAID-5EE-4×146.8GB@15kRPM – 1 | 0.78 | 0.10 | 0.70 | 0.08 | 0.31 |
| RAID-0-2×146.8GB@15kRPM, RAID-5EE-4×146.8GB@15kRPM – 1S | 3.73 | 0.21 | 3.55 | 0.59 | 1.36 |
| RAID-5EE-6×146.8GB@15kRPM – 1SB | 3.97 | 0.28 | 3.72 | 0.72 | 1.28 |
| RAID-5EE-6×146.8GB@15kRPM – 1S | 4.28 | 0.51 | 3.79 | 0.58 | 1.43 |
| RAID-5EE-6×146.8GB@15kRPM – 2S | 5.36 | 0.40 | 4.99 | 0.79 | 1.98 |
| RAID-5EE-6×146.8GB@15kRPM – 3S | 6.14 | 0.51 | 5.66 | 0.69 | 1.88 |
| RAID-5EE-6×146.8GB@15kRPM – 4S | 7.05 | 0.63 | 6.48 | 0.78 | 1.64 |
| RAID-5EE-6×146.8GB@15kRPM – 4S, 1024-sector test size | 6.86 | 0.55 | 6.29 | 0.78 | 1.77 |
| RAID-5EE-6×146.8GB@15kRPM – 5S | 7.75 | 0.72 | 7.07 | 0.86 | 1.60 |
| RAID-10-6×146.8GB@15kRPM – 1S?B | 4.44 | 0.15 | 4.32 | 1.02 | 1.29 |
| RAID-0-6×146.8GB@15kRPM – 1B | 0.43 | 0.01 | 0.45 | 0.10 | 0.17 |
| RAID-5EE-6×146.8GB@15kRPM – 1S?B | 4.37 | 0.08 | 4.30 | 1.11 | 1.23 |
| RAID-5EE-5×146.8GB@15kRPM – 1S?B | 8.00 | 0.28 | 7.74 | 1.51 | 2.05 |
| RAID-0-2x36GB@10kRPM – 1B | 5.17 | 1.29 | 3.89 | 0.67 | 1.07 |
| JBOD-0-1x36GB@10kRPM – 1B, Onboard LSI | 0.34 | 0.04 | 0.32 | 0.04 | 0.19 |

| Setup | Interrupts per Second | CPU Effectiveness | Packets/Second |
|---|---:|---:|---:|
| RAID-0-2×146.8GB@15kRPM, … – 2 | 6903.88 | 1291.11 | 629.38 |
| RAID-0-2×146.8GB@15kRPM – 1B | 5353.16 | 763.13 | 411.99 |
| RAID-0-2×146.8GB@15kRPM – 1 | 5561.51 | 1513.28 | 417.54 |
| RAID-0-2×146.8GB@15kRPM, …, … – 4B | 2161.98 | 870.81 | 1049.38 |
| RAID-0-2×146.8GB@15kRPM, …, … – 3 | 4821.06 | 1199.72 | 841.91 |
| RAID-0-2×146.8GB@15kRPM, RAID-0-4×146.8GB@15kRPM – 1 | 5478.78 | 1492.11 | 416.31 |
| RAID-0-2×146.8GB@15kRPM, RAID-5EE-4×146.8GB@15kRPM – 1 | 1273.59 | 1299.26 | 418.36 |
| RAID-0-2×146.8GB@15kRPM, RAID-5EE-4×146.8GB@15kRPM – 1S | 5587.42 | 1424.44 | 411.12 |
| RAID-5EE-6×146.8GB@15kRPM – 1SB | 7332.83 | 889.47 | 413.65 |
| RAID-5EE-6×146.8GB@15kRPM – 1S | 5627.52 | 1251.87 | 414.72 |
| RAID-5EE-6×146.8GB@15kRPM – 2S | 7267.00 | 1303.97 | 635.57 |
| RAID-5EE-6×146.8GB@15kRPM – 3S | 5148.84 | 1175.41 | 837.93 |
| RAID-5EE-6×146.8GB@15kRPM – 4S | 3862.49 | 1044.00 | 1041.51 |
| RAID-5EE-6×146.8GB@15kRPM – 4S, 1024-sector test size | 3859.17 | 1071.43 | 1046.56 |
| RAID-5EE-6×146.8GB@15kRPM – 5S | 2841.27 | 911.55 | 1239.27 |
| RAID-10-6×146.8GB@15kRPM – 1S?B | 5866.90 | 702.51 | 415.26 |
| RAID-0-6×146.8GB@15kRPM – 1B | 616.43 | 666.05 | 419.56 |
| RAID-5EE-6×146.8GB@15kRPM – 1S?B | 5417.13 | 669.64 | 415.93 |
| RAID-5EE-5×146.8GB@15kRPM – 1S?B | 8079.69 | 365.18 | 3072.45 |
| RAID-0-2x36GB@10kRPM – 1B | 5057.38 | 532.62 | 404.83 |
| JBOD-0-1x36GB@10kRPM – 1B, Onboard LSI | 688.81 | 409.75 | 395.23 |
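If the “CPU Effectiveness” column looks opaque: Iometer computes it as total IOps divided by % CPU utilization, i.e. how many IOps you get per percent of CPU burned, and the rows above reproduce that to within rounding. A quick check against the first row:

```python
# CPU effectiveness = total IOps per percent of CPU utilization.
# The small mismatch vs. the table comes from the rounded 5.14% figure.
def cpu_effectiveness(iops: float, cpu_percent: float) -> float:
    return iops / cpu_percent

# First row: 6636.96 total IOps at 5.14% CPU (table reports 1291.11).
print(round(cpu_effectiveness(6636.96, 5.14), 1))  # ≈ 1291.2
```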

Looking at these charts, it’s pretty apparent that while the prior articles (Part 1 and Part 2) show the RAID card doesn’t match up to the sequential speeds of non-RAID use, under server loads it really shines.

I guess one could say this is a good example of using the right benchmark for the job. Maybe a comparable analogy would be benchmarking 0-60 times for a regular car versus a semi tractor; now hitch a boxcar behind each and run it again, and you get the same kind of reversal Iometer shows here.

For those interested: the Iometer configuration file I used.

As an aside, later today, I should be getting some 15k RPM drives in. This should make for some interesting additions to this mix.