So, in March and April I set out to build a *home* server that could handle a Ceph lab and would behave mostly like real hardware. That means disks being slow, SSDs being fast, and RAM being, well, actual RAM. Writing to two disks should ideally also not immediately turn into an IO blender just because they reside on one (uncached) spindle.
I think all in all I spent some 30 hours on eBay and in shops finding good hardware for a cheap price.
This is what I gathered:
- Xeon 2680V2 CPU (some ES model) with 8 instead of 10 cores but same 25MB of cache. It’s also overclockable, should I ever not resist that
- Supermicro X9SRL-F mainboard. There are better models with SAS and i350 NICs but I wanted to be a little more price-conservative there
- 8x8GB DDR3 RAM which I recycled from other servers
- 5x Hitachi SSD400M SSDs – serious business, enterprise SAS SSDs.
- The old LSI 9260 controller
- The old WD green disks
The other SSD option had been Samsung SM843T, but the seller didn’t want to give out a receipt. I’m really happy I opted for “legit” and ended up with a better deal just a week later:
The Hitachis are like the big brother of the Intel DC S3700 SSD we all love. I had been looking for those on the cheap for about half a year and then got lucky. At 400GB capacity each it meant I could make good use of VM cloning etc. and generally never look back at moving VMs from one pool to another for space.
I had (and still have) a lot of trouble with the power supply. Those Intel CPUs draw very little power at idle, even in the first stage of boot. So the PSU, while on the Intel HCL, would actually turn off after half a second when very few components were installed. A hell of a bug to track down, since you normally remove components to trace issues.
Why did I do that? Oh, because the Supermicro IPMI gave errors on some memory module, which was OK but not fully supported. Supermicro is just too cheap to write good IPMI code.
I did some benchmarking using 4(!) of the SSDs, and the results were incredible.
Using my LSI tuning script I was able to hit sustained 1.8GB/s writes and sustained 2.2GB/s reads.
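The tuning script itself deserves its own post; to give an idea, here is a hypothetical sketch of the kind of settings it touches, using the stock MegaCli tool for MegaRAID controllers like the 9260 (the binary path and the -LAll/-aAll selectors are placeholders, not taken from my actual script):

```shell
#!/bin/sh
# Hypothetical sketch, NOT the actual tuning script. The MegaCli path and
# the selectors (-LAll = all logical drives, -aAll = all adapters) will
# differ per system.
MegaCli=/opt/MegaRAID/MegaCli/MegaCli64

# Write-back caching on the controller (only safe with a healthy BBU)
$MegaCli -LDSetProp WB -LAll -aAll

# Direct IO instead of routing reads through the controller cache
$MegaCli -LDSetProp Direct -LAll -aAll
```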
After some more thinking I decided to check out RAID5, which (thanks to the controller using parity to calculate every 4th? block) still gave 1.8GB/s reads and 1.2GB/s writes.
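Those RAID5 numbers roughly match the back-of-envelope parity math. A quick sketch (the 1800 MB/s is the measured aggregate write from above; the (n-1)/n factor is generic RAID5 reasoning, nothing controller-specific):

```shell
# Back-of-envelope RAID5 write ceiling: with n drives, one drive's worth
# of every full stripe goes to parity, so roughly (n-1)/n of the raw
# aggregate write bandwidth is left for data.
n=4                 # SSDs in the array
raw_write=1800      # measured aggregate write without parity, in MB/s
est=$(( raw_write * (n - 1) / n ))
echo "estimated RAID5 full-stripe write ceiling: ${est} MB/s"
```

The measured 1.2GB/s sits a bit below that 1.35GB/s ceiling, which is plausible once parity computation and partial-stripe writes get involved.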
Completely crazy performance.
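For anyone wanting to reproduce numbers like these, a sequential throughput test along these lines should do; this is a hypothetical fio invocation, with /dev/sdX as a placeholder for the RAID volume (careful: the write test destroys data on it):

```shell
# Hypothetical fio run against the RAID volume (placeholder device).
# Sequential reads, 1M blocks, direct IO to bypass the page cache.
fio --name=seqread --filename=/dev/sdX --rw=read \
    --bs=1M --iodepth=32 --ioengine=libaio --direct=1 \
    --runtime=60 --time_based --group_reporting

# Same for sequential writes. DESTROYS data on /dev/sdX.
fio --name=seqwrite --filename=/dev/sdX --rw=write \
    --bs=1M --iodepth=32 --ioengine=libaio --direct=1 \
    --runtime=60 --time_based --group_reporting
```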
To get the full RAID5 speed I had to turn on Adaptive Read Ahead. Otherwise it was around 500MB/s, i.e. a single SSD’s read speed.
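With MegaCli, flipping that switch looks something like this (hypothetical invocation; path and selectors are placeholders):

```shell
# Hypothetical: set the read policy to Adaptive Read Ahead on all logical
# drives, all adapters. RA (always read ahead) and NORA (none) are the
# other values for this property.
/opt/MegaRAID/MegaCli/MegaCli64 -LDSetProp ADRA -LAll -aAll
```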
One problem that stuck around was that the controller would not (and still will not) enable the SSDs’ write cache, no matter what you tell it!
This is a huge issue considering each of those SSDs has 512MB(ytes) of well-protected cache.
The SSD is on LSI’s HCL for this very controller, so this is a bit of a bugger. I’ll get back to this in a later post, since by now I *have* found something fishy in the controller’s output that might be the cause.
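For reference, this is the kind of “telling it” I mean, sketched with MegaCli (hypothetical path and selectors):

```shell
# Hypothetical: request the drives' own write cache to be enabled, then
# read the property back to see whether the controller actually applied it.
MegaCli=/opt/MegaRAID/MegaCli/MegaCli64
$MegaCli -LDSetProp EnDskCache -LAll -aAll
$MegaCli -LDGetProp DskCache -LAll -aAll
```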
Nonetheless: especially in a RAID5 scenario this will have a lot of impact on write latency and IOPS.
Oh, generally: this SSD model and latency? Not a large concern 🙂