To start with:
MooseFS and LizardFS are the most forgiving, fault-tolerant filesystems I know.
I’ve been working with Unix and storage systems for many years, and I like running things stable.
What does running a stable system mean to you?
To me it means I’ve taken something to its breaking point and learned exactly how it behaves there. Suffice it to say, I then don’t let it get to that point again.
That, put very bluntly, is stable operation.
If we were dealing with real science and real engineering, there would be a sheet of paper indicating tolerances. But the IT world isn’t like that, so we have to find out for ourselves.
I’ve done a lot of very mean tests, first against MooseFS and later against LizardFS.
My install is currently spread over three Zyxel NAS boxes and one VM. Most of the data is on the Zyxel boxes (running Arch), one of which also has a local SSD using EnhanceIO to drive down latency and CPU load. The VM is on Debian.
The mfsmaster runs on a single Cubietruck board that can just barely handle the compute load.
The setup is sweating, but it has handled a few migrations between hardware and software setups.
And, this is the point: it has been operating rock-solid for over a year.
How I finally got to the breaking point.
A few weeks back I migrated my Xen host to Open vSwitch. I’m using LACP over two GigE ports, which together serve a bunch of VLANs to the host. The reason for switching was to get sFlow exports, plus the cool feature of running 802.1q VLANs directly into virtual machines.
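For reference, a setup along those lines can be sketched with ovs-vsctl. This is only a sketch of the idea, not my actual config: the bridge, interface, and VM port names and the sFlow collector address are all placeholders.

```shell
# Hypothetical sketch of the Open vSwitch side of such a setup.
# All names (br0, eth0/eth1, vm1-eth0) and addresses are made up.

# Bond two GigE ports with LACP on a bridge that carries the VLANs
ovs-vsctl add-br br0
ovs-vsctl add-bond br0 bond0 eth0 eth1 lacp=active

# Hand a tagged 802.1q VLAN straight to a VM's virtual interface
ovs-vsctl add-port br0 vm1-eth0 tag=42

# Export sFlow samples to a collector
ovs-vsctl -- --id=@s create sflow agent=br0 target=\"192.0.2.10:6343\" \
    sampling=64 polling=10 -- set bridge br0 sflow=@s
```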
After the last OS upgrade (the system had been crashing *booh*) I had an openvswitch bug for a week or two.
Any network connection would initially not work: every ping would drop its first packet, and then things would work fine.
In terms of my shared filesystem, this affected only the Debian VM on that host, which held just 1TB of the data.
I’ve got most of my data at goal 3, meaning two of the three copies were not on that VM.
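For context: the goal is the per-file replication count in MooseFS/LizardFS, set and queried per path. A minimal sketch, with example paths that assume an mfs mount like mine:

```shell
# Set a replication goal of 3 on a whole tree, then verify it.
# Paths are examples only; run this against an actual mfs mount.
mfssetgoal -r 3 /mfsmount/cluster/www
mfsgetgoal /mfsmount/cluster/www

# Show how many copies each chunk of a file actually has right now
mfscheckfile /mfsmount/cluster/www/somefile
```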
Now see for yourself:
root@cubie2 /mfsmount # mfsgetgoal cluster/www/vhosts/zoe/.htaccess
root@cubie2 /mfsmount # mfscheckfile cluster/www/vhosts/zoe/.htaccess
chunks with 0 copies: 1
I don’t understand how this happened:
- the bug affected only one of four mfs storage nodes
- the file(s) had a goal of 3
- the file wasn’t touched from the OS at all during that period
Finally, don’t run mfsfilerepair on a file with 0 copies left: with no surviving copy to work from, it can only fill the missing chunks with zeros. I was very blonde – but at that point it also didn’t matter 🙂
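A cheap guard against repeating my mistake might look like this. It assumes mfscheckfile reports lost chunks with a line like the one shown above (“chunks with 0 copies: …”); the path is just an example:

```shell
# Hypothetical pre-flight check: only run mfsfilerepair when every
# chunk still has at least one copy. Repairing a 0-copy file just
# zero-fills the missing data.
f=cluster/www/vhosts/zoe/.htaccess
if mfscheckfile "$f" | grep -q 'chunks with 0 copies'; then
    echo "NOT repairing $f: some chunks have no surviving copy" >&2
else
    mfsfilerepair "$f"
fi
```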