On Linux you have a wide range of available filesystems – making a choice is never easy.
I just wanted to summarize what I’ve been telling my class attendees over the last years, what I’ve seen in live setups, and what I’m actually DOING.
- EXT4 – generally, I DO hate the ext filesystems. To me they're hyped by people who will simply blame your hardware once you've lost your data. My rule of thumb is that I only use ext4 on recent Linux kernels where the block_validity option is available. Beyond that, I'll also set the following options:
- errors=panic – if we hit a read/write error that is persistent or causes a journal abort, just ZAP the box.
- data=journal or data=ordered, depending on how important the server is. Full data journaling has cost me up to 30% in performance, but it's a choice you can make.
- checktime / check interval – set both to 0. I'd rather trust the checksumming, and I wouldn't resist a full fsck once a year anyway.
- possibly also make the journal bigger. Ideally you'd be able to use an external journal; I recommend against it because you can never trust devs, and it would be no fun to find that your rescue system's fsck doesn't support the external journal.
- journal_checksum is a lot more important, but also a work in progress, especially if your kernel still starts with 2.6. Without this option, ext doesn't really notice shit about aborted writes or a corrupted journal. In some versions it's also plain default. It's a mess.
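Pulling the ext4 tweaks above into one place, here's a sketch of what that looks like in practice. The device, mount point, and journal size are placeholders, not recommendations; adjust to your own setup.

```shell
# Placeholder device and mount point; adjust for your setup.
# Bigger-than-default journal at mkfs time (size is in megabytes):
mkfs.ext4 -J size=400 /dev/sdX1

# Zero out the mount-count and time-based fsck triggers:
tune2fs -c 0 -i 0 /dev/sdX1

# /etc/fstab entry pulling the mount options together
# (data=journal is the slow-but-safe variant; use data=ordered
# on boxes where the performance hit hurts):
#   /dev/sdX1  /data  ext4  errors=panic,data=journal,block_validity,journal_checksum  0  2
```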
- XFS – I've noticed this is what I actually use when it's my own system, meaning I have the highest trust in XFS. This is kind of funny, since it's a 1996 filesystem with a focus on performance. So far we've stayed friends. If the system is a 2.6 one, I'll definitely go for XFS. XFS has also turned out to be the most stable for the Ceph devs in their benchmarks, so it's not just my gut; it's also quite proven where others have indeed failed. For production use on RHEL, there's the option to get the XFS feature channel and thus run XFS with Red Hat's support.
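There isn't much to set up for XFS, which is part of the appeal. A minimal sketch, with device and mount point again being placeholders:

```shell
# mkfs.xfs defaults are sane; no special flags needed.
mkfs.xfs /dev/sdX1

# /etc/fstab – noatime is my own habit, not an XFS requirement;
# inode64 matters mostly on large (>1 TB) filesystems on older kernels:
#   /dev/sdX1  /srv  xfs  noatime,inode64  0  2
```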
- JFS – JFS is what AIX users know as JFS2, and it's the most modern of all the production-grade filesystems in my comparison here. It was new and shiny in the early 2000s, I think around 2004. It has proven superior in small-file performance, so if hundreds of thousands of files in a directory is something that comes into your use case, JFS is something to look at. The problem is that JFS is badly integrated in most distros. If you find out it's the best performer for you and you *need* it in production, my advice is to get your OS support via IBM and let them deal with it.
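If you want to check whether the small-file case actually matters for your workload, here's a rough sketch of that kind of benchmark: create 100k empty files in one directory and time it. Run it against a scratch mount of each candidate filesystem and compare the numbers; it's a crude probe, not a proper benchmark.

```shell
#!/usr/bin/env bash
# Crude small-file workload probe: 100k file creations in one dir.
dir=$(mktemp -d)

time for i in $(seq 1 100000); do
    : > "$dir/f$i"       # ':' is a no-op; the redirection creates the file
done

ls "$dir" | wc -l        # → 100000
rm -rf "$dir"
```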
- VxFS – this is what you commonly see used in serious environments that care about integrity and performance. It's the most scalable and powerful of the lot and has the most features (heh, btrfs, go cry), but it DOES COST MONEY. If you might have use for extra features like split-mirror backups on a different host etc., then it is a good choice, and the price is acceptable for what you're getting.
Old distros like RHEL/CentOS/OEL(*) or Debian: Consider XFS.
New distros, and you want a somewhat standard setup: Consider EXT4, but _with_ the bells and whistles.
ZFS / Btrfs are not included on purpose. If you think you can already put your data on those, then that's fine for you, but not for everyone. (Of course I run them for testing… silly.)
VxFS – cool for your prod servers. If you are dealing with real data (say, a telco's billing, or other places where they move a Netflix's worth of yearly revenue each day), you will most probably end up migrating to VxFS in the long run. So you might as well just start with it…
If it's my system, my data – I just grab XFS. The main reason to pick something different has usually been other people who might need to handle an error and who don't know anything but fsck.
Running Ceph? I just grab XFS, anything else is too shady. Here's one of many, many similar experiences:
23:42 < someguy> otherguy: yes, I as well with BTRFS. I moved to XFS on dev.
Prod has always been XFS.
If it’s a prod system with data worth $$$$$$? I’d not think about anything but VxFS.