About Disk partitioning

So, you've always wondered why anyone would have a dozen logical volumes or filesystems on a server? And what benefit that brings?

Let’s look at this example output from a live system with a database growing out of control:

Filesystem     Size  Used  Avail  Capacity  Mounted on
/dev/da0s1a    495M  357M    98M       78%  /
devfs          1.0k  1.0k     0B      100%  /dev
/dev/da1s1d    193G  175G   2.3G       99%  /home
/dev/da0s1f    495M   28k   456M        0%  /tmp
/dev/da0s1d    7.8G  2.1G   5.1G       29%  /usr
/dev/da0s1e    3.9G  1.0G   2.6G       28%  /var
/dev/da2s1d    290G   64G   202G       24%  /home/server-data/postgresql-backup

I’ll now simply list the problems that arise from all of this being mounted as one single /home. Note that it would be even worse with one large root disk.

  • /home contains my admin home directory. So I cannot disable the applications, comment out /home in fstab, reboot and do maintenance. Instead, all maintenance on this filesystem has to start in single-user mode.
  • /home contains not just the one PGSQL database with the obesity issue; it also holds a few MySQL databases for web users. Effect: if it really runs full, it’ll also crash the other databases and all those websites.
  • /home being one thing for all applications, I cannot just stop the database, umount, run fsck, and change the root reserve from its default 8% – so there’s a whopping 20GB I cannot _get to_.
  • /home being one thing means I also can’t easily take a UFS snapshot of just the database. Instead the snapshot will cover all the data on this box, meaning a higher change volume and less time to copy it off.
  • /home being the only big, fat filesystem also means I can’t just do fishy stuff and move some data out (oh, and yes, there’s the backup filesystem. Except I can’t use it).
  • PostgreSQL being in /home, I cannot even discern the actual I/O coming from it. Well, maybe DTrace could, but all standard tools that work on filesystem-level metrics don’t stand a chance.
  • PostgreSQL being in /home instead of its own filesystem *also* means I can’t use a dd from the block device plus fsck for the initial sync – instead I’ll run file-level copies using rsync…
  • It also means I can’t just stop the DB, snapshot, start it again and pull the snapshot off to a different PC with a bunch of blazing-fast low-quality SSDs for quick analysis.
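For contrast, here is roughly what the maintenance the points above describe would look like if the database sat on its own filesystem. This is a sketch only – the device name (/dev/da3s1d), mount point (/db/pgsql) and rc.d service name are made up for illustration, not taken from the box above:

```shell
# Sketch: per-filesystem maintenance, assuming a hypothetical dedicated
# filesystem /dev/da3s1d mounted at /db/pgsql for PostgreSQL only.

service postgresql stop      # stop just the database; nothing else lives here

umount /db/pgsql             # now possible, since no other app uses this fs
fsck -y /dev/da3s1d          # offline filesystem check
tunefs -m 2 /dev/da3s1d      # lower the root reserve from the default 8%,
                             # reclaiming most of that "unreachable" space
mount /db/pgsql
service postgresql start

# Initial sync at block level instead of file-level rsync:
# dump -L takes an internal snapshot first, so the fs can stay mounted.
dump -0 -L -f - /db/pgsql | ssh standby 'restore -r -f -'
# Or, with the DB stopped and the fs unmounted, a raw copy:
dd if=/dev/da3s1d bs=1m | ssh standby 'dd of=/dev/da3s1d bs=1m'
```

None of this is possible when everything shares one giant /home.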


I’m sure I missed a few points.

Any one of them is going to cause hours and hours of workarounds.


OK, this is a FreeBSD box, and one not using GEOM or ZFS – I don’t get many options as it stands. So much the worse for me that this is one big, bloated filesystem.


Word of advice:

IF you ever wonder “why should I run LVM on my box”, don’t think about the advantages right now, or about the puny overhead of growing filesystems as you need space. Think about what real storage administration (so, one VG with one LV in it doesn’t count) can do for you when you NEED it.

Simply snapshotting a volume, adding PVs on-the-fly, attaching mirrors for migrations… this should be in your toolbox, and this should be on your systems.
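On a Linux box with LVM, the operations I mean look roughly like this. The VG, LV and device names (vg0, pgdata, /dev/sdb1, /dev/sdc1) are made up for illustration:

```shell
# Snapshot a volume (copy-on-write, with 10G of room for changes):
lvcreate --snapshot --name pgdata-snap --size 10G vg0/pgdata

# Add a new physical volume to the pool on the fly:
pvcreate /dev/sdc1
vgextend vg0 /dev/sdc1

# Migrate data off an old disk, online, no downtime:
pvmove /dev/sdb1 /dev/sdc1

# Or attach a mirror leg to a volume and let it sync in the background:
lvconvert --type mirror -m 1 vg0/pgdata
```

Each of these takes a single command here, versus hours of workarounds on the box above.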


