With a lot of help from someone on #ceph I managed to remove all the errors that I made while copying the easy configuration from the ceph wiki.
I had not created some mount points and had forgotten the entries for the MDS nodes!
My test setup consists of six Debian Lenny VMs, one per disk spindle in my Xen dom0s; each got one 100GB LV to read from.
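For reference, the kind of entries I had been missing look roughly like this (a sketch in the style of the old mkcephfs-era ceph.conf; the exact section syntax depends on the ceph version, and the second hostname is illustrative):

```ini
; mds section I had forgotten entirely
[mds.a]
        host = waxu0026
; each osd also needs its data directory created and mounted beforehand
[osd.0]
        host = waxu0301        ; illustrative hostname
        osd data = /data/osd0  ; this mount point must exist before starting
```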
waxu0026# ceph -s
10.07.17 00:06:43.441934 b73d3b70 monclient(hunting): found mon0
10.07.17 00:06:43.457934 pg v393: 1584 pgs: 1584 active+clean; 4084 MB data, 8176 MB used, 592 GB / 600 GB avail
10.07.17 00:06:43.478316 mds e5: 1/1/1 up, 1 up:standby, 1 up:active
10.07.17 00:06:43.478687 osd e15: 6 osds: 6 up, 6 in
10.07.17 00:06:43.479115 log 10.07.16 23:13:45.107332 mon0 192.168.19.26:6789/0 11 : [INF] mds0 192.168.19.241:6800/3785 up:active
10.07.17 00:06:43.479735 mon e1: 2 mons at 192.168.19.26:6789/0 192.168.19.241:6789/0
10.07.17 00:06:43.499271 b73d46d0 b73d46d0 strange, pid file /var/run/ceph.pid has 7970, not expected 8354
waxu0307:/ceph# df -h .
Filesystem Size Used Avail Use% Mounted on
192.168.19.26:/ 600G 7.7G 593G 2% /ceph
Write performance was horrible as I had not created journaling volumes.
I suspect even later these will be a performance hotspot no matter what.
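If I redo this, each OSD would get its journal on a dedicated small LV instead of a file on the data filesystem, roughly like this (paths illustrative; key names per the old-style ceph.conf, which may differ in newer versions):

```ini
[osd.0]
        host = waxu0301                       ; illustrative hostname
        osd data = /data/osd0
        osd journal = /dev/vg0/osd0-journal   ; dedicated LV, avoids double-writing through the data fs
```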
Read performance was low for a single read (53MB/s). I very much hope that this will scale well above a single disk's performance.
Here are my test files:
waxu0026# ls -l /ceph/
-rw-r--r-- 1 root root 1073741824 Jul 16 23:21 lala
-rw-r--r-- 1 root root 1073741824 Jul 16 23:39 lala2
-rw-r--r-- 1 root root 1073741824 Jul 17 23:54 lala3
-rw-r--r-- 1 root root 1073741824 Jul 17 00:06 lala4
The good thing is that even with this first-try-ever setup it scales up very well when multiple nodes are involved: I get roughly 100-110MB/s there. Not absolutely perfect, considering the LACP trunks should let cross-node traffic go above a single GigE port's performance.
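The numbers above came from plain sequential dd reads of the lala files (the parallel case ran on separate client nodes). A self-contained sketch of that kind of test, using a temp dir and small files so it runs anywhere; on the cluster the path would be the /ceph mount and the files the 1GB ones listed above:

```shell
# Stand-in for the /ceph mount; replace MNT with /ceph on a real client.
MNT=$(mktemp -d)
for f in lala lala2; do
    dd if=/dev/zero of="$MNT/$f" bs=1M count=64 2>/dev/null
done
# Single stream (the 53MB/s case); dd prints throughput on stderr.
dd if="$MNT/lala" of=/dev/null bs=1M
# Two parallel streams (on my setup these ran on two separate clients).
dd if="$MNT/lala" of=/dev/null bs=1M &
dd if="$MNT/lala2" of=/dev/null bs=1M &
wait
SIZE=$(stat -c %s "$MNT/lala")
rm -rf "$MNT"
```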