I’m still working on my kitchen box, which needed substantially more RAM and disk space. The disk space is so I can run a Bacula storage daemon on it for offsite backups of other servers, and I also want to do some more tests with Cobbler, which take up quite some space too. Faster networking (as in, above 4MB/s scp speed) was also needed, and ideally I’d be able to run HVM domUs in the future.
OK, to be honest I don’t really have that many reasons for Xen HVM:
- FreeBSD VMs, since FreeBSD as an HVM guest is doing realllllly well – you get PV drivers without the bad maintenance of the freebsd-xen PV kernel
- Running a VM with D3D would let me run my ROSE Online game shop without the PC being on. Not sure if that would really work 🙂
- Being able to test really crazy hardware is easier at home too, like SolarFlare NICs that can create 2048 Xen netback devices for PV domUs. (Stuff like this is why you’ll still see me laughing a lot when people think that KVM is anywhere comparable to Xen.)
The memory modules from the old box (2GB DDR2 533 Kingston) won’t work in the Intel board; it just doesn’t come up. I’ll have to get some replacement for the 4GB I’d given away to my GF 🙂
8GB total would really be needed. Otherwise I’ll have to ditch the whole thing and reinstall it inside ESXi to be able to overcommit. (noes!)
The Transcend Flash module arrived and it’s working fine now.
A horrible lot of kickstart hacking later, I can now also safely pick the OS install disk:
- HW vendor wins first (certain servers are flagged as having USB and will never install anyplace other than “their designated media”)
- Disk model wins next, if it’s one of the flash devices
- a disk smaller than 16GB can win
- find out if we’re talking hw or sw raid
- only mess with the first 2GB of a disk in all cases
- You can define a limit in size
- or you can exclude harddisk models; either way they will never be touched
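The picking order above boils down to a few lines of pattern matching. Here’s a minimal %pre-style sketch of the idea – the inventory format, the `TS` flash-model prefix and the `EXCLUDED` marker are all made up for illustration, not what my actual kickstart uses:

```shell
#!/bin/sh
# Pick the OS install disk from an inventory file with lines of:
#   "<device> <model> <size_gb>"      (format is an assumption)
pick_install_disk() {
    inv=$1
    # a flash-module model wins outright (excluded models never count)
    flash=$(awk '$2 !~ /EXCLUDED/ && $2 ~ /^TS/ { print $1; exit }' "$inv")
    if [ -n "$flash" ]; then
        echo "$flash"
        return
    fi
    # otherwise the first disk smaller than 16GB can win
    awk '$2 !~ /EXCLUDED/ && $3 < 16 { print $1; exit }' "$inv"
}
```

In the real kickstart this would then go on to only touch the first 2GB of whatever disk won, per the rules above.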
…and here, too.
Here you can see them replaced with stock intel cables:
Also visible: the FLOPPY port. While it has no IDE port to hook up a spare DVD drive for when you’d want to install from DVD, it actually comes with a floppy connector. I don’t really get it.
While looking for some more docs for my PXE installing I actually found Intel’s PXE manuals; I should have looked at those many years earlier.
For example, if you want to set up iSCSI boot there’s a program that can set the iSCSI target to use. Or even FCoE, according to the doc. In case anyone is really using that instead of FC. tehehe.
Here you can see the whole thing back in its place, well integrated with the kitchen utilities, but now with 4 disk slots instead of one.
What was left was all softwareish stuff, like for example migrating from the old RAID1 to RAID10.
A sad experience was finding that there are plenty of howtos that actually build something more like a concatenation of two RAID1s instead of a fully striped/mirrored RAID10.
On the other hand, without reading so much about it I wouldn’t have figured it out on my own.
In the end it’s just been this command:
[root@localhost ~]# mdadm -v --create /dev/md2 --level=raid10 --raid-devices=4 --layout=f2 \
    /dev/sdd1 missing /dev/sde1 missing
mdadm: chunk size defaults to 64K
mdadm: /dev/sdd1 appears to contain an ext2fs file system
    size=1839408K  mtime=Tue Aug 30 00:46:17 2011
mdadm: /dev/sde1 appears to contain an ext2fs file system
    size=1839408K  mtime=Tue Aug 30 00:46:17 2011
mdadm: size set to 1465135936K
Continue creating array? y
mdadm: array /dev/md2 started.
I had unfortunately split my original RAID beforehand, and with no Bacula backup at hand I decided to re-establish the RAID1 on the two old disks first and do the move to RAID10 tomorrow.
I’ll probably not be using pvmove, since I’ve already had bad experiences with it, and since I get more than one hit per week in the logs here for “pvmove data loss” I doubt it’s just me, and pretty much assume that pvmove on Linux really sucks as much as I always say it does.
That means a few hours of dd-based volume migration instead. Quite a waste of time, but I figure I still have scripts around for it from the last time.
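Those scripts are basically one dd per volume. A minimal sketch of the loop, with plain files in /tmp standing in for the real /dev/vg_old/&lt;lv&gt; and /dev/vg_new/&lt;lv&gt; device paths (the VG and LV names are assumptions) so it can run anywhere:

```shell
#!/bin/sh
# dd-based volume migration: one copy per logical volume, fsync'd at the end.
OLD=/tmp/vg_old; NEW=/tmp/vg_new     # stand-ins for /dev/vg_old, /dev/vg_new
mkdir -p "$OLD" "$NEW"
# demo payload standing in for the old volumes
for lv in root var backup; do
    dd if=/dev/urandom of="$OLD/$lv" bs=1M count=4 2>/dev/null
done
# the migration loop itself (with real LVM: lvs --noheadings -o lv_name vg_old)
for lv in root var backup; do
    dd if="$OLD/$lv" of="$NEW/$lv" bs=1M conv=fsync 2>/dev/null
done
```

The target LVs would of course have to exist and be at least as large as the source before any of this runs.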
After that I can move the disks from the old to the new array. (A visit to mdadm.conf will be in order to make sure it’s not bringing up something old from the past on the next reboot.)
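Filling the two “missing” slots later should be nothing more than this (device names assumed; each --add drops a disk into a free slot and kicks off the resync):

```
mdadm /dev/md2 --add /dev/sdb1
mdadm /dev/md2 --add /dev/sdc1
cat /proc/mdstat        # watch [U_U_] turn into [UUUU]
```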
LVM filters, LVM filters!!!
During testing of one of the less useful howtos I wanted a small reality check at one point: had I already lost data? And I almost lost data right then. I used vgck, and as a result it switched paths to the NEW raid device, because one member disk still had the old LVM header intact. So, for a moment, imagine now running pvmove from /old/raid to /new/raid while the data is being read from /new/raid … help! 🙂 Or running pvcreate on /new/raid … that might not even be possible.
[root@localhost ~]# vgck
Found duplicate PV I3PllpQdYviurqaON12wuiV2FEEvScEm: using /dev/md4 not /dev/md1
We learn two things here:
- Check your LVM filters or it will f**k up.
- always wipe all metadata areas (MD, LVM and ext) if you recycle storage
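For the filter part, something along these lines in /etc/lvm/lvm.conf keeps LVM from ever scanning the raw member disks again (the accept list is an assumption, adjust to your own arrays), and when recycling a disk the old superblocks want killing first:

```
# /etc/lvm/lvm.conf: only scan the md arrays, reject everything else
filter = [ "a|^/dev/md.*|", "r|.*|" ]

# recycling a member disk: wipe MD and filesystem/LVM metadata (device name assumed)
mdadm --zero-superblock /dev/sdX1
wipefs -a /dev/sdX1
```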
After this whole chaos I had my old RAID1 and the new RAID10:
Personalities : [raid1] [raid10]
md2 : active raid10 sde1 sdd1
      2930271744 blocks 64K chunks 2 far-copies [4/2] [U_U_]
md1 : active raid1 sdb2 sdc2
      1462196032 blocks [2/1] [U_]
      [>....................]  recovery =  2.2% (32834176/1462196032) finish=983.6min speed=24215K/sec
md0 : active raid1 sdb1
      2939776 blocks [2/1] [_U]
In the monitoring it was quite easy to see that I’ll have to change something about the disks though: they are getting quite warm. The first thing I did was turn the server around, which seems to have worked for the 2 new Seagate disks. I’ll have to replace them with WD Green disks quite soon, as those are a lot more stable temperature-wise.
I also set speed limits for the mdadm resync to further cool things down.
This is done using the proc filesystem and the effect is quite visible.
One stupid detail is that the md resync speed didn’t go back up when I raised the limit back to 100MB/s.
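For reference, these are the knobs I mean (values in KB/s per device; mine are just what felt reasonable, not gospel). The md documentation says an active resync mostly creeps along near speed_limit_min whenever there is other I/O, which would explain why raising only the max didn’t bring the speed back – the min needs a nudge too:

```
# throttle the resync while the disks run hot
echo 10000  > /proc/sys/dev/raid/speed_limit_max
# later: raise the minimum as well to get full speed back
echo 50000  > /proc/sys/dev/raid/speed_limit_min
echo 200000 > /proc/sys/dev/raid/speed_limit_max
```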
Performance-wise the difference between those disks is HUGE: the Seagates read up to 135MB/s, while the WD Green seems to top out around 80MB/s. But for a home box in a RAID10 this should do quite OK, even when running virtual machines over the network via CIFS or iSCSI.
Otherwise one could consider using --layout=f4 to trade disk space for better r/w performance. But since performance has gone up many times over with this upgrade, I’ll just be happy!
What’s left after the RAID is done?
- Cable up so I can use LACP (the bond and bridges are already configured)
- Add noise shielding
- and a dust filter