Instead of a New Year's resolution* I've drawn up a list of things to work on next. Call them milestones, achievements, whatever.
- I’ve already cleaned up my platform, based on Alpine Linux now.
- IPv6 switchover is going well but is not a prime concern. Much stuff just works, other stuff is heavily broken, so it's best not to run into a wall.
- Bacula: I've invested a lot of time into my backup management routine again. This paid off and made it clear it was stupid to decide per-system which VMs to back up and which not. If you want reliable backups, just back up everything and be done with it. Side quests still available: splitting catalogs, and wondering why there is no real operations manual. (Rename a client? Move a client's backup catalog? All of this is still at a level I'd call grumpy make-do cleverness: "You can easily do that with a script" – yeah, but how come the prime open-source backup tool doesn't bring along routine features that are elsewhere handled with ONE keypress? F2 to the rescue.)
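For the record, the "easily do that with a script" client rename looks roughly like this. A sketch, assuming a PostgreSQL catalog; the client names are made up, and while current Bacula catalogs keep the name in the Client table, verify against your schema and back up the catalog first:

```shell
# Hypothetical client rename: Bacula has no one-keypress rename, so you
# edit the Director config and patch the catalog by hand.
old="oldbox-fd"; new="newbox-fd"
sql="UPDATE Client SET Name = '$new' WHERE Name = '$old';"
echo "$sql"                       # review the statement first
# then e.g.:  echo "$sql" | psql bacula     (PostgreSQL catalog)
# ...and rename the matching Client resource in bacula-dir.conf, then reload.
```

Exactly the kind of thing that should be one built-in command instead of a hand-rolled script.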
- cfengine: This will be my big thing over the next three weeks, at home and on holiday. Same goal: really coming to grips with it. Over the last years I've tried Puppet, liked Chef but never used it, and glanced at Salt. Then I skipped all of them and decided that Ansible is good for the easy stuff, and for the not-so-easy stuff I want the BIG (F) GUN, aka CFEngine.
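To have something concrete to start the dive from, a minimal self-contained CFEngine 3 policy looks roughly like this (the managed path is just an example):

```
# Minimal CFEngine 3 policy: promise that a file exists.
body common control
{
  bundlesequence => { "hello" };
}

bundle agent hello
{
  files:
    "/tmp/managed_by_cfengine"
      create => "true";
}
```

Run it with `cf-agent -f ./hello.cf` and the agent converges the system toward the promised state on every run.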
- Ganeti & job scheduling: While cleaning up the hosting platform I noticed I've missed a whole topic automation-wise: scheduling VM migrations, snapshots etc. ahead of time. A friend is pushing me towards Ganeti and it sure fills a lot of the gaps I currently see, but it doesn't scale up to the larger goal (OpenNebula virtual datacenters). I'll see if there is a reasonable way of picking pieces out of Ganeti. Still, the automation topic stays unresolved. There is still no powerful OSS scheduler – the existing ones are all aimed at HPC clusters, which is very easy turf compared to enterprise scheduling. So it seems I'll need to come up with something really basic that does the job.
- Confluence: That's an easier topic. I'm working on completing my Confluence fu so that I'll be able to make my own templates and use it really fast.
What else… oh well, that was quite a bit already 😉
Otherwise I was late (in deciding) on one project that would have been a lovely start, and turned down two others because they were abroad. Being at the start of this new (self-employed) career, I'm feeling a reasonable amount of panic. Over the weekends it usually turns into a less reasonable one 😉
But I'm also cheerful about being able to focus on building up my skills, and I still think it was the right decision. I went into the whole Unix line of work by accident but have loved it ever since: "if it doesn't work you know it's your fault" – a maybe stale, yet basically bug-less environment where you concentrate on issues that come up in the interactions of large systems instead of bug after bug after bug. (See the platform makeover above – switching to Alpine Linux has been so much fun for the same reason.)
My website makeover is in progress and I’m happy to know I’ll visually arrive in this decade-2.0 with it soon.
Of course I've also run into some bugs in DTC while trying to auto-deploy a WordPress install. DTC is the reason for the last Debian system I'm keeping. Guess who is REALLY at risk of being deprecated now 😉
*not causing doomsday worked for 2012 though
This kept me a little busy on Friday night: a long-running DDoS hammering at my server, specifically at the VPS subnet, not caring whether the IPs were even allocated.
I reported it to my ISP almost immediately, but haven't gotten an answer so far.
At some point I figured this (I guess some few hundred kpps) was just beyond what I could fix on my own, and that this, after all, had not been my weekend plan.
I throttled all traffic to somewhere around 2KB/s and went off to buy Batman Arkham City instead.
This is a weekly RRD that averages the numbers down, but that makes for a better comparison. The small spikes are daily backups, a few GB give or take. On the long green one you'll see how traffic went down after throttling, and you can see it took a full day till the attack finally wore out.
When I looked there was about 5MB/s of incoming SYN with all kinds of funny options, and around 5MB/s of useless ICMP replies from my box. Gotta love comparing this to FreeBSD boxes, which simply auto-throttle such an attack correctly…
- Syncookies are not optional, you WANT them enabled.
- Your kernel will reply to anything it feels responsible for; that's why I had to deal with the many MBs of ICMP replies for the unallocated IPs under attack.
- Nullrouting unused IPs was the most helpful thing I did.
- Throttling was the second most helpful, just next time it needs to be a lot more specific.
- iptables & tc syntax is a complete nightmare compared to any router OS. I wonder what they took before designing their options. Every single thing they can do is twisted until it's definitely non-straightforward.
- Methodically working on shapers and drop rules was the wrong thing to do! Either have them prepared and ready to enable, or skip it and look at more powerful means right away. If someone is throwing nukes at you, then don’t spend the last minute setting up your air defences. 🙂
- Enabling the kernel ARP filter might be the right thing to suppress unwanted responses – or it might break VM networking.
- The check_mk/multisite idea of running quite a few distributed monitoring systems is great. Even when I lost livestatus connectivity to the system it still DID do the monitoring, so once I had reasonable bandwidth again all the recorded data was there to look at.
- IMO this is even more crucial with IDS logs. It's very rare, but there are cases where a big nasty DDoS is just used to hide the real attack.
- It feels like a smart move to plan for real routers on the network. Of course, that has certain disadvantages on the "OPEX" side of things. I got the routers, but rack units are not free.
- If you see a sudden traffic spike and spend hours trying to find a software bug or a hacked system, you might be looking at a DDOS probe. Look at this, recorded roughly two weeks earlier:
I noticed this because I already had quite well-tuned traffic monitoring, using the ISP's standard tools. Even then my gut had been telling me this was someone probing the target's performance etc. prior to a real attack.
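The helpful steps from the list above can be sketched roughly like this. Everything here is illustrative: the subnet and interface are made up, the 16kbit rate approximates the ~2KB/s throttle from the post, and `run()` only echoes the commands so nothing is applied by accident:

```shell
# Dry-run wrapper: echoes instead of executing. Swap the body for "$@"
# (and run as root) to apply the commands for real.
run() { echo "+ $*"; }

# 1. SYN cookies are not optional:
run sysctl -w net.ipv4.tcp_syncookies=1

# 2. Null-route the unallocated part of the attacked subnet (addresses made up):
run ip route add blackhole 192.0.2.128/25

# 3. Throttle inbound traffic with an ingress policer on eth0:
run tc qdisc add dev eth0 handle ffff: ingress
run tc filter add dev eth0 parent ffff: protocol ip prio 1 u32 \
    match ip src 0.0.0.0/0 police rate 16kbit burst 10k drop flowid :1
```

Having something like this prepared and ready to enable is exactly the "don't set up your air defences at the last minute" point.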
And, finally: I guess I've now lost more sleep to playing Batman; I even forgot I wanted to go to a party on Saturday. Those damn sidequests 🙂
The hardware in this picture is uncommon even for a tech geek like me.
I remember when we upgraded a cell in an HP SuperDome to 96GB in 2004, and now this amount of memory is sitting on my desk.
The two Opterons (12-core, 2.3 or 2.4GHz) would of course love 4 more DIMMs with this, to be in full interleaved quad-channel mode. (That way they can hit 50GB/s of memory throughput, which is far beyond any normal PC standard and nothing you'll need at home any time soon…)
Tomorrow these will all move into their new home, and very soon (I hope) this will be the home of a few very lucky VPS customers, since Xen *does* of course allow "bursting" into unused memory. That means the first guy will start his 1GB contract with 45-ish GB (NUMA beware) until the second guy moves in.
I hope this concept won't end with users trying to kick each other off the server to get more resources 🙂
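The "bursting" setup maps onto two lines of a classic xm-style domU config, roughly like this (the numbers are purely illustrative, not an actual customer config):

```
# Hypothetical domU config fragment: guaranteed 1GB at boot,
# allowed to balloon up toward the free host memory.
memory = 1024     # MB at boot (the contract size)
maxmem = 47104    # MB ceiling while the host is otherwise empty
```

The guest boots with its contracted memory and can be ballooned up to `maxmem` while the host has RAM to spare, then squeezed back down as neighbours move in.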
There was one SQL injection and some more small bugs in DTC that needed immediate attention. I'm kinda happy, because these finally pushed a more current version of DTC into Debian. The upgrade would have gone flawlessly, but suddenly I couldn't use the admin login anymore. Turns out I still had a password popup open in a different browser tab, which meant I didn't get one in the tab I was in.
I was just getting afraid there would be an issue in mod_security, but no, it's doing its job like a champ!
More info / advisory at:
Another update of DTC in GPLHost repository will be made to fix the
issue (probably version 0.32.11), but I do not plan to fix the Lenny
version for a so tiny issue (without much consequences) that is easily
fixable by hand.
I was thinking "What the fuck, they're not fixing it?" for 30 minutes until I understood: the stats will be broken unless the patch is applied. Reverse check: the security issue is gone if your stats are gone, too.
I just found a very powerful option in DTC in the menu for renewals management:
I think this proves you should always pay your bills if your hoster is using DTC.
Last night I was running some more benchmarks to verify that the InfiniBand links are stable, and to check whether there is any negative impact when you add a 4xSDR (10Gbit) node to the other 4xDDR (20Gbit) nodes.
I was mostly looking at the RDMA bandwidth in connected mode, as this is what should apply to GlusterFS. I turned off the firewall as usual, though I think it technically doesn't matter with RDMA.
I noticed when changing the iterations in ib_rdma_bw to 200000 the bandwidth average displayed would drop from 2.8GB/s down to 40-50MB/s.
What had happened?
I decided to run multiple tests over all the connections (A to B, B to C, A to C, …) and found the error kept coming up once I ran a test longer than the default.
So it was either a bug in ib_rdma_bw or my switch. I found it unlikely to be the switch: at those data rates an error should show almost immediately, like within 20GB, not after a few hundred.
Turns out there was an overflow in ib_rdma_bw. Problem solved 🙂
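The overflow is easy to reproduce with shell arithmetic. This assumes ib_rdma_bw moves 64 KiB messages and totals the bytes in a 32-bit counter (both assumptions on my part, but they match the symptom):

```shell
# 200000 iterations of 64 KiB messages overflow a 32-bit byte counter.
msg=65536
iters=200000
total=$((msg * iters))            # ~13.1 GB really transferred
wrapped=$((total & 0xFFFFFFFF))   # what a wrapped 32-bit counter reports
echo "transferred $total bytes, counter shows $wrapped"
```

With only a few hundred MB "counted" over the full test duration, the displayed average collapses from GB/s to double-digit MB/s, just like observed.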
That's an older picture of the Cisco SFS3504 switch / gateway:
Right now there are 3 InfiniBand cables going in and no gigabit cable coming out.
I disabled the gateway function until I know how to use InfiniBand partitions correctly. They'll be mapped onto Ethernet VLANs so that non-IB hosts can access them via IPoIB, too. But when I just enabled this without using VLANs etc., the IB hosts would see their own and foreign IPs twice – via IB and Ethernet. A lot of chaos resulted 🙂
I did that yesterday and want to help anybody else trying it.
If you try, some of the following issues should be expected:
- Grub2 can’t handle the /boot volumes that worked with grub. It tries to embed itself right behind the partition start and somehow can’t, although grub is doing just fine…
- Grub2 will at times not be able to run update-grub.
- You'll have to manually install the new Xen kernels & utils. (that's fixable)
- Xen "Default Utils Path" handling will break until you've installed all Xen pieces (a Debian kludge). (that's fixable)
- If you created a /usr/bin/pygrub symlink to work around idiocy, adjust it for Xen 4. (that's fixable)
- You won't have NIC firmware even though you have non-free in your apt sources.list (firmware-realtek is the package you need). (that's fixable)
- Grub2 is misconfigured by default and will not boot into Xen. Look at the ordering in /etc/grub.d: you have to make sure your Xen kernel (20_linux_xen) is called before the standard Linux kernel (10_linux). Doesn't that rock? Over 30kB of Grub2 config scripts, and in the end you have to rename files to work around logic failures.
- Setting GRUB_CMDLINE_XEN="noapic" in /etc/default/grub has no effect.
- Hetzner's vKVM really rocks, but you can't avoid noticing that KVM is still a piece of crap: Xen can't boot up in it due to buggy hardware emulation (APIC bugs), which means you can't look into Xen boot issues.
- Unless you're a real genius you might want to hardcode a non-automagic boot entry in grub.cfg and default to that. (heavily recommended)
- If you start/stop xendomains and it uses xm save to suspend the VMs a few times, Debian lenny VMs might hang due to clock-backstep issues. (fixable: turn off suspend)
- I couldn't find a trace of tmem – might that be due to the pv_ops dom0 kernel? Does it not have any *features*? (not fixable unless you want to use a non-distro kernel)
- xm new (the proper way of creating VMs) is broken.
- If you get an incomprehensibly stupid error upon xm create of an HVM domU, install the qemu drivers (xen-qemu-dm-4.0). (that's fixable)
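The grub.d ordering workaround from the list boils down to a single rename. This sketch simulates it in a scratch directory first; the real command pair for the live system is in the trailing comments (run as root, filenames per Debian squeeze):

```shell
# /etc/grub.d scripts run in lexical order, so 10_linux wins over 20_linux_xen.
# Simulate the fix in a scratch dir:
d=$(mktemp -d)
touch "$d/10_linux" "$d/20_linux_xen"
mv "$d/20_linux_xen" "$d/09_linux_xen"   # Xen entry now sorts before 10_linux
ls "$d" | head -n1                        # -> 09_linux_xen

# On the real box (as root):
#   mv /etc/grub.d/20_linux_xen /etc/grub.d/09_linux_xen
#   update-grub
```

Yes, the actual fix really is just renaming a file so it sorts earlier.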
Needless to say, most of these issues already had accompanying bug reports, some dating back to 2008; most include details on solving the issue, which are being ignored.
The curious mind might also enjoy googling for "grub2 disaster" or "grub2 nightmare" – the number of results is pretty impressive for software this young.
Myself, I'll ditch this host quite soon for one using Oracle VM, via koan --replace-self. The 1% performance gain over a pv_ops kernel is just the cherry on top.
Please look at my newer post "squeeze + xen" – the Debian wiki now has info on a larger number of the issues I encountered.
How long have we been waiting for this – Oracle VM is getting the big update! .32 kernel, Xen 4, and so on!
I just found out that the beta period for Oracle VM 3 had started in December; unfortunately their beta testing is already full.
And yes, there will be a new UI.
Am I pissed off about not being in and having to wait even longer, or happy now?
Both – I had forgotten to re-apply for the beta in September. D'oh.
I hope around 3.5 they’ll switch to applying sanity patches to the dom0ified mainline kernel.
Tagged as “firma” because a supported Xen4 / Oracle VM is just awesome.
I just figured out how to build a ceph appliance.
Charming, elegant design.
If you know how to go the next steps and kick 3par & lefthand’s ass, then contact me.
Present server status on front page, ideally embedded in Confluence
Use escalations to not display “normal noise level errors”
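The escalation trick can be sketched like this. All object names are hypothetical, and "nobody" is an otherwise-empty contact group that swallows the early alerts:

```
# Sketch: the first two notifications of a noisy service go to a dummy
# "nobody" contact group; real admins only hear about it from
# notification 3 onward, so normal noise never reaches them.
define serviceescalation {
    host_name               web01
    service_description     Load
    first_notification      1
    last_notification       2
    contact_groups          nobody
    notification_interval   15
}
```

During the escalation's notification range Nagios uses its contacts instead of the service's own, so the default contacts only kick back in once the problem has persisted.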
Some stuff I need to test more in-depth
- Nagios IRC bot -> basic testing done; it was fun to use after hating the lack of documentation, but I should check whether it even relays acknowledgements. Screenshot and URL will be added when I get around to it. I like the charm of having this in one window of my irssi. Of course it doesn't have a livestatus backend.
- Nagios Google Gadget -> looks a bit lame, but still interesting for adding to Confluence as a live data source.
Of course (hey Lars 😉) a good NagVis map will totally beat these solutions; I just lack the knack for making a nice picture that gives really good insight into the infrastructure's status. Still, until last week I had no idea how detailed a map could be.