Notes from an IBM OpenStack workshop I attended.
- I haven't seen a single thing that is exciting once you've seen and used multiple solutions.
Many sensible features (e.g. deployment hosts, which Oracle VM had) are only now being added, and *lol*, with the same naive approach. (Oh, let's add a dedicated server for this. Oh, let's run all deployment jobs in parallel there, so they thrash the disks and cripple the benefit it could have brought.)
- I haven't seen a single thing that is done much better than what's in OpenNebula (and OpenNebula is much, much more efficient to work with).
- There is a chance that OpenStack, with its different components, will be better at doing one thing and doing that one thing well: from what I've seen it has a lot fewer issues than OpenNebula "when something goes wrong", but on the other hand everything is buried under a pile of APIs and services anyway.
So, from a bird's-eye view: what you can do can hardly go wrong, but you also can't really do a lot. Especially for people coming from VMware, the current state of (open-source) affairs is insulting.
Some detail points:
hybrid cloud: generally not considered workable, except for "extremely dumb" workloads like web serving etc. Even for those, most people will be better served with a CDN-type setup.
Some (cloud vendors') sales people are actually running around selling a "hybrid cloud" that looks like this: you/they add a backup Active Directory controller at their datacenter.
This of course is not AT ALL hybrid cloud or "bursting", but it poses a problem: knowledgeable people saying "sorry, but a fully dynamic hybrid cloud is still tricky" will not be closing any sales. Instead the completely clueless sales drone will, since he promises that it will work. Since neither he nor the customer knows the original idea, this works out.
Why doesn't it work so far?
API elasticity, including buying of new VMs etc., was said to rarely work, much less so if bare-metal bringup is involved (adding hosts to the remote cloud, etc.).
Shrinking back down is also apparently a pretty ugly topic.
(The focus in this question was OpenStack to OpenStack bursting mostly)
Misunderstandings and expectations:
Can my VM move to the cloud, from vCenter to an openstack at the ISP?
General answer: no
General expectation: why not?
I wonder: why not just provide a good P2V tool (e.g. PlateSpin) so this can be offered?
Sadly, the relation between data lock-in (meaning safe revenues) and lack of workload portability did not come up as a topic.
This is a downward spiral: if you buy infrastructure, you can save some admin money; yet that takes away the skill level you'd need to simply step over those portability restrictions. Any seasoned admin could plan and handle an iSCSI-based online migration from cloud A to cloud B.
But running off an IaaS (or MSP) platform, you might no longer have that skill in-house.
Also, tools that handle cloud federation DO exist, but are completely unknown.
Examples are Panacea and Contrail (and this is not the SDN-related Contrail).
These have been around for much longer and probably work, but nobody knows of them.
Sad that so many millions were spent there, spent on the right thing, yet ultimately nothing has come of it so far.
I think this would need unusual steps, e.g. for every 10M invested in OpenStack, 100K needs to be put into marketing rOCCI / OCCI.
A nice hack was using OVF (a sick format nonetheless) to avoid cloud-init-style VM contextualization.
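To make that hack concrete: the OVF environment document (per the DMTF spec) carries key/value properties that a guest can read at boot instead of polling a cloud-init metadata service. The sketch below parses such a document with the stdlib; the sample XML and its property names (`hostname`, `ip0`) are made up for illustration.

```python
# Minimal sketch: reading contextualization data from an OVF
# environment document instead of a cloud-init metadata service.
# The sample document is invented; the namespace is the DMTF one.
import xml.etree.ElementTree as ET

OVF_ENV_NS = "http://schemas.dmtf.org/ovf/environment/1"

sample = """<?xml version="1.0"?>
<Environment xmlns="http://schemas.dmtf.org/ovf/environment/1"
             xmlns:oe="http://schemas.dmtf.org/ovf/environment/1">
  <PropertySection>
    <Property oe:key="hostname" oe:value="vm01"/>
    <Property oe:key="ip0" oe:value="192.0.2.10"/>
  </PropertySection>
</Environment>"""

def ovf_properties(xml_text):
    """Return the key/value pairs from an OVF environment document."""
    root = ET.fromstring(xml_text)
    props = {}
    for prop in root.iter("{%s}Property" % OVF_ENV_NS):
        props[prop.get("{%s}key" % OVF_ENV_NS)] = \
            prop.get("{%s}value" % OVF_ENV_NS)
    return props

print(ovf_properties(sample))  # {'hostname': 'vm01', 'ip0': '192.0.2.10'}
```

In practice the document is handed to the guest on a CD-ROM image or via VMware guestinfo, so no network dependency is needed at all.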
On the infrastructure front, it was shocking to see the networking stack in more detail (we worked with a "smaller" multi-tenant example with a few hundred VLANs). The OpenStack people decided to keep their distance from any existing standard (like QinQ, S-VLAN, PBB-TE) and instead made a large pile of shit with a lot of temporary / interconnecting VLANs / vswitches.
The greatest shit was seeing what they did for MAC addresses:
When Xen came out, the XenSource Inc. guys went to the IEEE and got their own MAC prefix, 00:16:3e.
Someone figured the best thing was to use fa:16:3e. Of course they didn't register that.
Probably thought he was the cleverest guy in the universe, except he simply didn't get it at all.
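For reference, fa:16:3e is indeed Neutron's default `base_mac`, an unregistered riff on XenSource's registered 00:16:3e. The one thing the scheme has going for it: the first octet 0xfa has the locally-administered bit set and the multicast bit clear, so the addresses are at least valid locally administered unicast MACs. A small sketch (the generator function is my own, not OpenStack code):

```python
# Sketch: generate a MAC under the fa:16:3e prefix and check the
# U/L and I/G bits of the first octet. 0xfa = 0b11111010:
# bit 0x02 (locally administered) is set, bit 0x01 (multicast) clear.
import random

BASE_PREFIX = (0xFA, 0x16, 0x3E)

def random_mac(prefix=BASE_PREFIX):
    """Random MAC address under the given 3-byte prefix."""
    tail = [random.randint(0x00, 0xFF) for _ in range(3)]
    return ":".join("%02x" % b for b in list(prefix) + tail)

print(BASE_PREFIX[0] & 0x02 != 0)  # True: locally administered
print(BASE_PREFIX[0] & 0x01 != 0)  # False: unicast, not multicast
print(random_mac())                # e.g. fa:16:3e:5d:91:aa
```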
All cross-host connections are done using on-the-fly GRE tunnels, and all hosts are apparently fully meshed. I suppose this whole setup plus Open vSwitch is so inefficient that it doesn't matter any more?
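A back-of-envelope number on why a full mesh of point-to-point tunnels scales badly: every host pair needs its own tunnel, so the cluster needs n*(n-1)/2 tunnels in total and each host terminates n-1 of them.

```python
# Full mesh of point-to-point GRE tunnels: one tunnel per host pair.
def mesh_tunnels(n_hosts):
    """Cluster-wide tunnel count for a full mesh of n hosts."""
    return n_hosts * (n_hosts - 1) // 2

for n in (10, 50, 200):
    print(n, mesh_tunnels(n))
# 10 -> 45, 50 -> 1225, 200 -> 19900
```

At a few hundred hosts you are pushing twenty thousand tunnel endpoints through software switching, which is exactly the kind of state explosion protocols like DMVPN were designed to avoid.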
There are other modes selectable, and it seems to me that flow-based switching would be less bullshit than what OpenStack does by default.
I hope they don’t find out about DMVPN.
Problems of datacenter extension and cross-site VLANs were treated as a non-issue.
Having a flat L2 seems to be oh so pretty. I am in fear.
What else did I dig into:
Rate limiting in this mess is a necessity, but it seems to be possible, even workable.
There are some hints at offloading intra-host switching when using Emulex CNAs or Mellanox. It seems not to be possible with Solarflare.
I'm pretty sure someone at Emulex knows how to do it. But it is not documented anywhere you could just find it.
Considering this would have a massive (positive) performance impact, it's just sad.
I would try to use only SR-IOV HBAs and ensure QoS is enforced at ingress (that means on the VM hosts, before customer traffic from a VM reaches the actual wires).
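The usual mechanism behind that kind of ingress policing is a token bucket: traffic above the configured rate is dropped on the host before it touches the physical wire. A toy model (all rates and sizes below are made-up example numbers, not anything OpenStack ships):

```python
# Toy token bucket, the classic mechanism behind per-VM ingress
# policing on the hypervisor: packets that exceed the refill rate
# are dropped before reaching the physical network.
class TokenBucket:
    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s   # sustained rate
        self.capacity = burst_bytes    # maximum burst
        self.tokens = burst_bytes      # start with a full bucket
        self.last = 0.0

    def allow(self, packet_bytes, now):
        # Refill according to elapsed time, capped at burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False

bucket = TokenBucket(rate_bytes_per_s=1000, burst_bytes=1500)
print(bucket.allow(1500, now=0.0))  # True: spends the burst allowance
print(bucket.allow(1500, now=0.5))  # False: only ~500 tokens refilled
print(bucket.allow(1500, now=1.5))  # True: another 1000 refilled
```

With SR-IOV the same policing has to live in the NIC itself, which is exactly why the vendor offload support mentioned above matters.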
IP address assignments: one thing we didn't get to was that creating the network required setting up IP ranges etc.
I’m not sold on the “IaaS needs to provide XXX” story at all.
In summary, I want to provide customers with a network of their own, optionally providing DHCP / local DNS / load balancers / firewalls / etc.
But by default it should be theirs – let me put it like this:
When you say "IaaS" I read infrastructure as a service, not infrastructure services as a service. I'm sure a medium-sized dev team can nicely integrate an enterprise's IPAM and DNS services with "the cloud", but I doubt it will provide any benefit over using their existing management stack. Except for the medium-sized dev team, of course.
What I see is cloud stacks that are cluttered with features that bring high value to very small, startup-like environments (remember, e.g., that the average OpenStack install is <100 cores). It's cool to have them, but the thing is: if you're expecting people to use them, you're doing it wrong. They're trivial, puny and useless ("yes we can assign IPv4", "yes we can assign IPv6" — but what happens if you ask about dual stack? subnets?), and it's becoming a bad joke to expect companies that do more than "something on the internet" to spend considerable time on de-integrating those startup convenience features.
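To make the "dual stack? subnets?" complaint concrete: a real tenant network carries an IPv4 and an IPv6 subnet side by side, and the v4 range usually needs further splitting. The stdlib can express this in a few lines (all ranges below are documentation prefixes, not real assignments):

```python
# Sketch of what "dual stack" and "subnets" actually mean for a
# tenant network: both address families on one logical network,
# plus carving the v4 range into smaller segments.
import ipaddress

v4 = ipaddress.ip_network("192.0.2.0/24")
v6 = ipaddress.ip_network("2001:db8::/64")

# Dual stack: v4 and v6 coexisting on the same logical network.
dual_stack = [v4, v6]
print([str(n) for n in dual_stack])

# Subnetting: split the v4 range into four /26 customer segments.
segments = list(v4.subnets(new_prefix=26))
print([str(s) for s in segments])
# ['192.0.2.0/26', '192.0.2.64/26', '192.0.2.128/26', '192.0.2.192/26']
```

That a stdlib module handles this trivially is exactly the point: "we can assign IPv4" is not a feature, it's table stakes.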
Another interesting side note:
SoftLayer is also Xen-based. That's the cloud service that suddenly made IBM the number one in the market.
With Amazon, Rackspace, Linode, OrionVM and SoftLayer using Xen, and a 9x% VMware share in the enterprise market (which is probably a lot bigger than cloud), I'm once again puzzled at the hubris of the KVM community thinking they are the center of the universe. People tell me about oVirt / RHEV while it has NO RELEVANCE at all.
The only really cool KVM based place I know is Joyent. And they don’t even use Linux.
Oh, and, coming back to cloud, I'm still puzzled by the amount of attention Microsoft Azure gets in Germany. It seems the competitors (especially the higher-end ones like HP, IBM, ProfitBricks, etc., who actually offer SLAs worth the name) simply can't get a shot at the Microsoft-addicted SMB and medium enterprise crowd.
That said (enough ranting) they are cool to have in a demo style setup like the one we played with.
IBM’s solution seems a nice middle ground – config adjustments are easily done, yet the deployment is highly automated and also highly reliable.
They're going the right way, selling a rackful of servers with one USB stick to install the management server from. Wroooom[*].
Here’s your cloudy datacenter-in-a-box
p.s.: Wroooom took a little over an hour. Pretty different from what I'm used to with CFEngine now.