Ready to serve


The hardware in this picture is uncommon even for a tech geek like me.

Opteron 6100, 8 GB DIMMs

I remember when we upgraded a cell in an HP Superdome to 96 GB back in 2004, and now that amount of memory is sitting here on my desk.
The two Opterons (12 cores, 2.3 or 2.4 GHz) would of course love four more DIMMs to run in fully interleaved quad-channel mode. (That way they can hit 50 GB/s of memory throughput, which is far beyond any normal PC standard and nothing you'll need at home any time soon…)

 

Tomorrow these will all move into their new home, and very soon (I hope) this will be the home of a few very lucky VPS customers, since Xen *does* of course allow "bursting" into unused memory. That means the first guy will start his 1 GB contract with 45-ish GB (NUMA beware) until the second guy moves in.
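To make the bursting idea concrete, here is a minimal sketch of one possible allocation policy. Everything in it is hypothetical (the function name, the even-split policy); in real life the granted sizes would be applied via Xen's `xl mem-set` and the balloon driver, which this sketch doesn't touch:

```python
# Hypothetical policy sketch, not Xen API: each guest keeps its contracted
# memory, and any spare host memory is split evenly as "burst" headroom.

def burst_allocation(host_free_mb, guests):
    """guests: list of (name, contracted_mb) tuples.
    Returns a dict name -> granted MB."""
    contracted = sum(mb for _, mb in guests)
    spare = max(0, host_free_mb - contracted)
    bonus = spare // len(guests) if guests else 0
    return {name: mb + bonus for name, mb in guests}

# First guest alone on a host with ~46 GB free for guests:
print(burst_allocation(46 * 1024, [("guest1", 1024)]))
# Second guest moves in, the headroom gets shared:
print(burst_allocation(46 * 1024, [("guest1", 1024), ("guest2", 1024)]))
```

The first call grants guest1 the whole box on top of his 1 GB contract; the second shows the headroom shrinking the moment a neighbour arrives.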

I hope this concept will not end in users trying to kick each other off the server to get more resources? 🙂

 

 

3 thoughts on "Ready to serve"

  1. hi!

    looks like you're going to be the first "public cloud(++) service provider" who actually makes up his mind about the rightful resource usage of his customers?! Very reasonable and it absolutely *makes sense* in a vWorld (used that term consciously, since something like vCloud is already ® — though the terms are always the same… maybe try to register the latter for yourself? ;))

    But coming to the point of what I wanted to say: as more and more services get virtualised, the idea *and* solution of moving them into any cloud (and why not "yours") could well be the best thing we as system administrators can do in the near future, so we can concentrate on our own business and *not* suffer from the basics (which you've already taken care of, at last ;))…

    So, I wish you good luck in starting this thing, and always keep your service evil! (erm, service level ;)).

    SAsch

    • Thanks for the wishes!

      And the “cloud-bursting” idea is in fact ideal for corporate IT.
      – spinning up more service "nodes" as demand increases, and immediately flatlining them once the spike is over to save cost. This depends mostly on a very smart scheduler that lets you do this rule-based: the first 3 instances should be on-site, but at an oversubscription/load of 10, bring up 5 more off-site.
      – being able to offload compute jobs quickly without having a 10 GbE internet uplink. This means you need to be able to "latency-tag" your virtual machines and data. Depending on the ratio of bandwidth to data volume, it will often be a lot more efficient to book 20 more cloud instances and re-generate the data set instead of "copying it up and down the sky".
      – Last, a provider that is able to connect YOUR virtual machines to YOUR network, instead of to the internet cloud API blah blah. This is where the current cloud compute marketplaces still fail, since none of the cloud providers has bothered to think of such feature sets in their APIs.
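      The first rule above (3 on-site, burst 5 more off-site at load 10) could be sketched roughly like this. All names and thresholds here are illustrative, not any real scheduler or cloud API:

```python
# Hypothetical rule-based bursting scheduler: keep the first instances
# on-site, burst off-site during a load spike, flatline the extras after.

ON_SITE_MAX = 3      # first 3 instances stay on-site
BURST_LOAD = 10.0    # oversubscription/load threshold that triggers a burst
BURST_STEP = 5       # off-site nodes to bring up per spike

def plan_nodes(current_on_site, current_off_site, load):
    """Return (on_site, off_site) node counts for the next interval."""
    on_site = min(max(current_on_site, 1), ON_SITE_MAX)
    if load >= BURST_LOAD:
        off_site = current_off_site + BURST_STEP
    else:
        off_site = 0  # spike is over: drop off-site nodes to save cost
    return on_site, off_site

print(plan_nodes(3, 0, 12.0))  # spike hits: burst 5 nodes off-site
print(plan_nodes(3, 5, 2.0))   # spike over: flatline the burst nodes
```

      A real scheduler would of course ramp the off-site count down gradually and honour the latency tags from the second point, but the rule structure stays the same.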

      In Germany, we are lucky that the industry group "deutschewolke" has gathered people who actually *think* about stuff like that.
