Why virtualisation hasn’t slowed the growth of data
Right across the road from me they’re building a skyscraper. I can hardly ignore it, and even though the developers have been very nice (I’ve only come close to a fist-fight with one smug subcontractor), there’s no getting away from the fact that this is one seriously non-virtual chunk of masonry.
There’s a lot of banging things together, and trucks arrive all day during the agreed working hours. There’s a tower crane which, if my maths is correct, reels in well over a kilometre of steel cable to lift everything that will make up the main body of the tower.
And blokes in clean vans, with hundreds of little drawers in racks and painfully tidy optical workbenches, turn up in the small hours to lay and splice together the fibre cabling. It’s a massive, and very physical, undertaking.
I was continually reminded of this scenario while looking around VMworld Europe in Copenhagen this year.
Perhaps it’s mostly a problem of vocabulary: in order to think about “cloud computing”, we’ve all had to get our heads around “virtualisation”, but these two concepts are only tangentially related to one another.
A vast army of over-stimulated salesmen has galloped away with the c-word, and at the same time they’ve also prostituted “virtual” to the cause of Mammon.
The overwhelming lesson I took away from VMworld is that while you may not be able to stub your toe on a virtual server, you surely can do a lot of damage to your tootsies on a piece of “virtual storage”.
The VMware that so excited me and Jon Honeyball during the late 2000s is no longer the ravening beast it was back then. From VMware’s point of view, in the target market it set itself, its job is done.
More than 50% of data centre workloads are now virtualised – provided you accept VMware’s definition of “data centre” and “workload” of course – and something like 85% of data centres are using vSphere.
That certainly is a lot, and I don’t think it would be unfair to say that the heavily competitive behaviour VMware engaged in throughout its earlier years has been replaced by mild bemusement: the realisation that moving forward no longer requires “more and better” but “different”, a far more challenging frame of mind.
The keynote speech from Raghu Raghuram was all about this change of thinking, and some of the rather surprising conclusions that flow from a genuinely virtualised world-view.
One of Raghu’s slides showed a definitive answer to those who are understandably peeved by the recent changes in licence pricing from VMware: gone are the days of per-server licensing.
VMware now wants to allocate licence costs according to how much memory is available in each virtual host, which has the effect of penalising customers who don’t buy into the latest generation of servers.
A strange decision to take in a recession perhaps, but one that becomes understandable once you realise that Raghu and the VMware team expect a typical 2015 server to have 16 cores spread across two sockets and several terabytes of memory, and to be running 320 VM guest processes.
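To put that pricing shift into rough arithmetic, here’s a quick sketch comparing the two models. The per-licence vRAM entitlement figure is purely illustrative (VMware’s real entitlements varied by edition), as is the hardware in the examples:

```python
import math

def licences_per_socket(sockets: int) -> int:
    # Old model: one licence per CPU socket, however much RAM you fit.
    return sockets

def licences_per_vram(ram_gb: int, entitlement_gb: int) -> int:
    # New model: each licence entitles you to a slice of vRAM, so you
    # buy enough licences to cover the memory you allocate to guests.
    return math.ceil(ram_gb / entitlement_gb)

# Hypothetical entitlement of 96GB of vRAM per licence (illustrative).
ENTITLEMENT_GB = 96

# A modest dual-socket 128GB server versus the 2015-class
# two-socket, multi-terabyte box described above.
for sockets, ram_gb in [(2, 128), (2, 2048)]:
    old = licences_per_socket(sockets)
    new = licences_per_vram(ram_gb, ENTITLEMENT_GB)
    print(f"{ram_gb}GB box: {old} licences per-socket, {new} per-vRAM")
```

The effect is plain: the more memory you cram into each box, the more the per-vRAM model charges, which is exactly why buyers of dense, latest-generation servers feel penalised.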
This makes eminent sense if you’re running a data centre, the idea being that, as everyone remembers from the early days of virtualisation, most businesses run their servers at a low level of utilisation – their CPU charts, over the course of a typical working day, rarely nudge above 10%.