The truth about extreme networking
One sentence is often all it takes to set me off, and in this case that one sentence came during a quiet chat with the Microsoft evangelist for Hyper-V.
Like a lot of the new contacts I’ve made at Microsoft recently, he’s at pains to tell everyone that he hasn’t been inside the Collective since kindergarten. His virtual-machine credentials, he says, don’t start with the earlier incarnations of the Microsoft Virtual Computing architecture either.
He had been, so he said, working a lot with Xen Server and VMware, and – in line with a few comments that I’ve made here in the past – the overwhelming majority of the work he’d been doing wasn’t with hulking great enterprise-grade servers. No, most of his projects had been done on recycled top-end workstations.
Look at the big picture below and you’ll see one such machine, an HP xw8600. I’m going to talk about what’s inside that machine in more detail later on, but for now I just want to use it to clear up some confusion that seems to be quite widespread, judging from the conversations I’ve been having with readers on the PC Pro website.
The xw8600 has a motherboard about the size of an extra-large pizza box, and if computers were indeed pizzas this would be an extra-fat, super-cheesy, ultra-spicy one with 12 different toppings. When you actually get into the spec of the xw8600, it’s noticeably fatter than most of the servers commonly seen in the wild at businesses of any size.
In fact, it was looking at this machine that got me thinking about extreme networking. By “extreme” I don’t mean “large”, and in fact I want to impose my own particular meaning on the word in the following discussion.
I believe that the practice of redeploying a workstation as the host machine for your virtualised servers counts as “extreme” because it requires a high degree of technical knowledge in building the machine, plus a great deal of knowledge of the business, to figure out whether the projected loading on the server will be sustainable.
It also requires quite a bit of pondering over parts catalogues and eBay auctions – at least for me it does – to bring the machine up to a level where it will do what you want it to. As an activity this is unquestionably a lot more extreme than what typically happens, when people simply buy off the page and aim for the highest-speed CPU they can find within their budget.
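That sizing exercise is really just back-of-envelope arithmetic: add up what the candidate virtual servers demand, leave yourself some headroom, and see whether the redeployed workstation can cover it. Here’s a minimal sketch in Python; the host spec, guest list and 25% headroom figure are all illustrative assumptions of mine, not numbers from any real deployment.

```python
# Back-of-envelope check: can a recycled workstation sustain the
# projected load of the VMs we plan to consolidate onto it?
# All figures are hypothetical, for illustration only.

host = {"cores": 8, "ram_gb": 32, "disk_iops": 400}

# Hypothetical guest list: (name, avg cores used, RAM in GB, avg disk IOPS)
guests = [
    ("file-server", 0.5,  4, 120),
    ("mail-server", 1.5,  8, 150),
    ("test-sql",    2.0, 12,  90),
]

def sustainable(host, guests, headroom=0.25):
    """True if total guest demand fits within the host, minus headroom."""
    cores = sum(g[1] for g in guests)
    ram   = sum(g[2] for g in guests)
    iops  = sum(g[3] for g in guests)
    return (cores <= host["cores"]     * (1 - headroom) and
            ram   <= host["ram_gb"]    * (1 - headroom) and
            iops  <= host["disk_iops"] * (1 - headroom))

print(sustainable(host, guests))  # False: disk IOPS is the bottleneck here
```

With these made-up numbers the CPU and memory fit comfortably, but the combined disk load of 360 IOPS blows past the 300 IOPS the headroom rule allows, which is exactly the sort of thing the parts-catalogue pondering is for: the fix is another spindle or two, not a faster CPU.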
I’ve been complaining for a long time about the way purchasing is done inside those businesses that are classified as “mid-size” or “large” in the sales blurb for servers and networks. I’ve walked down plenty of those aisles of top-end enterprise servers, all turned on and humming, and all with their massive 50kg steel cases almost entirely empty of extra parts.
It doesn’t take long when looking at such machines to figure out that the company’s purchasing model is deeply flawed, given that each of these 7U boxes contains just a single CPU, one or at most two sticks of memory, a couple of disks, and beyond that just a lot of hot air.
That isn’t “extreme networking” in my book – merely extreme extravagance. It won’t be much of a surprise to you that I believe the extremes are to be found most often at the very top of the size band and the very bottom.
If you’re a truly enormous international corporation then the limits of the standards used to build your network architecture lie pretty close to your day-to-day performance needs, which gives us one extreme in the shape of 10Gb networks with Layer 3 switching, SANs containing drawers full of drives in their hundreds, all the really headline-grabbing stuff. That’s easy enough to recognise as extreme. Then there’s the lower and lowest end of the market.