But what are the downsides of this approach? It would take a lot of outbound data connectivity to push those server images out every night, plus someone at the far end who knows how to bring up the VMs and ensure they’re running correctly. This is clearly a value-added service, and a chargeable one at that.
Where this could really work well is over small local WANs – around a business park, for example. I was eyeing up a unit in the same light industrial/technology park as Merula when it suddenly struck me – why not run 100Mb Ethernet around the park itself, plugging straight into the back of the ISP? Then shipping images would be really easy. Of course, not every business park has a handy ISP on site, but there’s no reason why a group of businesses couldn’t share high-speed connectivity between them and gain the ability to provide VM-based support and failover between buildings.
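The arithmetic here is worth spelling out. A minimal sketch, with figures that are my own illustrative assumptions (40GB of changed image data per night, a typical sub-megabit ADSL uplink versus 100Mb park Ethernet at 80% efficiency), shows why shipping images only becomes practical once the fast local link exists:

```python
# Back-of-envelope: how long a nightly VM image push takes at different
# link speeds. All figures are illustrative assumptions, not from the column.

def transfer_hours(data_gb: float, link_mbits: float, efficiency: float = 0.8) -> float:
    """Hours needed to move data_gb over a link_mbits line at the given efficiency."""
    bits = data_gb * 8 * 1000**3              # decimal gigabytes to bits
    seconds = bits / (link_mbits * 1e6 * efficiency)
    return seconds / 3600

if __name__ == "__main__":
    for name, mbits in [("ADSL uplink (0.8Mb)", 0.8), ("100Mb park Ethernet", 100)]:
        print(f"{name}: {transfer_hours(40, mbits):.1f} hours")
```

On those assumed numbers the ADSL push takes days, while the park Ethernet finishes in around an hour – which is the whole case for running fibre or Ethernet around the park.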
The great advantage of bringing up a VM is that it can be done in a completely unit-based fashion, and in a hands-off way, too. Routing support will need to be provided by the upstream ISP, but this is where the value-add flexibility of a mid-sized ISP really shines. Certainly, if I were running my business from a business park I’d be interested in the ways risk could be mitigated around the park, both locally on the ring and in conjunction with an ISP. We don’t all need off-site disaster recovery, and sometimes with a little more cunning in the planning and implementation we can reap good results for minimal outlay, while still providing the comfort factor of a solution that will actually work when problems arise.
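That unit-based, hands-off quality is easy to picture in script form. This is a sketch only: the `vmctl start` command line is hypothetical, standing in for whatever your hypervisor's real CLI is, and the idea is simply that each shipped image maps to one independent start command that needs no human at the console:

```python
# Sketch of hands-off, per-VM bring-up at the recovery end.
# "vmctl" is a hypothetical hypervisor CLI used for illustration --
# substitute the real command your hypervisor provides.
from pathlib import Path

def bringup_commands(image_dir: str) -> list[list[str]]:
    """Build one start command per shipped image: each VM is its own unit."""
    commands = []
    for image in sorted(Path(image_dir).glob("*.img")):
        commands.append(["vmctl", "start", "--image", str(image)])
    return commands
```

Because every VM is an independent unit, a failed start of one image doesn't block the rest – which is what makes the whole process scriptable rather than an all-hands emergency.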
And always remember the old, but true, saying that nothing beats the data transfer rate of a motorcycle courier with his backpack full of optical discs, tapes or hard disks. Delivering a set of pre-baked VMs to a DR site is going to be one of the quickest ways of doing DR and business continuity on the planet.
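The courier claim stands up to simple arithmetic. Using my own assumed figures (a backpack holding ten 500GB disks, delivered across town in an hour), the effective throughput dwarfs any affordable leased line:

```python
# Back-of-envelope: effective bandwidth of a courier carrying storage media.
# Figures are illustrative assumptions: ten 500 GB disks, a one-hour ride.

def courier_mbits(total_gb: float, hours: float) -> float:
    """Effective throughput in Mbit/s of physically shipping total_gb in hours."""
    return total_gb * 8 * 1000**3 / (hours * 3600) / 1e6

if __name__ == "__main__":
    rate = courier_mbits(10 * 500, 1.0)
    print(f"Courier with backpack: ~{rate:,.0f} Mbit/s")
```

On those assumptions the courier runs at roughly eleven thousand megabits per second – over a hundred times the 100Mb Ethernet discussed above, which is why pre-baked VMs on physical media remain such a fast route to DR.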
Welcome back to my part of the Server Room column, where this month I’ll be looking at how you can monitor the performance of your new Windows Server 2008 installation. Performance monitoring has traditionally been a chore that takes huge amounts of time to set up and even more time to evaluate. However, it’s essential, and fortunately it’s also easier than it used to be thanks to the new Windows Server 2008 tools. Having just spent the better part of a month trying to track down the source of significant slowdowns in a Windows Server 2003 Terminal Services environment, I was feeling a little jaded at the prospect of examining yet more PerfMon (Performance Monitor) counters whizzing up and down on my screen, but I have to say I feel a lot more relaxed about the whole thing after spending time with Windows Server 2008’s offerings.
Performance monitoring via PerfMon is now done through a single Microsoft Management Console (MMC) snap-in called the Reliability and Performance Monitor. Actually, that isn’t strictly accurate: you can also view the Reliability and Performance Monitor (RPM) inside Server Manager, and if you have a big enough screen then go for it, but otherwise I think you’ll find it easier to use in standalone mode.
It can be found, as you might expect, lurking with all the other Administrative Tools. Fire it up and the familiar MMC interface appears. You start off being presented with a view called Resource Overview, and at first glance you might think you’re looking at a standard Task Manager Performance pane on a quad-core computer of some type – until you notice that one of the processors looks a little odd.