Staging an update
The clock is ticking and you’re running out of time. You need to get your Windows Update staging server infrastructure in place, tested, and ready for the onslaught that’s about to come. The big problem looming on the horizon is Internet Explorer 7: it’s now reached Release Candidate 1, so final release isn’t far away.
Microsoft claims a significant number of customers in the SME marketplace have successfully moved over to its Windows Server Update Services (WSUS) technology, a fact it can presumably see by monitoring its end of the downloading tunnel.
IE7 is going to be pushed out as a mandatory update, the logic for this being that IE6 is a horrible security mess and IE7 will be so much better. Have you noticed how the marketing men only resort to honesty about old products once they have something new to flog? Personally, I think that rushing out IE7 is going to be just as dangerous as leaving IE6 in place. The last thing we need now is a wave of post-release updates to fix just-found security holes. But I’m sure everything will be just tickety-boo on that front.
So get that staging server in place, and modify your Group Policy settings in Active Directory to ensure all desktops and servers get their updates from your server, and only from your server. Then pull down all the updates, making sure you’re only pulling down those that are necessary: Finnish might be a fun language, but it’s a waste of space if you only speak Essex. If you don’t stage locally, think about the problem you’re facing – every desktop machine will doubtless need a download measuring tens of megabytes. Multiply that by your number of seats, and watch your internet connectivity wilt under the strain.
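The back-of-the-envelope sum above is worth actually doing before you commit. A minimal sketch – the 30MB package size and 200-seat count are illustrative assumptions, not figures from Microsoft:

```python
# Rough WAN-load comparison: every desktop fetching the update itself
# versus one staging server pulling it down once and serving it locally.
# UPDATE_MB and SEATS are assumed values for illustration only.

UPDATE_MB = 30   # assumed size of the IE7 package per machine
SEATS = 200      # assumed number of desktops on the network

direct_mb = UPDATE_MB * SEATS   # each desktop downloads its own copy
staged_mb = UPDATE_MB           # the staging server downloads it once

print(f"Direct from Microsoft: {direct_mb:,} MB over the internet link")
print(f"Via a staging server:  {staged_mb} MB over the internet link")
print(f"Saving: {direct_mb - staged_mb:,} MB "
      f"({100 * (direct_mb - staged_mb) / direct_mb:.1f}%)")
```

Even at these modest assumed numbers the difference is 6GB versus 30MB across your internet feed, which is the whole argument for staging in one sum.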
Heat is on
After the recent heatwave, it’s a good idea to take 15 minutes to sit down and calmly re-evaluate what happened in your server room over the summertime. For many organisations, especially in the SME environment, servers are things that get stuffed into a cupboard, wherein they can nicely flambé themselves or any foodstuffs you care to cook on them.
Large organisations, of course, don’t have “server cupboards” or even “server rooms”, but have invested in a complete data-centre-style infrastructure, where cooling is carefully considered, measured and accounted for as part of a grand capacity plan, and the same goes for mains power availability too. You might think such grown-up environments would thus be immune to human frailties, but you’d be wrong, as I witnessed in one corporate machine room this summer, where the air conditioning ran out of headroom because the organisation had switched from several racks of aged 2U and 3U Pentium servers to a brand-new installation of 1U and blade servers. Despite several warnings (backed up in writing), they were sure their air conditioning and power feeds would cope just fine. When the Day Of Reckoning came, the room temperature rose ever higher. You know something is amiss when your RAID boxes start failing over to their hot-spare disks because one of the RAID drives has overheated.
The same goes for power supply, where even the best-laid plans can go awry. I’ll cheerfully admit that I got things wrong here last week when we had a four-hour power outage. The lights went out, the machines kept running just fine, and all their APC UPSes started their cheerful beeping to tell me something was wrong. I even had all the internet access I needed, except for one small hiccup. To solve a “right now” business need, I’d put a Gigabit switch in the middle of the feed between the lab and the machine room (which are some 100m apart), and in my haste had forgotten to put a small desktop UPS on the mains feed to that switch, so when the power failed I lost access to the rack-mounted servers. Hence, I had no DNS server to resolve names to numbers and, although I was connected to the internet, it was in a somewhat numerical fashion only.