Dealing with a backup that backfired
This is a story with its roots in Hollywood, but not the one in California, the one in Florida. This one’s just up the road from Orlando, and last year I briefly cast myself in the role enjoyed by Don Johnson in Miami Vice, by blasting up the freeway in a rented Corvette on a spare afternoon to try to find out what on earth had been going on in the backup business.
Hollywood (FL) is the home of the Veritas (now Symantec) development and support team for the venerable Backup Exec range of products. These products may not have quite achieved that degree of part-of-the-language recognition enjoyed by Biro and Hoover in their own respective marketplaces, but they come pretty close if you’re any sort of network professional or admin.
The job Backup Exec does is even, at least on the surface, rather similar to that performed by a vacuum cleaner – namely, to suck up all your dirty old data and stash it away hygienically into a compressed backup store.
Except that, in a remarkably well-documented incident at a site in the UK, a then-current release of Backup Exec did rather the opposite thing: had it actually been a vacuum cleaner then one of its fan blades would have snapped off, penetrated the casing at Mach 1.5 and taken out your flatscreen TV, your cat and a passing air ambulance.
This particular install of Backup Exec had been instructed to place an agent onto a new Exchange server remotely, but once the process of remote installation had completed, all that was left on the server wasn’t a running, complex, powerful corporate email system, but a wholly non-bootable boat anchor.
This is the sort of accident that quite understandably induces an extreme panic condition in pretty much everyone involved, all the way from the guy who clicked that last mouse-click, up through the whole company that owns the server, through to its local equipment sales and support firms, and finally all the way up through national and international hierarchies to the developers and vendors of the satanic software in question.
Especially, so it would seem, when a member of the press (in this case, yours truly, Cassidy) happens to be lurking somewhere nearby…
Frequently the response to such a crisis is a state of complete paralysis, because every decision made under the glare of media attention will feel like a bad one, or one that’s vulnerable to later review, or one that seems inadequate compared to what rivals in your business might do if put in the same position.
Therefore doing nothing seems the safest bet, or even claiming there was never any need to take any of the options in the first place, because nothing could possibly be wrong!
Personally, I feel somewhat conflicted when exposed to incidents of this severity. These days, my business cards may identify me as an editorial fellow, but that status was achieved mostly by being a techie, and the lore of the techies – we who actually have to work out what’s gone wrong, and how to fix it – states that being wrong is all just part of the natural process of diagnosis.
Only a particularly snide and insecure breed of writer would pick out just those “wrong” parts and portray them as a disastrous catalogue of errors, while ignoring all the smart fixes and clever deductions and leaving those on the cutting-room floor.