I’ve just returned from the opening day of Apple’s Worldwide Developers Conference in San Francisco, where Apple unveiled the last key pieces in its technology strategy: the new Mac Pro workstation, sporting four Xeon cores (as two dual-core chips), and the new Xserve 1U rack-mount server, which shares the same processor architecture and horsepower. The company also demonstrated some key features of Leopard, version 10.5 of OS X, due to ship next spring, and there are immediate comparisons to be made between Apple’s OS offering and what Microsoft, in conjunction with its partners, is planning to offer when Vista ships.
For example, Apple decided not enough people were doing backups of their critical data, as its research suggested only 26% of users did any backup, and of these only 4% used an effective automated and managed backup strategy. To remedy this, Apple has created a backup solution called Time Machine. It records a snapshot of the computer’s state, including all the files on the hard disk, and then over time records all the changes made to the file system – you have to dedicate an external drive or network share to accommodate this process.
Obviously, I was tempted to ask some awkward questions. How often does Time Machine do a full snapshot rather than running incremental backups? Can this period be varied according to the data type? The Apple spokespeople were only prepared to smile and say those details would be forthcoming in due course. What was eye-popping, though, was the way they’d integrated the recovery process into the desktop: using a set of high-speed 3D visuals reminiscent of the Starship Enterprise going into warp drive, you were almost literally “flown back in time” to find the files you were looking for.
Compare and contrast this with the backup solution in Vista, a program that does its backups to a conventional device in a conventional way, and will thus continue to be ignored by almost all users if history is any guide to future behaviour.
Let’s take another example: Apple also announced that Leopard, especially the Server version, would allow Spotlight queries to be run across multiple machines (Spotlight is the name for OS X’s disk-search and indexing engine). Entering awkward mode again, I asked the relevant people whether such searches were effected by pulling the indices together and aggregating them, or by issuing a separate query to each target in turn and building the answer from those. As I expected, Apple has taken the low-end, easy route by sending out a sequence of queries to each machine and then bolting the answers together. There’s nothing inherently wrong with this approach except that it doesn’t scale, so you can’t feasibly spray out the same query to 500 machines over a LAN or 100,000 machines via a global WAN.
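The fan-out-and-gather pattern Apple described is simple enough to sketch. In this illustrative snippet – my own mock-up, with `query_node` standing in for the real network call to one remote machine’s Spotlight index – the coordinator sends the same query to every node, waits for all the answers, and merges them itself. It works fine for a handful of Macs; the trouble is that the coordinator must wait on the slowest node and merge every result set single-handedly, which is exactly why the approach falls over at hundreds or thousands of machines.

```python
from concurrent.futures import ThreadPoolExecutor

def query_node(node: dict, term: str) -> list:
    """Stand-in for issuing a Spotlight query to one remote machine.
    In reality this would be a network round trip to that node's index."""
    return [path for path in node["files"] if term in path]

def federated_search(nodes: list, term: str, max_workers: int = 8) -> list:
    """Fan-out/gather search: send the identical query to every node,
    then bolt the answers together at the coordinator.

    The total latency is governed by the slowest responder, and the
    merge work grows with the number of nodes - fine on a small LAN,
    hopeless at global-WAN scale.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        per_node = pool.map(lambda n: query_node(n, term), nodes)
    # De-duplicate and sort the combined hits for a stable result list.
    return sorted({path for hits in per_node for path in hits})
```

The scalable alternative I was fishing for – shipping or replicating the indices to a central point and querying the aggregate – is precisely the kind of heavyweight architecture Apple chose not to build.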
This clearly illustrates the major difference between Microsoft and Apple. Apple tries to solve the here-and-now problems that are affecting its home users, SoHo clusters and small businesses; Microsoft, on the other hand, aims for huge architectural solutions and won’t accept any trick that doesn’t scale up to global WAN size, grimly surfing the crest of the wave of increasing processor power conferred by Moore’s Law to keep abreast of the explosion of data found on such WANs.
Apple shrugs its shoulders and says “this will work fine for finding some pictures in a few gigabytes of local storage”; Microsoft struggles with attempt after failed attempt, nails itself to the cross of WinFS and even then has to perform a humiliating climb-down as Beta 2 approaches. These contrasting approaches can be observed time and time again when comparing equivalent products and technologies from the two companies, which is why Microsoft will always be unassailable in the business and corporate space but Apple will continue to saw it off at the knees in the home/SoHo arena.