In the future, everything’s going to be awfully floppy. Not just terafloppy – that’s so last year! – but petafloppy, too. This truth was drummed into me at the Intel Developer Forum in Shanghai last month: the theoretically punchy marketing tagline for this one was “milliwatts to petaflops”. This reflects the company’s new-found love affair with Atom-processor-based handheld devices at one end of the scale, and its continuing push into massively parallel many-core computing at the other.
The talk in the keynotes was not just of petaflops, but exaflops and zettaflops. A petaflop computer – it should strictly be written “petaFLOPS computer” but everyone’s bored of that – is one that can churn through one thousand trillion floating-point operations per second. A teraflop computer, by comparison, can only manage an embarrassing trillion.
So far, no-one’s officially managed to build a petaflop machine – the fastest certified supercomputer on the planet is currently IBM’s BlueGene/L, which with its 200,000-plus PowerPC processors manages a peak of about 0.6 petaflops. Chances are that, with the publication of the next official top500 supercomputing list (www.top500.org), the petaflop barrier will be broken. And after that the exaflop wall will fall and the zettaflop one, too, at which point we’ll have computers capable of executing a billion trillion floating-point operations per second.
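To put those prefixes in perspective, here’s a quick back-of-the-envelope sketch (my own illustration, not anything from Intel’s keynotes): how long a machine at each rating would take to grind through a hypothetical job of a billion billion operations.

```python
# Metric prefixes used in FLOPS ratings, as raw rates.
RATES = {
    "teraflops":  1e12,  # a trillion floating-point ops per second
    "petaflops":  1e15,  # a thousand trillion per second
    "exaflops":   1e18,
    "zettaflops": 1e21,  # a billion trillion per second
}

WORKLOAD = 1e18  # hypothetical job: a billion billion operations

for name, rate in RATES.items():
    seconds = WORKLOAD / rate
    print(f"{name:>11}: {seconds:,.3f} s")
# teraflops: 1,000,000 s (about 11.5 days); zettaflops: 0.001 s
```

Each step up the prefix ladder is a factor of a thousand, which is why the jump from today’s sub-petaflop machines to zettaflop territory is such a long road.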
And that’s all fine and dandy for supercomputers that have dedicated staff to run them and write programs for them. The potential problem is that this type of computing power will soon be sitting on your desktop, courtesy of the coming generation of massively multicore CPUs. That raises the spectre of everyday developers needing supercomputer-programming skills. A phrase from a talk I once attended by a man from the NSC (National Supercomputing Center) springs to mind: “Any fool can build a supercomputer, but it takes a genius to program one.”
The problem is acute for the poor developers who have to write the software to run on the six-core, eight-core and more-core CPUs of the future. Intel isn’t oblivious to the issue, and used IDF to promote its upcoming Ct compiler technology, which – if the claims are to be believed – is a silver bullet that will solve the problem at a stroke. I spoke to Intel’s James Reinders, its chief product evangelist. He was enthusiastic about Ct, but sounded a few cautionary notes: first, Ct is still at the research phase, although he speculated that a public test version might be available this year. Second, he said it’s great for “certain classes” of problem but won’t magically parallelise arbitrary code.
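Reinders’ point about “certain classes” of problem is worth unpacking. The sketch below (my own illustration in Python, not Ct, whose API hadn’t been released at the time of writing) contrasts the kind of independent, per-element work that data-parallel tools can fan out across cores with a loop-carried dependency, where each step needs the previous result and no compiler can simply parallelise it.

```python
from multiprocessing import Pool

def square(x):
    # Each element is independent of the others: trivially parallel.
    return x * x

def iterate(x, n):
    # Each iteration depends on the one before (a loop-carried
    # dependency), so the steps cannot run side by side.
    for _ in range(n):
        x = 3.9 * x * (1 - x)
    return x

if __name__ == "__main__":
    data = list(range(8))
    with Pool(4) as pool:
        # Parallelisable: prints [0, 1, 4, 9, 16, 25, 36, 49]
        print(pool.map(square, data))
    # Serial as written: the recurrence forces one step at a time.
    print(iterate(0.5, 100))
```

The first pattern is what a data-parallel technology can accelerate almost for free; the second is the sort of “arbitrary code” Reinders was warning about.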