Technolog

In the future, everything’s going to be awfully floppy. Not just terafloppy – that’s so last year! – but petafloppy, too. This truth was drummed into me at the Intel Developer Forum in Shanghai last month: the theoretically punchy marketing tagline for this one was “milliwatts to petaflops”. This reflects the company’s new-found love affair with Atom-processor-based handheld devices at one end of the scale, and its continuing push into massively parallel many-core computing at the other.


The talk in the keynotes was not just of petaflops, but exaflops and zettaflops. A petaflop computer – strictly it should be written “petaFLOPS computer”, but everyone’s bored of that – is one that can churn through a thousand trillion floating-point operations per second. A teraflop computer, by comparison, can only manage an embarrassing trillion.
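For a sense of scale, here’s a back-of-an-envelope C++ snippet – mine, purely illustrative, nothing Intel showed at IDF – that spells out the prefixes:

#include <cstdio>

int main() {
    // Each prefix is a factor of 1,000 more floating-point
    // operations per second than the one before it.
    const double teraflops  = 1e12;  // a trillion
    const double petaflops  = 1e15;  // a thousand trillion
    const double exaflops   = 1e18;  // a billion billion
    const double zettaflops = 1e21;  // a billion trillion

    std::printf("tera:  %.0e FLOPS\n", teraflops);
    std::printf("peta:  %.0e FLOPS\n", petaflops);
    std::printf("exa:   %.0e FLOPS\n", exaflops);
    std::printf("zetta: %.0e FLOPS\n", zettaflops);

    // A petaflop machine does in one second what a teraflop
    // machine needs nearly 17 minutes to finish.
    std::printf("%.0f seconds\n", petaflops / teraflops);  // prints 1000
    return 0;
}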

So far, no-one’s officially managed to build a petaflop machine – the fastest certified supercomputer on the planet is currently IBM’s BlueGene/L, which, with its 200,000 PowerPC processors, manages a peak of about 0.6 petaflops. Chances are that, with the publication of the next official top500 supercomputing list (www.top500.org), the petaflop barrier will be broken. And after that the exaflop wall will fall, and the zettaflop one too, at which point we’ll have computers capable of executing a billion trillion operations per second.

And that’s all fine and dandy for supercomputers that have dedicated staff to run them and write programs for them. The potential problem is that this type of computing power will soon be sitting on your desktop, courtesy of the coming generation of massively multicore CPUs. That raises the spectre of everyday developers needing supercomputer-programming skills. A phrase from a talk I once attended by a man from the NSC (National Supercomputing Center) springs to mind: “Any fool can build a supercomputer, but it takes a genius to program one.”

The problem is acute for the poor developers who have to write the software to run on the six-core, eight-core and more-core CPUs of the future. Intel isn’t oblivious to the issue, and used IDF to promote its upcoming Ct compiler technology, which – if the claims are to be believed – is a silver bullet that will solve the problem at a stroke. I spoke to Intel’s James Reinders, the company’s chief product evangelist. He was enthusiastic about Ct, but injected a few notes of caution: first, Ct is still in the research phase, although he speculated that a public test version might be available this year; second, it’s great for “certain classes” of problem but won’t magically parallelise arbitrary code.
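Ct’s programming interface hadn’t been made public at the time of writing, so what follows isn’t Ct code. It’s a sketch of the data-parallel pattern Ct targets, written against Intel’s already-shipping Threading Building Blocks library, with a made-up scale() function standing in for real work:

#include <tbb/blocked_range.h>
#include <tbb/parallel_for.h>
#include <cstddef>
#include <vector>

// Scale every element of a vector. The programmer describes an
// operation over a whole index range; the TBB runtime decides how
// to carve it into chunks and spread them across available cores.
void scale(std::vector<float>& v, float k) {
    tbb::parallel_for(
        tbb::blocked_range<std::size_t>(0, v.size()),
        [&](const tbb::blocked_range<std::size_t>& r) {
            for (std::size_t i = r.begin(); i != r.end(); ++i)
                v[i] *= k;
        });
}

int main() {
    std::vector<float> v(1000000, 2.0f);
    scale(v, 3.0f);  // every element becomes 6
    return 0;
}

This whole-range, structured style is exactly the “certain classes” of problem Reinders means: regular operations over big collections parallelise beautifully; tangled, branchy code doesn’t.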

There are still bigger chinks in the multicore armour, too. One of the biggest is client-side web applications. If there’s a single aspect of everyday computing that needs accelerating, it’s the web app. Google Docs is the main one for me: the Spreadsheets app replaced Excel about a year ago as my first resort for quick lists or rough-and-ready finance calculations. But almost nothing is being done to help outrageously sluggish JavaScript run faster on multiple cores. You might sit screaming at your web app, but that won’t stop seven of the eight cores in next year’s desktop processors from twiddling their thumbs as you fume. Reinders is pragmatic about the issue, though. He sees a combination of factors gradually fixing it: developer education, better compilers, and a new generation of developers growing up learning to program for multiple cores right from the start. And he’s persuasive.
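To make that thumb-twiddling concrete: a native program can at least ask how many cores it has and divide the work among them. Here’s a minimal sketch using standard C++ threads – a facility that postdates this column, with parallel_sum() a hypothetical example – doing what today’s single-threaded JavaScript engines cannot:

#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <numeric>
#include <thread>
#include <vector>

// Sum a big array using every core, not just one.
double parallel_sum(const std::vector<double>& data) {
    unsigned cores = std::max(1u, std::thread::hardware_concurrency());
    std::vector<double> partial(cores, 0.0);
    std::vector<std::thread> workers;
    std::size_t chunk = data.size() / cores;

    for (unsigned c = 0; c < cores; ++c) {
        std::size_t lo = c * chunk;
        std::size_t hi = (c + 1 == cores) ? data.size() : lo + chunk;
        // Each worker sums its own slice into its own slot,
        // so no locking is needed.
        workers.emplace_back([&, lo, hi, c] {
            partial[c] = std::accumulate(data.begin() + lo,
                                         data.begin() + hi, 0.0);
        });
    }
    for (auto& w : workers) w.join();
    return std::accumulate(partial.begin(), partial.end(), 0.0);
}

int main() {
    std::vector<double> v(1000000, 1.0);
    std::printf("%.0f\n", parallel_sum(v));  // prints 1000000
    return 0;
}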

