From the world of CPUs and GPUs, expect a new type of processor to hit the headlines before long: the XPU. What does the X stand for? Anything you like. For lo! I’ve recently seen the future, and it’s heterogeneous.

It turns out – forgive the gloating – that a point I’ve returned to a few times over the past couple of years is true, and the industry is starting to concede it in public. A stack of identical cores on a single processor is a brilliant idea in theory, but it’s almost impossible to use those cores effectively, because parallel programming is extraordinarily difficult and there’s no magic technological fix for it. It simply requires very clever programmers, and if you’ve been watching Big Brother recently you’ll know clever people are in short supply.
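To see how little it takes for parallel code to go wrong, here’s a miniature sketch in C – my own illustration, not anyone’s production code. Two threads each bump a shared counter a million times, and without a lock the final total is almost never what you’d expect:

```c
/* Two threads increment a shared counter with no locking. The increments
   silently trample each other, so the total is almost never 2,000,000.
   Illustrative sketch only; compile with -pthread. */
#include <pthread.h>
#include <stdio.h>

/* volatile only stops the compiler coalescing the loop into one addition;
   it does nothing whatsoever for thread safety. */
static volatile long counter = 0;

static void *increment(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;          /* read-modify-write: not atomic */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, increment, NULL);
    pthread_create(&b, NULL, increment, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("expected 2000000, got %ld\n", counter);
    return 0;
}
```

One shared variable, two threads, and the program is already wrong in a way that no compiler will warn you about. Now imagine eight cores and a real application.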
And so in a briefing session with AMD last month, the company finally admitted that the current multicore CPU trend isn’t the answer to everything. A senior spokesperson for Intel’s nemesis was heard to say that “extending homogeneous cores beyond a handful is not the way to go”. Intel itself hasn’t been quite so upfront about it, but at an Intel Developer Forum (IDF) keynote in Beijing a few months ago a slide briefly materialised that appeared to show a schematic of a 36-core system, with each core designed for a specific task: six for the UI, a couple for security and a bunch more taking care of raw number crunching. Intel has a long history of revealing its upcoming plans in deviously camouflaged ways at IDF, so it’s fair to assume that what we saw was an approximate outline of the processor architecture codenamed Larrabee, due to follow Nehalem some time around 2010.
If you’re questioning the need for ever more powerful processors, the answer no longer lies in exotic yet-to-be-imagined applications; it’s far more prosaic than that. Just start your web browser and open a few of the popular portals such as Yahoo or MSN while watching your CPU usage: from out of nowhere, the web has become a processor-murdering fiend. Every popular commercial site carries at least one video advert, and my browser regularly comes to a graceless halt with only a few tabs open. It’s less surprising than it sounds: only a few years ago, smoothly playing a single video stream was the preserve of a high-end PC.
Video isn’t by any means the only web-related performance problem. JavaScript – the only language browsers will execute directly – is an interpreted language, and hideously slow compared with compiled languages such as C. That hasn’t stopped practically every site on the web using Ajax techniques and embedding huge, unwieldy masses of JavaScript in its pages – the Web 2.0 revolution is founded entirely on these scripting monstrosities. When you open a Google Docs spreadsheet, you’ve got a full-blown spreadsheet application written in JavaScript, loaded into your browser as plain text and executed locally by the interpreter. You couldn’t devise a slower, more inefficient way of running code if you tried, and since the language has no threading at all, there’s no way to make a JavaScript application run in parallel on multiple cores.
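To see where the interpretation cost comes from, here’s a toy sketch in C – the opcodes and names are invented for this column, and no real JavaScript engine is anywhere near this simple. The same summation runs once as a plain compiled loop and once through a minimal bytecode dispatcher, where every single operation pays for a fetch and a switch:

```c
/* A toy illustration of interpretation overhead. Both functions compute
   the same sum; the interpreted one pays a fetch-and-dispatch cost on
   every operation. */
#include <stdio.h>
#include <time.h>

enum { OP_ADD, OP_LOOP, OP_HALT };   /* our entire "instruction set" */

/* Interpreted: a three-opcode "program" run through a dispatch loop. */
static long run_interpreted(long iterations)
{
    int program[] = { OP_ADD, OP_LOOP, OP_HALT };
    long acc = 0, counter = iterations;
    int pc = 0;
    for (;;) {
        switch (program[pc]) {        /* fetch and dispatch, every time */
        case OP_ADD:  acc += 1; pc++; break;
        case OP_LOOP: if (--counter > 0) pc = 0; else pc++; break;
        case OP_HALT: return acc;
        }
    }
}

/* Compiled: the identical arithmetic as a plain native loop. */
static long run_native(long iterations)
{
    long acc = 0;
    for (long i = 0; i < iterations; i++)
        acc += 1;
    return acc;
}

int main(void)
{
    const long N = 100000000L;        /* 100 million additions */
    clock_t t0 = clock();
    long a = run_interpreted(N);
    clock_t t1 = clock();
    long b = run_native(N);
    clock_t t2 = clock();
    printf("interpreted: %ld in %.2fs\n", a, (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("native:      %ld in %.2fs\n", b, (double)(t2 - t1) / CLOCKS_PER_SEC);
    return 0;
}
```

Even this three-opcode toy typically comes in several times slower than the plain loop when built without optimisation – and a real script engine piles dynamic typing, object lookups and garbage collection on top of the dispatch cost.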
And so the case for heterogeneous CPUs looks clear: there are several clearly definable tasks that gobble up a lot of processor power, so you dedicate a proportion of your cores to those tasks alone, tuning each core to make its particular task many times faster to execute serially. The parsing and execution of XML, JavaScript and embedded video are just a few of the areas ripe for acceleration; there are plenty of others, ranging from encryption to HD video encoding.
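You can get a faint preview of the idea on today’s homogeneous chips by pinning threads to particular cores. Here’s a rough sketch, assuming Linux and glibc – the task names are pure invention, and a genuine heterogeneous design would specialise the silicon itself rather than merely the scheduling:

```c
/* Pin worker threads to specific cores: a crude software analogue of
   dedicating cores to tasks. Assumes Linux/glibc; compile with -pthread. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *worker(void *arg)
{
    /* sched_getcpu() reports which core the thread actually landed on. */
    printf("%s running on core %d\n", (const char *)arg, sched_getcpu());
    return NULL;
}

/* Create a thread whose affinity mask contains only the given core. */
static void spawn_pinned(const char *task, int core)
{
    pthread_t t;
    pthread_attr_t attr;
    cpu_set_t set;

    CPU_ZERO(&set);
    CPU_SET(core, &set);
    pthread_attr_init(&attr);
    pthread_attr_setaffinity_np(&attr, sizeof(set), &set);
    pthread_create(&t, &attr, worker, (void *)task);
    pthread_join(t, NULL);
    pthread_attr_destroy(&attr);
}

int main(void)
{
    spawn_pinned("UI task", 0);               /* hypothetical task names */
    spawn_pinned("number-crunching task", 1);
    return 0;
}
```

On a true heterogeneous part, presumably the operating system’s scheduler would make this sort of placement decision for you, routing each workload to whichever core was built for it.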