Intel Research Day: pick of the projects
I’ve already written about Dispute Finder, a neat little service which is up and running – albeit shakily – right now. But Intel’s Research Day in Mountain View, California hosted some far more ambitious and long-term projects too. Here are my favourite projects from the rest of the show: research being what it is, some of them will probably never be heard of again, but others may well find their way into real-world products in the next few years.
Oh, and just to ramp up the excitement, I’ll take you through my top seven in reverse order.
7. Location Awareness with LED Visible Lighting
This is one of those ideas whose appeal lies in its sheer simplicity. In short, it’s a system that warns you when you’re too close to a car in front – or when a car is too close behind you. The clever part is that it works out the distances involved by triangulating the beams from LED headlights and tail-lights.
It’s a fast system, and accurate – the showcase stand included a live demo with some toy cars, tracking their locations in real time to a precision of under an inch. It can even track multiple cars at once.
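Intel didn’t spell out the maths behind the demo, but the principle of triangulating a light source from two vantage points is straightforward. Here’s a minimal sketch, assuming a simple two-sensor stereo arrangement (the baseline, focal length and disparity figures are invented for illustration):

```python
# Sketch of a stereo triangulation distance estimate (my assumption of the
# principle; Intel's actual method isn't described in detail).
def distance_to_light(baseline_m, focal_px, disparity_px):
    """Two sensors a known baseline apart see the same LED at slightly
    different image positions; that offset (the disparity) shrinks as
    the light gets further away."""
    if disparity_px <= 0:
        raise ValueError("light must appear at distinct positions")
    return baseline_m * focal_px / disparity_px

# Hypothetical numbers: sensors 0.3m apart, 800-pixel focal length,
# 24 pixels of disparity puts the tail-light 10m away.
print(distance_to_light(0.3, 800, 24))  # 10.0
```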
Sadly, there’s one big catch. Just watching regular headlights is apparently too imprecise, so Intel’s system requires them to pulse on and off at a rather alarming 20MHz. The effect is invisible to the human eye – the flicker would need to be hundreds of thousands of times slower before you could see it – so it doesn’t directly affect safety. But it does mean the system is effectively useless until everybody in the world fits a 20MHz modulator to their headlights. That’s a pretty tough sell, especially for a benefit you don’t experience yourself (modulating your own headlights only helps other drivers’ systems detect you, not vice versa).
It’s also worth noting that this isn’t actually a problem that many people would admit to. Reverse parking, yes, that’s an area where some electronic guidance can be very helpful. But when you’re out on the open road, one would hope you can detect the cars around you without the aid of a high-tech sensor. Still, it’s a fun demo.
6. Energy-Efficient, Scalable I/O
It’s a bit of a mouthful, but EESIO is another simple idea. At present, the high-speed buses that ferry information around inside a PC can consume up to 10W before the system’s done a single calculation. For a low-power PC or notebook, that’s not small change; and as transport speeds ramp up, I/O power demands will continue to rise while other components become more energy-efficient. The likes of QPI and PCI Express could thus end up being among the most power-hungry parts of a system.
What’s the solution? Well, on Tuesday Mario Paniccia suggested Light Peak could be used as an internal high-speed bus – and the idea does have some merit. But converting every bit into laser light and back again is hardly an energy-efficient approach.
Enter EESIO, a technology which does away with all such back-and-forthing. It appears as an unobtrusive grid of contacts on the outside of a chip package, which can be connected, via a ribbon cable, to another EESIO-compatible chip. The two units can then communicate directly, without having to involve the motherboard at all. It’s a seamless way to link a CPU to a GPU, to a bank of DIMMs or even to another CPU in a multiprocessor system.
In truth, EESIO isn’t quite as straightforward as it sounds. You can’t simply hard-wire chips together, not least because they probably won’t be running at the same speed. The chips therefore require integrated EESIO controllers, and that incurs a certain cost in terms of complexity and power consumption. Happily, Intel engineers claim a 10Gb/sec EESIO link still requires only 10% of the power used by current internal buses.
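As a back-of-the-envelope check on those figures: a bus burning 10W while moving 10Gb/sec costs a nanojoule per bit, so EESIO’s claimed 10% figure works out to around 0.1nJ (100pJ) per bit. A quick calculation, using the article’s numbers:

```python
# Energy cost per bit transferred, from the quoted power and data-rate figures.
def energy_per_bit_pj(power_w, rate_gbps):
    """Convert a link's power draw and throughput into picojoules per bit."""
    return power_w / (rate_gbps * 1e9) * 1e12  # pJ/bit

print(energy_per_bit_pj(10.0, 10.0))  # ~1000 pJ/bit: conventional 10W bus
print(energy_per_bit_pj(1.0, 10.0))   # ~100 pJ/bit: EESIO at 10% of the power
```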
Since EESIO requires support inside the CPU package, it will probably take a degree of investment to get it off the ground. Sooner or later, though, some sort of high-speed, low-power bus is going to be needed, and once EESIO is in place the overheads are attractively low.
5. Simple Energy Sensing
I don’t know about you, but every time I get an electricity bill I take a stiff drink, then solemnly determine that I must find a way to reduce my power consumption. Then I take another drink and somehow lose interest in the idea. The fact is, getting to grips with your energy usage is a difficult, boring project.
That’s where Simple Energy Sensing comes in. Simply plug the device into a socket and it’ll keep track of how much energy your various appliances are consuming, along with a running tally of your electricity bill.
Now hold on, you’re probably saying. That’s not innovative at all. Indeed, it’s not. What is innovative, and terrifically clever, is that it can identify individual appliances by their electrical signatures. For example, turning on a lightbulb causes a distinctive fluctuation on the power line, which the system can identify as a lightbulb-type pattern. Turning on a television produces a much more complex pattern, as its various components kick in at slightly different rates, which again can be recognised. The system can thus keep track not only of your total consumption, but of exactly which appliances are contributing to it at which times.
I must confess, at first I was sceptical as to whether this system could really distinguish between, say, a toaster and a hair-dryer. But the sampling resolution is extremely high, enabling it to catch tiny fluctuations lasting for a thousandth of a second or less, and the developers seem certain that this is more than sufficient to distinguish between the range of appliances in an average home.
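The matching step can be pictured as a classification problem: each appliance’s switch-on transient is reduced to a feature vector, and an observed transient is assigned to the nearest known signature. This is only my toy sketch of the idea – Intel’s actual features and classifier weren’t disclosed, and the numbers below are invented:

```python
# Toy nearest-neighbour matcher for appliance "signatures" (hypothetical
# feature vectors: peak inrush current in amps, settle time in milliseconds).
import math

SIGNATURES = {
    "lightbulb": (0.8, 1.0),
    "television": (1.5, 40.0),
    "toaster": (6.0, 2.0),
    "hair-dryer": (7.5, 15.0),
}

def identify(transient):
    """Match an observed transient to the closest known signature."""
    return min(SIGNATURES, key=lambda name: math.dist(SIGNATURES[name], transient))

print(identify((6.2, 2.5)))  # toaster
```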
Perhaps the cleverest bit is the interface. Yes, you can monitor your usage on a computer, as above, and see which appliances ought to be unplugged, replaced or used only in the dead of night when electricity is cheaper.
But Intel realises that only geeks will do that on a regular basis; so they’ve also put together a stylish tablet-type console that would look at home in any kitchen or hallway. Suddenly the idea goes from a nerdy proof of concept to an attractive lifestyle upgrade that could quickly pay for itself.
4. Oasis: Smart Computing on Everyday Surfaces
On the podcast, we recently commented that the idea of having a touchscreen computer in the kitchen is a nice one… but that it would immediately get covered in flour and grease. Intel’s Oasis project uses a projector and a 3D camera to turn your work-surface into a large, virtually indestructible tabletop touchscreen.
It’s a 3D camera because that allows Oasis to tell when your fingertip touches an icon, without getting confused when you simply move your hand above the surface. The shadows cast by your arms as you use the “screen” can be a little intrusive, but you can’t have everything.
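The touch-versus-hover distinction presumably comes down to comparing depth readings against the known depth of the surface – my assumption of the principle, since Oasis’s actual pipeline wasn’t described. A minimal sketch:

```python
# A fingertip counts as "touching" only when its depth reading is within a
# small tolerance of the work-surface's depth at that point; a hovering hand
# reads as noticeably nearer to the camera. (Hypothetical numbers throughout.)
def is_touch(finger_depth_mm, surface_depth_mm, tolerance_mm=10):
    return abs(surface_depth_mm - finger_depth_mm) <= tolerance_mm

print(is_touch(992, 1000))  # True:  fingertip on the counter
print(is_touch(850, 1000))  # False: hand hovering 15cm above it
```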
And it gives Oasis an impressive ability to recognise not only fingertips but any sort of physical object. When Intel’s Beverley Harrison (the mastermind behind the system) placed a green pepper on the work-surface, a context menu automatically appeared next to it, enabling her to view recipes involving peppers or add peppers to her shopping list. Adding a piece of steak to the work-surface brought up a recipe for steak with green peppers, while setting down a tub of ice cream caused an automatic countdown to pop up, warning us against leaving it out of the freezer for too long.
These, of course, are just demonstrations. The system has huge potential beyond the kitchen, and to be honest the next challenge is probably working out what to make of it. The ability to work with physical objects is cute, but it doesn’t seem to open many doors: you can’t back up a pepper for later, or email some steak to a friend. But even if that part of the project is a dead-end, the combination of a 3D camera and a projected display could make Oasis an affordable and extremely robust alternative to large-scale touch-screen displays.
3. Wireless Energy Resonant Link
Call me a nerd, but the idea of domestic wireless power gets me excited – I love the idea that my phone could be charging whenever I’m at home, even while it’s in my pocket. Without power cables trailing everywhere my home would be a lot tidier, and it would be cleaner too as I wouldn’t have to remember to plug the Roomba in.
Intel’s latest breakthrough doesn’t make all of that a reality, but it’s a step in the right direction. Currently, near-field wireless power systems (such as you’d use in the home) typically work by generating a magnetic field which induces a current in a remote receptor. The problem is that the receptor has to be directly in front of the induction coil to get the benefit.
Intel’s new system, demonstrated to me by Josh Erickson, is able to sweep the “focus” of the induction coil across a wide area – without physically moving the coil – and automatically lock on to locations at which the energy is absorbed, indicating the presence of a receptor. This expands the usable scope to almost 180°, effectively turning wireless power from a directional technology into an ambient one – though if two devices are discovered at widely different locations the system can’t power them simultaneously but must pan between them.
Erickson explained that the system was currently able to deliver around two watts over a distance of around four feet, which ought to be just about enough power to charge a mobile phone. For the time being, though, the size of the coils is a stumbling block: the coils in the demonstration each had a diameter of around nine inches, but if you were to shrink the receptor down to the size of a pocket device, it seems the induction coil would need to be several feet across.
Full disclosure: I’m not an expert in this field, so forgive me if I’ve got the terminology slightly wrong. But the basics of what Intel has achieved are easy to understand, and it’s clearly a very promising step.
2. Resilient Computing
Most processors are capable of running above their stock speeds. A 2.2GHz processor might in fact be able to run at 3GHz, but it’s deliberately throttled back to provide what Intel calls a “guardband” – a generous degree of tolerance that guarantees error-free performance even at high temperatures and heavy load.
The Resilient Computing team has no truck with guardbands. Its project statement declares that running CPUs at such cautious speeds “leaves performance and power on the table.” The researchers run their chips at the very limits of their abilities, achieving an advertised 40% improvement in performance from the same execution cores.
Surely, you would think, this leads to horrifically unstable systems? Well, it probably would if these were ordinary desktop processors. But the team has modified them to detect when the execution pipeline has been unable to keep up with the overclocked core, and to cleanly resume processing at the next clock cycle. This eliminates the most common cause of overclocking-related failure, at the cost of a few wasted ticks.
In fact, “a few” is an understatement: a demonstrator showed me that a “1GHz” chip running at 1.4GHz was in fact losing more than three million clock cycles per second (it’s the meter at the top of the pile in the picture) to pipeline misses. But that still translates to an overall performance benefit of 39.7% with no loss of stability. Going the other way, it’s also possible to cut the power going into the CPU by as much as 20%, and use the same resilient logic to keep the system stable.
The thing I love about the resilient approach is that it simply makes more effective use of a capability that’s already there – much like the Turbo Boost technology in Core i5 and Core i7 processors. Since it gives such a large benefit, at such a low cost, I wouldn’t be surprised to see it appearing in real products sooner rather than later.
1. Single-chip Cloud Computing
In truth we’ve seen this project before, but it’s still my favourite of Intel’s current research projects. Remember how Larrabee was supposed to combine 32 x86 cores into one all-powerful parallel computing card? Well this project – informally referred to as Rock Creek – has 48 cores on a regular CPU die. And, unlike Larrabee – gosh, I seem to be saying that a lot lately – it actually works.
The difference in approach is simple. “The idea with Larrabee,” explained Intel’s Jason Howard, “is that all those cores were supposed to be fully cache coherent. And we said, that’s a stupid idea, because making that happen is almost harder than doing the actual computations.”
“So for this chip we manage cache coherency in software. Yes, there is an overhead, but it’s just a lot easier to do.”
In reality, the overhead seems very small: Mr Howard demonstrated multi-threaded benchmark scores scaling almost linearly as he permitted them to run on more and more cores.
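What “managing coherency in software” means in practice wasn’t detailed, but the general idea is that cores share data through explicit messages rather than relying on hardware to keep every private cache in sync. A cartoon sketch of the model, with all names and structure my own invention:

```python
# Toy model of software-managed coherency: each core has a private cache
# that no hardware keeps consistent; data moves between cores via explicit
# messages, and the receiver updates its own copy itself.
class Core:
    def __init__(self):
        self.cache = {}  # private cache; hardware never synchronises it
        self.inbox = []

    def send(self, other, key, value):
        other.inbox.append((key, value))

    def receive(self):
        for key, value in self.inbox:
            self.cache[key] = value  # software explicitly refreshes the copy
        self.inbox.clear()

a, b = Core(), Core()
a.cache["x"] = 42
a.send(b, "x", a.cache["x"])  # share via a message, not via coherent memory
b.receive()
print(b.cache["x"])  # 42
```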
Surprisingly, he revealed that the execution cores are based on the Pentium design – not the Pentium 4, but the “classic” Pentium, launched in 1993, shrunk down from the original 0.8µm process to 45nm. (Foolishly, I neglected to ask whether Rock Creek therefore suffers from the notorious FDIV bug.) The choice, he said, was simply because the Pentium core can run more or less all modern code, while remaining compact enough to etch 48 times onto a single die.
Does the whole thing therefore run at 60MHz, I asked. Apparently clock speeds haven’t been decided – always a hazard with prototype hardware. Mr Howard did reveal, though, that they’ve had chips working in the laboratories at speeds from 125MHz all the way up to 1.3GHz.
You’d imagine that running 48 cores at 1.3GHz must eat up a lot of power, but with all 48 cores powered up but idle the processor draws around 75W, and even at full tilt it consumes only around 125W.
“And we can shut down cores in blocks of four when they’re not needed,” added Howard. “That’s done in software too – the whole thing is designed to be managed in software.”
It’s fair to ask what practical use there is for Rock Creek. After all, most desktop applications benefit more from single-core speed than multi-core parallelism. Howard himself didn’t suggest a killer application for it, though the official project title obviously hints at the idea of offloading tasks to a “cloud” of local CPU cores.
But with its native x86 support, one possible role for Rock Creek is to provide an accessible alternative to stream processing, as popularised by Nvidia’s CUDA – and Intel has certainly aimed it at the same markets.
“We’ve already given a hundred of these to researchers and academics,” Howard revealed. “You know, just so they can start considering how they might program for it.”