Intel’s RealSense camera: seeing the world like a human

3D photography is coming to tablets, smartphones and PCs – and this time, it actually works.

Intel this week revealed its RealSense camera, which uses multiple sensors to add depth to images, allowing a host of applications, from adjusting the focal point of an image to gesture recognition and augmented reality.

The system will first be available in the Dell Venue 8 7000 series tablet, arriving in more devices early next year, said Dr Achin Bhowmik, the CTO of Intel’s perceptual computing division.

“The focus for this programme has been adding natural senses, almost human-like sensing,” he told PC Pro at the Intel Developer Forum. “It’s pretty much the ability to see and understand the world around us.

“If you look at every device today, it comes with a 2D camera… all this is good for is snapping pictures or doing a video call,” he added. “But there’s so much more that we do with our 3D sensor – the eyes do much more than snap a 2D picture.”

The RealSense camera has a CMOS sensor as well as an infrared one, plus a MEMS (micro-electro-mechanical systems) device that projects an invisible pattern of light across a scene to help measure depth. The system also includes a new chip from Intel – “it’s not an Atom or a Core processor, it’s a very specific processor we’ve developed for this product,” he said.
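Intel hasn’t published the details of its depth pipeline, but the underlying principle of a projector-plus-sensor system like this is triangulation: a feature in the projected pattern appears shifted (the “disparity”) in the infrared sensor’s view, and that shift, together with the focal length and the projector–sensor baseline, gives the distance. A minimal sketch, with purely illustrative numbers:

```python
# Sketch of depth-from-disparity triangulation, the principle behind
# structured-light depth cameras. All numbers here are illustrative,
# not RealSense specifications.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth in metres via triangulation: z = f * b / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# e.g. a 600 px focal length, a 5 cm baseline between projector and IR
# sensor, and a pattern feature shifted 15 px in the observed image:
z = depth_from_disparity(600, 0.05, 15)
print(round(z, 2))  # 2.0 (metres)
```

Note the inverse relationship: nearer objects produce larger disparities, which is one reason such systems resolve depth more finely at close range.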

RealSense Cameras

There are two versions of the hardware: a front-facing model and a rear-facing model. The first is designed for all-in-ones and laptops, to be used for gesture recognition or for video conferencing – having the depth information means you can drop out the background, so that callers see only you; rather handy for when you’re conducting a meeting at home rather than the office.

Dr Bhowmik showed this off by running in and out of the scene while his colleague was on a video conference; when he stepped up behind the caller, he was visible, but when he took a few steps back, the system edited him out.
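The demo relies on having a per-pixel depth value alongside the colour image: anything beyond a cutoff distance is simply masked out. This is not Intel’s actual API, just a toy sketch of depth-keyed background removal using NumPy:

```python
import numpy as np

# Toy sketch of depth-keyed background removal: keep only pixels closer
# than a cutoff distance. Hypothetical, not Intel's actual pipeline.

def remove_background(rgb, depth, max_depth_m=1.5):
    """Black out pixels whose depth exceeds max_depth_m."""
    mask = depth <= max_depth_m          # True where the subject is
    out = rgb.copy()
    out[~mask] = 0                       # zero out the far background
    return out

# Toy 2x2 frame: top row is the caller at 1 m, bottom row an
# interloper at 3 m, well beyond the cutoff.
rgb = np.full((2, 2, 3), 200, dtype=np.uint8)
depth = np.array([[1.0, 1.0],
                  [3.0, 3.0]])
result = remove_background(rgb, depth)
print(result[1, 0])  # [0 0 0] -- far pixels dropped
```

With a fixed depth cutoff, someone stepping close behind the caller lands inside the mask and becomes visible, while stepping back past the threshold edits them out – matching the behaviour of the demo.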

The rear-facing version is what Dr Bhowmik calls “world facing”. It’s used more like a standard camera: to take photos whose focal point can be edited after the fact, to make real-world measurements, or for augmented reality. It can also capture 3D images, so could be used to scan objects for 3D printing.
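Measuring with a depth camera works by back-projecting two chosen pixels into 3D space using the camera’s intrinsics, then taking the straight-line distance between them. A hypothetical sketch (the intrinsic values are made up for illustration and are not RealSense parameters):

```python
import math

# Hypothetical sketch of measuring real-world distance between two
# points picked in a depth image. Intrinsics (fx, fy, cx, cy) are
# illustrative, not taken from any RealSense datasheet.

def deproject(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) with known depth into camera-space metres."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

def distance(p, q):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Two pixels on an object 2 m away, 100 px apart horizontally:
p = deproject(300, 240, 2.0, 600, 600, 320, 240)
q = deproject(400, 240, 2.0, 600, 600, 320, 240)
print(round(distance(p, q), 3))  # 0.333 (metres)
```

The same back-projection, applied to every pixel, yields the point cloud needed for capturing objects for 3D printing.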

Intel pointed out that the apps it’s created so far are only the beginning. “When you add human-like sensing, we can have the devices in real-time understand the visual world around it,” Dr Bhowmik said. “That allows us to build a lot of new applications that have never been done.”


Those who have used the Leap Motion controller may scoff at the idea of using cameras for gesture navigation through Windows, but Dr Bhowmik points out that that system looked only for your fingertips. His system analyses the whole hand and scene, giving better accuracy; he said it offers 99% accuracy at a distance of 4m, adding “as it gets closer, it gets better”.

Indeed, he admits 3D imaging isn’t new. “We haven’t invented 3D imaging systems, but what we’ve done is taken these big devices on the market – such as Microsoft Kinect – and we’ve miniaturised them to go into more devices,” he said.

Intel’s also cut the price, saying the hardware isn’t expected to add to the overall price of a PC or tablet. “It’s cheap,” he said. “Three years ago when I started on this, it was big cameras: big, bulky and costing more than $300. We wanted to make it orders of magnitude cheaper and much smaller.”

The extra processing will drain the battery more than a standard camera, but he said it should only be noticeable for continuous-use applications, such as video conferencing.

What’s next?

Tablets and laptops are only the beginning, he said. “What could benefit from human-like vision? Anything that requires autonomous vision. Think of robots, drones – we will have a lot of exciting things to show in the next few months.”

As an example, he described a project that used drones to monitor animals in the Amazon jungle. The drones’ vision doesn’t manage depth well, so “they’re bumping into them”.

“What if drones had human-like vision? That allows them to not bump into them,” Dr Bhowmik said. “We use our eyes to get around in the world, machines should also be able to do that.”
