Oculus Rift project maps your facial expressions onto an avatar

When Facebook bought Oculus in 2014, CEO Mark Zuckerberg made it clear that his interests lay in bringing the company’s virtual-reality headset, the Rift, to a mass social audience. “Immersive virtual and augmented reality will become a part of people’s everyday life,” Zuckerberg said, pointing to a future where VR and AR are firmly integrated into our social interactions.

Now comes an indication of what shape that future might take, with news that researchers at the University of Southern California and Facebook’s Oculus division have teamed up to develop a way to capture facial expressions with the VR headset.

The setup uses a 3D camera attached to the headset with a short boom. To monitor the parts of the face hidden from view, strain gauges are added to the foam padding on the inside of the headset. Put together, these two data sources form an accurate 3D representation of the user’s facial movements – something that can then be used to animate a virtual avatar.
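The fusion step described above can be sketched in code. This is a minimal illustration, not the researchers’ actual pipeline: the calibration mappings, blendshape names, and the simple per-region masking are all assumptions made for the example.

```python
import numpy as np

# Hypothetical blendshape channels driving the avatar's face.
BLENDSHAPES = ["brow_raise", "squint", "smile", "jaw_open"]

def gauges_to_weights(gauge_readings, calib):
    """Map strain-gauge signals from the headset's foam padding
    (the upper face, hidden from the camera) to blendshape weights
    via an illustrative pre-fitted linear calibration."""
    return np.clip(calib @ gauge_readings, 0.0, 1.0)

def camera_to_weights(depth_features, calib):
    """Map features from the boom-mounted 3D camera (the visible
    lower face) to blendshape weights the same way."""
    return np.clip(calib @ depth_features, 0.0, 1.0)

def fuse(upper_weights, lower_weights, upper_mask):
    """Per-channel fusion: take each weight from whichever sensor
    actually observes that region of the face."""
    return np.where(upper_mask, upper_weights, lower_weights)

# Example: brow and squint come from the gauges, smile and jaw
# from the camera.
upper_mask = np.array([True, True, False, False])
upper = np.array([0.8, 0.2, 0.0, 0.0])   # gauge-derived weights
lower = np.array([0.0, 0.0, 0.5, 0.9])   # camera-derived weights
fused = fuse(upper, lower, upper_mask)
print(dict(zip(BLENDSHAPES, fused)))
```

In a real system the two calibrations would be learned per user, and the fusion would likely blend overlapping regions rather than hard-switch between sensors, but the division of labour between gauges and camera is the key idea.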

A demonstration video shows off the technology with a number of human avatars, as well as a chimpanzee, suggesting that there’s scope to map your expressions onto both human and non-human faces. Being able to make avatars that respond to facial movements has obvious implications for gaming, but it’s the social potential of the technology that’s emphasised by assistant professor Hao Li, who led the project.

“To get a virtual social environment, you want to convey this behaviour to other people,” Li told MIT Technology Review, emphasising that “this is the first facial tracking that has been demonstrated through a head-mounted display”.


Facial expressions are a crucial part of body language, so communication between users in a virtual environment would hinge on how convincing these movements are. Less essential, but nevertheless important to creating a believable virtual model, are physical details such as hair; Li has also worked on a project that lets users create a 3D model of their hairstyle from a single photo.

The face-tracking project was set up purely as a research exercise, but Li suggests that a commercial product wouldn’t be too hard to develop. “If people think this is really central to important killer applications, you could get it into production relatively quickly,” he told MIT Technology Review.
