This strange AI “camouflage” can stop you being identified by facial detection software

Researchers have created a DeepDream-esque mask that can block neural network object recognition

Thomas McMullan
12 Dec 2017

The scope for facial detection to be used for large-scale surveillance is only just beginning to be realised.

In September this year, Moscow hooked up its CCTV network to a facial-recognition system. New York is planning to roll out facial detection across bridges and tunnels. London’s facial-recognition database has been criticised for going “far beyond custody purposes”, and China is taking all of this to a whole new level of total state surveillance.

But the invention of the ship also led to the invention of the pirate. A number of projects have launched showing how these detection systems can be spoofed, sidestepped or hijacked. The latest is a piece of computer vision research from the University of Illinois, using camouflage to fool neural-network object detection.

This method hinges on “adversarial examples”: inputs that have been slightly modified – often imperceptibly to a human – but are different enough for a machine-learning system to misclassify them. In their paper, Jiajun Lu, Hussein Sibai and Evan Fabry explain that, “if adversarial examples that could fool a detector exist, they could be used to (for example) maliciously create security hazards on roads populated with smart vehicles.”
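The core idea can be sketched in a few lines. The snippet below is a minimal, illustrative example of the fast gradient sign method (a standard way of building adversarial examples, not necessarily the exact technique in the paper), applied to a toy logistic classifier with made-up weights: a tiny, targeted nudge to the input – far smaller than the input values themselves – flips the model's prediction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear classifier; weights are illustrative, not from the paper
w = np.array([1.0, -1.0])
b = 0.0

def predict(x):
    return int(sigmoid(w @ x + b) > 0.5)

x = np.array([0.1, -0.1])  # clean input, classified as 1
y_true = 1

# FGSM: perturb the input in the direction that increases the loss
z = w @ x + b
grad_x = (sigmoid(z) - y_true) * w  # gradient of cross-entropy loss w.r.t. x
eps = 0.15                          # small perturbation budget
x_adv = x + eps * np.sign(grad_x)

print(predict(x))      # 1 – clean input classified correctly
print(predict(x_adv))  # 0 – perturbed input misclassified
```

Real attacks work the same way, just against image pixels and a deep network instead of a two-number input and a linear model; the perturbation stays small enough that a human barely notices it.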

To demonstrate this, the team created “three adversarial stop signs”, designed to block the sort of object detection used by autonomous cars. When these distorted images were printed and stuck onto actual stop signs, however, only the most extreme example evaded the object-detection system.

The researchers had better luck in spoofing facial detection, using a Google DeepDream-esque mask to distort a subject’s features. This digital attack involved overlaying the camouflage on a pre-existing piece of video, as the paper describes: “We apply our attacking method to a training set of videos to generate a cross view condition adversarial perturbation, and apply that perturbation on this test sequence to generate the attacked sequence.”

Because the camouflage involves training the attacking system on a specific video, it would presumably be used to doctor footage to make certain people undetectable – rather than to block detection of a person in real time. Others have been investigating the latter, however. Last year, researchers at Carnegie Mellon University managed to create facial-recognition-fooling frames for glasses, which look a bit like something Timmy Mallett would wear.

Those colourful glasses might be imperceptible to facial-detection surveillance, but they’re far from invisible to everybody else in the room.

Images: Jiajun Lu, Hussein Sibai and Evan Fabry