The AI that mistook a turtle for a gun

When is a turtle not a turtle? When it’s a gun. No, that’s not an awfully unfunny joke, it’s the depressing fact that Google’s image recognition AI can be fooled into believing that a 3D-printed turtle figurine is, in fact, a rifle.

This isn’t the first time something like this has happened. The 3D-printed turtle manages to confuse Google’s AI because it’s an “adversarial image”. These images are deliberately designed to trick image recognition software, using carefully crafted patterns that nudge an AI system into believing it’s seeing something else entirely.
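
For the curious, the simplest published recipe for building such a pattern is the Fast Gradient Sign Method (FGSM). The sketch below is purely illustrative – it isn’t Labsix’s code, and the model choice, function name and epsilon value are assumptions – but it shows the core trick: use the classifier’s own gradients to nudge every pixel towards a target label.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Any pretrained ImageNet classifier will do for illustration; Labsix
# targeted Google's Inception-V3, which torchvision also ships.
model = models.inception_v3(pretrained=True)
model.eval()

def targeted_fgsm(image, target_label, epsilon=0.007):
    """One step of the targeted Fast Gradient Sign Method (illustrative).

    image:        normalised (1, 3, 299, 299) input tensor
    target_label: ImageNet class index we want the model to output
                  (e.g. a rifle class instead of a turtle)
    epsilon:      perturbation size -- small enough to be near-invisible
    """
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)
    # The loss is low when the model predicts the *target* class...
    loss = F.cross_entropy(logits, torch.tensor([target_label]))
    loss.backward()
    # ...so we step against the gradient, making that class more likely
    # while changing each pixel by at most epsilon.
    return (image - epsilon * image.grad.sign()).detach()
```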

That may not sound alarming, but if AIs end up looking after the safety of humans, such an error could have catastrophic consequences. What if someone tricks an AI into believing the gun it’s seeing is actually nothing more than a turtle?

“In concrete terms, this means it’s likely possible that one could construct a yard sale sign which to human drivers appears entirely ordinary, but might appear to a self-driving car as a pedestrian which suddenly appears next to the street,” write Labsix, the team of students from MIT who published the research. “Adversarial examples are a practical concern that people must consider as neural networks become increasingly prevalent (and dangerous).”

Labsix’s research is the first demonstration of these adversarial images working in 3D form, with the AI fooled by the 3D print across multiple angles and lighting conditions. Usually an adversarial image can be defeated simply by rotating it until the AI recognises the object correctly, but because this attack holds up from multiple viewpoints, it poses a more serious problem for AI systems deployed in the real world.
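
The rough idea behind that robustness, which Labsix calls Expectation Over Transformation, is to optimise the perturbation over many simulated viewpoints at once rather than over a single photo. The sketch below is a loose, hypothetical illustration of one optimisation step – the model, the pool of view transformations and the step size are all assumptions – not the team’s actual pipeline.

```python
import torch
import torch.nn.functional as F

def eot_step(model, image, target_label, views, epsilon=0.001):
    """One gradient step of an Expectation-Over-Transformation attack.

    views: a list of differentiable callables, each simulating a change
           of viewpoint (rotation, lighting shift, camera noise, ...).
    """
    image = image.clone().detach().requires_grad_(True)
    loss = 0.0
    for view in views:
        # Average the targeted loss over all simulated viewpoints, so the
        # perturbation must fool the model from every angle, not just one.
        loss = loss + F.cross_entropy(model(view(image)),
                                      torch.tensor([target_label]))
    (loss / len(views)).backward()
    return (image - epsilon * image.grad.sign()).detach()
```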

However, it’s worth noting that Labsix’s hack isn’t as straightforward as it sounds. As The Verge notes, Labsix’s claim that it works from every angle isn’t quite correct: there are a handful of angles at which Google’s Inception-V3 image recognition AI isn’t fooled. Creating the adversarial 3D objects also required Labsix to probe how Google’s tool classifies the object in the first place, identifying weaknesses to exploit – a level of access most people creating these images wouldn’t have.

Despite my earlier warning that something could go seriously wrong if this became a more widespread problem, it’s actually unlikely to happen. Not only are researchers hard at work trying to eliminate these weaknesses, but it’s also unlikely anyone would release a product with serious safety implications into the wild without solving the problem first.

Still, it’s always amusing to see that AIs aren’t as smart as we all think they are.
