Meet Norman, the world’s first “psychopath” AI trained using only gruesome and violent Reddit images

In a fascinating study, scientists from the Massachusetts Institute of Technology (MIT) have created what is being dubbed the first “AI psychopath”, Norman.

Named after the central character in Hitchcock’s Psycho, Norman Bates, the AI was trained to perform image captioning, a routine deep learning method. However, there was a significant twist – Norman was trained using only the captions from gruesome images of death and violence from a notorious subreddit.

The result was that when Norman was asked to interpret Rorschach inkblots – the ambiguous images psychiatrists use to assess personality – it produced captions such as “man gets pulled into dough machine” and “man killed by speeding driver”, where a standard image-captioning AI saw a small bird and a close-up of a wedding cake.

It might seem obvious that Norman could only offer gruesome interpretations of images when that was all it had been taught, but that is precisely the study’s point: an algorithm’s behaviour is shaped by the data used to train it.

“So when people talk about AI algorithms being biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it,” explains the project’s site. “The same method can see very different things in an image, even sick things, if trained on the wrong (or, the right!) data set.”
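The mechanism the researchers describe can be illustrated with a deliberately simple sketch – this is not MIT’s actual captioning model, just a toy word-frequency “captioner” written for this article. Two copies of the identical code are trained on different caption sets, and the same ambiguous input yields entirely different output, because the only difference between the two “models” is the data:

```python
from collections import Counter

def train(captions):
    """A toy 'model': just word-frequency counts over the training captions."""
    words = Counter()
    for caption in captions:
        words.update(caption.lower().split())
    return words

def describe(model, candidates):
    """Pick the candidate caption whose words are most familiar to the model."""
    def score(candidate):
        return sum(model[word] for word in candidate.lower().split())
    return max(candidates, key=score)

# Identical training code, different data (captions invented for illustration).
benign_model = train([
    "a small bird on a branch",
    "a close up of a wedding cake",
])
grim_model = train([
    "man gets pulled into dough machine",
    "man killed by speeding driver",
])

# The same ambiguous "image" (a pair of candidate readings) goes to both models.
candidates = ["a small bird", "man pulled into machine"]
print(describe(benign_model, candidates))  # a small bird
print(describe(grim_model, candidates))    # man pulled into machine
```

The divergence comes entirely from the training captions – which is the project’s argument in miniature: same method, different data, very different view of the world.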

It’s a compelling idea: an algorithm is only as good as the people – and the data – that taught it, however subtle the flaws might appear. Google’s image-captioning software came under fire in 2015 when the company’s Photos app categorised computer programmer Jacky Alciné and his friend as “gorillas”. The company defended its algorithm, denying it was racist, and revealed that at one point it had a “problem with people (of all races) being tagged as dogs”.

Facebook’s AI has also met with its fair share of criticism after a former employee admitted its “Trending” section (which is supposed to directly reflect the topics popular on the site) was taught to suppress conservative content. As a result, the Trending section is being scrapped in favour of a more transparent method of news reporting on the social network.

You can see how the psychopath AI interpreted different inkblots on the project’s site, and there’s also an option to “help Norman fix himself” by filling in a survey that asks you to describe what you see in the images.

Image credit: norman-ai.mit.edu
