In news that rings bright red alarm bells for anyone who’s ever sat through an apocalyptic sci-fi movie, scientists in New Zealand recently announced that they are developing the world’s angriest artificial intelligence.
The team at The Touchpoint Group have – in what might prove to be the opening chapter of a dystopian novel – spent more than £230,000 on a project that will see two years’ worth of irate phone calls funnelled into a machine. The aim? To build an AI that can mimic the abuse spewed at call-centre workers and help data scientists work out the best possible responses.
“The end goal is to build an engine that can recommend solutions to companies – and we’re talking about the people at the front line here – how they can improve particular issues that customers are facing,” Frank van der Velden, chief executive of The Touchpoint Group, told The Australian. “This will be possible by enabling our AI engine to learn right across a whole range of interactions of what has and has not worked in past examples.”
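Touchpoint hasn’t published any technical detail, but learning “what has and has not worked in past examples” is, at heart, outcome-labelled text classification. Purely as an illustrative sketch – not the company’s actual system, with invented data and a scikit-learn pipeline chosen only for brevity – the core idea might look something like this:

```python
# Minimal sketch (not Touchpoint's actual engine): train a classifier on
# agent responses labelled by whether they defused the angry caller, then
# use it to rank candidate replies to a new complaint.
# All data below is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training examples: past agent responses and their outcomes
# (1 = customer calmed down / issue resolved, 0 = complaint escalated).
responses = [
    "I understand how frustrating that is, let me fix the billing error now",
    "That's not our department, you'll have to call back tomorrow",
    "I'm sorry for the delay, I've escalated your refund and waived the fee",
    "There's nothing I can do about the outage, it is what it is",
]
outcomes = [1, 0, 1, 0]

# TF-IDF features plus logistic regression: a deliberately simple stand-in
# for whatever model the real engine uses.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(responses, outcomes)

# Score candidate replies to a new irate caller by predicted chance of success.
candidates = [
    "Calm down, you're overreacting",
    "I'm sorry about the overcharge, I'll refund it right away",
]
for reply, p in zip(candidates, model.predict_proba(candidates)[:, 1]):
    print(f"{p:.2f}  {reply}")
```

Scale that idea up to two years of real call recordings and you get a rough picture of what “recommending solutions to the people at the front line” could mean in practice.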
Okay, so it’s less of a cold killing machine and more of a practice dummy. But the news nevertheless taps into fears that have been brewing in recent months. Stephen Hawking recently warned “full artificial intelligence could spell the end of the human race”, Elon Musk has described AI as humanity’s “biggest existential threat”, and Bill Gates has admitted he too is “concerned about super intelligence”.
If films like The Terminator and 2001: A Space Odyssey have taught us anything, it’s that you don’t need to worry about emotional AI. It’s the emotionless ones you’ve got to keep an eye on.
I’ll leave you with the words of Dr Stuart Armstrong, a research fellow at the Future of Humanity Institute at Oxford University, who explained in The Telegraph how an identifiable emotion such as anger would actually help humans know how to handle an advanced AI:
“Everything in our evolutionary background prepares us to deal with angry entities and knowing whether or not to trust them. If we get a robot that’s angry in the classically human sense, we know so much more about how to deal with it than a robot that does not exhibit anger of any sort but may have goals that are very dangerous. The dangerous ones are the ones that do not correspond to anything that we can classify on a human scale – the ones that are indifferent to some crucial aspect of the world.”
Whether or not AI proves to be a danger to the human race, one thing is for certain: an AI with anger issues is a lot more interesting to listen to than an unemotional robot.