Reality check: robot killing machines – will they rebel?

If, like us, you’ve been following robotics news closely over the past few months, you may have noticed a somewhat worrying trend. If you haven’t, here are some headlines:

Are you thinking what we’re thinking?

Over the past few months, scientists have been working on a robot that simulates anger; a robot that’s a demon with a sword; a robot that is super agile. Should we be worried – or is jovially predicting a robotic apocalypse for every technological advance a kind of journalistic tic?

There’s no straight answer to that, and opinion is divided even in the scientific community, but the threat certainly seems more real than it did a decade ago. Lest we forget, in the 1990s, this was the closest most of us came to a learning robot:

Nowadays, however, the majority of the western world keeps a computer in their pocket that could – with the right programming – outwit them at every opportunity.


The threat has been viewed as sufficiently significant for big names to sit up and take notice. Earlier this year, an open letter acknowledged the risks of machinery overtaking mankind, and called for safeguards to be put in place. “We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do” proved a solid enough (or vague enough, depending on your cynicism) sentiment that robotics professionals, MIT professors and experts from Microsoft and IBM were among the signatories.

Bill Gates, Elon Musk and Stephen Hawking have all expressed reservations about the development of AI – indeed, the latter two were also co-signatories of the open letter.

If you’re thinking this fear of robotic sentience has crept up on us, you’re not alone. Work on artificial intelligence has been underway since the 1950s, but serious concern about control slipping away was virtually non-existent in those early days, and has only gathered pace in recent months.

As Anders Sandberg, a futurology expert from the University of Oxford, explains in a Reddit Q&A, this was partly down to the kookiness of the whole notion at a time when robots were so unthreateningly primitive: “The more ‘weird’ a risk is, the more embarrassing it is to study it. But it’s only by looking at weird things that we can figure out whether it is indeed so low probability or incoherent that it is not worth looking more at, or a hidden real problem.” In other words, the fact that it’s a science-fiction trope doesn’t necessarily mean it should be dismissed out of hand.

This embarrassment is compounded by the slightly naive belief that if we were in a situation where killer robots were intent on taking us down, we’d just be able to switch them off again. As Professor Nick Bostrom, author of Superintelligence: Paths, Dangers, Strategies, says in his TED talk, that’s not necessarily the case:

Using an example similar to the paperclip maximiser, Bostrom imagines a scenario where a robot tasked with making humans happy deduces that the most efficient way to force the smiles it seeks is with electrodes. “You might say that if a computer starts sticking electrodes into people’s faces, we’d just shut it off,” Bostrom argued. “This is not necessarily so easy to do if we’ve grown dependent on the system, like, where is the off switch to the internet?”

“Why haven’t the chimpanzees flicked the off switch to humanity? Or the neanderthals? They certainly had reasons. The reason is that we are an intelligent adversary. We can anticipate threats and plan around them. But so could a super-intelligent agent and it would be much better at that than we are.”

While there are undoubtedly folk calling for caution, others dismiss the threat as imaginary or overblown – led, not surprisingly, by those who work in the industry – something that makes it tricky to prise the soothing reassurance of expertise apart from the red flag of vested interests.

Andrew Ng, an AI veteran who founded Google’s first Deep Learning team, is keen to highlight the frustrating conflation between intelligence and sentience: “Computers are becoming more intelligent and that’s useful in self-driving cars, speech-recognition systems or search engines. That’s intelligence,” Ng explained to Fusion. “But sentience and consciousness are not something that most of the people I talk to think that we’re on the path to.”

“I don’t work on preventing AI from turning evil for the same reason that I don’t work on combating overpopulation on the planet Mars. Hundreds of years from now, when we’ve hopefully colonised Mars, overpopulation might be a serious problem and we’ll have to deal with it.”

Alan Winfield, a professor of electronic engineering at UWE Bristol, writing in The Guardian, also sees any threat as a long shot. “If we succeed in building human equivalent AI and if that AI acquires a full understanding of how it works, and if it then succeeds in improving itself to produce super-intelligent AI, and if that super-AI – accidentally or maliciously – starts to consume resources, and if we fail to pull the plug, then, yes, we may well have a problem. The risk, while not impossible, is improbable.”


Demis Hassabis, co-founder of the Google-owned DeepMind AI research company, giving a rare interview in the most recent issue of Wired, dismisses concerns about AI’s future as “unsubstantiated hype from people who are smart in their own domains, but don’t work in AI.”

“These are people who aren’t actually building something, so they’re talking from philosophical and science-fiction worries, with almost no knowledge about what these [technologies] can do.”

“Of course we can stop it – we’re designing these things… I wouldn’t purport to lecture Stephen Hawking on black holes: I’ve watched Interstellar but I don’t know about black-body radiation to the extent that I should be pontificating about it to the press.”

If these experts seem a touch on the cranky side (and each interview write-up is punctuated with phrases like “he sighs” and “wearily”), it’s understandable. They personally aren’t working on anything they see as risky – but can they speak for everyone else? And might there be a touch of hubris here? It’s difficult to be objective when it’s your life’s work that’s being criticised.

Arguably, that’s the biggest risk at play here. As Sandberg concluded in his Reddit chat, an element of overconfidence could be the greatest enemy: “Most robots, after all, have a hard time not falling over… When you spend your days trying to make it navigate from one end of the room to the next and it fails, then it’s hard to imagine a robot uprising.”

“In synthetic biology a lot of people seem to think that they’re going to change the world, but many of the same people also think their organisms are perfectly safe – based on their experience with current, non-world-changing organisms. This is, of course, a mistake: anything that can change the world can be [a] risk.”

So do we trust the experts who work with AI every day, or the less specialised intellects shuffling uneasily at the pace of development? After refusing to engage in one journalistic tic earlier, I’m going to fall back on another tried-and-tested one: it’s simply too early to tell, but the enhanced scrutiny should mean that if robots do end up killing us, we might at least see it coming.

Images: Campaign to Stop Killer Robots and Penyri Herrera used under Creative Commons 