Why Stephen Hawking is wrong about killer robots

When geniuses such as Elon Musk and Stephen Hawking call for a ban on something, it’s hard to disagree. When they’re demanding a ban on killer robots, it’s even harder to question their judgement. Yet, here I am, about to say they and a thousand of their clever colleagues are wrong. 

To be clear, I don’t disagree with the gist of their warning. The collection of academics and business leaders argued in an open letter that autonomous weapons, those that “select and engage targets without human intervention”, are feasible within years, not decades. That’s a concern, they argue, because while such weapons might reduce the number of human soldiers risking their lives in wars, they also lower the “threshold for going to battle”.

“The key question for humanity today is whether to start a global AI arms race or to prevent it from starting,” the letter reads. “If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce.”

Terrifying indeed, and while it’s hard to argue against a ban, a ban is ultimately futile: governments will break it, just as they’ve broken restrictions on every other harmful facet of technology. There’s simply no way the world’s militaries will stop developing automated weaponry because of an open letter from Hawking and his friends, or even because of international law.

Take snooping, for example. For years, decades even, security researchers, tech experts and journalists warned that mass internet communications would inevitably be used to snoop on citizens. Those who published stories on snooping programmes such as Echelon were dismissed as cranks. Now the Snowden revelations prove their “paranoia” was nothing of the sort.

In the end, the letter does little more than assuage researchers’ own guilt that the technology they’re building will be used for immoral ends – when it inevitably happens, they can at least say they tried to warn us. As the letter notes: “Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits.”

So what does this much stupider person expect these clever folks to do? They’ve taken the first step by identifying the problem and talking about it, rather than leaving the debate in closed rooms filled with military strategists. But now is the time to go further and take action, rather than wait until AI fighters are commonplace.

Again, look to surveillance: after the Snowden revelations, researchers doubled down on efforts to build anti-surveillance tools. There are now smartphones designed to block snoops, such as the Blackphone; more websites have moved to full encryption; and even the tech behind Tor is being improved, meaning it could soon become the de facto way to surf the web free of surveillance. If only we’d seen the value of such work sooner.

So, to the signatories of that letter: you’re smart enough to see the problem before it’s stomping around the globe slaughtering people, but do more than write letters. Come up with systems that guard against killer robots, that prevent automated drones from targeting living beings, or that let the rest of us protect ourselves from such terrors. My tiny brain can’t even imagine what that involves, but I desperately want such protections to exist – because I’m pretty sure it’s going to take more than a strongly worded letter to stop AI weaponry from killing people.

Images: Campaign to Stop Killer Robots and Lwp Kommunkacio used under Creative Commons