Will killer robots make us safer?

In 1942, Isaac Asimov published the short story “Runaround”, containing the now-iconic Three Laws of Robotics. Coming in at number one is: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”

Fast-forward to August 2017, when more than 100 AI and robotics experts, led by SpaceX founder Elon Musk, signed an open letter to the United Nations Convention on Certain Conventional Weapons raising concerns about weaponised AI. “Lethal autonomous weapons threaten to become the third revolution in warfare,” they argued. “Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend.”

These concerns may seem like they belong to the realm of science fiction, but lethal autonomous weapon systems (LAWS) – dubbed “killer robots” – are already available and in use today. Along the demilitarised zone between North and South Korea, for example, the Samsung SGR-A1 sentry gun scans the horizon for trespassers. In Russia, unmanned and autonomous weapons are being developed, such as the seven-tonne Soratnik robot tank.

(Above: Samsung’s SGR-A1 sentry gun. Source: Wikipedia/MarkBlackUltor)

At the moment, autonomous systems are both rare and expensive. However, as manufacturing techniques improve and production costs go down, autonomous weaponry could become a larger part of how we handle security. There are those who argue that this could actually save lives.

“Losing a drone is better than losing a person”

“We can reduce casualties in conflicts,” explains Dr Peter J Bentley, an honorary professor at University College London. “Losing a drone is better than losing a person.” Simply put, no matter the cost of an autonomous weapons system, it will never match the cost of a human life. Furthermore, a human soldier who is killed or captured carries political ramifications for their government, whereas a lost robot carries none, and it certainly won’t buckle under torture.

Autonomous systems also have the potential to be far more precise and controlled than the bombing runs of previous conflicts, which flattened entire towns. “Smarter weapons can provide far more accurate and targeted responses, instead of the indiscriminate bombing of historical conflicts,” says Bentley.

But not everyone agrees that these benefits outweigh the problems.

Precisely because losing a drone is better than losing a person, autonomous weapons could lower the threshold for war. “It’s much easier to send an autonomous weapon system into a hostile area to carry out a mission, rather than to send real people, who, if they get captured or killed, may have political implications for the government that is instigating the action,” says Professor J Mark Bishop, director of the Tungsten Centre for Intelligent Data Analytics at Goldsmiths. The fact that a robot is more disposable may mean countries are more willing to risk the use of force.

There’s also the issue that LAWS may be vulnerable to hacking by malicious parties. There have been instances, for example, where drones have been spoofed (fed false information) into landing; in 2011, it was claimed that an American surveillance drone was forced to land in Iran in just this way. “In August 2017, America banned the use of DJI drones because there was a security risk,” notes Bishop, referring to the US Army’s ban on drones made by the Chinese manufacturer over unspecified cybersecurity risks.

Complex war games

While AI-based systems may surpass human accuracy in some instances, that doesn’t make them consistently precise. There have been remarkable developments in artificial intelligence, but generally in domains that are inherently ordered and structured. Unfortunately, life is not like that. Much has been made of AI’s recent successes at Go, for example, where it beat the reigning human champion in a series of matches. AI did not fare as well, however, in the complex strategy game StarCraft.

(Above: StarCraft 2. Source: Blizzard)

AIs are most capable at structured games such as Go, where there is only a limited set of moves that can be made. In situations with more variables, they lose their effectiveness, because they lack contextual decision-making. Take, for example, the “suicidal” robot security guard that drowned itself in an ornamental pond, apparently because an algorithm failed to detect an uneven surface, or the robot security guard that, a year earlier, managed to run over a toddler. The real world is chaotic, and that is something AI often struggles to respond to.

“A particular concern I have is the unanticipated consequences of relatively stupid AI”

“A particular concern I have is the unanticipated consequences of relatively stupid AI,” says Bishop. “In 2011, two AI bots got into a pricing war over a book, The Making of a Fly by Peter Lawrence, and they bid this book to over $23,000,000. That would never happen with a human in the loop.” While automated systems can recognise the number, they cannot appreciate the value of the sum. Now imagine if this were two automated weapon systems competing with each other.
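
To see how quickly two “relatively stupid” automated systems can spiral when each reacts only to the other, here is a minimal Python sketch of that kind of pricing feedback loop. The repricing rules and multipliers below are assumptions for illustration, not details from the article: one bot slightly undercuts its rival, the other adds a fixed markup, and because the product of the two multipliers is greater than one, the price climbs exponentially until a human notices.

```python
# Illustrative sketch of an algorithmic pricing feedback loop.
# Two hypothetical repricing bots each set their price as a fixed multiple
# of the other's: bot A undercuts slightly, bot B adds a markup. Because
# undercut * markup > 1, the price grows exponentially with each round.

def run_pricing_war(start_price: float, undercut: float, markup: float, rounds: int) -> float:
    price_a, price_b = start_price, start_price
    for day in range(1, rounds + 1):
        price_a = undercut * price_b   # bot A: price just below the competitor
        price_b = markup * price_a     # bot B: price at a premium over the competitor
        print(f"day {day}: A = ${price_a:,.2f}, B = ${price_b:,.2f}")
    return price_b

if __name__ == "__main__":
    # Multipliers assumed for illustration; after a couple of months the
    # asking price has passed the tens of millions, with no human in the loop.
    run_pricing_war(start_price=35.0, undercut=0.9983, markup=1.2706, rounds=60)
```

Neither bot is doing anything wrong by its own rules; the runaway behaviour only emerges from their interaction, which is exactly the dynamic Bishop asks us to imagine between two automated weapon systems.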

We also have to consider what happens when autonomous weapon technology proliferates into wider society. There have already been instances of commercial drones being used to smuggle contraband into prisons, and of a commercial drone being fitted with a loaded firearm that could be fired remotely. The danger is that, if LAWS become more widespread, they could be co-opted by criminals or terrorist organisations.

“Under human control”

Whilst the British Army is currently exploring the possibilities of autonomous systems, such as using driverless cars in supply convoys, the Ministry of Defence has stated that it has no intention of developing or acquiring fully autonomous weapons systems.

There are, in the MOD’s words, a “limited number” of defensive systems that can operate automatically, but these always have a human involved when setting operational parameters. “It’s right that our weapons are operated by real people, capable of making complex decisions and even as they become increasingly high-tech, they will always be under human control,” said an MOD spokesperson.

Nonetheless, there is a real danger that we could be heading towards an AI arms race. In 2015, a number of leading AI experts presented an open letter at the International Joint Conference on Artificial Intelligence, warning that wide-scale, AI-based warfare is a very real prospect. “Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control,” they concluded.

Despite the improved accuracy and response times of autonomous weapon systems, they lack the contextual decision-making and deductive reasoning abilities of their human counterparts. They could reduce the number of casualties, but there’s also the chance they could make war more dangerous than it is now, especially if the technology were to fall into the wrong hands.
