Stephen Hawking and Elon Musk call for a ban to prevent “virtually inevitable” robot arms race
Stephen Hawking and Elon Musk have had their issues with artificial intelligence in the past, and now they’re raising their heads above the parapet once again. Their names appear among a list of 1,000 academics, researchers and public figures who have signed an open letter calling for a ban on “offensive autonomous weapons beyond meaningful human control.”
In the letter, the signatories argue that without international intervention, “autonomous weapons will become the Kalashnikovs of tomorrow.”
“If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable,” the letter argues. “Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc.”
The letter claims that deploying AI in this way is feasible, practically if not legally, within years rather than decades.
This chilling vision of an unmoderated future is shared by Apple co-founder Steve Wozniak, philosopher Noam Chomsky and Stephen Goose, the director of the arms division of Human Rights Watch.
The United Nations debated a global ban on lethal autonomous weapons just this year. The UK was opposed to any restrictions, arguing that “at present, we do not see the need for a prohibition on the use of LAWS [lethal autonomous weapons systems], as international humanitarian law already provides sufficient regulation for this area.” The signatories of the letter will hope that their intervention makes this kind of complacent response harder to justify in future.
Those who have put their names to the letter are keen to point out that they’re not opposed to AI development in more general terms, arguing that they “believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so.” They don’t rule it out in wartime technology either, stating that “there are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.”
This stance makes it significantly more difficult for those with vested interests to dismiss the letter as the work of Luddite cranks or peacenik hippies.