Elon Musk thinks AI is more dangerous than North Korea

Elon Musk’s latest warning on the dangers of AI comes in a new statement on Twitter comparing it to the communist nation of North Korea.

With tensions rising between the US and North Korea over potential nuclear war, Musk chimed in to remind us all that AI is really the bigger risk. His post came shortly after a bot developed by OpenAI, the AI research group he backs, managed to beat pro Dota 2 players ahead of The International tournament.

Musk’s concerns are valid, and his message rings true – in the grand scheme of things, AI is genuinely more dangerous than North Korea. However, if North Korea does decide to bomb the US in the next week or so, I don’t think AI is really something to be concerned about.

This is far from the first time Musk has been vocal about his concerns surrounding AI. He’s already teamed up with Stephen Hawking to advocate for sensible development and safeguarding around robotics and AI, and, speaking at a meeting of the National Governors Association last month, Musk again explained why he believes people should be worried by the march of AI.

“I have access to the very most cutting-edge AI, and I think people should be really concerned about it,” he explained before describing it as “the biggest risk we face as a civilisation.”

His solution? Regulation, especially proactive regulation. According to Musk, governments aren’t doing enough to plan for the future and create regulations for those currently developing technologies.

“AI is a rare case where I think we need to be proactive in regulation instead of reactive. I think by the time we are reactive in AI regulation, it’s too late… AI is a fundamental risk to the existence of human civilisation in a way that car accidents, aeroplane crashes, faulty drugs or bad food were not.”

“AI is a fundamental risk to the existence of human civilisation”

His specific fears around AI aren’t on the Terminator and Skynet scale of sci-fi nonsense; they’re actually remarkably grounded and – for anyone keeping abreast of the US’s political situation – quite close to home.

“[AI] could start a war by [creating] fake news and spoofing email accounts and fake press releases and just by manipulating information.” If an AI has any understanding of anger or malice, it could easily develop those tendencies and use them against us to bring humanity down. It’s not too far-fetched either: a team in New Zealand is already working on developing the world’s angriest AI.

An AI would also be a formidable adversary, easily able to predict and outsmart our own behavioural patterns – thus thwarting any attempt to stop it. As Musk argues, without proper, responsible regulation in place, a rogue AI could be the end of us all.

Seeing as Musk has a somewhat libertarian view of the world, it’s surprising to see just how strongly he believes the government should be intervening in AI development. He wants regulators to have the power to halt AI developments until they’ve been checked for safety – but he understands this has to apply to everyone involved, or shareholders in these businesses will get tetchy.

“You kind of need the regulators to do that for all the teams in the game. Otherwise the shareholders will be saying ‘why aren’t you developing AI faster? Because your competitor is.'”

Musk doesn’t believe that researchers are setting out to build a malicious AI hell-bent on taking out mankind. But it seems he thinks that we, as humans, just don’t truly understand what it is we’re in the process of creating.

However, there is one issue with Musk’s warning – he’s only talking to the US. AI development is happening all over the world, with Google’s own DeepMind project actually taking place in the UK. If only US developers are held accountable, what does that mean for the rest of the world?

You can see the full interview with Musk in the video below – the AI chat kicks off around 48 minutes in.
