Growth of AI could swell security threats, report warns

The unprecedented development of artificial intelligence could expand existing security threats and introduce entirely new dangers across cybercrime, physical attacks and political disruption within the next five years. That’s according to a wide-ranging report from a collection of 26 experts from around the world, who warn that the unchecked growth of AI could change the security landscape.


Written by authors from academia, industry and the charity sector, the Malicious AI report calls for a culture of responsibility and transparency among AI researchers, and even raises the prospect of increased policy intervention to ensure AI and machine learning are developed for the public good. It warns that without these measures, AI is likely to lead to a changed equilibrium between malicious actors and security forces.

“As AI capabilities become more powerful and widespread, we expect the growing use of AI systems to lead to the expansion of existing threats, the introduction of new threats and a change to the typical character of threats,” the report outlines.

In terms of digital security, the possibility of using AI to automate tasks is expected to increase the threat associated with labour-intensive cyberattacks such as spear phishing. The experts also predict the growth of attacks that exploit “human vulnerabilities”, such as using AI impersonation tools to synthesise a person’s speech.


One of the scenarios explored in the report relating to physical threats is the ability of terrorists to repurpose commercial AI systems such as drones and autonomous vehicles, either to deliver explosives or to cause crashes. More generally, AI automation means that previously high-skill tasks could become much easier to perform. The experts give the example of a self-aiming, long-range sniper rifle, which could use AI image recognition to reduce the expertise needed to wield it.

The political realm is also susceptible to changing threats, with the report covering the potential for AI and machine learning to reshape surveillance, targeted propaganda and the spread of misinformation. Social manipulation is pitched as a major concern, given machine learning’s improved capacity to analyse human behaviours, moods and beliefs from collected data. “These concerns are most significant in the context of authoritarian states, but may also undermine the ability of democracies to sustain truthful public debates,” the report notes.

The paper comes several months after a number of digital ethics initiatives were announced in the UK, namely the government’s Centre for Data Ethics and Innovation and the Nuffield Foundation’s Convention on Data Ethics. These bodies are being set up to wrangle an ethical framework from rapidly emerging technologies such as AI, but there have been concerns about whether they will be able to keep pace with the technology’s development.

“You’re trying to catch up with something that’s always faster than you,” said Luciano Floridi, professor of philosophy and ethics of information at the University of Oxford, when I spoke to him last year. “How do you catch up with it? You go where it’s going to go. You don’t try to follow it. It would be silly to catch a train by chasing it as it leaves. It’s better to be at the station where the train is coming.

“If you think ahead strategically, then you will be where things are coming, and then you catch the right train. But this is something nobody wants to hear. Not the businesspeople, because it means looking beyond a quarterly report; not the politician, because it may go beyond the next elections.”

The report calls for more openness between policymakers and AI researchers. It also acknowledges that AI security systems are amongst our best hopes to combat malicious AI, but adds that “AI-based defence is not a panacea, especially when we look beyond the digital domain”.

“More work should be done in understanding the right balance of openness in AI, developing improved technical measures for formally verifying the robustness of systems, and ensuring that policy frameworks developed in a less AI-infused world adapt to the new world we are creating.”

Image: Russia’s Nerehta tank, an unmanned ground vehicle capable of carrying a grenade launcher or a machine gun
