Will artificial intelligence wipe us all out?

Cambridge researchers investigate potential downsides of innovation - including human extinction

Nicole Kobie
26 Nov 2012

Is technology going to kill us all? A leading scientist and philosopher have teamed up with a tech industry luminary to find out.

The trio have set up the Centre for the Study of Existential Risk at Cambridge, hoping to uncover whether sci-fi predictions of robots and artificial intelligence destroying humankind will come true.

Huw Price, the Bertrand Russell Professor of Philosophy at Cambridge, had the idea after meeting up with Jaan Tallinn - one of the founders of Skype.

"He [Tallinn] said that in his pessimistic moments he felt he was more likely to die from an AI accident than from cancer or heart disease," Price said. "I was intrigued that someone with is feet so firmly on the ground in the industry should see it as such a serious issue, and impressed by his commitment to doing something about it."
Price said that in the next century we could face a major shift in human history: the moment "when intelligence escapes the constraints of biology".

Aside from artificial general intelligence (AGI) - which brings with it the eventual ability for computers to write their own programs and develop their own technologies - the centre will look at bio- and nanotechnology, as well as extreme climate change.

"Nature didn't anticipate us, and we in our turn shouldn't take AGI [artificial general intelligence] for granted," he said. "We need to take seriously the possibility that there might be a ‘Pandora’s box’ moment with AGI that, if missed, could be disastrous."

Price admitted it was unlikely that any threat could be predicted with complete certainty, but said "with so much at stake", something must be done.

Serious investigation

While the idea may sound like sci-fi - we direct Price to the Terminator film series, or at least the first two - he said such concerns should be brought into the fold of "serious investigation".

"The basic philosophy is that we should be taking seriously the fact that we are getting to the point where our technologies have the potential to threaten our own existence - in a way that they simply haven't up to now, in human history," he said. "We should be investing a little of our intellectual resources in shifting some probability from bad outcomes to good ones."

To start the Centre for the Study of Existential Risk, Price also invited Lord Martin Rees, former master of Cambridge's Trinity College and president of the Royal Society, who has written extensively about catastrophic risk.

Cambridge added that academics from a host of fields - science, policy, law and computing - had already started to sign up to the project. The centre will be formally launched next year.