Google DeepMind founder on future of AI: “As humans, we must remain completely in the loop”
At the TechCrunch Disrupt conference in London, Google DeepMind co-founder Mustafa Suleyman spoke about the importance of transparency when designing artificial intelligence, and the risk of not knowing what our technological systems are doing.
Speaking about the Partnership on AI – a cross-company initiative between Google DeepMind, Facebook, Amazon, IBM and Microsoft – Suleyman talked about the group’s aim to address the societal dangers that could emerge from advanced AI.
While noting that the full extent of the partnership wouldn’t be announced until January 2017, Suleyman outlined a core issue for the group: knowing why artificial-intelligence systems are making the decisions they do, and ensuring that this process is as transparent as possible.
“We’re decades away from the kinds of risks that the board initially envisioned,” said Suleyman. “And so we’re putting in place a variety of other mechanisms that focus on the near-term consequences. One of the priorities of the partnership is to look at the question of algorithmic transparency. Where in the network sits the representation that we’re using to deliver a particular recommendation – to take a particular decision? This is a really important question.”
Suleyman was asked about the ‘black box’ effect of machine-learning processes: although we can identify what data is being picked up by the system, and what the outcome is, we cannot currently know for sure why the AI is making its choices. Does this present a significant issue for the future of society, given our increasing reliance on AI-based processes within our infrastructure? Researchers from Google Brain, for example, conducted an experiment in which three neural networks learned to develop a system of encryption – independently of humans.
“To put it in context, I think we have this issue across the board,” answered Suleyman. “Many of our most complicated software systems are incredibly difficult to debug, and when they go wrong they cause massive impacts – whether it’s in airports or hospitals or in transport systems. In general, we have this broader question of how we verify what our technical systems are doing, and how we scrutinise them and ensure they’re transparent, and ensure that we have control over them. As humans, we must remain completely in the loop.”
As well as the risk of detaching AI processes from human involvement, Suleyman also touched on the dangers of AI learning from humanity’s more regrettable social structures. When asked about a ProPublica article published in May, which examined a piece of software designed to predict the likelihood of prisoners committing a future crime – and which was shown to be biased against black people – Suleyman said the effect of human prejudice on AI systems is “one of the most important questions of our day”.
“We are destined to project our biases and our judgements into our technical systems”
“The way I think about these things is: we are destined to project our biases and our judgements into our technical systems,” said Suleyman. “If we don’t think consciously as designers and technologists about how we are building those systems, then we will unwittingly introduce those same biases into those systems.”
Whatever one makes of the word “destined”, Suleyman was hopeful that it is possible for human society to develop technological systems free of its own prejudices. In a note of utopianism, he claimed that this in fact presents a way for us to “rebuild our world”.
“The exciting thing about the technology is that it presents an opportunity for us to critically reflect on how we are designing systems that interact with the real world,” he said. “We should constantly try to do that in an open and transparent way, and in some sense rebuild our world with fewer of those biases and judgements as we move forward as a species.”