Chasing trains: The UK talks a good AI game but is it losing pace?
In the scramble to promote sectors of British excellence before Brexit, government and industry have galvanised around artificial intelligence. We are the country, after all, of Alan Turing. We are the country that birthed AlphaGo creator DeepMind – bought by Google in 2014 and now a world leader in the field. We are the country that has nurtured companies which have brought greater machine learning to Twitter and taught Amazon’s Alexa how to talk.
But sowing a flower bed for AI in the UK is more than a matter of selling startups to internet giants, and there is a concern that the UK is resting on a few lush laurels. At the techUK Digital Ethics Summit in London, the conversation centres on how the country needs to cultivate its soft power in AI: leading by example in how it interrogates the ethical dilemmas posed by new technology.
“Clearly there is a global competition happening in terms of AI,” says Antony Walker, deputy CEO of techUK. “China in particular is very focused. People are coming to realise that your fundamental values need to be embedded in your approach to AI, and doing this with the democratic tradition in Europe and the US is something we need to get right.
“At a time when the US government may not be as focused on this as it would have been under previous administrations, a company looking for a place with a right approach might increasingly look to the UK.”
In this year’s autumn budget, the government announced plans for a new Centre for Data Ethics and Innovation, described as “a world-first advisory body to enable and ensure safe, ethical innovation in artificial intelligence and data-driven technologies”. It’s not alone. Earlier in the year the Nuffield Foundation agreed to set up a “Convention on Data Ethics”, which will work with the Royal Society, the British Academy, the Turing Institute and the Royal Statistical Society.
So there is opportunity, and there have been movements to take decisive steps in firming up the UK’s AI credentials, but are they anywhere near enough? Luciano Floridi, professor of philosophy and ethics of information at the University of Oxford, tells me these are healthy signs of interest, but there is also a danger that we’re running on the spot.
“Imagine if we had done what we said we are going to do [in 2018] two years ago, when we almost launched the Council of Data Ethics, during the second Cameron ministry. If it’s cutting-edge now, it would have been visionary in 2016.
“We’re still ahead of the pack, but they are coming close. For example, with driverless cars. Germany set up a committee, the committee worked, issued recommendations, and the industry is working on those recommendations. We’re still talking about setting up the centre or the convention. We should be careful about the self-congratulatory attitude.”
While the government centre and the Nuffield Convention are signs that the UK is ready to make public gestures around AI and data ethics, the fact two separate initiatives have been set up – not one – may itself be a disadvantage. “How are we going to see these two initiatives play with each other?” Floridi asks. “Are they going to have a channel of communication? Are they going to be complementary? As an oversimplified picture: if they agree, one of the two may be redundant. If they disagree: that’s bad for anyone who needs guidelines. I hope we will find the balance between redundancy and inconsistency.”
The prospect of wrangling two ethical bodies is made more unwieldy by the sheer pace of technological development. “You’re trying to catch up with something that’s always faster than you,” emphasises Floridi. “How do you catch up with it? You go where it’s going to go. You don’t try to follow it. It would be silly to catch a train by chasing it as it leaves. It’s better to be at the station where the train is coming.
“If you think ahead strategically, then you will be where things are coming, and then you catch the right train. But this is something nobody wants to hear. Not the businesspeople, because it means looking beyond a quarterly report; not the politicians, because it may go beyond the next election.”
(In October, DeepMind’s AlphaGo Zero taught itself to beat human players at the game of Go. Credit: AlphaGo/DeepMind)
Walker is more optimistic about the two centres’ potential to lay an ethical framework for constantly evolving tech: “I think the government centre will have to do some train chasing, because of the fact it is set up by government,” he notes. “But if we can strategically use the Nuffield Convention to get ahead at some of the stations, then maybe we can head off some of these problems.”
The good news is that these strategic questions are being pondered, and that the UK has the resources and intellectual institutions to position itself as a leader in digital ethics – not only for future AI, but for the ethical questions and unintended consequences technology is raising today.
“I don’t think Facebook, when they set out to create a social media platform, thought it would have the potential to shift the outcome of a US election within ten years,” says Walker. “I genuinely think that was unforeseen. But there probably was a moment when it could have been foreseen. If we’d had a body like this new centre in place, could it have spoken to the Electoral Commission?”
Whether or not the UK mobilises as a breeding ground for AI, and whether or not it can hold internet giants like Google and Facebook to account for their practices, the country first needs to translate good intentions into actions. The conversations at the Digital Ethics Summit are hopeful, but we don’t want to be having the same conversation next year.
“I want to be optimistic. There are immense opportunities to do the right thing and avoid doing the wrong one, but we must start making progress, and do so with real expertise, leadership and a shared vision,” says Floridi.