House of Lords: UK must “lead the way on ethical AI”

Artificial intelligence should be subject to a cross-sector code of practice that ensures it does not diminish the rights and opportunities of humans, according to a new report by the House of Lords.

In the comprehensive report, released this morning, the House of Lords Select Committee on Artificial Intelligence said the UK is in a “unique position” to help shape the development of AI on the world stage.

“The UK has a unique opportunity to shape AI positively for the public’s benefit and to lead the international community in AI’s ethical development, rather than passively accept its consequences,” said Committee chairman Lord Clement-Jones in a statement.

“The UK contains leading AI companies, a dynamic academic research culture, and a vigorous startup ecosystem as well as a host of legal, ethical, financial and linguistic strengths. We should make the most of this environment, but it is essential that ethics take centre stage in AI’s development and use.”

The 13-member Committee, which includes journalist Baroness Bakewell and the Lord Bishop of Oxford, was tasked in July 2017 with assessing the economic and social impact of artificial intelligence.

After almost 10 months of consultation, 223 pieces of written evidence and visits to companies such as DeepMind and Microsoft, the panel has now proposed a set of principles that will form the basis of a code of practice, one it hopes will be embraced internationally.

“Not without its risks”

AI should be developed for the “common good and benefit of humanity”, as well as operate on principles of “intelligibility and fairness”, the committee’s report states.

There should also be restrictions on any AI systems that attempt to “diminish the data rights or privacy of individuals, families or communities”, and each citizen should be given the right to be educated to a level where they can “flourish mentally, emotionally and economically” alongside an AI system.

The report also states that the autonomous power to “hurt, destroy, or deceive human beings” should never be vested in AI.

“AI is not without its risks and the adoption of the principles proposed by the Committee will help to mitigate these,” said Clement-Jones. “An ethical approach ensures the public trusts this technology and sees the benefits of using it. It will also prepare them to challenge its misuse.”

He added that it was the Committee’s aim to see the UK remain at the cutting edge of AI research, achieved in part through greater support for technology startups. To that end, the Committee has urged the creation of a “growth fund” for SMEs, as well as changes to immigration law that would make it easier to recruit skilled overseas talent.

“We’ve asked whether the UK is ready, willing and able to take advantage of AI. With our recommendations, it will be,” said Clement-Jones.

This may go some way towards alleviating concerns that the UK lags significantly on research and development investment as a percentage of GDP, which currently stands at 1.7% but is due to rise to 2.4% by 2021/22.

“I am particularly pleased to see the suggestion of an SME fund – support and funding schemes for UK SMBs working with AI will provide much needed education and clarity about how adoption of this technology will supercharge the growth of all industries,” said Sage’s VP of AI, Kriti Sharma, who is one of many AI industry experts to give evidence to the committee. “We hope that the government will get behind this and look at reframing incentives for SMBs in particular to invest in technology which enables them to take advantage of AI.”

Intel’s industry technical specialist, Chris Feltham, said the Lords’ report has come at the “perfect time”. Last year, Intel released its first AI Public Policy white paper.

“There is huge appetite for the rapid acceleration in AI development, but as we work to develop new methods for integrating AI capabilities into the fabric of society, a public policy conversation is more essential than ever,” said Feltham. “The industry must work together to regulate and send a clear message in committing to ethical deployment of AI.”

Minimising disruption

At a recent panel event on AI, Geoff Mulgan, CEO of innovation charity Nesta, said the UK has made a “massive strategic error” on funding, particularly within the public sector, and criticised the lack of strategic programmes to help mobilise the nation’s talent.

In response, the report has also called for greater investment in skills and training, designed to ensure any disruption to the workforce from the introduction of AI is kept to a minimum.

Sue Daley, head of programme for AI at technology industry lobby group techUK, described the report as an “important contribution to current thinking”.

“At a time when some are questioning the ability of politicians to keep pace with tech, this report proves that policymakers can get to grips with big issues like AI,” said Daley. “It is particularly impressive that members of the Committee spent time learning to program deep neural networks. Politicians across the pond should take note.”

Given the often negative perception of AI among the public, the report urges the tech industry to lead the way in establishing “voluntary mechanisms” for informing the public when AI is being deployed, although it’s not clear precisely what these mechanisms will look like.

On the subject of the use of data by AI, the committee believes individuals should be given greater powers to protect their data from being misused. While GDPR will deliver on this to some extent, further action is needed, such as the creation of ethics advisory boards, the report said.

The government and the Competition and Markets Authority have also been tasked with ensuring that large technology companies do not hold a monopoly on the availability of data, and that greater competition is encouraged.

“Implementing a universal code of ethics for AI is an extremely good idea and is something we have independently implemented at Sage to educate our people and protect our customers,” added Sharma. “This step will be critical to ensuring we are building safe and ethical AI – but we need to think carefully about their practical application and the split of responsibility between business and government, specifically when considering their application to specific industry sectors and ensuring buy-in and rapid adoption from the business community.”

In last year’s autumn budget, the government announced plans for a new Centre for Data Ethics and Innovation, described as “a world-first advisory body to enable and ensure safe, ethical innovation in artificial intelligence and data-driven technologies”. Earlier in the year, the Nuffield Foundation also agreed to set up a “Convention on Data Ethics”, which will work with the Royal Society, the British Academy, the Turing Institute and the Royal Statistical Society.

These attempts to hone an ethical framework have nevertheless been met with concern that the nation is resting on its laurels, falling behind rapid technological development elsewhere in the world.

“Clearly there is a global competition happening in terms of AI,” Antony Walker, deputy CEO of techUK, said at the time. “China in particular is very focused. People are coming to realise that your fundamental values need to be embedded in your approach to AI, and doing this with the democratic tradition in Europe and the US is something we need to get right.”

Indeed, it is perhaps telling that China’s SenseTime recently became the world’s most valuable AI startup. SenseTime is behind the Chinese government’s vast facial-recognition surveillance technology.
