A lack of diversity in tech is damaging AI

The tech industry has a diversity problem; that’s nothing new. But this problem has damaging implications for the future of artificial intelligence development, argues Kay Firth-Butterfield, head of AI and machine learning at the World Economic Forum.

Speaking at an event in Tianjin, China, Firth-Butterfield flagged the issue of bias within AI algorithms and called for the industry in the West to become “much more diverse”.

“There have been some obvious problems with AI algorithms,” she told CNBC, mentioning a case that occurred in 2015, when Google’s image-recognition software labelled a black man and his friend as ‘gorillas’. According to a report published earlier this year by Wired, Google has yet to properly fix this issue – opting instead to simply block search terms for primates.

“As we’ve seen more and more of these things crop up, then the ethical debate around artificial intelligence has become much greater,” said Firth-Butterfield. She also noted the rollout of the General Data Protection Regulation (GDPR) in Europe, claiming it has brought ethical questions about data and technology “to the fore”.

The dominance of “white men of a certain age” in building technology was singled out as a root cause of bias creeping into the algorithms behind AI. Training machine-learning systems on racially uneven datasets has previously been noted as a problem, particularly in facial-recognition software.
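How uneven a training set is can be measured directly. The sketch below is purely illustrative, using invented group labels and image identifiers rather than any real dataset, and shows the kind of representation check researchers run before training a face-recognition model:

```python
from collections import Counter

def group_representation(dataset):
    """Share of each demographic group in a labelled training set.
    A heavily skewed split is one route by which bias enters a model."""
    counts = Counter(group for group, _image in dataset)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

# Hypothetical labelled face images: (group label, image id)
dataset = [
    ("lighter-skinned", "img001"),
    ("lighter-skinned", "img002"),
    ("lighter-skinned", "img003"),
    ("darker-skinned", "img004"),
]
print(group_representation(dataset))  # {'lighter-skinned': 0.75, 'darker-skinned': 0.25}
```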

An experiment undertaken earlier this year at the Massachusetts Institute of Technology (MIT), for example, tested three commercially available face-recognition systems, developed by Microsoft, IBM and the Chinese firm Megvii. The systems correctly identified the gender of white men 99% of the time, but misidentified the gender of black women in up to 35% of cases. Amazon’s Rekognition software has faced similar criticism, falsely matching 28 members of the US Congress to criminal mugshots in one test.
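The headline figures in audits like this come from breaking accuracy down per demographic group. Below is a minimal sketch of that measurement, using invented records rather than any data from the MIT experiment:

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Classification accuracy computed separately for each demographic
    group: the basic measurement behind audits of this kind."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        correct[group] += int(predicted == actual)
    return {group: correct[group] / total[group] for group in total}

# Hypothetical audit records: (group, predicted gender, true gender)
records = [
    ("white male", "male", "male"),
    ("white male", "male", "male"),
    ("black female", "female", "female"),
    ("black female", "male", "female"),  # a misclassification
]
print(accuracy_by_group(records))  # {'white male': 1.0, 'black female': 0.5}
```

A wide gap between groups, rather than low overall accuracy, is the red flag such audits look for.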

Dr Adrian Weller, programme director for artificial intelligence at The Alan Turing Institute, told Alphr: “Algorithmic systems are increasingly used in ways that can directly impact our lives, such as in making decisions about loans, hiring or even criminal sentencing. There is an urgent need to ensure that these systems treat all people fairly – they must not discriminate inappropriately against any individual or subgroup.

“This is a particular concern when machine learning methods are used to train systems on past human decisions which may reflect historic prejudice.”

Weller noted that a growing body of work is addressing the challenge of making algorithms fair, transparent and ethical. This outlook is similar to that of Firth-Butterfield, who emphasised that the World Economic Forum is trying to ensure AI grows “for the benefit of humanity”.
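One strand of that work is simple statistical fairness checks. The sketch below illustrates one such check, the demographic parity gap, on invented loan decisions; it is a hypothetical example, not code from the Turing Institute or the World Economic Forum:

```python
def demographic_parity_gap(decisions):
    """Gap between the highest and lowest positive-outcome rates across
    groups: one common (and debated) fairness check for systems trained
    on past human decisions."""
    rates = {}
    for group in {g for g, _ in decisions}:
        outcomes = [approved for g, approved in decisions if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical loan decisions replayed through a model: (group, approved)
history = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
gap, rates = demographic_parity_gap(history)
print(rates)  # {'A': 0.666..., 'B': 0.333...} (key order may vary)
print(gap)    # 0.333..., a gap this large would prompt closer scrutiny
```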

Human diversity might not be the only factor behind AI bias, however. A recent study by Cardiff University and MIT found that groups of autonomous machines can develop prejudice simply by identifying, copying and learning the behaviour from one another.
