An AI has been taught to make moral decisions like humans

I love a good thought experiment, purely because it forces us to cast an uncomfortable eye over our ethics and moral judgement. And judging by the 1.3 million people who took part in MIT’s Moral Machine project last year, I’m not alone.


The Moral Machine put a 21st-century twist on the trolley problem, placing participants in the driver’s seat of an autonomous vehicle to see what they’d do when confronted by a difficult moral dilemma. Would you run down a pair of joggers instead of a pair of children? Or would you hit a concrete wall to save a pregnant woman, or a criminal, or a baby? These were the types of grisly questions that participants were asked. Now researchers have created an AI from that data, teaching it to predict the choice the average participant would judge to be the moral one.

The AI is the product of a collaboration between Carnegie Mellon assistant professor Ariel Procaccia and one of MIT’s Moral Machine researchers, Iyad Rahwan. Described in a paper, the artificial intelligence is designed to evaluate situations in which a self-driving car is forced to kill someone and to choose the same victim as the average Moral Machine participant would. Interesting as that is, it raises the question of how exactly a machine can decide something so complex – with hundreds of millions of possible variations – from only the eighteen-million-odd votes cast by 1.3 million internet-using respondents.
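To make the idea concrete, here’s a minimal sketch of how crowdsourced votes might be aggregated. This is not the researchers’ actual method, and the scenario attributes and votes below are invented for illustration: each vote records who a participant chose to spare and who they chose to sacrifice, and a new dilemma is settled by whichever side carries the attributes voters spared most often.

```python
# Minimal, illustrative sketch of aggregating crowdsourced "who do you spare?" votes.
# Attribute names and vote data are invented; this is not the paper's method.
from collections import Counter

# Each vote: (attributes of the person spared, attributes of the person sacrificed).
votes = [
    ({"child", "pedestrian"}, {"adult", "jogger"}),
    ({"pregnant", "passenger"}, {"criminal", "pedestrian"}),
    ({"child", "pedestrian"}, {"adult", "passenger"}),
]

spared_counts = Counter()
sacrificed_counts = Counter()
for spared, sacrificed in votes:
    spared_counts.update(spared)
    sacrificed_counts.update(sacrificed)

def preference_score(attributes):
    """Crude popularity score: how often these attributes were spared minus sacrificed."""
    return sum(spared_counts[a] - sacrificed_counts[a] for a in attributes)

def predict_spared(option_a, option_b):
    """Return the option the 'average voter' would spare under this toy tally."""
    return option_a if preference_score(option_a) >= preference_score(option_b) else option_b

# Example: a new dilemma between a child pedestrian and an adult jogger.
print(predict_spared({"child", "pedestrian"}, {"adult", "jogger"}))
```

The researchers’ system is, of course, more involved than a raw tally; the toy version only captures the “do what the average participant would do” idea described above.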

“We are not saying that the system is ready for deployment, but it is a proof of concept showing that democracy can help address the grand challenge of ethical decision making in AI,” said Procaccia.

Still, teaching a computer to make ethical decisions based on a crowdsourced survey, in itself, flags up some moral and ethical problems.

“Crowdsourced morality doesn’t make the AI ethical,” Professor James Grimmelmann from Cornell Law School told The Outline. “It makes the AI ethical or unethical in the same way that large numbers of people are ethical or unethical.”

Whilst humans want to believe they’d do the right thing, when push comes to shove, by design or by evolution, humans are selfish creatures. In previous research, for example, Rahwan found that despite many people agreeing that a self-driving car should sacrifice its own passengers when confronted with a trolley problem, they’d understandably prefer not to ride in such cars themselves. After all, the crowdsourced 1.3 million in the study have their own prejudices and biases – they’re privileged enough to have unfiltered internet access, for one thing.

Not to mention that these sorts of dilemmas are rare and unlikely to come up, compared with a more everyday moral issue like driving more slowly to save fossil fuels.

The AI also arrives just after Germany released the world’s first ethical guidelines for the artificial intelligence of autonomous vehicles this summer, stating that self-driving cars must prioritise human lives over animals, whilst also barring them from making decisions based on age, gender or disability. Other organisations have instead proposed fitting cars with a selfish dial, so that passengers can decide for themselves how far the vehicle should go to protect them over others.
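Purely as a thought experiment, those guidelines and the proposed dial can be imagined as constraints on whatever harm score a car minimises. The sketch below is hypothetical – the weights, function names and dial are assumptions for illustration, not any real vehicle’s logic – but it shows how a passenger-set “selfishness” setting might work while attributes such as age, gender and disability are simply kept out of the calculation.

```python
# Hypothetical sketch of a "selfish dial". The scoring only sees counts of humans,
# animals and the car's own passengers, so age, gender and disability cannot enter
# the decision by construction. Weights and names are invented for illustration.

def harm_score(humans_harmed: int, animals_harmed: int, passengers_harmed: int,
               selfishness: float = 0.5) -> float:
    """Lower is better. Human lives always dominate animal lives; the dial
    (0.0-1.0) adds extra weight for the car's own occupants, who are already
    counted among the humans harmed."""
    return (
        humans_harmed * 1.0
        + animals_harmed * 0.01            # humans prioritised over animals
        + passengers_harmed * selfishness  # passenger-chosen extra weight
    )

def choose_outcome(option_a: dict, option_b: dict, selfishness: float = 0.5) -> dict:
    """Pick whichever outcome has the lower harm score under this toy model."""
    return min((option_a, option_b),
               key=lambda o: harm_score(**o, selfishness=selfishness))

# Example: swerving harms one pedestrian; staying the course harms two passengers.
swerve = {"humans_harmed": 1, "animals_harmed": 0, "passengers_harmed": 0}
stay = {"humans_harmed": 2, "animals_harmed": 0, "passengers_harmed": 2}
print(choose_outcome(swerve, stay, selfishness=1.0))
```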

So while crowdsourced morality takes us closer to self-driving cars fit for consumer use, it’s not a perfect solution. But is there ever a perfect solution when it comes to morality? Just taking a look at what the AI deems moral is quite frightening, with it choosing to run over a homeless person instead of someone who isn’t homeless. Is that moral? I’ll let you decide.
