5 problems artificial intelligence needs to overcome for all our sakes

As a general rule of thumb, the law and government are slow-moving and deliberate. That’s really handy for important things that you have to get right, but the trouble is that disruptive technology tends to move much faster. That’s bad enough if the disruptive technology you’re talking about is (say) the sharing economy, but it’s more serious when it’s something that Elon Musk once described as “potentially more dangerous than nukes”.

That thing is artificial intelligence. And while it’s hard to feel too threatened when it’s your Amazon Echo failing to understand you saying “Play REM” for the tenth time in a row, the threat – potentially – is a real one. Intelligent people such as Bill Gates, Elon Musk and Stephen Hawking wouldn’t be scared of something that wasn’t worth at least considering.

But let’s assume for a moment that the worst-case “HAL 9000” scenario doesn’t come to pass, and AI is broadly a positive thing for humans. There are still a number of issues the technology needs to overcome, and they deserve serious discussion.

1. Human beings make for lousy teachers

The idea that we’d create a perfect, unbiased AI is a utopian but unrealistic one. The problem is that while an AI may start out as a blank canvas, it learns from the citizens of a society that already has biases and prejudices running through it, like letters through a stick of rock.

You can see this most obviously in the way internet trolls managed to get a Microsoft chatbot praising Adolf Hitler, but that’s a fairly facile illustration, given the bot was merely parroting words without context. A subtler, more nefarious example is the AI that judged a beauty contest and overwhelmingly favoured contestants with white skin, or the AI that rated black-sounding names as less “pleasant” than white-sounding ones.

These AIs didn’t reach their conclusions because white supremacists were right all along – they did so because the data we feed deep-learning machines is shaped by our society, with all its imperfections brought along for the ride. For a truly objective artificial intelligence, we need a way of filtering out these biases – but how do you go about fixing that without bringing in more biases along the way?
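To make that concrete, here’s a minimal, purely hypothetical sketch in Python – the corpus, names and word lists are all invented for illustration – showing how a naive “pleasantness” score learned from skewed text simply reproduces the skew it was trained on:

```python
# A toy, invented example - not any real system. A "pleasantness" score is
# learned purely from co-occurrence counts in a tiny, deliberately skewed
# corpus, and faithfully reproduces that skew.
from collections import Counter

corpus = [
    "emily is friendly and helpful",
    "emily received a warm welcome",
    "lakisha was treated with suspicion",
    "lakisha was turned away",
]

PLEASANT = {"friendly", "helpful", "warm", "welcome"}
UNPLEASANT = {"suspicion", "turned", "away"}

def pleasantness(name: str) -> int:
    """Count pleasant minus unpleasant words appearing alongside the name."""
    counts = Counter()
    for sentence in corpus:
        words = set(sentence.split())
        if name in words:
            counts["pleasant"] += len(PLEASANT & words)
            counts["unpleasant"] += len(UNPLEASANT & words)
    return counts["pleasant"] - counts["unpleasant"]

for name in ("emily", "lakisha"):
    print(name, pleasantness(name))  # emily scores 4, lakisha scores -3
```

The score says nothing about the names themselves; it merely reflects the lopsided text it was given – which is essentially what happens, at vastly greater scale, inside a deep-learning system trained on real-world data.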

2. Deep learning often involves trampling over privacy

To learn, artificial intelligence needs data, and lots of it. Even in the hands of those with the best intentions (never mind those with bad ones), that data can leak and potentially be used against you, even with safeguards in place. That’s quite a big deal when AIs have access to everything about you, from the daily routine tracked by your smartphone to your private health records.

Laws can regulate this to a degree, but regulation in turn impedes progress, because the more data an AI can absorb, the more effective it becomes. It’s an awkward catch-22.

3. What will we do without work?

The better artificial intelligence gets, the more human jobs it will be able to do – and because robots don’t draw a salary, adopting them is a no-brainer for businesses. Some experts believe there won’t be much work left for humans to do by the year 2050.

What will we do when there aren’t enough jobs to go around? If inequality is a problem now, it will be dramatically worse if we reach a stage where the unemployed outnumber the salaried.

That’s an enormous problem, and at present there are two proposed solutions, neither of which sits comfortably with the way our society currently runs. The first, endorsed by Elon Musk, is some form of universal basic income (UBI): the state pays every citizen a basic wage merely for existing. Politically, that’s pretty toxic right now, with the public consistently saying that our current benefits systems are too generous. In the UK, UBI is backed only by the Green Party – and the idea was crushed 77% to 23% in a referendum in Switzerland last year.

So what’s the alternative? Bill Gates favours a robot tax on companies that replace workers with artificial intelligence – the idea, presumably, being either to disincentivise the process or to fund more generous safety nets for the humans who miss out. This isn’t an ideal solution either: the slogan “no taxation without representation” may have originated in the 1750s, but it still echoes today.

And neither of these answers the more basic problem: money or no money, what will humans do with all that free time?

Still, legal experts will have work for a while. For one thing, they’ll need to figure out…

4. Who do you sue when a robot kills you?

Even without the doomsday scenario, it’s highly likely that AI will directly or indirectly cause someone’s death. That may sound pessimistic, but take the example of driverless cars: a car’s AI driver could plausibly find itself in a situation where it has to choose between harming its passenger and harming pedestrians.

If a doctor or driver kills you, it’s pretty clear who your grieving relatives should chase, but with artificial intelligence it’s another matter entirely. Who is ultimately responsible? Is it the user of the software? Is it the person who developed the AI? Is it the person who trained it with the data it needs to function? Is it the vendor who sold it to you? Is it the government for legalising it in the first place?

This becomes even more complex with so-called “black box” learning, where the AI’s “thought process” is hidden from view and impossible to decipher. If we can’t establish how an AI has reached its decision, then blame is even harder to assign.

Which brings us to…

5. How do we maintain control?

Right now, it’s pretty easy for humans to stay on top of AI – which is perhaps why those closest to the subject get a touch defensive when nightmare scenarios are outlined. But the moment artificial intelligence outsmarts a human, the game changes forever.

As Tim Urban writes on Wait But Why: “A chimp can become familiar with what a human is and what a skyscraper is, but he’ll never be able to understand that the skyscraper was built by humans. In his world, anything that huge is part of nature, period, and not only is it beyond him to build a skyscraper, it’s beyond him to realise that anyone can build a skyscraper. That’s the result of a small difference in intelligence quality.

“We will never be able to even comprehend the things a [superintelligent AI] can do, even if the machine tried to explain it to us – let alone do it ourselves. It could try for years to teach us the simplest inkling of what it knows and the endeavour would be hopeless.”

In short, if we can understand neither what an AI is doing nor why it’s doing it, then humanity’s hopes of staying in control are pretty remote.
