Whether it’s for restaurants or Trump, bots have gotten pretty good at shilling

For all the huge potential of artificial intelligence, bots still have a long way to go to pass as human. You don’t know whether I’m a dog or not, but you can at least be reasonably confident that I’m not a bot.

But then, I’m writing articles of between 300 and 3,000 words: there’s plenty of room to slip up – especially for something trained through machine learning, rather than by speaking, reading and writing English for more than 30 years. In the realm of short-form social media and comments sections, where grammar and syntax are both more fluid and less closely scrutinised, it’s far easier for bots to blend in, as a study from the University of Chicago found last week. The bot the researchers had trained to review restaurants was astroturfing with the best of them.

“My family and I are huge fans of this place,” the bot gourmet wrote in one sneaky review on Yelp. “The staff is super nice and the food is great. The chicken is very good and the garlic sauce is perfect. Ice cream topped with fruit is delicious too. Highly recommended!” Not only were these reviews convincing enough to get past Yelp’s spam filter, they were also flagged as helpful by human users – who hopefully weren’t left too underwhelmed by the (admittedly hard to get wrong) ice cream topped with fruit.

Sociological implications

Phoney restaurant advice isn’t the only prevalent use for bots in the art of short-form writing, and Twitter is, if anything, even better suited to the task in hand. Strict 140-character limits have even grammar pedants writing like simpletons, with nuance reduced to hashtags and emoji. In this environment, it’s even easier for bots to slip through the net. On the face of it, though, the benefit of doing so is less obvious. If you have bots writing positive reviews for your restaurant (or negative reviews for your rivals’), the rewards are clear. But what do you gain from an army of Twitter bots chattering aimlessly?

That question may once have raised a shrug, but the answer in our current political climate is depressingly clear: propaganda. Take the recent far-right rally in Charlottesville, in whose aftermath fake Russian accounts sowed mischief, flooding #PhoenixRally, #Antifa and #MAGA with tweets amplifying messages that would once have been niche. Now I’m not sure exactly how niche those messages are, and that’s why, for all their limitations, bots are an extremely handy political tool. Far-right rhetoric seems to be on the rise on Twitter, but how much of it is real, and how much is fake? I have no idea.

That’s the main advantage of outsourcing propaganda to an unlimited array of AI-led accounts. The purpose of an army of far-right bots isn’t to make you or me reconsider our founding political beliefs and give Nazism a second look – it’s to alter our perceptions of other people’s beliefs, and make us believe that we’re in the minority. “What they want to achieve is the impression of a false social consensus,” Mike Hind, an investigative journalist who has been tracking troll accounts both human and bot, told LBC. “That’s why they flood the online world with this information, because anyone happening along to see that – and that includes politicians and policy makers and other journalists – they want them to believe that this is the social consensus.”

A place for the human touch

Bots are often easy to spot, though. What about the evangelically political social media accounts that are clearly human, but rigidly insistent on arguing their political point of view? They may be exactly what they seem – everybody knows someone who loves arguing about politics – but it’s certainly possible that they’re part of the propaganda machine as well. Take the strange case of @DavidJo52951945: a fiercely pro-Brexit account purporting to be located in Southampton, but following the telltale patterns of the St Petersburg troll factory, right down to posting between 8am and 8pm Russian time (which would be 5am to 5pm for anyone actually living in Southampton). Reports on the troll factory describe employees spending 12-hour days sharing articles and arguing with others on Twitter. While @DavidJo52951945 claims this is just the kind of “fake news” he’d be complaining about between 5am and 5pm every day, his account mysteriously went private after the story was picked up by The Times.

A Russian troll factory uncovered by The New York Times allegedly has a budget of “at least” $400,000 (~£308,600) per month. Distributing propaganda is clearly big business – and that’s a strong incentive to make bots better at human tasks, potentially leaving professional trolls as another unemployment casualty of the AI revolution.

For now, though, when it comes to spreading disinformation on the internet – be it fake reviews or dangerous propaganda – you just can’t beat the human touch. It’s easy to block a bot with a tendency to say the wrong thing (the limitations of the Yelp bot are apparent in another of its posts: “I had the grilled veggie burger with fries!!!! Ohhhh and taste. Omgggg! Very flavorful! It was so delicious that I didn’t spell it!!”), but ignoring a human feels like hiding from discourse and retreating to your ideological echo chamber.

Still, it’s definitely worth giving intent a second thought the next time you get caught in a battle of words with a suspiciously persistent keyboard warrior. “The best defence against this is digital media literacy,” Hind explains. “What’s important is we recognise it when we see it so we can ignore it, block it or ideally both.”
