Microsoft briefly reinstates Tay – the Twitter AI that turned racist in 24 hours

UPDATE: Wow, that was fast: Microsoft has already pulled Tay for a second time after a series of sweary tweets, at least one of which promoted drug-taking. Back to the drawing board again, Microsoft. This is why we can’t have nice things.

In a statement, Microsoft said, “As part of testing, she was inadvertently activated on Twitter for a brief period of time.” If the bot returns a third time, hopefully it’ll last a little longer… The original piece continues below.

Remember Tay? Microsoft’s AI that appeared on Twitter last week, initially spouting inanities before suddenly developing an obsessive interest in 1940s fascism?

Following a series of parroted views covering every depressing corner of the bigotry spectrum, Tay was swiftly withdrawn within 24 hours, citing the need for sleep. In this context, “sleep” is probably code for a lesson from Microsoft about the dangers of peer pressure and stranger danger on the internet.

Well, guess who has come back for more?

That’s a pretty quick turnaround. Microsoft wrote in a blog post last week that it planned to bring Tay back to life “only when we are confident we can better anticipate malicious intent that conflicts with our principles and values.”

Elsewhere in the post, Peter Lee, corporate vice president of Microsoft Research, explained exactly where Tay went wrong the first time around. “A coordinated attack by a subset of people exploited a vulnerability in Tay,” he wrote.

“Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time.”

This wasn’t Microsoft’s first AI experiment, and Lee mentions the positive experience of Chinese bot XiaoIce, which is “being used by some 40 million people [and] delighting with its stories and conversations.” Presumably its stories are a little less contentious than its Western counterpart. 

Indeed, XiaoIce was trusted enough to present the news on Chinese TV back in December. It’s probably best Tay wasn’t given that same brief over here – the complaints to Ofcom would have comfortably eclipsed the 73,788 that Big Brother has attracted.

https://youtube.com/watch?v=A3rKavB0krs

You just know the same parties are going to have another crack at exposing Tay’s darker side, so it’ll be interesting to see how much more muted Tay is today. Certainly, Tay seemed a lot slower to reply to messages this time around – I’m still waiting on a response to the cheery welcome back I sent, when replies used to be instant.

Or maybe she had her fill of Derby County chat last week.

