After Microsoft’s last chatbot experiment, Tay, was shut down for turning racist within 24 hours of being unleashed on the public, the company is now about to launch Zo – but this time it has safeguards in place to prevent a repeat of events.

Just like Tay, Zo learns about language and how people use words and emotions together by engaging in conversations with humans. This time around, though, it’s not open to absolutely everyone via Twitter – Microsoft has chosen Kik Messenger as its platform, and users are only accepted via invite. Microsoft is also accepting applications to talk with Zo via Facebook Messenger and Snapchat, so Zo will likely expand to other platforms in due course.
According to MSPoweruser’s Mehedi Hassan, via IT Pro, Zo has normal conversation nailed. However, Microsoft’s chatbot isn’t as strong at deeper, more meaningful conversations, struggling with politics or anything that requires in-depth knowledge of a subject.
Microsoft is being far more careful with Zo’s rollout than it was with Tay’s, so it’s unlikely we’ll see it take quite the same rapid turn towards racism at the hands of online trolls. Still, it’s just as open to abuse as before: many Kik users are also Redditors, and the initial Tay trolling originated with a group of Reddit users.
Microsoft has had some success in the AI chatbot space before, though. In Japan it runs Rinna, a Twitter chatbot built in a similar vein to Tay. Whereas Tay turned racist within 24 hours, Japanese users turned Rinna into an anime-loving otaku schoolgirl. Unfortunately, as soon as Tay began to fall, Western trolls discovered Rinna and – while trying to make it racist – ended up creating a somewhat depressed and bipolar AI.
Let’s hope Microsoft can catch a break this time around with Zo.