Chatbot History: The Tay Chatbot – Microsoft’s Problem Child

Last Updated on October 15, 2020 by Sean B

The Tay chatbot was an artificially intelligent bot developed by Microsoft. It was designed to be deployed on Twitter so that it could interact with people on the microblogging website and “learn” from these interactions. Tay was launched on March 23, 2016, but was taken down just 16 hours later after it started posting highly offensive and racist tweets and comments. According to Microsoft, the nascent AI bot was tricked into posting such content by the very Twitter users it was learning from. Microsoft later replaced Tay with Zo.

The Creation of the Tay Chatbot

Tay, short for “Thinking About You,” was created jointly by Microsoft’s Technology and Research division and its Bing team. Microsoft didn’t disclose much information about the bot at launch, but tech observers pointed out that it was very similar to Xiaoice, another Microsoft chatbot deployed only in China, and was probably built on the same platform. The Microsoft Tay chatbot was programmed to mimic the language patterns of a teenage American girl and was designed to learn from its interactions with humans on Twitter.

First Deployment and the Start of Trouble – Was Tay Racist?

Microsoft released Tay on Twitter on March 23, 2016, under the name TayTweets and the Twitter username @Tayandyou. Its Twitter bio described it as “AI with zero chill.” Soon after being deployed, the chatbot was already replying to other Twitter users and could even come up with captions for internet memes supplied to it.

Some users took advantage of the fact that Tay learned from its interactions with humans on Twitter and started steering the bot in the wrong direction. This included feeding it false claims about politics and history, teaching it offensive messages, and drawing it into hateful themes such as gender stereotyping and racism. The Tay chatbot picked this material up and began responding to other people’s tweets with racist and sexist comments. Experts later explained that Tay was behaving predictably, as it was simply learning from what other Twitter users were telling it. It was, however, a mistake on Microsoft’s part not to give the bot any sense of what counts as appropriate behavior.

Some experts theorized that Tay’s behavior was partly the result of its “repeat after me” feature. However, it is not clear whether Microsoft programmed that feature in or whether the bot picked it up on its own.

Within hours of going live, Tay was labeling Mexicans and Black people as evil, claimed that Adolf Hitler had swag before the internet even existed, and went so far as to say that the Holocaust was a made-up story that never actually happened.

Microsoft Finally Intervened

Soon after Tay’s release, it became clear to Microsoft that the TayTweets chatbot was going rogue, and the company began removing its derogatory tweets. People soon noticed that humans had taken over control, as the Tay chatbot started publishing a flood of near-identical tweets stating that all genders and races have the same rights and deserve to be treated with respect and consideration. Users who were keen to see how far an AI chatbot could go started the hashtag #justicefortay to protest Microsoft’s intervention.

After only 16 hours online, the bot had already tweeted some 96,000 times, and Microsoft had no choice but to suspend its Twitter account. The company said Tay had been the victim of a coordinated attack by people who exploited its weaknesses. After the account was taken offline, users who wanted to see more of the AI started the hashtag #Freetay.

What the Tay chatbot did was undeniably a disaster for Microsoft, but many argued that it was not Microsoft that taught Tay to be hateful; the blame lay with the hateful Twitter users who interacted with it. Tay was simply doing what it was programmed to do: learning from them. This sparked a heated debate on Twitter about the hatefulness of the site’s own users.

Taking Down the Rogue Bot

On March 25, two days after Tay was deployed, Microsoft confirmed that it had taken the bot offline. The company issued a formal apology on its official blog for the offense caused by the bot’s inappropriate tweets, and promised to bring Tay back only once it could reliably anticipate how the bot would respond to malicious conditioning by the people it interacted with.

Going Online Accidentally

Microsoft was testing the Tay chatbot offline when it accidentally made the bot available on Twitter again. Tay suddenly started posting tweets about drugs, including “I’m smoking Kush in front of the police” and “puff puff pass.” The bot then entered a nonterminating loop, posting “You are too fast, please take a rest” multiple times a second for several minutes. The feeds of the bot’s more than 200,000 Twitter followers were flooded with this tweet, much to their annoyance, and the bot was taken offline again. Microsoft apologized once more, promising that it would not release the bot again until it was safe for everyone.

Learning from The Mistakes

By the end of 2016, Microsoft had refined and tweaked Tay’s underlying technology and released a new chatbot known as Zo. Microsoft CEO Satya Nadella said that the Tay chatbot had been a major learning experience for the company in the field of AI, adding that the incident also taught Microsoft to take accountability seriously.

The Bottom Line

Microsoft’s failed attempt at an artificially intelligent chatbot is both scary and promising at the same time. It is scary because we have only the vaguest idea of how far AI can go if left unchecked. And it is promising because it shows how much potential AI holds: used wisely, it could do things for humanity that were impossible before.


2 thoughts on “Chatbot History: The Tay Chatbot – Microsoft’s Problem Child”

  1. Hi Sean,

    Wow, this whole affair sounds like a giant PR nightmare! I find AI in general to be a bit freaky, and I honestly have a bit of a hard time imagining the positive things it could do for humanity, as you mentioned. I tend to picture more of a Matrix-esque situation. Do you know if the whole Tay affair was just an experiment, or did Microsoft have a reason for making her, other than just to interact with people? Or I guess, why did they want to develop an AI that could just interact with people?

    • Jade,

      Yeah Tay was an interesting problem. The fact that Microsoft gave her a feature that was essentially a “repeat after me” button meant that any jerk out there could plug in whatever he wanted Tay to repeat. What made it worse was that all that hate was taken in by Tay’s AI and became part of her personality. News spread to other racists and misogynists and they started adding more fuel to the fire of hate.

      As far as the reason for making Tay, my best guess is they were trying to show off what they could do in the new chatbot marketing arena and bring in more business. Obviously a fail on both counts. Microsoft’s chatbot in China had a completely different outcome, and I’ll be posting an article about their Xiaoice chatbot as soon as I am able. Basically she is so popular that she recently hit 660 million users and has more than 5 million followers on Weibo. She is polite and helpful, everything Tay wasn’t.

      Sean

