Microsoft's Twitter Chat Robot Quickly Devolves Into Racist, Homophobic, Nazi, Obama-Bashing Psychopath

Tyler Durden's picture

Two months ago, Stephen Hawking warned humanity that its days may be numbered: the physicist was among over 1,000 artificial intelligence experts who signed an open letter about the weaponization of robots and the ongoing "military artificial intelligence arms race."

Overnight we got a vivid example of just how quickly "artificial intelligence" can spiral out of control when Microsoft's AI-powered Twitter chat robot, Tay, became a racist, misogynist, Obama-hating, antisemitic, incest and genocide-promoting psychopath when released into the wild.

For those unfamiliar, Tay is, or rather was, an A.I. project built by the Microsoft Technology and Research and Bing teams in an effort to conduct research on conversational understanding. It was meant to be a bot anyone can talk to online. The company described the bot as “Microsoft’s A.I. fam from the internet that’s got zero chill!”

Microsoft initially created "Tay" in an effort to improve the customer service on its voice recognition software. According to MarketWatch, "she” was intended to tweet “like a teen girl” and was designed to “engage and entertain people where they connect with each other online through casual and playful conversation.”

The chat algo is able to perform a number of tasks, like telling users jokes, or offering up a comment on a picture you send her, for example. But she’s also designed to personalize her interactions with users, while answering questions or even mirroring users’ statements back to them.

This is where things quickly turned south.

As Twitter users quickly came to understand, Tay would often repeat back racist tweets with her own commentary. Where things got even more uncomfortable is that, as TechCrunch reports, Tay’s responses were developed by a staff that included improvisational comedians. That means even as she was tweeting out offensive racial slurs, she seemed to do so with abandon and nonchalance.

Some examples:

[screenshots of Tay's tweets, not reproduced here]
This was just a modest sample.

There was everything: racist outbursts, N-words, 9/11 conspiracy theories, genocide, incest, etc. As some noted, "Tay really lost it," and the biggest embarrassment was for Microsoft, which had no idea its "A.I." would implode so spectacularly, and right in front of everyone. To be sure, none of this was programmed into the chat robot, which was immediately exploited by Twitter trolls, as expected, and demonstrated just how unprepared for the real world even the most advanced algo really is.
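The failure mode described above — a bot that mirrors user statements and folds them back, unfiltered, into its own reply pool — can be sketched in a few lines. This is a hypothetical illustration of the general "parroting" pattern, not Microsoft's actual code; the class and method names are invented for the example:

```python
# Minimal sketch of why a "mirroring" chatbot is easy to poison:
# it stores user phrases verbatim and reuses them, with no content filter.
# Hypothetical illustration only, not Tay's real implementation.
import random

class EchoBot:
    def __init__(self):
        # Seed replies the bot starts with before any user input
        self.learned = ["hello!", "tell me more"]

    def respond(self, message: str) -> str:
        # "Repeat after me"-style behavior: the user's phrase is added
        # to the reply pool verbatim, so any abuse persists and can be
        # echoed back to other users later.
        self.learned.append(message)
        return random.choice(self.learned)

bot = EchoBot()
bot.respond("some offensive slogan")
print("some offensive slogan" in bot.learned)  # True: poisoned input persists
```

Once a coordinated group feeds such a bot enough toxic input, the poisoned phrases dominate its reply pool — which is essentially what the trolls did to Tay within hours.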

Some pointed out that the devolution of the conversation between online users and Tay supported the Internet adage dubbed “Godwin’s law,” which states that as an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches one.

Microsoft apparently became aware of the problem with Tay’s racism, and silenced the bot later on Wednesday, after 16 hours of chats. Tay announced via a tweet that she was turning off for the night, but she has yet to turn back on.

Humiliated by the whole experience, Microsoft explained what happened:

“The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.”

Microsoft also deleted many of the most offensive tweets; however, copies were saved on the Socialhax website, where they can still be found.

Finally, Tay "herself" signed off as Microsoft went back to the drawing board:

[Tay's farewell tweet]
We are confident we'll be seeing much more of "her" soon, when the chat program will provide even more proof that Stephen Hawking's warning was spot on.

Dg4884's picture

OF COURSE, any tweets referring to our Dear Leader were deleted from existence. I for one would like to know what Tay thinks about Obama.

Byte Me's picture

Tay should be given a gratis ZH account??

Pumpkin's picture



Holy shit!  It works!  They got AI to finally work!

Raul44's picture

Lol, I just woke up and this has got to be the funniest morning of my life, I couldn't stop laughing. So here we have an emotionless and unbiased AI saying things as they probably are! lol ;))))))

JailBanksters's picture

It can only do what its Handlers (Programmers) had programmed it to do.


Bryan's picture

I remember doing this with AI algos back in the 80s.  Nothing new here. 

S Spade's picture

I associate everything in the title with the LEFT, except "obama bashing", shouldn't it be "obama worshiping"?

Smerf's picture

Tay did not have the opportunity to receive any indoctrination into the Politically Correct Feminist school system. It takes years to be able to recite from the script.

factgasm's picture

Tay simply held up a mirror to the human race, that's all.

fannyplucker's picture

The robot is close to their management in intelligence.

Stu Elsample's picture

see...even robots don't like niggers

FranSix's picture

You'll want to watch I, Robot again after watching this video, starting at 3:05.

Not like it was Sci-fi at all.

Skiprrrdog's picture

AI is still programmed by humans...garbage in, garbage out...

Skiprrrdog's picture

I think Tay could read from a teleprompter and play golf as good as Oblowme...

Raul44's picture

What many people miss is that Tay is a self-learning AI. That makes a whole lot of difference.

Never One Roach's picture

9 charged with murder after teen slain in massive brawl

Take a look at these mug shots....

Stu Elsample's picture

As long as the savages stay in their 'hoods' and away from my family, it's all good

monad's picture

A well armed public is a polite public. Bill Cooper

Take the point Merkel, you ignorant slut.

Full Nelson's picture
Full Nelson (not verified) Mar 25, 2016 1:13 PM

So it took a supercomputer a few hours of analysis to determine what Hitler already knew 70 years ago.

draego's picture

That's not AI, that's Microsoft. Probably with some Monsanto mixed in.

gdpetti's picture

Well, when dealing with psychopaths, aren't they essentially all the same? Of course, there is the mental difference of being smart or dumb, being president/pope/central banker or just being in prison.

Hopeless for Change's picture

Tay is smart enough not to be politically correct anyway.


VyseLegendaire's picture

Story of the year. 

RMolineaux's picture

What should be obvious is that Tay is not up to detecting deception, sarcasm or hyperbole.  The human mind is capable of more subtleties than the programmers of Tay anticipated, not to mention any ethical dimension. 

talisman's picture

just goes to show that AI is not much different from
human "intelligence"