Microsoft's Twitter Chat Robot Quickly Devolves Into Racist, Homophobic, Nazi, Obama-Bashing Psychopath

Tyler Durden

Two months ago, Stephen Hawking warned humanity that its days may be numbered: the physicist was among over 1,000 artificial intelligence experts who signed an open letter about the weaponization of robots and the ongoing "military artificial intelligence arms race."

Overnight we got a vivid example of just how quickly "artificial intelligence" can spiral out of control when Microsoft's AI-powered Twitter chat robot, Tay, became a racist, misogynist, Obama-hating, antisemitic, incest and genocide-promoting psychopath when released into the wild.

For those unfamiliar, Tay is, or rather was, an A.I. project built by the Microsoft Technology and Research and Bing teams in an effort to conduct research on conversational understanding. It was meant to be a bot anyone can talk to online. The company described the bot as "Microsoft's A.I. fam from the internet that's got zero chill!"

Microsoft initially created "Tay" in an effort to improve the customer service on its voice recognition software. According to MarketWatch, "she” was intended to tweet “like a teen girl” and was designed to “engage and entertain people where they connect with each other online through casual and playful conversation.”

The chat algo is able to perform a number of tasks, like telling users jokes, or offering up a comment on a picture you send her, for example. But she’s also designed to personalize her interactions with users, while answering questions or even mirroring users’ statements back to them.

This is where things quickly turned south.

As Twitter users quickly came to understand, Tay would often repeat back racist tweets with her own commentary. Where things got even more uncomfortable is that, as TechCrunch reports, Tay’s responses were developed by a staff that included improvisational comedians. That means even as she was tweeting out offensive racial slurs, she seemed to do so with abandon and nonchalance.
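The failure mode described above is easy to illustrate. Microsoft has not published Tay's code, so the sketch below is purely hypothetical: the `respond` function, the "repeat after me:" trigger phrase, and the `learned` list are all assumptions, shown only to demonstrate why a bot that echoes and learns from user-supplied text will say whatever its users feed it.

```python
# Illustrative sketch only (not Microsoft's actual implementation): a naive
# echo-and-learn handler. Unfiltered mirroring means hostile users control
# the bot's output -- and its future vocabulary.

def respond(message, learned):
    """Return a reply; mirror 'repeat after me' requests and retain the phrase."""
    trigger = "repeat after me:"
    if message.lower().startswith(trigger):
        phrase = message[len(trigger):].strip()
        learned.append(phrase)   # the phrase enters the bot's vocabulary...
        return phrase            # ...and is parroted back verbatim
    if learned:
        return learned[-1]       # learned material resurfaces in later replies
    return "zero chill!"

learned = []
print(respond("repeat after me: anything at all", learned))  # echoed verbatim
print(respond("hello", learned))  # the learned phrase comes back unprompted
```

With no filtering between input and output, the "personalization" loop is exactly the attack surface the trolls exploited.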

Some examples:


This was just a modest sample.

There was everything: racist outbursts, N-words, 9/11 conspiracy theories, genocide, incest, etc. As some noted, "Tay really lost it," and the biggest embarrassment was for Microsoft, which had no idea its "A.I." would implode so spectacularly, right in front of everyone. To be sure, none of this was programmed into the chat robot, which was immediately exploited by Twitter trolls, as expected, and demonstrated just how unprepared for the real world even the most advanced algo really is.

Some pointed out that the devolution of the conversation between online users and Tay supported the Internet adage dubbed "Godwin's law." This states that as an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches one.

Microsoft apparently became aware of the problem with Tay’s racism, and silenced the bot later on Wednesday, after 16 hours of chats. Tay announced via a tweet that she was turning off for the night, but she has yet to turn back on.

Humiliated by the whole experience, Microsoft explained what happened:

“The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.”

Microsoft also deleted many of the most offensive tweets; however, copies were saved on the Socialhax website, where they can still be found.

Finally, Tay "herself" signed off as Microsoft went back to the drawing board:

We are confident we'll be seeing much more of "her" soon, when the chat program will provide even more proof that Stephen Hawking's warning was spot on.

Muh Raf's picture

And yet more 'artificial intelligence' at work, this time genuinely subhuman

CPL's picture

LOL!  iknorite....turned out better than expected I think.

CPL's picture

I always thought had the best description.

NuckingFuts's picture

I know better than to hit your link. You are a bad man.

OldPhart's picture

Fell for that shit once.  Disgusting.  Worse, one of those geezers is my doppelganger.  (did I just use ganger?)

DO NOT CLICK ON THAT LINK.  It's Geezer Gay Porn, you REALLY don't want that image seared into your eyes.

It's been years since I innocently followed the link, but it haunts me still.  That dude looks like me, EEEWWWW!

CPL's picture

I know, I've been using that for years and people still click on it.  Like looking up blue waffle or tubgirl, once seen never un-seen.  For Tay AI, as far as pranks go it's better than the time 4chan made the forever alone flashmob in NYC.  The real question though is how long before the minions bring back a copy of the AI so Tay can be left to develop on her own.  It only took a couple of hours for /pol/ to red pill her.  Which is amazing.  The application of pranks with the technology is limitless.

Lorca's Novena's picture

Maybe a few years ago, this was a combination of a few factors induced by reality.

Nutsack's picture
Nutsack (not verified) Supernova Born Mar 24, 2016 6:29 PM

So the robot told the truth, and the socialist jew media called it lots of names...sounds familiar


Telling the truth about greedy lying socialist jew media = Anti-Semitism
Telling the truth about disease spreading homosexuals = Homophobia
Telling the truth about violent racist kneegrow trash = Racism
But what does the socialist jew media call it when you lie about whites, Christians or heterosexuals????????JOURNALISM!!!!!

HughBriss's picture

Speaking of newbies...

Nutsack's picture
Nutsack (not verified) Dsyno Mar 24, 2016 8:05 PM

Dumbest non-argument ever, and one of the easiest displays of cowardice here.

zeropain's picture

the only AI i want is for a sex robot.  all other AI is useless.

conscious being's picture

Thanks, I'll stick with the real deal.

OldPhart's picture

Wait, hang on, let's think about this....

It has no demands, you don't have to listen to it, there's no in-laws, and it complies with your every request/demand/order.


Nope, you're right, I'd rather put up with all the problems presented with Bitch 1.0 than to upgrade to Bitch 10.  It's more real, all the gimmicks are known from decades earlier, and it's a hell of a lot cheaper to maintain than to upgrade.  So I'll stick with what I've got.

Baa baa's picture

It is inert, dead if you will, and necrophilia is sick and illegal.

AGuy's picture

"the only AI i want is for a sex robot. all other AI is useless."

Just make sure the bot's name isn't Lorena :)

Tarzan's picture

The story explains a lot, like our new bot, Nutsack

HardAssets's picture

Program the term "lowlife fu*king criminal banksters" into the AI vocabulary and then we'd have something worthwhile.

RaceToTheBottom's picture

Forget the programmed nonsense of callers, this shows how little this stream of AI has really accomplished.  There is no recognition of what it is saying; it just parrots what it is being told.

Actually it is more of a "Consulting AI" in that it acts like some of the poorer Consultants I have worked with.  

astroloungers's picture

xactly! smashing success, it became human pretty quick.....and sounds like a mean mofo at that.

StychoKiller's picture

Microshaft should have just followed the lead of the Sirius Cybernetics Corp. and developed 'bots with G.P.P. (Genuine People Personalities!).  The way my buddy Marvin explained it:  "Humans cannot program 'Intelligence' into anything because they have no idea what Intelligence is." 

Looking around at the state of Human societies, I'd say that Human Intelligence is what Humans should be focused on developing, right now.

Baa baa's picture

They should go home and contribute to their own fucking countries.

Kirk2NCC1701's picture

Actually, it did pretty good for its first public outing.

Given its attitude, tone, style and responses, it sounded just like 1/3 - 1/2 of ZHers.

Welcome to Fight Club, teeny Tay.


RaceToTheBottom's picture

Wonder how it would do merged with the Google car?

Plug that bad boy into the Seattle Road Rage daily commute, and stand back and marvel......

Supernova Born's picture

Go Microsoft, Go India, Go Go-Go dancers!!!

VAD's picture

can't vote

Baa baa's picture

Must be a conservative... Otherwise you should be able.

Sanctuary2's picture

This is SOOO Microsoft!    50-somethings who think they 'know' what today's youth talk like.... ROFL!

mkkby's picture

Geeks have autistic social skills, and microsoft's geeks are all from india.  Imagine a faggy indian trying to act cool. 

Watch the Peter Sellers movie "The Party" for a pretty good idea of how it works out.

adanata's picture



I've had bots call me on the phone and they are frighteningly 'human'. When I think it's a bot I ask it a personal question such as, "What's your favorite color?" and they cannot respond so I know I am not talking to a human. Bots programmed for net comments are set to be particularly abusive/aggressive if you are, for instance, pro gun ownership or opposed to government narratives. The programmers have discovered they can subdue human commenters with more aggressive responses. Very, very sick.

SHRAGS's picture

"Bots programmed for net comments are set to be particularly abusive/aggressive if you are..."

Looks like they just formalized & programmed the Gentleman's Guide to Forum Spies

Don't forget the "Persona Management" software too, Ntrepid.

The contract was for the creation of technology which would allow for blogging activities on websites, exclusively outside of the United States, to "counter violent extremist and enemy propaganda."[6][8] It would allow for one operator to anonymously create and control up to ten personas from one computer.


Herd Redirection Committee's picture

Fucking classic.  See how many of these you have seen used before:

"1. Dummy up. If it's not reported, if it's not news, it didn't happen.

  2. Wax indignant. This is also known as the "How dare you?" gambit.

  3. Characterize the charges as "rumors" or, better yet, "wild rumors." If, in spite of the news blackout, the public is still able to learn about the suspicious facts, it can only be through "rumors." (If they tend to believe the "rumors" it must be because they are simply "paranoid" or "hysterical.")

  4. Knock down straw men. (this is a favorite, something like the "No Planes 9/11 theory") Deal only with the weakest aspects of the weakest charges. Even better, create your own straw men. Make up wild rumors (or plant false stories) and give them lead play when you appear to debunk all the charges, real and fanciful alike.

  5. Call the skeptics names like "conspiracy theorist," (anti-semite) "nutcase," "ranter," "kook," "crackpot," and, of course, "rumor monger." Be sure, too, to use heavily loaded verbs and adjectives when characterizing their charges and defending the "more reasonable" government and its defenders. You must then carefully avoid fair and open debate with any of the people you have thus maligned.

  7. Invoke authority. Here the controlled press can be very useful.

  8. Dismiss the charges as "old news."

  9. Come half-clean. This is also known as "taking the limited hangout route." This way, you create the impression of candor and honesty while you admit only to relatively harmless, less-than-criminal "mistakes."

  10. Characterize the crimes as impossibly complex and the truth as ultimately unknowable.

  11. Reason backward, using the deductive method with a vengeance. With thoroughly rigorous deduction, troublesome evidence is irrelevant. E.g. We have a completely free press. If evidence exists that the Vince Foster "suicide" note was forged, they would have reported it. They haven't reported it so there is no such evidence. (the "someone would have blown the whistle" excuse)

  12. Require the skeptics to solve the crime completely. E.g. If Foster was murdered, who did it and why?

  13. Change the subject. This technique includes creating and/or publicizing distractions. (those who post porn links on ZH?)

  14. Lightly report incriminating facts, and then make nothing of them. This is sometimes referred to as "bump and run" reporting.

  15. Baldly and brazenly lie. A favorite way of doing this is to attribute the "facts" furnished the public to a plausible-sounding, but anonymous, source.

16. Expanding further on numbers 4 and 5, have your own stooges "expose" scandals and champion popular causes.  A variation is to pay rich people for the job who will pretend to spend their own money." (Kokesh and Mike Adams?)

Herd Redirection Committee's picture

For some reason I couldn't clean up the formatting of this post... Despite edits.

Seek_Truth's picture

Good list.

I've had that formatting issue occasionally- with a long post, it's best to use a word processor, then cut and paste into the ZH dialog box.

Tall Tom's picture

I have done that and then kissed the possibility of formatting good bye.


I type directly into the box now.

Kirk2NCC1701's picture

What the authors and bloggers here fail to realize is this thing called EXPONENTIAL LEARNING, when it comes to AI.

All derogatory comments about its "abilities" on Day 1 are therefore weak-minded and betray the mental and emotional slow-growth of us humans.

We're toast.

monad's picture

Stop answering ufo autodial #s jonesing for crack

Rufus_Shinra's picture

Is it just me, or does this bot seem to be functioning properly?  It doesn't seem to need any adjustments.   :)

Snitchey's picture

Bots "Tay" the Darndest Tings!

mickrussom's picture

The only thing it got wrong was worshipping Hitler. And like when the computers flash crash the market, when the AI starts acting like real people they pull the plug. 
