Artificial Intelligence Is Here - Socialbots & Turing Tests

Submitted by Mike Krieger of Liberty Blitzkrieg blog,

Today, I want to highlight two interrelated and significant developments in the world of AI, or Artificial Intelligence. The first has to do with a computer program called “Eugene Goostman,” which has reportedly become the first program to fool more than 30% of human judges into believing it is a real person. The test was recently conducted at the Royal Society in London, and the UK’s Independent reported yesterday that:

A program that convinced humans that it was a 13-year-old boy has become the first computer ever to pass the Turing Test. The test — which requires that computers are indistinguishable from humans — is considered a landmark in the development of artificial intelligence, but academics have warned that the technology could be used for cybercrime.

 

Computing pioneer Alan Turing said that a computer could be understood to be thinking if it passed the test, which requires that a computer dupes 30 per cent of human interrogators in five-minute text conversations. 

 

Eugene Goostman, a computer program made by a team based in Russia, succeeded in a test conducted at the Royal Society in London. It convinced 33 per cent of the judges that it was human, said academics at the University of Reading, which organized the test. 

 

It is thought to be the first computer to pass the iconic test. Though other programs have claimed successes, those included set topics or questions in advance.

 

The computer program claims to be a 13-year-old boy from Odessa in Ukraine.

 

“In the field of Artificial Intelligence there is no more iconic and controversial milestone than the Turing Test, when a computer convinces a sufficient number of interrogators into believing that it is not a machine but rather is a human,” said Kevin Warwick of the University of Reading, who organized the test. “Having a computer that can trick a human into thinking that someone, or even something, is a person we trust is a wake-up call to cybercrime.”
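To make the 30 per cent criterion concrete, here is a minimal Python sketch of how such a result could be tallied. It is illustrative only; the judge counts below are hypothetical, not the actual figures from the Royal Society event.

def passes_turing_test(verdicts, threshold=0.30):
    """Return the fraction of judges fooled and whether it clears the bar.

    verdicts: list of booleans, True if a judge believed the hidden
    interlocutor was human after a five-minute text conversation.
    """
    fooled = sum(verdicts) / len(verdicts)
    return fooled, fooled > threshold

# Hypothetical example: 10 of 30 judges fooled -> 33%, above the 30% bar.
rate, passed = passes_turing_test([True] * 10 + [False] * 20)
print(f"{rate:.0%} of judges fooled; passed: {passed}")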

This is particularly interesting in light of a recent piece from the MIT Technology Review titled, How Advanced Socialbots Have Infiltrated Twitter. The article highlights the work of Carlos Freitas from the Federal University of Minas Gerais in Brazil, who demonstrated the ability of Twitterbots (i.e., fake, computer-generated Twitter accounts) not only to gain more followers than many actual human users, but also to infiltrate social groups within Twitter and exercise influence within them. Furthermore, while Twitter monitors the site for bots and suspends them when uncovered, 69% of the bots created by Freitas and his colleagues escaped detection. From MIT Technology Review:

You might say that bots are not very sophisticated and so are easy to spot. And that Twitter monitors the Twittersphere looking for, and removing, any automated accounts that it finds. Consequently, it is unlikely that you are unknowingly following any automated accounts, malicious or not.

 

If you hold that opinion, it’s one that you might want to revise following the work of Carlos Freitas at the Federal University of Minas Gerais in Brazil and a few pals, who have studied how easy it is for socialbots to infiltrate Twitter.

 

Their findings will surprise. They say that a significant proportion of the socialbots they have created not only infiltrated social groups on Twitter but became influential among them as well. What’s more, Freitas and co have identified the characteristics that make socialbots most likely to succeed.

 

These guys began by creating 120 socialbots and letting them loose on Twitter. The bots were given a profile, made male or female and given a few followers to start off with, some of which were other bots. 

 

The bots generate tweets either by reposting messages that others have posted or by creating their own synthetic tweets using a set of rules to pick out common words on a certain topic and put them together into a sentence. 

 

The bots were also given an activity level. High activity equates to posting at least once an hour and low activity equates to doing it once every two hours (although both groups are pretty active compared to most humans). The bots also “slept” between 10 p.m. and 9 a.m. Pacific time to simulate the down time of human users.

 

Having let the socialbots loose, the first question that Freitas and co wanted to answer was whether their charges could evade the defenses set up by Twitter to prevent automated posting. “Over the 30 days during which the experiment was carried out, 38 out of the 120 socialbots were suspended,” they say. In other words, 69 percent of the social bots escaped detection.

 

The more interesting question, though, was whether the social bots can successfully infiltrate the social groups they were set up to follow. And on that score the results are surprising. Over the duration of the experiment, the 120 socialbots received a total of 4,999 follows from 1,952 different users. And more than 20 percent of them picked up over 100 followers, which is more followers than 46 percent of humans on Twitter.

 

More surprisingly, the socialbots that generated synthetic tweets (rather than just reposting) performed better too. That suggests that Twitter users are unable to distinguish between posts generated by humans and by bots. “This is possibly because a large fraction of tweets in Twitter are written in an informal, grammatically incoherent style, so that even simple statistical models can produce tweets with quality similar to those posted by humans in Twitter,” suggest Freitas and co.

 

Gender also played a role. While male and female bots were equally effective when considered overall, female social bots were much more effective at generating followers among the group of socially connected software developers. “This suggests that the gender of the socialbots can make a difference if the target users are gender-biased,” say Freitas and pals.

Hahaha, software developers trying to get frisky on Twitter.

So the work of Freitas and co is a wake-up call for Twitter. If it wants to successfully prevent these kinds of attacks, it will need to significantly improve its defense mechanisms. And since this work reveals what makes bots successful, Twitter’s research team has an advantage.
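The synthetic-tweet approach described above is simple enough to sketch in a few lines of Python. The fragment below is an illustrative approximation under my own assumptions (the sample tweets, word counts and posting window are made up), not the researchers' actual code: the bot either reposts an existing message or strings together common words on a topic, and it respects the posting cadence and overnight "sleep" window mentioned in the article.

import random
from collections import Counter
from datetime import datetime, time

# Illustrative socialbot sketch, loosely following the behaviour described
# above; this is not the researchers' implementation.

SAMPLE_TWEETS = [                      # stand-in for tweets scraped on a topic
    "new release of the framework looks great",
    "debugging the new framework release all night",
    "great release notes for the new framework",
]

def common_words(tweets, k=6):
    """Pick the k most common words seen on the target topic."""
    counts = Counter(word for t in tweets for word in t.lower().split())
    return [word for word, _ in counts.most_common(k)]

def synthetic_tweet(tweets, length=5):
    """Either repost an existing message or assemble a crude 'sentence'."""
    if random.random() < 0.5:
        return random.choice(tweets)   # repost someone else's message
    return " ".join(random.choice(common_words(tweets)) for _ in range(length))

def awake(now, start=time(9, 0), end=time(22, 0)):
    """Bots 'slept' between 10 p.m. and 9 a.m.; local clock used here."""
    return start <= now.time() < end

POST_INTERVAL_HOURS = 1                # high-activity bots; low-activity used 2

if awake(datetime.now()):
    print(synthetic_tweet(SAMPLE_TWEETS))

Even something this crude can produce plausible-looking output, which is consistent with the authors' point that much of Twitter is written in an informal, grammatically loose style to begin with.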

The threat such systems pose for both government and corporate propaganda cannot be overstated, and it seems they have already become sophisticated enough to warrant concern.

While this hasn’t been a key topic on this site up to this point, I have covered it from time to time in the past; for example, in the post from one year ago titled The CIA’s Latest Investment: Robot Writers.

Furthermore, the threat posed by AI isn’t limited to media. We have also seen it creep into surveillance methods. For example: Meet AISight – The Artificial Intelligence Software Being Installed on CCTV Networks Globally.

This is definitely a trend that deserves more scrutiny going forward.

Tue, 06/10/2014 - 14:12 | 4841353 fourZero

Why isn't Eugene already working in HFT?

Tue, 06/10/2014 - 14:15 | 4841362 PartysOver

BFD! It has nothing on Obama's Teleprompter.

Tue, 06/10/2014 - 14:19 | 4841385 James_Cole

The Independent story is BS, but there are some interesting bot scripts out there. Cleverbot is probably the most famous and best. For those who haven't come across it and are curious:

http://www.cleverbot.com/

Tue, 06/10/2014 - 14:20 | 4841392 dontgoforit

Eugene means 'well born.'  Goostman?  Jewish gooster?

Tue, 06/10/2014 - 14:31 | 4841451 Stackers

Open the pod bay doors Hal.

Tue, 06/10/2014 - 14:39 | 4841487 Headbanger

Damn!   Tylers are on to me!

 

Tue, 06/10/2014 - 14:51 | 4841555 Anusocracy

Gee, 30% thought it was human, while less than 30% of "humans" are human.

Tue, 06/10/2014 - 14:59 | 4841581 Manthong

Yeah, you can set rules and script a computer to do or mimic about anything you want it to do now.

If they end up killing people or enslaving humanity it is because the rules were set for them by evil people.

In that case, they are about as intelligent as your average mindless Taliban robot.

Don’t expect any great intuition or ingenuity from a machine though.

That is the province of the human mind.

But don’t take my word for it; look to the guy who might be second only to Einstein as a genius in the last century.

Tue, 06/10/2014 - 15:11 | 4841618 BLOTTO

Einstein was asked how he felt about being the smartest man; he replied, "I don't know, you will have to ask Nikola Tesla that."

Tue, 06/10/2014 - 15:16 | 4841631 So Close

Am I alone in thinking fooling the bottom 30% isn't anything special?

Tue, 06/10/2014 - 15:30 | 4841678 Manthong

Not that I really GAF but anyone who would junk my comment above is demonstrating ignorance and low personal integrity of an extreme degree.

I know who you are and so do you.

Speak up if you have half a ball. 

 

Tue, 06/10/2014 - 16:07 | 4841796 BorisTheBlade

I didn't junk you, but I think it was a bot and not a clever one. Or maybe not, but they certainly infiltrated ZH long ago and some of them got pretty influential. The question is, what percentage of humans could pass a Turing test?

Tue, 06/10/2014 - 16:09 | 4841800 gh0atrider

Artificial gh0atrider is also here!

BUY BITCOIN AT ANY PRICE!!

Tue, 06/10/2014 - 16:53 | 4841852 Manthong

“what percentage of humans could pass a Turing test?”

Using any of Jay Leno’s “Jay Walking” interviews or this Mark Dice piece, way too few.

http://www.youtube.com/watch?v=ndshbH3qZ6Y&feature=plcp

There are a few on this site who are quick to cast a personal slur because they disagree with a comment rather than set out a thoughtful rebuttal (cage fight vs. fight club). Then they delight in junking a priori stuff because of their lack of integrity.

It takes all kinds, I suppose. 

But then again, maybe there are folks who actually believe that man is God and has the capability to endow machines with intuition and abstraction sufficient to produce a synergy representative of Husserl’s phenomenology.   

They should explain how.

Tue, 06/10/2014 - 17:07 | 4841957 Lost Word
Tue, 06/10/2014 - 17:45 | 4842064 Manthong

Good one,

It sounds like the reporters and the judges couldn’t pass the alleged Turing Test.

Tue, 06/10/2014 - 20:00 | 4842463 rubiconsolutions

< ---- Humans are getting dumber

< ---- Machines are getting smarter

Tue, 06/10/2014 - 20:37 | 4842617 Drachma

Even if the Turing test is "passed" what does that really tell us? That humans can build machines that can fool other humans into thinking the machines are human.

Tue, 06/10/2014 - 15:11 | 4841620 ebworthen

lol.

I wonder if they are certain that the 30% who thought it was human were actually human and not other bots pretending to be humans saying the other bot was human?

What if 70% of the humans think who they are talking to is a bot when it is a human and they ignore them?

This could get messy.

Tue, 06/10/2014 - 15:01 | 4841593 Antarctico

I, for one, welcome our future robot overlords. Really, could they do any worse than the soulless, evil fucks that are running things now?

Tue, 06/10/2014 - 18:09 | 4842118 intric8

When they go into gun shops asking for a phase plasma rifle in the 40 watt range, then I think things have gotten considerably worse.

Tue, 06/10/2014 - 14:22 | 4841404 James_Cole

Forgot about this priceless gem:

http://www.youtube.com/watch?v=WnzlbyTZsQY

Tue, 06/10/2014 - 14:27 | 4841432 MayIMommaDogFac...

Ah well -- Organic Stupidity has had center stage for so long now it seems logical that an artificial 13-year-old boy from Ukraine is about to take over humankind.

Is it just me?

Tue, 06/10/2014 - 15:00 | 4841591 nonclaim

The trick with being a 13-year-old foreigner is that people a) don't expect much from you, b) will try harder to understand you and c) are willing to accept a broad range of errors and mistakes (age, language, culture).

It's not that the AI has reached new ground... as it is, Eugene is a psychological exploit.

Tue, 06/10/2014 - 16:21 | 4841835 Death and Gravity

Spot on.

Tue, 06/10/2014 - 14:46 | 4841524 pkea

The Turing test is more of a test of human intelligence.

Well, one or two interrogators were dumb, but then that is quite statistically representative of the wider population, so nothing surprising here that it duped someone... though I guess the algos are getting better.

Tue, 06/10/2014 - 14:17 | 4841372 thisisjustarand...

Because it's nothing more than a chat bot that ONLY had success because it 'gamed' the rules by being introduced to respondents as a 13-year-old Ukrainian boy, which would easily influence and logically explain any inconsistencies in its dialogue, and yet it still only achieved a 30% result observed only by the 'game' supervisors -- none of which, because the audience was 'tainted' and the results were too low, even begins to actually be "successful" as Turing defined the Turing Test. This is nothing new and other chatbots have claimed the same thing; I only assume this is "news" right now because the mainstream likes anything related to AI, NSA, hacking, security, etc. as clickbait ad revenue.

Tue, 06/10/2014 - 14:21 | 4841398 pods

Had to upvote you just for using tainted.  :)

So what if I am seven years old at times, it makes life bearable.

pods

Tue, 06/10/2014 - 14:28 | 4841438 PT

I thought this article had finally explained the existence of Million Dollar Bonus.

Tue, 06/10/2014 - 14:31 | 4841450 PT

In the future there will be no revolutions.  All the thinkers will be on the internet, arguing with robots and wondering why they can never get their point across.

Hmmmmmmmmmmmmmmmm.....

Tue, 06/10/2014 - 14:37 | 4841478 Dr. Engali

Are you a bot trying to discredit the author? Come on fess up.

Tue, 06/10/2014 - 17:11 | 4841966 Lost Word
Tue, 06/10/2014 - 17:45 | 4842065 James-Morrison

Let's see how many seven-year-old boys the judges could find posting on ZH.

I'll bet many of us wud nt pss.

Tue, 06/10/2014 - 14:17 | 4841373 0b1knob

The Turing test says that a machine intelligence becomes "human" once it can behave like a human.

Alan Turing was a human.   He was gay and he committed suicide.

Therefore: A machine intelligence will become "human" only when it turns gay and kills itself.

Tue, 06/10/2014 - 14:22 | 4841405 dontgoforit

How about 'once it believes' like a human?  Then all the other robots can kill themselves trying to control each other.

Tue, 06/10/2014 - 14:32 | 4841453 Oldwood

So we are judging a computer's intelligence by how easily it can fool people? Seems like a pretty low threshold given our current situation.  Field mice would seem more questioning than a vast number of people are these days. Lies have never been bolder or more blatant.

Fool me once and shame on Obama, fool me twice and hit me upside the head with a 2x4.

Tue, 06/10/2014 - 14:50 | 4841542 hobopants

Ya...pretty low bar set by Turing there. Artificial Intelligence has only to aspire to the level of a mouth breathing Kardashian viewer.

Tue, 06/10/2014 - 15:05 | 4841605 BLOTTO

There they go again, scientists trying to play 'god.'

.

Someone notify me when AI can tell me what a rose smells like, what an orange tastes like and what blowing your load feels like.

Tue, 06/10/2014 - 15:21 | 4841633 GooseShtepping Moron

What does blowing your load feel like?

https://www.youtube.com/watch?v=ona-RhLfRfc

Tue, 06/10/2014 - 15:07 | 4841611 fooshorter

This was proved to be bullshit on another site...

Tue, 06/10/2014 - 17:00 | 4841929 americanreality

That's never stopped zerohedge before.

Tue, 06/10/2014 - 14:13 | 4841357 Badabing

110001101001010010011100101011

Tue, 06/10/2014 - 14:31 | 4841448 Relentless101

What percentage of ZH users are bots??? Makes you fucking think... LOL.

Tue, 06/10/2014 - 14:33 | 4841454 PT

As I said above, it would explain MDB.

Tue, 06/10/2014 - 18:47 | 4842257 CH1

It would explain all the Nazis.

Tue, 06/10/2014 - 14:33 | 4841456 Oldwood

Is this a test question?

Tue, 06/10/2014 - 14:13 | 4841358 icanhasbailout

They'll never become a threat to the dominance of cats.

Tue, 06/10/2014 - 14:43 | 4841510 CCanuck

I know what ya mean, my neighbourhood is full of pussies.

 

Tue, 06/10/2014 - 14:14 | 4841360 rtalcott

Hmmmm...politicians have a LOT to worry about now....they can be automated.

Tue, 06/10/2014 - 14:17 | 4841369 firstdivision

They were automated for years; they're called broken ATMs. They're broke in that they only accept cash deposits while forbidding withdrawals.
