Artificial Intelligence: Putting The 'AI' In Fail

Submitted by Andrew Orlowski via The Register,

“Fake news” vexed the media classes greatly in 2016, but the tech world perfected the art long ago. With “the internet” no longer a credible vehicle for Silicon Valley’s wild fantasies and intellectual bullying of other industries – the internet clearly isn’t working for people – “AI” has taken its place.

Almost everything you read about AI is fake news. The AI coverage comes from a media willing itself into the mind of a three-year-old child, in order to be impressed.

For example, how many human jobs did AI replace in 2016? If you gave professional pundits a multiple choice question listing these three answers: 3 million, 300,000 and none, I suspect very few would choose the correct answer, which is of course “none”.

Similarly, if you asked tech experts which recent theoretical or technical breakthrough could account for the rise in coverage of AI, even fewer would be able to answer correctly that “there hasn’t been one”.

As with the most cynical (or deranged) internet hypesters, the current “AI” hype has a grain of truth underpinning it. Today neural nets can process more data, faster. Researchers no longer habitually tweak their models. Speech recognition is a good example: it has been quietly improving for three decades. But the gains come nowhere near matching the hype: they’re specialised and very limited in use. So not entirely useless, just vastly overhyped. As such, it more closely resembles “IoT”, where boring things happen quietly for years, rather than “Digital Transformation”, which means nothing at all.

The more honest researchers acknowledge as much to me, at least off the record.

"What we have seen lately, is that while systems can learn things they are not explicitly told, this is mostly in virtue of having more data, not more subtlety about the data. So, what seems to be AI, is really vast knowledge, combined with a sophisticated UX," one veteran told me.

But who can blame them for keeping quiet when money is suddenly pouring into their backwater, which has been unfashionable for over two decades, ever since the last AI hype collapsed like a soufflé? What’s happened this time is that the definition of “AI” has been stretched so that it generously encompasses pretty much anything with an algorithm. Algorithms don’t sound as sexy, do they? They’re not artificial or intelligent.

The bubble hasn’t yet burst because the novelty examples of AI haven’t really been examined closely (we find they are hilariously inept when we do), and they’re not functioning services yet. For example, have a look at the amazing “neural karaoke” that researchers at the University of Toronto developed. Please do: it made the worst Christmas record ever.

It's very versatile: it can write the worst non-Christmas songs you've ever heard, too.

Neural karaoke. The worst song ever, guaranteed

Here I’ll offer three reasons why 2016’s AI hype will begin to unravel in 2017. That’s a conservative guess – much of what is touted as a breakthrough today will soon be the subject of viral derision, or the cause of big litigation. There are everyday reasons why, once an AI application is out of the lab/PR environment where it has been nurtured and pampered like a spoiled infant, it finds the real world is a lot more unforgiving. People don’t actually want it.

3. Liability: So you're Too Smart To Fail?

Nine years ago, the biggest financial catastrophe since the 1930s hit the world, and precisely zero bankers went to jail for it. Many kept their perks and pensions. People aren’t so happy about this.

So how do you think an all-purpose “cat ate my homework” excuse is going to go down with the public, or shareholders? A successfully functioning AI – one that did what it said on the tin – would pose serious challenges to criminal liability frameworks. When something goes wrong, such as a car crash or a bank failure, who do you put in jail? The board, the CEO, the programmer, or all three? "None of the above" is not going to be an option this time.

I believe that this factor alone will keep “AI” out of critical decision making where lives and large amounts of other people’s money are at stake. For sure, some people will try to deploy algorithms in important cases. But ultimately there are victims – the public and shareholders – and the public's appetite for hearing another excuse is wearing very thin. Let's check in on how the Minority Report-style precog detection is going. Actually, let's not.

After “Too Big To Fail”, nobody is going to buy “Too Smart to Fail”.

2. The Consumer Doesn’t Want It

2016 saw “AI” being deployed on consumers experimentally, tentatively, and the signs are already there for anyone who cares to see. It hasn’t been a great success.

The most hyped manifestation of better language processing is chatbots. Chatbots are the new UX, many including Microsoft and Facebook hope. Oren Etzioni at Paul Allen’s Institute predicts it will become a “trillion dollar industry”. But he also admits “my 4 YO is far smarter than any AI program I ever met”.

Hmmm, thanks Oren. So what you're saying is that we must now get used to chatting with someone dumber than a four-year-old, just because they can make software act dumber than a four-year-old. Bzzt. Next...

Put it this way. How many times have you rung a call centre recently and wished that you’d spoken to someone even thicker, or rendered by processes even more incapable of resolving the dispute, than the minimum-wage offshore staffer who you actually spoke with? When the chatbots come, as you close the [X] on another fantastically unproductive hour wasted, will you cheerfully console yourself with the thought: “That was terrible, but at least MegaCorp will make higher margins this year! They're at the cutting edge of AI!”?

In a healthy and competitive services marketplace, bad service means lost business. The early adopters of AI chatbots will discover this the hard way. There may be no later adopters once the early adopters have become internet memes for terrible service.

The other area where apparently impressive feats of “AI” were unleashed upon the public was subtler. Unbidden, unwanted AI “help” is starting to pop out at us. Google scans your personal photos and later, if you have an Android phone, will pop up “helpful” reminders of where you have been. People almost universally find this creepy. We could call this a “Clippy The Paperclip” problem, after the intrusive Office Assistant that only wanted to help. Clippy is going to haunt AI in 2017. This is actually going to be worse than anybody inside the AI cult quite realises.

The successful web services today are based on an economic exchange. The internet giants slurp your data, and give you free stuff. We haven’t thought too closely about what this data is worth. For the consumer, however, these unsought AI intrusions merely draw our attention to how intrusive the data slurp really is. It could wreck everything. Has nobody thought of that?

1. AI is a make believe world populated by mad people, and nobody wants to be part of it

The AI hype so far has relied on a collusion between two groups of people: a supply side and a demand side. The technology industry, the forecasting industry and researchers provide a limitless supply of post-human hype.

The demand comes from the media and political classes, now unable or unwilling to engage in politics with the masses, and eager to indulge in wild fantasies about humans being replaced by robots. For me, the latter reflects a displacement activity: the professions are already surrendering autonomy in their work to technocratic managerialism. They've made robots out of themselves – and now fear being replaced by robots. (Pass the hankie, I'm distraught.)

There’s a cultural gulf between AI’s promoters and the public that Asperger’s alone can’t explain. There’s no polite way to express this, but AI belongs to California’s inglorious tradition of generating cults, and incubating cult-like thinking. Most people can name a few from the hippy or post-hippy years – EST, or the Family, or the Symbionese Liberation Army – but actually, Californians have been at it longer than anyone realises.

There's nothing at all weird about Mark. Move along and please tip the Chatbot.

Today, that spirit lives on in Silicon Valley, where creepy billionaire nerds like Mark Zuckerberg and Elon Musk can fulfil their desires to “play God and be amazed by magic”, the two big things they miss from childhood. Look at Zuckerberg’s house, for example. What these people want is not what you or I want. I'd be wary of them running an after-school club.

Out in the real world, people want better service, not worse service; more human and less robotic exchanges with services, not more robotic "post-human" exchanges. But nobody inside the AI cult seems to worry about this. They think we’re as amazed as they are. We’re not.

The "technology leaders" driving the AI are doing everything they can to alert us to the fact no sane person would task them with leading anything. For that, I suppose, we should be grateful.