
No, AI Cannot Think... and It's Not Going to Take Your Job

by Phoenix Capital Research
Tuesday, Apr 07, 2026 - 11:55

Maybe… just maybe, things won’t be a disaster.

Everywhere you look today, there are articles claiming that the Artificial Intelligence (AI) revolution is going to result in widespread job losses and possibly even an economic depression. A research outfit called Citrini even went so far as to publish a report suggesting that by 2028 the economic damage could be so severe that the U.S. would need to introduce universal basic income to support workers displaced by the technology.

Thanks to Citrini’s massive readership, the piece went viral, and the share prices of the firms the report claimed were at risk collapsed. Never mind that it later came out that the co-author of this report runs a hedge fund that was shorting the stocks the report claimed were most exposed to AI disruption… that’s a story for another time.

I am going to formally counter this argument… and rather than speculate on what could or could not happen, I’m going to use verified facts about the reality of AI technology.

The most important fact? AI cannot think.

AI is essentially an algorithm that answers questions or makes decisions based on probabilities. It is neither creative, nor is it able to reason. It simply scours the internet (mostly Reddit and Wikipedia, which are not exactly bastions of truth) and sorts the information so that it can predict the most probable response/answer to a query.

Put simply, AI, in its current form, is a kind of super-parrot capable of repeating things it has heard/read before in a fashion that sounds coherent. But it cannot actually think.
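To make the "super-parrot" point concrete, here is a toy sketch of the underlying idea: a model that "answers" by emitting whichever word most often followed the previous word in its training text. This is a deliberately crude bigram model, not a real LLM (real models use vastly larger networks and contexts), and the corpus is made up for the example; but the core mechanic, predicting the statistically most probable continuation rather than reasoning, is the same.

```python
from collections import Counter, defaultdict

# Made-up training text for illustration only.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_probable_next(word):
    """Return the most frequent next word seen in training.

    Pure pattern-matching: no reasoning, no fact-checking, and
    the same input always yields the same "answer."
    """
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(most_probable_next("the"))  # "cat" follows "the" most often in the corpus
```

Note that because the model always picks the single most probable continuation, asking it the same question twice produces the identical answer every time, which foreshadows the repetitiveness discussed below.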

If you don’t believe me, perhaps you’ll believe researchers from Stanford, who recently ran a study in which they asked multiple AI models open-ended queries (questions that do not have a single correct answer). And they didn’t do this once or twice, but 26,000 times.

Initially, the researchers expected two things:

  1. A specific AI model to come up with different answers each time it was asked an open-ended query.
  2. Different AI models to come up with different answers.

Instead, they found the exact opposite happened:

  1. A given AI model gave the same answer to an open-ended query every time.
  2. Different AI models ended up coming up with the same solutions to open-ended questions.

The paper concluded that AI actually suffers from “Hive Mind” or homogenous thinking. And most worryingly, this boring, repetitive thinking occurred across numerous AI models. Put simply, instead of providing diverse, creative answers, AI, even across different models with different coding/training, is quite boring and repetitive.

As if that wasn’t bad enough… AI also hallucinates or simply makes things up and states them as facts.

AI doesn’t have a fact-checking mechanism built in, meaning it isn’t “looking things up” in a verified database. So, when asked about something obscure, ambiguous, or outside its training, it tends to generate answers that are fluent and confident-sounding, but wrong.

Some of the more common hallucinations include:

  • Fabricating citations — inventing plausible-sounding academic papers, authors, or URLs that don’t exist
  • Getting biographical details wrong about real people
  • Misremembering statistics or dates
  • Inventing legal cases, historical events, or product specifications
  • Confidently describing a process incorrectly

As the table below illustrates, this issue is endemic in LLMs, with hallucination rates ranging from 15% to 52%!

In this context, the only people who could use AI effectively in a corporate setting would be those individuals who are experts on the subject matter they are discussing with the AI model. After all, these would be the only people capable of determining when an LLM is providing a sound insight or idea as opposed to fabricating key aspects of its answer!

This raises countless issues as to the true effectiveness of AI. If it is simply organizing information as opposed to “thinking,” it isn’t actually providing insights or solutions, but simply arranging information in a pattern that seems as if it makes sense.

Will this result in job losses? Yes, for those jobs that are easily automated and do not require creative thinking or the ability to reason. But widespread job losses that result in an economic depression? Not likely. In fact, you could easily make the case that the true economic impact of AI won’t even match that of the internet.

Some food for thought.

The financial media won’t tell you what’s really happening in markets. Gains, Pains & Capital does — every single day, free, straight to your inbox. Join 30,000+ investors who read it before the market opens.

Subscribe now!

Graham Summers, MBA

Chief Market Strategist

Phoenix Capital Research

Contributor posts published on Zero Hedge do not necessarily represent the views and opinions of Zero Hedge, and are not selected, edited or screened by Zero Hedge editors.