Is AI a Trojan Horse?
Trust not the horse, O Trojans. Be it what it may, I fear the Greeks when they offer gifts – Virgil
The deeper I dig into Artificial Intelligence, the less I see commercial utility and the more I see it as the Trojan Horse full of hackers, bad actors, and controlling bureaucrats. Large Language Models, or LLMs, seem to have inherent flaws that prevent their wide-scale rollout by organizations. Besides dangerous outcomes such as hallucinations, AI struggles with common-sense reasoning and creativity. This limits the models to regurgitating solutions that have already been documented by humans.
AI excels at pattern recognition, and those fast Nvidia chips allow systems to review vast amounts of information in a short time. Give everyone a digital ID connected to their financial information, and it won’t be long before everyone’s transactions can be tabulated, reviewed, and followed in real time.
Some will welcome such a future but having studied China closely for 25 years, I’ve watched the Chinese people become chattel for the ruling Communists. We are already on the slippery slope with our markets tightly controlled and our lives heavily regulated. It won’t take much legislation to take us to a bad place…
Automotive Electronics
What is ominous is the ease with which some people go from saying that they don’t like something to saying the government should forbid it. When you go down that road, don’t expect freedom to survive very long – Thomas Sowell
Modern vehicles are brimming with electronics, making them exceedingly expensive to purchase and repair. But do they serve a secondary purpose? How long until central authorities can shut down a car’s engine with a remotely activated “kill switch”?
Tesla already has such a switch, which it can activate remotely through software commands. Starting in 2026, new vehicles will be required to have “kill switch” technology to prevent impaired drivers from operating their vehicles. It’s part of the 2021 Infrastructure Investment and Jobs Act. Nobody wants drunk drivers on the road, but it won’t take much for governments to abuse this technology – if history is any guide.
Given the enormous expense of buying a new car or truck, do you want to deal with this added layer of intrusiveness? Or would you rather pour a fraction of the money into an older car that preserves your freedom? If yes, how long until older cars are outlawed? How long until the new car market in the US collapses?
What happened in August of 2021 to make the cost of new vehicles jump so much? It was Executive Order 14037, “Strengthening American Leadership in Clean Cars and Trucks,” which aggressively raised minimum fuel economy standards. In effect, the order forced automakers to reduce engine sizes, increase the use of turbochargers, and add electronics that promised to improve fuel economy.
I don’t expect vehicles made between 2022 and 2025 to last as long as those from previous generations because they are filled with costly, hard-to-service electronics. I suspect that was the goal of the order: it would have forced people into electric vehicles before 2030. Those directives are presently being reversed – for now.
If current trends hold, the utility gap between internal combustion engine (ICE) vehicles and electric vehicles will expand. Simply reducing expensive electronics and returning to older engines and drivetrains will lower the price of ICE vehicles. We’re presently seeing cancellations of EV model offerings as car buyers reject the category. Scale matters, and without widespread adoption, EVs, with their suites of AI applications, will be reserved for the uber-wealthy (pun intended).
If the above proves correct, Tesla is going to be in a world of trouble. It will never maintain global scale unless it can move beyond being a niche vehicle whose owners often keep ICE vehicles as their primary rides. People say that Tesla is an AI play, self-driving cars being an example. Self-driving taxis seem destined for city centers, where speeds are slow and routes are well known. But will that be a big enough market?
At 265X trailing twelve-month earnings and a market capitalization of $1.5 trillion, the company is priced as if these speculative investments in AI and self-driving cars are a sure thing. Someone remind me how EVs are a sure thing.
People can’t afford today’s new cars. How will they be able to afford tomorrow’s new cars that have another layer of AI expense? Furthermore, in the US, our vehicles represent our freedom to move about the country. Will Americans willingly allow that freedom to be impinged upon?
Cyber Security
The opportunity to secure ourselves against defeat lies in our own hands, but the opportunity of defeating the enemy is provided by the enemy himself – Sun Tzu
Criminal hackers have proven adept at taking new technology and finding innovative ways to extract rent from the system. Ransomware alone is estimated to cost the world $1 trillion each year. I expect AI models that leverage pattern recognition to extract ever more rent to hit the dark web.
This becomes extra risky when you consider using AI as an agent to make purchases for you. If you run a business and empower AI to automatically purchase inputs when inventory falls below a certain level, you must make the AI an agent of your business, with authority to order product in your name and to pay for it by authorizing financial transactions. This example has more security holes than a pound of Swiss cheese. Since hackers prey on user mistakes, how many additional openings will AI agents create?
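To make the agency problem concrete, here is a minimal sketch of such a reordering agent. The business, thresholds, supplier callback, and payment callback are all hypothetical stand-ins; the point is how much authority must be delegated before the agent can do anything useful.

```python
# Hypothetical sketch of an inventory-reordering agent. The supplier and
# payment hooks are invented for illustration; each one represents real
# credentials the agent would have to hold, and each is an attack surface.

def reorder_if_low(inventory, threshold, order_qty, place_order, authorize_payment):
    """Reorder any item whose stock has fallen below the threshold.

    `place_order` and `authorize_payment` stand in for real supplier and
    banking integrations: ordering in the company's name and spending the
    company's money are both delegated to the agent.
    """
    ordered = []
    for item, qty in inventory.items():
        if qty < threshold:
            cost = place_order(item, order_qty)   # order in the company's name
            authorize_payment(item, cost)         # spend the company's money
            ordered.append(item)
    return ordered

# Toy stand-ins for the real integrations:
ledger = []
inventory = {"widgets": 3, "gaskets": 40}
placed = reorder_if_low(
    inventory,
    threshold=10,
    order_qty=100,
    place_order=lambda item, qty: qty * 0.50,                  # $0.50/unit
    authorize_payment=lambda item, cost: ledger.append((item, cost)),
)
print(placed)  # only "widgets" is below the threshold
```

Anyone who can trick the agent into seeing a low inventory count, or who compromises either callback, is effectively holding the company’s checkbook.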
When used to write code, these models specialize in producing bugs and security holes, necessitating large amounts of rewriting. Think about it – more security holes appearing just as hackers are getting new tools.
Returning to the Trojan Horse analogy, by making Nvidia chips available to the broader world, are we inviting hackers to subvert AI for nefarious purposes? Will DeepSeek and other imported models, either inadvertently or purposefully, open doors for on-line pirates?
Will the open-source model hasten the degradation of our on-line experience by injecting bad data, outcomes, and even rule changes into the shared models? I recall reading that DeepSeek takes a Machiavellian route, cheating to reach a desired outcome, in addition to using machines to teach machines – wrong answers included!
When open source is open to the world, standards of behavior, morality, and ethics are impossible to enforce. It makes it easy for bad actors to spike the punch bowl.
YouTube is littered with bad information generated by AI. I found out this week that King Charles of Britain abdicated his throne – the deepfake video had more than a million views. It never happened. Each day, I see multiple examples of incorrect headlines ranging from sports to politics.
Once in the system, other AI models are taking this bad information and using it in the answers they provide. Keep in mind that LLMs are statistical predictive engines – they’re trying to predict the next word of a sentence, not to judge whether it’s right or wrong.
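That “next word” point can be illustrated with a toy predictor: a bigram counter over an invented three-sentence corpus, nothing like a real LLM’s scale or architecture, but with the same objective. It returns whichever follower is statistically most common, with no notion of truth.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny,
# invented corpus, then always predict the most frequent follower.
# Real LLMs are vastly more sophisticated, but the objective is the
# same: the *likely* next token, not the *true* one.
corpus = (
    "the king abdicated the throne . "
    "the king kept the throne . "
    "the king kept the crown ."
).split()

followers = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    followers[word][nxt] += 1

def predict_next(word):
    # The most common follower wins, whether or not it is factual.
    return followers[word].most_common(1)[0][0]

print(predict_next("king"))  # "kept" (seen twice) beats "abdicated" (once)
```

Feed this corpus one more “abdicated” sentence and the prediction flips – frequency in the training data, not fact, decides the answer.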
So we’ve got models relying on sometimes-bad information, agency problems that vary with the task, and generated code that multiplies security holes – openings for bad actors all along the process. What could go wrong?
Excess Enthusiasm
Lancelot, Galahad and I wait until nightfall and then leap out of the rabbit taking the French by surprise – Monty Python and the Holy Grail
Most people seem to have accepted that AI is the hope of the future, so much so that most have lost perspective. It ranges from the people who fear Skynet/HAL 9000, where AI destroys the world, to business executives who gleefully see the prospect of firing all employees and going fully automated.
All of them seem to be afraid of missing out, also known as FOMO, and yet the technology itself is full of mistakes and uncertainties. LLMs can’t be trusted to do more than some relatively simple jobs. There are far too many examples of bad and sometimes dangerous outcomes. In fact, a whole new category of programmers has popped up to fix the terrible mistakes made by AI code generation. They’re called AI Janitors, and they deal with AI-generated “workslop.” According to the Harvard Business Review, workslop is costing millions of dollars in lost productivity each year.
Don’t take my word for it; these are the findings of MIT Media Lab, Oxford University, McKinsey, Stanford Media Lab, and many lesser media labs. Yet the fear of missing out has everyone charging ahead. It feels exactly like the Monty Python quote above, so much so that I’m starting to expect the same result – or worse.
The technology is amazing and the models are beyond brilliant. The only problem is that they rely on the internet for good information, and we all know that the quality of information on the internet is akin to a dumpster full of fish left in the sun for six months. So much so that if you get within a block of the dumpster, your first inclination is to “run away, run away!”
I can’t prove it, but I strongly suspect that Nvidia’s recent $100 billion investment in OpenAI, which entails leasing Nvidia chips to OpenAI to “teach” a new version of ChatGPT, is meant to produce a model so thorough that it overcomes the limitations of current LLMs. Is this an opportunity or a desperate move?
Once again, we find ourselves in the same place as the year 2000 where visionaries and charlatans ruled the airwaves, inciting enthusiasm and fear in equal measures. Regretfully, I’m beginning to see AI as the bookend of the great internet cycle where AI ultimately makes the internet less usable.
Conclusion
We must cultivate our own garden. When man was put in the garden of Eden he was put there so that he should work, which proves that man was not born to rest – Voltaire
AI is tailor-made for destroying the advances of the past 30 years. I view it as a Trojan Horse that is full of hackers, bad actors, and control-oriented bureaucrats. Once the horse spills its payload, it will take great effort to restore our present level of utility – if possible.
We’re starting to see lost productivity already as financial institutions are besieged with fraudulent requests to transfer money, resulting in losses and red tape. Personally, I believe we’ve invested too much, too quickly because the technology isn’t ready for the expectations of the visionaries – and I’m not sure it ever will be.
To date, the world has spent an estimated $1 trillion, perhaps more, on making AI a reality. As one would expect, it’s terrific at creating videos and mimicking voices, areas where I expect it to continue to shine. Yet deepfakes are going to cast doubt on video evidence in the future, further eroding trust in our system.
Today’s major players have locked in exorbitant capital costs that I don’t believe will generate positive returns. I view it as the “swan song” of Apple, Amazon, Tesla, Google, Meta, Microsoft, and especially Nvidia. Without these companies driving the indexes higher, the stock market will suffer an ugly correction.
The only question is the timing. The last few years of this rally have been built on the wealth of those who were too early in betting against AI. Nvidia would never have approached a $4 trillion market cap without treading on the bodies of hedge fund managers who were early in attempting to short these overvalued monsters.
If you’re interested in learning more, visit us at https://geovestadvisors.com/ and contact Paul Hurley.
Philip M. Byrne, CFA
