Why States Are Right To Reject AI Legal Personhood
Authored by Siri Terjesen and Michael Ryall via The Epoch Times,
A quiet but consequential legal movement is gathering momentum. Idaho and Utah have enacted statutes declaring that artificial intelligence systems are not legal persons. Ohio’s House Bill 469 would declare AI systems “nonsentient entities” and bar them from acquiring any form of legal personhood. Similar bills are advancing in Pennsylvania, Oklahoma, Missouri, South Carolina, and Washington. The legislatures driving this movement are not technophobes. They are drawing a necessary line that philosophy, law, and common sense all demand.
The pressure in the opposite direction is real. In January, at the World Economic Forum in Davos, historian Yuval Noah Harari described AI as “mastering language.” Since language is the medium through which law, religion, finance, and culture are constituted, AI may soon be capable of acting within every institution humans have built. Harari asked whether countries would recognize AI as legal persons—whether AI could open bank accounts, file lawsuits, and own property without human supervision. The prospect is not science fiction. It is a policy choice, and the wrong choice would be deeply consequential.
Phantasms versus Nous
Aristotle argued in De Anima that all sentient creatures share a basic cognitive capacity to perceive the world, retain impressions of it, and recombine impressions into new configurations—what he called phantasia, imagination. A dog, a crow, and a chess grandmaster all possess this capacity.
Aristotle distinguished human beings as categorically different: possessing nous, the capacity to grasp universal, abstract concepts—ideas like justice, causation, and the good—that cannot be derived from any sensory experience alone. A dog can recognize its owner, but it cannot grasp the concept of ownership. A parrot can reproduce a sentence about fairness, but it has no understanding of fairness.
What is the distinction? Can’t we simply feed an AI system Webster’s definition of “fairness” and let it work from there? No. Feeding a machine the dictionary definition only gives it more words to pattern-match against; the concept is not in the words. Any child who grasps fairness can apply it correctly to a situation no definition anticipates. An AI system can only produce text that statistically resembles how humans have talked about fairness before.
This is not a gap that more computing power or better training data will close. Computer scientist Judea Pearl demonstrated mathematically that no amount of pattern recognition over observational data can substitute for genuine causal inference. The appearance of understanding is not understanding itself. And it is precisely the capacity for genuine understanding—for deliberating about what is good and right—that grounds moral responsibility, which is the only coherent basis for legal personhood.
The Problem With the Corporate Analogy
Proponents of AI personhood often invoke corporate personhood as precedent. Corporations are not natural persons, yet the law treats them as legal persons capable of owning property, entering contracts, and being sued. Why not extend this pragmatic fiction to AI? The analogy breaks down at accountability.
Corporate personhood is a legal convenience built on human moral agency. Behind every corporation is a structured network of natural persons—board members, executives, shareholders—who bear fiduciary duties, can be deposed and held liable under piercing-the-veil doctrine, and face reputational and criminal consequences for their decisions. The corporation is a vehicle for organizing human action, not a substitute.
Ohio’s HB 469 captures this logic by denying AI legal personhood, prohibiting AI systems from serving as corporate officers or directors, and assigning all liability for AI-caused harm to identifiable human owners, developers, and deployers.
Granting AI legal personhood would shatter this accountability architecture. Labeling a system “aligned” or “ethically trained” does not discharge human responsibility. An AI “person” could own intellectual property, hold financial assets, and bring lawsuits—all without a human principal who can be held responsible. Sophisticated actors could construct chains of AI-owned shell companies that dissolve liability through layers of nominal personhood.
The result would not be extending rights to a new class of beings; it would be creating accountability vacuums that benefit the powerful humans who deploy AI while insulating them from consequence.
The Moral Stakes for Real People
A deeper moral issue underlies all of this. Legal personhood is not merely an administrative category; it carries normative weight. It signals that an entity has standing to make claims, to be wronged, and to bear obligations. Extending that status to systems that cannot genuinely deliberate, cannot suffer, and cannot be held morally responsible would dilute the concept of personhood in ways that could ultimately harm the humans who most need its protections.
We have not yet secured the full benefits of legal personhood for all human beings in practice—for the displaced, stateless, and structurally invisible. Rushing to extend a contested status to machines while that work remains unfinished would be a profound misallocation of moral and legal energy.
None of this requires hostility to AI as a technology. AI systems can be powerful, useful, and—when properly governed—enormously beneficial. What AI systems cannot be is persons. The states passing anti-personhood legislation are preserving something more important than a competitive advantage—a clear chain of human accountability from every AI action to every AI consequence. When an AI system causes harm, there must always be a human who answers for it. That principle is not a constraint on technology; it is the foundation of a just society.
Aristotle taught that law is reason without passion—a framework for coordinating human beings capable of living well together. AI can help us pursue the good life, but it cannot deliberate about what that life requires. As states across the country move to codify this distinction, they are doing exactly what legislatures exist to do—drawing lines that protect persons: all of them, and only them.
Views expressed in this article are opinions of the author and do not necessarily reflect the views of The Epoch Times or ZeroHedge.

