While experts warn about the risk of human extinction, the Department of Defense plows full speed ahead...
The recent boardroom drama over the leadership of OpenAI—the San Francisco–based tech startup behind the immensely popular ChatGPT computer program—has been described as a corporate power struggle, an ego-driven personality clash, and a strategic dispute over the release of more capable ChatGPT variants. It was all that and more, but at heart represented an unusually bitter fight between those company officials who favor unrestricted research on advanced forms of artificial intelligence (AI) and those who, fearing the potentially catastrophic outcomes of such endeavors, sought to slow the pace of AI development.
At approximately the same time as this epochal battle was getting under way, a similar struggle was unfolding at the United Nations in New York and government offices in Washington, D.C., over the development of autonomous weapons systems—drone ships, planes, and tanks operated by AI rather than humans. In this contest, a broad coalition of diplomats and human rights activists have sought to impose a legally binding ban on such devices—called “killer robots” by opponents—while officials at the Departments of State and Defense have argued for their rapid development.
At issue in both sets of disputes are competing views over the trustworthiness of advanced forms of AI, especially the “large language models” used in “generative AI” systems like ChatGPT. (Programs like these are called “generative” because they can create human-quality text or images based on a statistical analysis of data culled from the Internet.) Those who favor the development and application of advanced AI—whether in the private sector or the military—claim that such systems can be developed safely; those who caution against such action say it cannot, at least not without substantial safeguards.
Without going into the specifics of the OpenAI drama—which ended, for the time being, on November 21 with the appointment of new board members and the return of AI whiz Sam Altman as chief executive after being fired four days earlier—it is evident that the crisis was triggered by concerns among members of the original board of directors that Altman and his staff were veering too far in the direction of rapid AI development, despite pledges to exercise greater caution.
As Altman and many of his colleagues see things, human technicians are on the verge of creating “general AI” or “superintelligence”—AI programs so powerful they can duplicate all aspects of human cognition and program themselves, making human programming unnecessary. Such systems, it is claimed, will be able to cure most human diseases and perform other beneficial miracles—but also, detractors warn, will eliminate most human jobs and may, eventually, choose to eliminate humans altogether.
“In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past,” Altman and his top lieutenants wrote in May. “We can have a dramatically more prosperous future; but we have to manage risk to get there.”
For Altman, as for many others in the AI field, that risk has an “existential” dimension, entailing the possible collapse of human civilization—and, at the extreme, human extinction. “I think if this technology goes wrong, it can go quite wrong,” he told a Senate hearing on May 16. Altman also signed an open letter released by the Center for AI Safety on May 30 warning of the possible “risk of extinction from AI.” Mitigating that risk, the letter avowed, “should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.”
Nevertheless, Altman and other top AI officials believe that superintelligence can, and should, be pursued, so long as adequate safeguards are installed along the way. “We believe that the benefits of the tools we have deployed so far vastly outweigh the risks, but ensuring their safety is vital to our work,” he told the Senate subcommittee on privacy, technology, and the law.
Washington Promotes the “Responsible” Use of AI in Warfare
A similar calculus regarding the exploitation of advanced AI governs the outlook of senior officials at the Departments of State and Defense, who argue that artificial intelligence can and should be used to operate future weapons systems—so long as it is done so in a “responsible” manner.
“We cannot predict how AI technologies will evolve or what they might be capable of in a year or five years,” Amb. Bonnie Jenkins, under secretary of state for arms control and international security, declared at a Nov. 13 UN presentation. Nevertheless, she noted, the United States was determined to “put in place the necessary policies and to build the technical capacities to enable responsible development and use [of AI by the military], no matter the technological advancements.”
Jenkins was at the UN that day to unveil a “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy,” a US-inspired call for voluntary restraints on the development and deployment of AI-enabled autonomous weapons. The declaration avows, among other things, that “States should ensure that the safety, security, and effectiveness of military AI capabilities are subject to appropriate and rigorous testing,” and that “States should implement appropriate safeguards to mitigate risks of failures in military AI capabilities, such as the ability to… deactivat[e] deployed systems, when such systems demonstrate unintended behavior.”
None of this, however, constitutes a legally binding obligation of states that sign the declaration; rather, it simply entails a promise to abide by a set of best practices, with no requirement to demonstrate compliance with those measures or risk of punishment if found to be in non-compliance.
Although several dozen countries—mostly close allies of the United States—have signed the declaration, many other nations, including Austria, Brazil, Chile, Mexico, New Zealand, and Spain, insist that voluntary compliance with a set of US-designed standards is insufficient to protect against the dangers posed by the deployment of AI-enabled weapons. Instead, they seek a legally binding instrument setting strict limits on the use of such systems or banning them altogether. For these actors, the risks of such weapons “going rogue” and conducting unauthorized attacks on civilians are simply too great to allow their use in combat.
“Humanity is about to cross a major threshold of profound importance when the decision over life and death is no longer taken by humans but made on the basis of pre-programmed algorithms. This raises fundamental ethical issues,” Amb. Alexander Kmentt, Austria’s chief negotiator for disarmament, arms control, and nonproliferation, told The Nation.
For years, Austria and a slew of Latin American countries have sought to impose a ban on such weapons under the aegis of the Convention on Certain Conventional Weapons (CCW), a 1980 UN treaty that aims to restrict or prohibit weapons deemed to cause unnecessary suffering to combatants or to affect civilians indiscriminately. These countries, along with the International Committee of the Red Cross and other non-governmental organizations, claim that fully autonomous weapons fall under this category, as they will prove incapable of distinguishing between combatants and civilians in the heat of battle, as required by international law. Although a majority of parties to the CCW appear to share this view and favor tough controls on autonomous weapons, decisions by signatory states are made by consensus, and a handful of countries, including Israel, Russia, and the United States, have used their veto power to block adoption of any such measure. This, in turn, has led advocates of regulation to turn to the UN General Assembly—where decisions are made by majority vote rather than consensus—as an arena for future progress on the issue.
On October 12, for the first time ever, the General Assembly’s First Committee—responsible for peace, international security, and disarmament—addressed the dangers posed by autonomous weapons, voting by a wide majority—164 to 5 (with 8 abstentions)—to instruct the secretary-general to conduct a comprehensive study of the matter. The study, to be completed in time for the next session of the General Assembly (in fall 2024), is to examine the “challenges and concerns” such weapons raise “from humanitarian, legal, security, technological, and ethical perspectives and on the role of humans in the use of force.”
Although the UN measure does not impose any binding limitations on the development or use of autonomous weapons systems, it lays the groundwork for the future adoption of such measures, by identifying a range of concerns over their deployment and by insisting that the secretary-general, when conducting the required report, investigate those dangers in detail, including by seeking the views and expertise of scientists and civil society organizations.
“The objective is obviously to move forward on regulating autonomous weapons systems,” Ambassador Kmentt indicated. “The resolution makes it clear that the overwhelming majority of states want to address this issue with urgency.”
What will occur at next year’s General Assembly meeting cannot be foretold, but if Kmentt is right, we can expect a much more spirited international debate over the advisability of allowing the deployment of AI-enabled weapons systems—whether or not participants have agreed to the voluntary measures being championed by the United States.
At the Pentagon, It’s Full Speed Ahead
For officials at the Department of Defense, however, the matter is largely settled: the United States will proceed with the rapid development and deployment of numerous types of AI-enabled autonomous weapons systems. This was made evident on August 28, with the announcement of the “Replicator” initiative by Deputy Secretary of Defense Kathleen Hicks.
Noting that the United States must prepare for a possible war with China’s military, the People’s Liberation Army (PLA), in the not-too-distant future, and that US forces cannot match the PLA’s weapons inventories on an item-by-item basis (tank-for-tank, ship-for-ship, etc.), Hicks argued that the US must be prepared to overcome China’s superiority in conventional measures of power—its military “mass”—by deploying “multiple thousands” of autonomous weapons.
“To stay ahead, we’re going to create a new state of the art—just as America has before—leveraging attritable [i.e., disposable], autonomous systems in all domains,” she told corporate executives at a National Defense Industrial Association meeting in Washington. “We’ll counter the PLA’s mass with mass of our own, but ours will be harder to plan for, harder to hit, harder to beat.”
In a follow-up speech, delivered on September 6, Hicks provided (slightly) more detail on what she called all-domain attritable autonomous (ADA2) weapons systems. “Imagine distributed pods of self-propelled ADA2 systems afloat…packed with sensors aplenty…. Imagine fleets of ground-based ADA2 systems delivering novel logistics support, scouting ahead to keep troops safe…. Imagine flocks of [aerial] ADA2 systems, flying at all sorts of altitudes, doing a range of missions, building on what we’ve seen in Ukraine.”
As per official guidance, Hicks assured her audience that all these systems “will be developed and fielded in line with our responsible and ethical approach to AI and autonomous systems.” But except for that one-line nod to safety, all the emphasis in her talks was on smashing bureaucratic bottlenecks in order to speed the development and deployment of autonomous weapons. “If [these bottlenecks] aren’t tackled,” she declared on August 28, “our gears will still grind too slowly, and our innovation engines still won’t run at the speed and scale we need. And that, we cannot abide.”
And so, the powers that be—in both Silicon Valley and Washington—have made the decision to proceed with the development and utilization of even more advanced versions of artificial intelligence despite warnings from scientists and diplomats that the safety of these programs cannot be assured and that their misuse could have catastrophic consequences. Unless greater effort is made to slow these endeavors, we may well discover what those consequences might entail.