Oppenheimer, AI/ML and Unintended Consequences

Why wasn’t footage of the bombings of Hiroshima and Nagasaki shown in the movie Oppenheimer? Although the movie explores several moral ambiguities, the ending focuses on Oppenheimer’s security clearance and the role of Lewis Strauss. Shouldn’t the movie’s main theme be the unintended consequences of Oppenheimer and his co-workers’ efforts to create the bomb and the realities of its human devastation?

The euphoric foot stomping, cheering and flag waving when the first test is successful should be juxtaposed with scenes of the horrors in Hiroshima and Nagasaki and the long-term consequences of radiation. That’s what the movie’s ending should be about.

The creation of the bomb was put forward as essential to ending World War II. The U.S. government also insisted that the United States must have an atomic bomb before the Germans and the Soviets. To win the war and be first technologically, $2 billion was spent on top-secret activities in Los Alamos, Berkeley, Chicago, and Oak Ridge.

Winning the war and being first in atomic energy were the national priorities, something Oppenheimer’s naiveté could not fully grasp. He thought that after the war atomic energy could be controlled through some form of international cooperation. (Henry Kissinger’s 1957 best seller Nuclear Weapons and Foreign Policy, which advocated the possible use of limited tactical nuclear weapons, is an excellent counterpoint.) Although treaties exist between countries, and the International Atomic Energy Agency (IAEA) tries to oversee nuclear development, all has not been as Oppenheimer hoped. More and more countries continue to develop nuclear weapons. Complex attempts to limit stockpiles and proliferation continue, as with Iran.

Given the dangers of nuclear weapon proliferation, shouldn’t we be thinking about unintended consequences in the current technological competition in Artificial Intelligence (AI) and Machine Learning (ML)? While the United States is not in a direct military confrontation with China, there is fierce competition over technology. According to an article published by Bloomberg Government, the United States Department of Defense was projected to spend $1.4 billion on AI and ML in fiscal year 2020, up 43% from 2019. According to National Defense, citing the Bloomberg report: “In the less than two years since the White House published its executive order ‘Maintaining American Leadership in Artificial Intelligence [2019],’ federal budgets and contract spending obligations on artificial intelligence and machine learning (AI/ML) technologies have accelerated sharply…Washington is projected to invest more than $6 billion in AI-related research-and-development projects in 2021, while contract obligations are on pace to grow nearly 50 percent, to $3 billion, relative to 2020, according to the forecast.”

Are there similarities between the Manhattan Project and today’s race between China and the United States in AI and ML? What is clear is that fevered nationalism and a war-like competition are already present. If the World Health Organization struggles to ensure transparent information and universal vaccines for Covid, what are the chances of international cooperation on security-related issues such as AI and ML through organizations like the IAEA?

The consequences of the use or threat of use of nuclear weapons have not changed since 1945. Countries continue to develop more sophisticated nuclear weapon systems; serious disarmament talks between major nuclear states are stalled. Non-nuclear states continue to try to join the nuclear club. Vladimir Putin threatens to use his ultimate weapon in the Russia/Ukraine conflict. In an important Advisory Opinion, the International Court of Justice in The Hague did not render a definitive ruling on the illegality of the use or the threat of the use of nuclear weapons. According to the Court, states may use or threaten to use nuclear weapons if their very existence is under attack.

Unintended consequences are part of all change. But having seen the consequences of Hiroshima and Nagasaki, warnings should be going out about AI and ML and their control. For the moment, a fierce technological battle is taking place. How much longer will it be before the dual use of AI and ML for military purposes becomes obvious, and out of control?

As the International Committee of the Red Cross has written in the introduction to a recent report: “AI and machine-learning systems could have profound implications for the role of humans in armed conflict, especially in relation to: increasing autonomy of weapon systems and other unmanned systems; new forms of cyber and information warfare; and, more broadly, the nature of decision-making.”

The scientists working on the Manhattan Project knew part of the endgame of their work. Are those working on AI and ML fully aware of the military potential of their inventions? The movie Oppenheimer should be seen as a powerful example of unintended consequences.

Daniel Warner is the author of An Ethic of Responsibility in International Relations (Lynne Rienner). He lives in Geneva.