“These machines will eventually need to have the power to take lethal action on their own, while remaining under human oversight in how they are deployed. Individual decisions versus not doing individual decisions is the difference between winning and losing — and you’re not going to lose. I don’t think people we would be up against would do that, and it would give them a huge advantage if we put that limitation on ourselves.” —Frank Kendall, U.S. Secretary of the Air Force
This fascinating quotation about the military potential of A.I. is deeply revealing of how an obsolete way of thinking works. The Secretary of the Air Force is not an evil person, only someone trapped inside his limited perspective. There are too many like him in Russia, in China—and in Israel and Gaza.
This statement allows a direct stare into the heart of evil—not the evil of malign intent, but of the blind futility of violence accelerated by technological “progress.” It betrays a perverse refusal to consider any possibility other than dehumanizing our adversaries so completely that we are willing to kill them with machines that are already frighteningly lethal even without the capacity to make their own decisions.
“I don’t think people we would be up against would do that.” Of course the Secretary means that our adversaries would be unable to refuse any possible military advantage available through A.I. Isn’t this projecting our own proven capacity for depravity (think Vietnam, Iraq, etc.) onto our adversaries? And isn’t it also an admission that we have no option but to continue the we-build-they-build cycle, already nuclear, on the A.I. level—a path that leads at best to some variation of war as depicted in the Terminator films?
Also implicit in the Secretary’s old thinking is that sacred cow of establishment thinking, deterrence. As long as we have more of the latest, fastest, most intelligent, and most destructive weapons, we will not need to use them, because that will be sufficient to make our enemy think twice before taking us on. But contemporary asymmetric warfare (think 9-11-2001, 10-7-2023), let alone the likelihood of either human or A.I. error, effectively undermines deterrence theory.
The truth of the obsolescence of war has been demonstrated for all to see by the events unfolding from the October 7th pogrom. Hamas, seeking to slow or stop any larger peace process, has only ensured that a further cycle of violence will eat its own young along with those of Israel.
Conventional war doesn’t resolve the underlying conflict that initiated it. Nuclear war even less so (think nuclear winter). Variations on nuclear or chemical or biological war with the added dimension of A.I. will become doubly, triply world-destructive—in other words, obsolete.
Because everyone’s security and survival is a shared problem, the need is to re-humanize our adversaries—to perceive the me-semblance of the “other” even if they seem hateful to us and toward us. We need our military people on all sides to gather and peer together down the time-stream at a future which holds only two possibilities: either adversaries spend infinite treasure and resources to arrive at stalemate on a new, even more hair-trigger level—or we destroy ourselves. When we agree that these will be the outcomes unless we change, we can work together to apply A.I. to common challenges, including the prevention of wars no one can win.
Because there is no doubt Artificial Intelligence can do remarkable things for us. It could point the way toward pragmatic climate solutions where everyone wins. It is already revolutionizing medical diagnoses and treatments. But ordinary, unenhanced intelligence provides an indispensable perspective still in short supply, such as that articulated by almost every astronaut who has had the privilege of seeing the Earth from space—Russell “Rusty” Schweickart, for example:
“And you look down there, and you can’t imagine how many borders and boundaries you cross, again and again and again. And you don’t even see them. . . . there you are—hundreds of people killing each other over some imaginary line that you’re not even aware of, that you can’t see. And from where you see it, the thing is a whole, and it’s so beautiful. And you wish you could take one in each hand and say, ‘Look!’ You know? One from each side. ‘Look at it from this perspective! Look at that! What’s important?’”