
Attack of the Killer Robots


Imagine you’re living in Kyiv, a city that has been relatively calm for most of the war in Ukraine.

One chilly morning, you’re awakened by a faint buzzing noise, like the sound of swarming wasps. It gets louder and louder—until suddenly, intense flashes light up the windows of your apartment.

You peek outside, just in time to see part of a residential building—your neighbors’ homes—collapse in flames. Acrid clouds of smoke billow from a hole in the side. Down below, you see the wreckage of several small delta-winged drones.

Welcome to the world of virtual warfare.

Last month’s Russian assault on Kyiv is a stark reminder of how automated weapon systems threaten to make wars more traumatic, and potentially more lethal, than previous conflicts.

The attacks also foreshadow a future in which inexpensive robotic weapons can be easily deployed by virtually anyone—not just military forces, but also criminal organizations and rogue actors seeking to unleash political terror.

Even hobbyist drones, which are subject to a tangled web of regulatory regimes in the United States, can be weaponized with modified grenades or other explosives.

Since Russian forces invaded Ukraine in February, they’ve reportedly used several different drones made by the same company that developed the Kalashnikov rifle. The newest weapons in Russia’s arsenal are Iranian-made Shahed-136 drones, which cost about the same as a Toyota Corolla.

The Shahed drones used in the Kyiv raids are powered by a simple air-cooled engine. In fact, many parts used in the Shahed-136 are off-the-shelf components, readily available online. Experts believe the drone’s navigation system relies on a civilian-grade GPS receiver, which lets operators program target coordinates by satellite.

These rudimentary swarming robots can wreak havoc on Ukrainian cities. Last month’s raids led to dangerous confusion on the streets of Kyiv, as police fired desperately at the invading drones. Living in constant fear of attack can have lasting mental health consequences for targeted communities.

Ukraine’s troops have also used attack robots. Just days ago, Ukrainian aerial and sea drones launched a major assault on Russia’s storied Black Sea fleet, seriously damaging its flagship vessel at the Crimean port of Sevastopol.

In the first weeks of the conflict, Ukrainian forces deployed Turkish-made remote-controlled drones armed with guided missiles against Russian troop formations. Over the past few months, they’ve also acquired hundreds of US-made “suicide drones,” loitering munitions with names like Switchblade and Phoenix Ghost.

The idea of virtual warfare was once seductive. It held out hope that someday we might conduct wars without soldiers, without physical battlegrounds, and maybe even without death.

But virtual wars may turn out to be deadlier than anything in the past, especially now that lethal autonomous weapon systems—what some call “killer robots” or “slaughterbots”—are on the horizon. These technologies use AI (artificial intelligence) and sophisticated algorithms to locate targets and to determine when and where to attack.

Although we haven’t yet reached the point of armed, data-driven robotic armies, we should be aware of the perils of militarizing AI.

Fully autonomous weapon systems take human decision-makers out of the equation, a frightening prospect given that remote-controlled drones have killed thousands of civilians since the CIA and the Pentagon first used them in 2001.

The US military’s own documents reveal how drone wars in the Middle East from 2014 to the present were marked by intelligence errors, flawed targeting, and needless civilian deaths—in stark contrast to official portrayals of precision air strikes.

Autonomous weapons are likely to be even more destructive.

The physical and emotional costs of such weapons far outweigh any benefits. Ahmed Ali Jaber, whose relatives were indiscriminately killed by a US drone in Yemen, knows how automated warfare can leave deep psychological scars. “My children deserve so much better,” he writes. “I want them to watch fireworks, not drone strikes. I don’t want my family to cry every time they hear a drone.”

And it’s not just suspected enemies who are harmed by remote-control warfare. Many US drone pilots suffer severe psychological strain and mental health problems, including substance abuse and PTSD.

The threat of war by algorithms demands swift action, before it’s too late. National governments and international bodies such as the United Nations must develop clear, unambiguous regulations on how automated and autonomous weapons may be used. Otherwise, rival powers like the United States, China, Iran, and Russia will convince themselves that they’re locked in a global AI arms race. The situation is comparable to the nuclear arms race in the earliest years of the Cold War, and the proliferation of these weapons poses a grave danger to us all.

According to Automated Decision Research, more than seventy countries have called for an international treaty setting limits on the use of autonomous weapon systems and weaponized AI. The movement to stop killer robots is growing, and there’s an opportunity to find common cause with millions of others who are concerned about the hidden costs of digital warfare.

We should act now—while there’s still time.