When Machines Kill

What does it mean for a machine to “decide” to kill someone?

I’m in Berlin, attending an interdisciplinary expert workshop on robotic weapons, where this question has come up. My job was to brief the participants about international legal standards relevant to assessing governments’ reliance on unmanned weapons systems, such as armed drones.

The main purpose of the workshop is to discuss the possible need for new international standards to address weaponized robots and drones. Such standards might regulate the development, proliferation, and use of unmanned weapons systems, or even ban robotic weapons that are deemed “autonomous.”

The workshop is timely. The international trend toward warfare using unmanned weapons systems has accelerated rapidly over the past decade.

Government reliance on armed drones, in particular, has expanded dramatically. The most high-profile use of unmanned aerial vehicles is by the United States in Pakistan, where the Obama administration has carried out hundreds of drone attacks against suspected terrorists over the past year and a half. But the US is far from the only country that uses drones: Israel has a highly developed drone program, and the UK has employed drones to kill so-called “high-value targets” in Afghanistan.

At present, more than 40 countries have access to drone technology, and a sizeable number of them, including Israel, the UK, Russia, Turkey, China, India, Iran, and France, either have equipped or are seeking to equip their drones with laser-guided missiles.

And drones seem to be just the beginning. Unmanned ground vehicles—robots with evocative names like the Crusher, the Raptor, and the Guardian—may also be equipped with weapons in the future. Already, South Korea has employed armed robotic sentries to protect its northern border.

“Risk-Free” Warfare?

So what ethical and legal concerns are raised by this increasing reliance on robotic weapons systems?

A number of participants at this workshop have alluded to the worrying possibility that “risk-free” warfare – military actions that are “risk-free” in the sense of not causing troop casualties – has a dark side. By eliminating a key disincentive to war, it may make war more attractive, and thus more likely to occur. Of course, any truly game-changing weapons technology alters the balance of military power in a potentially destabilizing way, by giving the country that controls the new technology a decisive advantage over its adversaries.

But there is a deeper, less consequentialist argument that seems compelling to many participants here. It is set out in the founding mission statement of the International Committee for Robot Arms Control, the group that convened this workshop: “machines should not be allowed to make the decision to kill people.”

The notion of robots making lethal choices is chilling; it smacks of the Terminator. But what does it mean for machines to “decide” to kill?

Remote Control vs. Autonomous Robotic Decisionmaking

The statement that someone was killed by a bullet does not attribute intent to the bullet, or imply that the bullet is morally or legally culpable. When someone in Pakistan is killed by an unmanned drone, the same is true: The decision to fire the deadly missile is made remotely, by someone sitting at a computer monitor in Langley, Virginia. Though that person is far from the site of the killing, remote sensors and video technology ensure that he is aware of the consequences of his actions. The locus of decisionmaking and responsibility remains human.

But participants at this workshop in Berlin have a different scenario in mind. Given rapid advances in hardware and computer technologies, as well as the military’s obvious eagerness to reduce the need for human input, there is a perceptible long-term trend toward combat robots and drones that are, in military parlance, “fully autonomous.” Not only would such unmanned vehicles move about autonomously, without direct human input; they might also be empowered to make their own targeting decisions.

A recent US Air Force strategy paper, describing the military’s long-range plans for unmanned aircraft systems, lauded the trend toward these autonomous systems. “Technologies to perform auto air refueling, automated maintenance, automatic target engagement, hypersonic flight, and swarming would drive changes across the [entire military spectrum],” it explained enthusiastically. “The end result would be a revolution in the roles of humans in air warfare.”

Professor Noel Sharkey, one of the organizers of this workshop, agrees that the change would be fundamental, but he sees its impact differently. He warns: “we are sleepwalking into a brave new world where robots decide who, where and when to kill.”

Algorithms or Autonomous Choices?

I don’t see the situation quite the way either of them does. Although the military may label these robotic weapons systems “autonomous,” they would not actually be autonomous in any meaningful sense: They would have no free will and would not make discretionary choices.

While complex robots and drones may receive little or no human input at the operational stage, their responses to visual and other stimuli are pre-determined by a series of programs and algorithms. Human control over them is exercised in advance rather than in real time, and may for that reason be more prone to errors and miscalculations, but this does not mean that the robots themselves are “deciding” anything. Nor does it in any way shift the locus of decisionmaking authority and responsibility away from humans and onto the robots.
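To make the point concrete, here is a minimal sketch, in Python, of what an “autonomous” targeting routine amounts to in practice. Everything in it is invented for illustration (the function name, the sensor fields, the confidence threshold); the structure is what matters: every branch is a condition that human programmers specified in advance.

```python
# Illustrative sketch only: a hypothetical rule-based targeting routine.
# All names, fields, and thresholds here are invented for this example.

APPROVED_SIGNATURES = {"armored_vehicle", "artillery"}  # chosen by humans
CONFIDENCE_THRESHOLD = 0.90                             # chosen by humans

def engage_target(sensor_reading: dict) -> bool:
    """Return True only if every pre-programmed engagement rule is satisfied."""
    # Rule 1: the classifier's label must match a pre-approved target type.
    if sensor_reading["signature"] not in APPROVED_SIGNATURES:
        return False
    # Rule 2: classification confidence must clear a human-set threshold.
    if sensor_reading["confidence"] < CONFIDENCE_THRESHOLD:
        return False
    # Rule 3: the target must lie inside a human-designated engagement zone.
    return sensor_reading["in_zone"]

# Example: the routine "fires" only because its authors' rules say so.
print(engage_target({"signature": "artillery", "confidence": 0.95, "in_zone": True}))
```

However sophisticated the sensors and classifiers feeding such a routine become, the routine itself only evaluates conditions its designers wrote down; the “decision” was made, well in advance, by the people who wrote and deployed the code.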

I still differ from the military, however, on the question of whether such quasi-autonomous weapons systems should be granted targeting powers. My hesitance doesn’t reflect a fear of machines’ “decisions”; it reflects skepticism about whether, even a few decades from now, machines can be counted on to have the sophisticated sensory and processing capabilities necessary to distinguish civilians from combatants, and to comply in other necessary ways with the laws of war.

JOANNE MARINER is a human-rights lawyer based in New York and Paris.
