
When Machines Kill

What does it mean for a machine to “decide” to kill someone?

I’m in Berlin, attending an interdisciplinary expert workshop on robotic weapons, where this question has come up. My job was to brief the participants about international legal standards relevant to assessing governments’ reliance on unmanned weapons systems, such as armed drones.

The main purpose of the workshop is to discuss the possible need for new international standards to address weaponized robots and drones. Such standards might regulate the development, proliferation, and use of unmanned weapons systems, or even ban robotic weapons that are deemed “autonomous.”

The workshop is timely. The international trend toward warfare using unmanned weapons systems has accelerated rapidly over the past decade.

Government reliance on armed drones, in particular, has expanded dramatically. The most high-profile use of unmanned aerial vehicles is by the United States in Pakistan, where the Obama administration has carried out hundreds of drone attacks against suspected terrorists over the past year and a half. But the US is far from the only country that uses drones: Israel has a highly developed drone program, and the UK has employed drones to kill so-called “high-value targets” in Afghanistan.

At present, more than 40 countries have access to drone technology, and a sizeable number of them, including Israel, the UK, Russia, Turkey, China, India, Iran, and France, either have equipped or are seeking to equip their drones with laser-guided missiles.

And drones seem to be just the beginning. Unmanned ground vehicles—robots with evocative names like the Crusher, the Raptor, and the Guardian—may also be equipped with weapons in the future. Already, South Korea has employed armed robotic sentries to protect its northern border.

“Risk-Free” Warfare?

So what ethical and legal concerns are raised by this increasing reliance on robotic weapons systems?

A number of participants at this workshop have alluded to the worrying possibility that “risk-free” warfare – military actions that are “risk-free” in the sense of not causing troop casualties – has a dark side. By eliminating a key disincentive to war, it may make war more attractive, and thus more likely to occur. Of course, any truly game-changing weapons technology alters the balance of military power in a potentially destabilizing way, by giving the country that controls the new technology a decisive advantage over its adversaries.

But there is a deeper, less consequentialist argument that seems compelling to many participants here. It is set out in the founding mission statement of the International Committee for Robot Arms Control, the group that convened this workshop: “machines should not be allowed to make the decision to kill people.”

The notion of robots making lethal choices is chilling; it smacks of the Terminator. But what does it mean for machines to “decide” to kill?

Remote Control vs. Autonomous Robotic Decisionmaking

The statement that someone was killed by a bullet does not attribute intent to the bullet, or imply that the bullet is morally or legally culpable. When someone in Pakistan is killed by an unmanned drone, the same is true: The decision to fire the deadly missile is made remotely, by someone sitting at a computer monitor in Langley, Virginia. Though that person is far from the site of the killing, remote sensors and video technology ensure that he is aware of the consequences of his actions. The locus of decisionmaking and responsibility remains human.

But participants at this workshop in Berlin have a different scenario in mind. Given rapid advances in hardware and computer technologies, as well as the military’s obvious eagerness to reduce the need for human input, there is a perceptible long-term trend toward combat robots and drones that are, in military parlance, “fully autonomous.” Not only would such unmanned vehicles move about in an autonomous way, without direct human input, they might be empowered to make autonomous targeting decisions.

A recent US Air Force strategy paper, describing the military’s long-range plans for unmanned aircraft systems, lauded the trend toward these autonomous systems. “Technologies to perform auto air refueling, automated maintenance, automatic target engagement, hypersonic flight, and swarming would drive changes across the [entire military spectrum],” it explained enthusiastically. “The end result would be a revolution in the roles of humans in air warfare.”

Professor Noel Sharkey, one of the organizers of the present workshop, agrees that the change would be fundamental, but sees the impact differently. He warns: “we are sleepwalking into a brave new world where robots decide who, where and when to kill.”

Algorithms or Autonomous Choices?

I don’t think I quite see the situation either way. Although the military may label these robotic weapons systems “autonomous,” they would not actually be autonomous in any meaningful sense: They would have no free will and would make no discretionary choices.

While complex robots and drones may not have much, or any, human input at the operational stage, their responses to visual and other stimuli are pre-determined through a series of programs and algorithms. Human control over them is exercised in advance rather than in real time, and may, for that reason, be more prone to errors and miscalculations, but this does not mean that the robots themselves are “deciding” anything. Nor does it in any way shift the locus of decisionmaking authority and responsibility away from the humans and onto the robots.

I still differ from the military, however, on the question of whether such quasi-autonomous weapons systems should be granted targeting powers. My hesitance doesn’t reflect a fear of machines’ “decisions”; it reflects skepticism about whether, even a few decades in the future, machines can be counted on to have the sophisticated sensory and processing capabilities necessary to distinguish civilians from combatants, and to comply in other necessary ways with the laws of war.

JOANNE MARINER is a human-rights lawyer based in New York and Paris.

