Robots That Kill: The United Nations and the Rules for Autonomous Warriors

Robots can be UAVs like the "Fire Scout"

As surreal as it sounds, robots that target and kill people are the subject of a recent report published by the United Nations Human Rights Council.

Presented in Geneva by Christof Heyns—Professor of Human Rights Law and UN Special Rapporteur on extrajudicial, summary or arbitrary executions—the report (A/HRC/23/47) “focuses on lethal autonomous robotics [LARs] and the protection of life.”

Heyns’ report recommends an international moratorium on “at least the testing, production, assembly, transfer, acquisition, deployment and use of LARs” until a panel can be convened “to articulate a policy for the international community on the issue.”

Translation: Until the UN can agree on rules for LARs and the taking of human life, nations should stop the development and production of killer robots.

The recommendation follows a growing debate over the ethics of deploying robots on the modern battlefield.

Triggering the discussion was a 2012 study published by the New York-based human rights organization Human Rights Watch (HRW). Their “case against killer robots”—written in conjunction with Harvard University’s International Human Rights Clinic (IHRC)—states their primary concern plainly.

The primary concern of HRW and IHRC is the impact fully autonomous weapons would have on the protection of civilians during times of war. This report analyzes whether the technology would comply with international humanitarian law and preserve other checks on the killing of civilians. It finds that fully autonomous weapons would not only be unable to meet legal standards but would also undermine essential non-legal safeguards for civilians. Our research and analysis strongly conclude that fully autonomous weapons should be banned and that governments should urgently pursue that end.

Proponents of LARs highlight the benefit of removing human soldiers from the dangers of combat, thereby reducing troop losses. Opponents believe robot warriors could increase civilian casualties.

The philosophical debate over integrating robots into human society is not new; science fiction authors have speculated about its ramifications for decades.

In his 1942 short story “Runaround,” Isaac Asimov established what he considered the fundamental “Three Laws of Robotics.”

The first law states, “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”

Under that rule, rescue robots would be welcome, while killer robots would be forbidden outright.

Isaac Asimov's Three Laws of Robotics

But the modern rule set isn’t quite that specific yet.

In an April 2013 interview with the Telegraph, Jody Williams—1997 Nobel Peace Prize laureate—shared her concerns on the subject of LARs.

“Right now in war—for the most part still—there’s a human being making the decision whether or not to fire a weapon of any type.” She continues, “If robotics were allowed to fully develop—and they were autonomous, killer robots as I like to call them—they would be able to be programmed, set free, and make the decisions about when, where, who, and how to attack.”

Steve Goose, Arms Division Director for Human Rights Watch, spoke on the attraction of LARs to the modern military commander.

“A number of governments, including the United States, are very excited about moving in this direction. Very excited about taking the soldier off the battlefield and putting machines on the battlefield, thereby lowering casualties.”

Mr. Goose concludes, “Killer robots don’t exist yet. There are precursors—systems that can make determinations but have a human who can override their decisions.” But, he warns, “with fully autonomous robots you take the human out of the loop.”

HRW believes the development of autonomous robotic technology is approaching dangerous territory, and it points to weapons systems already in use by today’s militaries.

The United States has successfully employed a robotic anti-air gun for decades. The Mk-15 Phalanx CIWS can, with the flip of a switch, automatically detect, target, and destroy inbound missiles and aircraft.

MK-15 Phalanx anti-air gun system

Samsung’s similar Intelligent Surveillance & Security Guard System—currently in use by South Korea and Israel—is capable of detecting, tracking, and firing upon human targets. Developed to replace human guards, it is advertised as overcoming human limitations. Like the U.S. system, the SG series identifies targets within its area of operation and sends a request to fire to its remote human operator.

Samsung's Security Guard System

It is the potential removal of that human “switch” that is under scrutiny: the operator’s approval is the only intervention preventing these systems from automatically engaging everything that comes too close, regardless of hostile or civilian status.
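To make that distinction concrete, the sketch below contrasts the two engagement modes in Python. It is purely illustrative: every name, type, and threshold here is invented for this example and comes from no real weapon system’s software. The only structural difference between the two functions is whether a human approval step sits between detection and firing.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Classification(Enum):
    HOSTILE = auto()
    UNKNOWN = auto()
    CIVILIAN = auto()

@dataclass
class Track:
    track_id: int
    classification: Classification  # assigned by the machine's own sensors
    range_m: float

ENGAGEMENT_RANGE_M = 500.0  # invented threshold, for illustration only

def operator_approves(track: Track) -> bool:
    """Stand-in for the remote human operator: the 'switch' under scrutiny."""
    answer = input(f"Engage track {track.track_id} "
                   f"({track.classification.name})? [y/N] ")
    return answer.strip().lower() == "y"

def engage(track: Track) -> None:
    print(f"ENGAGING track {track.track_id}")

def human_in_the_loop(track: Track) -> None:
    # Today's model: the machine detects and tracks,
    # but a human must authorize every shot.
    if track.range_m <= ENGAGEMENT_RANGE_M and operator_approves(track):
        engage(track)

def fully_autonomous(track: Track) -> None:
    # The LAR scenario: the approval step is gone, and the machine's own
    # classifier is the last check before firing. A child with a toy gun
    # mislabeled UNKNOWN would be engaged without any human intervention.
    if (track.range_m <= ENGAGEMENT_RANGE_M
            and track.classification != Classification.CIVILIAN):
        engage(track)
```

In the first mode, a refused or absent approval means the weapon never fires; in the second, the machine’s own classifier is the final check before a lethal action, which is exactly the “fundamental moral line” the campaigners describe.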

The mounting tension over the advancement of robotic technology has prompted HRW to join other human rights organizations in the Campaign to Stop Killer Robots.

The campaign’s website, stopkillerrobots.org, states that it is made up of “five international non-governmental organizations (NGOs) and four national NGOs that work internationally.”

Like HRW, the Campaign to Stop Killer Robots worries that a “fundamental moral line” is crossed when a machine is allowed to make a life-or-death decision.

Autonomous robots would lack human judgment and the ability to understand context. These qualities are necessary to make complex ethical choices on a dynamic battlefield, to distinguish adequately between soldiers and civilians, and to evaluate the proportionality of an attack.

Ultimately, Christof Heyns’ report to the UN asks the question, “Who takes responsibility for a robot’s actions?”

The Telegraph’s interview with the HRW included a discussion with Dr. Noel Sharkey, Professor of Artificial Intelligence and Robotics at the University of Sheffield in South Yorkshire, England, about killer robots and responsibility. A world leader in robot ethics, Dr. Sharkey had this to say:

If a robot goes wrong, who’s accountable? It certainly won’t be the robot. But it could be the commander who sent it off, could be the manufacturer—it could be the programmer that programmed the mission. The robot could take a bullet in its computer and go berserk, so there’s no way of determining who’s really accountable, and that’s very important for the laws of war.

Are robots that are capable of targeting and killing humans the future of combat?

Are we on the frontier of a dystopian world where Lethal Autonomous Robots programmed to kill enemy combatants could mistakenly fire on a child with a toy gun?

Those questions are the focus of UN report A/HRC/23/47.

How the international community will ultimately rule on LARs is unknown, but Christof Heyns, HRW, the Campaign to Stop Killer Robots, and many others in the international community are working to ensure that rules are at least in place before we mass-produce autonomous warriors and set them free to kill people.

By Matt Darjany

Sources:

Human Rights Watch

Campaign to Stop Killer Robots

UN Office of the High Commissioner for Human Rights

UN Report A/HRC/23/47

HRW “The Case against Killer Robots”

Isaac Asimov

The Telegraph

Samsung ISSGS
