Lethal autonomous systems: The ethics of programming robots for war

Tuesday, November 25, 2008

Now that it's possible to program unmanned combat vehicles to make decisions about where (and whom) to strike in war situations, new ethical questions have arisen: In which situations can we allow robots to make their own decisions? Can we program robots to follow the Geneva Conventions? There is a more basic question, too: Do we even want robot soldiers?
"The question of under what circumstances is it ethical to fire a lethal weapon — whether it's possible to build that capacity into a robot."
— Cornelia Dean on the ethics of programming robots for war

Guest's notes: Ronald Arkin on ethical behavior (and robots)

Historically, research in military autonomous systems has focused on ensuring that robots comply with mission requirements and conduct their duties safely from an operational perspective, whether as individual robots or as teams. It is now time to focus on other aspects as well, including ethical compliance with the Laws of War and the Rules of Engagement. The end goal of this research is not necessarily more efficient killing machines but possibly more humane ones, i.e., machines whose application on the battlefield could potentially result in less collateral damage and fewer noncombatant casualties when compared with human performance. This should occur without eroding mission performance.
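
To make the idea of machine-checkable restraint concrete, the sketch below shows one toy way such rules might be encoded as hard constraints that must all be satisfied before lethal force is permitted. It is a minimal illustration under simplified assumptions; the Engagement fields, the three example constraints, and the lethal_force_permitted check are hypothetical and do not describe the architecture developed in this research or any fielded system.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Engagement:
    """Hypothetical summary of a proposed use of lethal force."""
    target_is_combatant: bool       # discrimination: is the target a lawful combatant?
    expected_civilian_harm: float   # proportionality: anticipated harm to noncombatants
    expected_military_value: float  # proportionality: anticipated military advantage
    roe_authorizes_weapon: bool     # Rules of Engagement: is this weapon cleared here?

# Each constraint returns True only when the proposed engagement satisfies that rule.
Constraint = Callable[[Engagement], bool]

CONSTRAINTS: List[Constraint] = [
    lambda e: e.target_is_combatant,
    lambda e: e.expected_civilian_harm <= e.expected_military_value,
    lambda e: e.roe_authorizes_weapon,
]

def lethal_force_permitted(engagement: Engagement) -> bool:
    """Withhold fire unless every constraint is satisfied."""
    return all(check(engagement) for check in CONSTRAINTS)

# Example: a proposed strike that violates proportionality is refused.
proposal = Engagement(target_is_combatant=True,
                      expected_civilian_harm=3.0,
                      expected_military_value=1.0,
                      roe_authorizes_weapon=True)
assert lethal_force_permitted(proposal) is False
```

The point of the sketch is only that such rules can, in principle, be expressed as checks that veto an action rather than as optimization objectives; anything not provably permitted is withheld.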

The research I am conducting on embedding ethical behavior in robots capable of lethal force is premised on two assumptions: first, that warfare is inevitable, and second, that autonomous systems will eventually be used in its conduct. While I maintain the utmost respect for our warfighters and believe that the vast majority do the best they can under the circumstances, the current tempo of the battlefield makes it impossible in many instances for humans to make fully informed and rational decisions regarding the application of lethal force. The tendency toward ethical infractions among soldiers is well documented in a recent report by the Surgeon General.

It is my belief that the use of robotic technology can potentially reduce the number of atrocities that occur during war, and it is the responsibility of scientists such as myself to look for ways to protect innocent lives while designing advanced technological solutions. I am also committed to providing our warfighters with the best possible tools for their job. These goals need not be in conflict.

It should be noted that I do not foresee the advent of robot warriors sweeping across the countryside as depicted in science fiction; rather, these machines will be embedded with our troops for highly specialized, mission-specific tasks in support of human operations, such as counter-sniper or building-clearing missions. They should not, and likely could not, replace soldiers one for one. Nor do I see the results of this research being used in the near future; they are geared toward the so-called war after next. These systems are also intended for deployment in total-war scenarios, not where there are high concentrations of civilians, unlike many of our current military involvements.

Space limitations prevent a full exposition of any of these positions. My research approach to this problem is documented in a forthcoming book, available this spring, entitled "Governing Lethal Behavior in Autonomous Robots". An earlier technical report, upon which it is loosely based, is also available.

Finally, I should state clearly that I am not an advocate of war or of the use of robots as weapons of war. But if they are going to be used for this purpose, which I see as largely inevitable, we must find ways to ensure that they are suitably restrained according to international law. In any case, further discussion should continue at both the national and international levels to determine the appropriate use of this new class of weapons, so that we as a society understand and accept the consequences of this new technology, even if such a discussion leads to an outright ban on its use.

— Ronald Arkin


Ronald Arkin is a professor in the College of Computing at the Georgia Institute of Technology.

Guests:

Ronald Arkin and Cornelia Dean

Contributors:

Molly Webster
