Backscatter

The Future of Automated Warfare

By Donald Christiansen

The concept of automated warfare is not new, nor is its limited use for specific purposes. A principal objective has been to avoid exposing military personnel to danger in either offensive or defensive situations.

In 1969, General William Westmoreland, then the U.S. Army Chief of Staff, gave this description of the ideal automated battlefield. It would consist of a specific area in which neither civilian nor military personnel would be present. The invading force would exploit robotic vehicles and weaponry. The defending force would counter with similar automated weaponry. Human observers representing both sides would monitor the action from a safe distance, overriding and redirecting the actions of their own “troops” as each deemed necessary.

Debatable Issues

While in theory the concept of automated warfare may seem to have desirable aspects (if any aspect of warfare can be deemed desirable), its use in certain areas has proved questionable.

On the plus side, the U.S. Navy has long been alert to the danger its personnel face in locating and neutralizing enemy mines. The Navy now has three types of unmanned surface vehicles (USVs), and more are in development. Mine countermeasures are the primary mission of these USVs. The Navy is also testing a 38-foot USV with fully automated mission selection (intelligence, surveillance, reconnaissance, and mine countermeasures): the system can be preloaded with a mission or dynamically retasked, then execute the chosen mission autonomously.

A still larger USV, the self-deploying, medium-displacement Sea Hunter, will be equipped with three radars and a stereo camera to provide automatic target recognition.

The Marine Corps has given its smallest infantry units an unmanned aerial system (UAS) for use in their own reconnaissance, and has employed a larger multimission UAS to support its deployed expeditionary units.

The foregoing are examples of artificial intelligence and automated warfare (AI/AW) used to protect military personnel from exposure to dangerous situations in generally defensive roles, as opposed to those that are overtly offensive.

The Offensive Drone Arrives

The epitome of automated warfare, at least as it is conceived today, is the drone.

While some think of it as a toy, and others as a replacement for the trucking fleet of UPS, the drone in its larger configurations is seeing more service as a programmable offensive weapon. By 2011, according to an article in Nature, the United States had more than 7,000 unmanned aerial systems (drones). The unmanned, propeller-driven Predator was equipped with sensors, cameras, and a pair of Hellfire missiles; it was used by the U.S. Air Force and the CIA in combat over eight countries. Its range exceeds 400 nautical miles, and it can loiter over a target for 14 hours before returning to base.

The Air Force Reaper is powered by a turboprop engine, is three times faster than the Predator, and can carry 15 times the ordnance. It has been succeeded by the Avenger, a turbofan-powered UAV.

The U.S. Marine Corps is also planning an unmanned aircraft the size of the Air Force Reaper. It will have to operate from a Navy ship, so it will require vertical-takeoff-and-landing (VTOL) capability.

Autonomy for Drones?

A remote “pilot” makes the decision to fire or release a missile from a UAV. To what extent drones will assume more autonomy in their missions is unclear. The dilemma was noted in an Air Force treatise, “Unmanned Aircraft Systems Flight Plan 2009–2047,” as follows: “Authorizing a machine to make lethal combat decisions is contingent upon political and military leaders resolving legal and ethical questions. These include the appropriateness of machines having this ability, under what circumstances it should be employed, where responsibility for mistakes lies, and what limitations should be placed upon the autonomy of such systems.”

In a study funded by the Army Research Office, “Governing Lethal Behavior in Autonomous Robots,” the author, roboticist Ronald C. Arkin, concluded that lethal autonomy is inevitable, contending that ethical military drones and robots, capable of using deadly force while programmed to adhere to international humanitarian law and the rules of engagement, are possible.

Others disagree, including Johann Borenstein, chief of the University of Michigan’s Mobile Robotics Lab. Calling autonomy the “Achilles’ heel of warfare,” he noted that robots lack the human skill of common sense. (Might this imply that humans, who have been killing one another in wars since the beginning of recorded history, do have common sense and invariably use it?)

The London-based Bureau of Investigative Journalism recently reported that among an estimated 9,240 people killed by drone strikes, as many as 1,400 were civilians, including more than 300 children.

The Reality

Returning briefly to the bloodless battlefield of General Westmoreland, Charles Frank Barnaby, a nuclear physicist who worked at Britain’s Atomic Weapons Research Establishment, raised these questions: How would victory be defined? Why not simply decide the issue by having the generals play computer games?

In the real world, there is little expectation that completely autonomous warfare (on both sides) is a realistic possibility. The losers in such a contest would likely revert to uncivilized behaviors that defy international humanitarian law, including selective and even mass killing of civilian populations.

I’ll conclude by observing that in discussions of automated warfare, the general assumption appears to be that the escalation of warfare is inevitable. Thus the following challenge by Andrew Lichterman, at an event organized in 2015 by the International Network of Engineers and Scientists for Global Responsibility at the U.N. headquarters in New York City, was unexpected: “The purpose of our work is not to regulate how people are killed, but to stop the killing.”

Open Question

Yet to be determined is whether enhancements in automated warfare will discourage random attacks on civilians by the losing side, or promote their increase along with an inevitable use of nuclear weapons.

Your comments are welcome at donchristiansen@ieee.org.

Resources

  • Barnaby, C.F., “Automated Warfare Is on the Way: What Are the Consequences?” retrieved May 23, 2017.
  • Barnaby, C.F., The Automated Battlefield: New Technology in Modern Warfare, Oxford University Press, 1987.
  • Singer, P.W., “A World of Killer Apps,” Nature, Sept. 2011.
  • Finn, P., “A Future for Drones: Automated Killing,” The Washington Post, Sept. 19, 2011.
  • Lichterman, A., “Automated Warfare, Weapons Modernization, and Nuclear War Risk,” presented at the 2015 Nuclear Nonproliferation Treaty Review Conference.
  • Davis, L.E.; McNerney, M.J.; Chow, J.; Hamilton, T.; Harting, S.; and Ryman, D., Armed and Dangerous? UAVs and U.S. Security, RAND Corp., 2014.
  • Russell, S., “Take a Stand on AI Weapons,” Nature website, retrieved May 3, 2017.
  • “Special Report: ISR & Unmanned Systems,” Seapower, Vol. 60, No. 4, May 2017.
  • “Situational Awareness: Coastal Riverine Forces Use Unmanned Systems to See Beyond the Bend,” Seapower, Vol. 60, No. 5, June 2017.

Donald Christiansen is the former editor and publisher of IEEE Spectrum and an independent publishing consultant. He is a Fellow of the IEEE. He can be reached at donchristiansen@ieee.org.
