As armed robots become reality, a tug-of-war is building over humans’ place in the decision-making loop during battle.
On July 7, a sniper opened fire on a crowded street protest in downtown Dallas, Tex., killing five police officers and injuring others. A long standoff followed inside a parking garage, where the shooter was said to have falsely told police he had planted bombs. The Dallas Police Department then deployed a wirelessly controlled Northrop Grumman Remotec Andros Mark V-A1, an all-terrain vehicle with a surveillance camera and an interchangeable “gripper” designed for bomb disposal. Instead of removing explosives, the Mark V-A1 held a block of C4 in its gripper and delivered it to the suspect.
The incident is believed to be the first time a robot was used to kill a civilian in the US, and it suddenly thrust the issue of weaponized robots to the public’s attention. This deadly application of a robot was a crude improvisation under extraordinary circumstances. In militaries around the world, however, robots are quickly becoming sophisticated enough to rival the role of the human soldier.
AI and Autonomous Weapons
In a 2015 CNN.com essay, Peter Warren Singer, a strategist and senior fellow at New America and author of Wired for War: The Robotics Revolution and Conflict in the 21st Century, wrote that current versions of Predator-class unmanned aerial vehicles (UAVs) are “more automated, able to do things like take off and land on their own, fly to various mission waypoints on their own, and carry sensors that make sense of what they are seeing.”
As military robots become increasingly autonomous, their missions are evolving beyond support roles such as disarming bombs and search-and-rescue to engaging targets directly. Artificial intelligence (AI) will play an important role within autonomous weapons systems (AWS), using software running on GPUs, FPGAs, and SoCs to interpret the data collected by a system’s sensors and make the decisions needed to complete its mission. AI is a key component of the US Department of Defense’s “Third Offset” strategy, which aims to counteract military force reductions and adversaries’ technology gains by leveraging the United States’ innovative prowess to bolster its superiority. Thanks to AI, human warriors are making room for autonomous, weaponized machines on the ground, in the air, and at sea. Is mankind entering a new era of science-fiction-style robots of war that, through unintended consequences, operate beyond human control?
“The Human is Always First”
Society must soon address the ethical implications of AWS. On one side of the issue, some non-governmental organizations are seeking to ban them. Last July, the Future of Life Institute published an open letter signed by more than 1,000 AI and robotics researchers warning that “starting a military AI arms race is a bad idea.” But in a December 2015 speech to the CNAS (Center for a New American Security) Defense Forum, Deputy Secretary of Defense Robert O. Work asserted that “the human is always first” as part of an overall “battle network” in which the AI complements personnel and leaves the decision to use lethal force up to the human.
Michael Horowitz, an associate professor of political science at the University of Pennsylvania and author of Why Leaders Fight, says, “What you’re seeing is the increasing convergence between person and machine as leading militaries harness machine learning and robotics to improve their effectiveness.” He cites examples such as South Korea’s SGR-A1 sentry robots deployed along the border with North Korea, which are reportedly capable of firing their machine guns automatically upon detecting an intruder but instead report the sighting to a human operator, who decides whether to fire. Israel likewise protects its borders with the Guardium robotic ground vehicle, which can be equipped with weapons and is controlled from a central command center.
Horowitz points out that militaries want to have firm control of how their assets operate, and that for the foreseeable future the decision to keep humans “in the loop” is a matter of trust. “Militaries want to know that their systems work, and right now people are more reliable than machines.”
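The sentry-robot workflow described above reduces to a simple pattern: the machine may detect and track on its own, but the lethal decision is referred to a person. A minimal sketch of that “human in the loop” gate, with all names (`Detection`, `request_operator_decision`) invented for illustration:

```python
# Illustrative sketch only: a human-in-the-loop engagement gate in the
# spirit of the sentry-robot workflow described above. Names, thresholds,
# and return strings are hypothetical, not any real system's interface.
from dataclasses import dataclass

@dataclass
class Detection:
    target_id: str
    confidence: float  # sensor/classifier confidence that this is an intruder


def request_operator_decision(detection: Detection) -> bool:
    """Stand-in for the human operator's judgment; always the final authority.

    A real system would block here on an operator console; this sketch
    defaults to refusing engagement.
    """
    return False


def handle_detection(detection: Detection) -> str:
    # The system may detect and track autonomously below a threshold...
    if detection.confidence < 0.9:
        return "continue tracking"
    # ...but the decision to use force is referred to a person.
    if request_operator_decision(detection):
        return "engage (operator authorized)"
    return "hold fire (no authorization)"


print(handle_detection(Detection("T-01", 0.95)))  # hold fire (no authorization)
```

The design point is that autonomy and authority are separated: no code path reaches “engage” without passing through the operator call.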
To cultivate trust between warfighters and their budding “machine partners,” DARPA (Defense Advanced Research Projects Agency) launched a program called Explainable AI (XAI). The agency hopes to develop machine-learning systems that can explain their actions to human end users, who can then better understand and manage their performance.
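The core idea behind explainability can be shown with a toy model that reports not just a score but which inputs drove it. This is a hypothetical sketch; the feature names and weights are invented, and real XAI systems are far more involved than a linear scorer:

```python
# Hypothetical illustration of the idea behind explainable AI: a model
# that returns its decision together with per-feature contributions,
# so a human can see *why* it scored an input the way it did.
def score_with_explanation(features, weights):
    # Contribution of each feature to the final score (weight * value).
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    total = sum(contributions.values())
    # Rank features by how strongly they pushed the decision.
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return total, ranked


# Invented example inputs for a notional "intruder" classifier.
features = {"heat_signature": 0.8, "movement_speed": 0.3, "size": 0.5}
weights = {"heat_signature": 2.0, "movement_speed": 1.0, "size": 0.5}

total, ranked = score_with_explanation(features, weights)
print(total)         # 2.15
print(ranked[0][0])  # heat_signature (the dominant factor)
```

An operator shown “heat_signature contributed 1.6 of a 2.15 score” can audit the decision in a way a bare score never allows, which is the trust-building goal XAI is after.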
An Uncertain Trust
While Work’s assurance and DARPA’s program may signal a Defense Department commitment to keeping humans in the loop, the department is in the business of winning wars, and all bets are off should the United States fall behind other nations, or even terrorist groups, in an AI arms race. The private sector threatens to remove humans from the loop, too. In an April 2016 essay on vice.com, Singer and August Cole wrote that the private sector outspends the Pentagon in research by more than $600 billion, investing in self-driving cars, hedge-fund trading algorithms, and other applications that will do the thinking for humans.
Horowitz notes that there are many off-the-shelf applications that militaries can use, “especially if commercial applications of AI involve tasks that are similar to some military tasks.” Frank Wilson, a former executive at iRobot Defense and Security (now Endeavor Robotics, maker of the human-operated, multi-mission PackBot) told Army Technology in 2014 that “commercial chip technology has so improved that the reliability of even industrial-grade components can be suitable for military environments.”
Who Makes the Decisions?
As the Defense Department proceeds cautiously, the world’s march toward automation continues. Widespread autonomous weapons of war are a certainty, so everyone from political and military leaders to Silicon Valley and component vendors shares a weighty responsibility to ensure that life-or-death decisions continue to include humans.
Author Chris Warner is a freelance technology writer who comes from an old Western Electric family. He has more than 15 years’ experience covering the electronic components industry.