I recently read “Army of None: Autonomous Weapons and the Future of War” by Paul Scharre, released in April 2018. The book provides a comprehensive view of autonomous weapons and the role of AI in the military.
Paul Scharre is a former U.S. Army Ranger who served in Iraq and Afghanistan. He later served as a civilian adviser in the Pentagon, where he wrote policy documents on autonomous weapons. For the last two years, Paul has been a Senior Fellow and Director of the Technology and National Security Program at the Center for a New American Security (CNAS), a bipartisan think tank.
Even while there is debate about a commonly accepted definition of autonomous weapon systems, it’s clear that such weapons exist with a range of capabilities and are being used. Examples include the Patriot air defense system, the Aegis combat system, and loitering drones which can stay over predefined areas, autonomously search for targets, and destroy them.
Paul interviewed a wide variety of military and civilian leaders about their views on the application and limits of autonomous weapons, and those views vary widely. A new arms race seems likely.
An interesting point is that the advancement of technology for autonomous weapons is, unlike for previous weapon systems, largely driven by the civilian tech sector. An additional layer of complexity comes from the inherently multi-purpose character of AI technology and the practically unrestricted proliferation of digital technology.
The book also left me with questions. These are the two most critical ones to me.
Is the concept of a human in/on the loop an illusion?
In 1983, the world was at the brink of nuclear war. Lieutenant Colonel Stanislav Petrov of the Soviet Air Defence Forces was on duty when a new missile warning system issued alerts of incoming intercontinental ballistic missiles. Petrov assessed the situation and concluded that it was a false alarm. A human in the loop averted nuclear war.
Petrov, however, had additional information available which the system did not have: he knew that the attack pattern was unlikely and understood that the new warning system was unproven. Assessing this information enabled him to be a meaningful decider.
During the 2003 invasion of Iraq, a Patriot battery was involved in a number of friendly fire incidents. As a subsequent investigation showed, the Patriot battery commander was not at fault. She did not have additional information beyond what the Patriot’s radar was providing. This made her an actuator rather than a decider.
I think the difference between decider and actuator is whether there is information which can be better processed by a human than by an AI system. As AI technology advances, this gap is going to shrink. The human in the loop will more and more turn into an actuator. The human on the loop will have little additional information to determine if and when to pull the plug.
An additional factor is speed: if both sides in a conflict deploy autonomous weapons, e.g. swarms of drones, things will happen so fast that they are most likely beyond human sensing and reaction capabilities.
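To make that speed gap concrete, here is a toy back-of-the-envelope sketch in Python. The reaction times and engagement window are purely illustrative assumptions of mine, not figures from the book; the point is only how many decide-and-act cycles each side gets within the same short encounter.

```python
# Toy comparison of decision cycles at human vs. machine speed.
# All numbers are illustrative assumptions, not figures from the book.

HUMAN_REACTION_S = 1.5      # assumed: perceive, assess, decide, act
MACHINE_REACTION_S = 0.05   # assumed: sensor-to-decision loop
ENCOUNTER_S = 10.0          # assumed duration of a swarm engagement

def decisions(cycle_time_s: float, window_s: float = ENCOUNTER_S) -> int:
    """Count the complete decide-and-act cycles that fit in the window."""
    return int(window_s // cycle_time_s)

human = decisions(HUMAN_REACTION_S)      # 6 cycles
machine = decisions(MACHINE_REACTION_S)  # 200 cycles
print(f"Human-in-the-loop decisions: {human}")
print(f"Autonomous system decisions: {machine}")
print(f"Speed gap: ~{machine // human}x")  # ~33x
```

Even with these rough numbers, the autonomous system acts dozens of times for every human decision, which is the sense in which a human on the loop may simply be too slow to intervene.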
Will the reduced risk of loss of human life lower the threshold for military conflict?
Governments are, rightfully so, reluctant to commit “boots on the ground” as a means of resolving political conflicts and confronting bad actors. The threshold is a lot lower when it comes to delivering missile or drone strikes or other military means with a low risk of harm to their armed services personnel.
However, these means have their limits in today’s asymmetric conflicts.
Projecting from publicly available information about drones, self-driving vehicles, the Boston Dynamics dog and similar ground robots, swarming, and sensing and assessing (e.g. image processing), (semi-)autonomous ground forces seem a likely reality in the not so distant future. These would significantly lower the risk of loss of human life.
While reducing the risk to human life is a desirable goal, it might also lower the threshold for governments to engage in military conflict.