Sometime in the next three decades, the U.S. military will be able to field robots that can make life-and-death decisions, operating without human supervision thanks to software and superfast computers.
But the technology to reach that point is advancing far faster than consideration of the ethics of robotic warfare.
Or, as Peter Singer, a Brookings Institution scholar who has written widely on military robots, has put it: technology grows at an exponential pace, human institutions at a linear, if not glacial, one. That echoes an observation by the late science fiction writer Isaac Asimov that “science gathers knowledge faster than society gathers wisdom.”
The subject merits debate in light of the January 26 announcement that the Pentagon plans to trim America’s armed forces by 100,000 while expanding the global fleet of armed drones, America’s most effective tool for the targeted killing of anti-American militants. So far, the drones are remotely operated by pilots on bases in the United States.
But for a glimpse of how U.S. military thinkers see the future of the drone program, an 82-page Air Force report is recommended reading. Titled “Unmanned Aircraft Systems Flight Plan 2009-2047,” it says that “advances in AI (Artificial Intelligence) will enable systems to make combat decisions and act within legal and policy constraints without necessarily requiring human input.”