News article by Owen Daniels and Brian Williams.
Published by War On The Rocks. Special Series: AI and National Security.
Editor’s Note: This article was submitted in response to the call for ideas issued by the co-chairs of the National Security Commission on Artificial Intelligence, Eric Schmidt and Robert Work. It addresses the third question (parts b. and d.) which asks authors to consider the ethical dimensions of AI.
Examining the legal, moral, and ethical implications of military artificial intelligence (AI) poses a chicken-and-egg problem: Experts and analysts have a general sense of the risks involved, but the broad and constantly evolving nature of the technology provides insufficient technical details to mitigate them all in advance. Employing AI in the battlespace could create numerous ethical dilemmas that we must begin to guard against today, but in many cases the technology has not advanced sufficiently to present concrete, solvable problems.
Against this backdrop, 2019 was a bumper year for general military AI ethics. The Defense Innovation Board released its ethical principles for military AI; the National Security Commission on AI weighed in with its interim report; the European Commission developed guidelines for trustworthy AI; and the French Armed Forces produced a white paper grappling with a national ethical approach. General principles like these usefully frame the problem, but it is technically difficult to operationalize concepts such as "reliability" or "equitability," and assessing specific systems can present ambiguity, especially near the end of development. [. . . ]