The latest updates and analysis from Morrison Foerster
March 16, 2020 - Artificial Intelligence + Robotics

Department of Defense Adopts AI Ethical Principles to Guide Its Expanded Use of AI


Popular science fiction is replete with examples of artificial intelligence (AI) gone awry, from HAL, the killer AI of Arthur C. Clarke’s 2001: A Space Odyssey, to the supercomputer bent on nuclear annihilation in WarGames, to the robot forces of 2004’s I, Robot.  Concern about the secure, effective, and unbiased use of AI by the Department of Defense (DoD), however, is very real, and of increasing import as DoD makes AI a top technology modernization priority.  Against this backdrop, the DoD announced in late February a series of ethical principles to guide its adoption and use of AI.  (See sidebar.)

AI, defined in the 2019 National Defense Authorization Act (NDAA) § 238(g) as “an artificial system . . . that solves tasks requiring human-like perception, cognition, planning, learning, communication, or physical action,” offers both untold benefits and potentially deadly hazards.  As advances in hardware allow unsurpassed information-collecting capacity, an accompanying need arises to process massive quantities of data at rates that exceed unaided human capabilities.  Both civilian and military agencies rely increasingly upon sophisticated AI algorithms to assess problems and generate answers at computer speeds without human fatigue.  (Our prior article highlighted some of the areas of AI research the DoD and other parts of the federal government are pursuing.)  The use of AI thus opens up exciting possibilities.

However, when national security depends on the outcome, and the lives of American soldiers or allies may hang in the balance, how do we know we can trust an AI agent to make the right decision?  Enter AI ethics.  The DoD’s newly adopted ethical principles will dictate the foundational design and development standards embedded into AI to be used in both combat and noncombat situations.

The DoD’s efforts to design safe and ethical AI technology are not new.  In 2012, DoD Directive 3000.09 established guidelines calling for the “exercise of appropriate levels of human judgment” and “clear human-machine interfaces” in the design and use of autonomous and semi-autonomous weapons to ensure those systems function as anticipated.  The Department of Defense Artificial Intelligence Strategy of 2018 included “Leading in military ethics and AI safety” among its major focus areas.  Finally, the five adopted AI principles stem from more than a year of work by the Defense Innovation Board, an independent advisory committee of leaders from industry, academia, and think tanks, as well as current and former military leaders.

DoD’s adoption of these ethical guidelines is a message to defense contractors that, to the extent they are not already doing so, they must build AI ethics into their technology.  The five ethical principles officially adopted by the DoD are also a public declaration of the values and norms that must be embedded in any AI solution sold to the DoD.

Among the most important of these norms is the need to address two of the most persistent issues with AI – “black box” syndrome and bias.  These issues square up against the DoD’s principles of equitable and traceable decision-making.  The black-box nature of AI arises from the complex interplay of thousands of variables.  While the binary choice at any individual decision node is relatively straightforward, the compound effect of layered choices grants AI its power but also makes understanding just how it arrived at a final decision challenging, if not impossible.  Although careful examination of the code may allow for a general understanding, or traceability, of the decision-making structure, developers typically treat their source code as proprietary intellectual property.  How the DoD plans to navigate these IP considerations remains unclear.
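To make this concrete, consider a deliberately tiny, hypothetical model.  Everything below – the layer sizes, the random weights, the inputs – is invented for illustration; real systems have millions of learned parameters rather than the 52 here.  Each node’s computation is simple in isolation, yet even at this scale there is no short, human-readable answer to why the final decision came out one way rather than the other:

```python
# A hypothetical, deliberately tiny "network": all weights are random
# stand-ins for the millions of learned parameters in a real system.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 4))   # 8 inputs -> 4 first-layer nodes
W2 = rng.normal(size=(4, 4))   # 4 nodes  -> 4 second-layer nodes
W3 = rng.normal(size=(4, 1))   # 4 nodes  -> 1 output

def decide(x):
    # Each node in isolation is simple: a weighted sum passed through a
    # squashing function.  The opacity comes from composing layers of them.
    h1 = np.tanh(x @ W1)
    h2 = np.tanh(h1 @ W2)
    return (h2 @ W3).item() > 0          # final binary decision

x = rng.normal(size=8)                   # one notional set of inputs
x_perturbed = x.copy()
x_perturbed[0] += 1.0                    # nudge a single input

print(decide(x), decide(x_perturbed))    # the two may disagree, and even in
# this 52-weight toy (8*4 + 4*4 + 4*1), "why" has no short human answer
```

Traceability, in other words, is defeated not by any single step but by the sheer number of interacting steps.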

The opaqueness of AI algorithms also plays into the dangers of bias.  Various studies have shown how imperfections in the data used to train AI may cause an algorithm to produce biased results, whether in predicting the creditworthiness of mortgage applicants, recidivism among inmates, or patient outcomes in epidemiological studies.  This issue goes to the heart of the DoD’s desire for equitable AI.  While it is easy to see the potential for trouble resulting from bias in an AI-driven hiring and promotion program, the DoD’s potential uses for AI raise the stakes considerably.  Smart algorithm designs may avoid the more obvious sources of bias, but the current inscrutability of highly complex, deep architectures leaves the door open for hidden biases to enter.
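How training data can smuggle in bias is easier to see with a toy model.  The sketch below is entirely hypothetical: a simple logistic model is trained on invented historical approval labels that held one notional population to a higher bar, and because one input feature happens to correlate with group membership, the model reproduces the old bias for two equally qualified applicants – even though “group” is never an explicit input:

```python
# Entirely hypothetical data: a model trained on skewed historical labels
# reproduces the skew, even though "group" is never an explicit input.
import numpy as np

rng = np.random.default_rng(1)
n = 4000

group = rng.integers(0, 2, n)               # two notional populations
skill = rng.normal(size=n)                  # the trait we actually care about
f_skill = skill + 0.3 * rng.normal(size=n)  # a legitimate, skill-based feature
f_proxy = group + 0.3 * rng.normal(size=n)  # e.g., neighborhood: tracks group

# Biased historical labels: population 1 was held to a higher bar (0.8 vs. 0).
label = (skill > 0.8 * group).astype(float)

# Fit a logistic model on the two features by plain gradient descent.
X = np.column_stack([f_skill, f_proxy, np.ones(n)])
w = np.zeros(3)
for _ in range(3000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.5 * (X.T @ (p - label)) / n

# Two applicants with identical skill-based features, differing only in
# the group-correlated proxy: the model scores the second one lower.
applicants = np.array([[1.0, 0.0, 1.0],
                       [1.0, 1.0, 1.0]])
print(1 / (1 + np.exp(-applicants @ w)))
```

The bias lives in the data, not in any line of the code, which is why inspecting the algorithm alone may never reveal it.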

The technical challenges presented by bias and black-box syndrome illustrate the need for contractors to view DoD’s five principles not as distinct and severable criteria, but rather as interlocking objectives.  For example, by observing the first principle, responsibility, and keeping humans in the loop, operators may be able to catch an AI system that is developing biased output.

Adoption of these ethical principles is likely to materially alter the DoD’s procurement of AI-driven technology.  The new principles have been carved into the bedrock of DoD’s existing commitments, as DoD has linked them to its existing ethical framework “based on the U.S. Constitution, Title 10 of the U.S. Code, Law of War, existing international treaties and long-standing norms and values.”  Further, DoD has a legal mandate to implement AI ethics: section 238 of the 2019 NDAA requires the DoD to develop “appropriate ethical, legal, and other policies for the Department governing the development and use of artificial intelligence enabled systems and technologies.”  Thus, the AI ethical principles are not only squarely grounded in long-standing military values, but also mandated by statute.  As a consequence, contractors seeking to provide AI solutions to the DoD should pay close attention to the new principles and incorporate them into future designs.

*Markus Speidel is a Law Clerk in our Washington, D.C. office and not admitted to the bar.