AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense (Internal Policy)

Policy Details

Originating Entity: Defense Innovation Board, US Department of Defense
Last Action: Document published
Date of Last Action: Nov 1, 2019
Date Introduced: Nov 1, 2019
Publication Date: Nov 19, 2019

SciPol Summary

The Defense Innovation Board (DIB) of the US Department of Defense (DOD) released AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense. The DOD Artificial Intelligence Strategy had previously committed the DOD to considering ethics in its use of artificial intelligence (AI), and DOD leadership subsequently asked the DIB to provide ethics recommendations. The DIB, an advisory committee composed of stakeholders in innovative technologies that advises the DOD on challenges created by technological advances, produced the following five broad ethics principles for the DOD’s use of AI:

  1. Responsible: Humans should use “appropriate levels of judgment” and should “remain responsible” for developing AI systems, deploying those systems, and monitoring the outcomes of their deployment.
  2. Equitable: DOD should “avoid unintended bias” in developing and using AI systems that would “inadvertently cause harm to persons.”
  3. Traceable: AI systems must be made and used in understandable and transparent ways.
  4. Reliable: AI systems should have an “explicit, well-defined domain of use.” AI systems must also be safe and effective at doing the specific, well-defined tasks that they were designed to do.
  5. Governable: AI systems should be “designed and engineered to fulfill their intended function.” Such systems should also be able to “detect and avoid unintended harm or disruption,” and humans or automated systems must be able to shut down AI systems that “demonstrate unintended escalatory or other behavior.”

The DIB provides twelve recommendations for incorporating these AI ethics principles into the DOD’s operations. Examples include:

  • Formalizing these principles through official DOD channels to ensure that future DOD policy and communications among DOD personnel align with the principles.
  • Establishing a DOD-wide AI Steering Committee to ensure that AI projects remain consistent with the AI ethics principles.
  • Cultivating and growing the field of AI engineering, security, and reproducibility, both within and outside of DOD, to better understand and develop AI systems consistent with ethics principles.
  • Strengthening AI test and evaluation techniques, either by creating new testing infrastructure that incorporates the ethics principles or by improving existing DOD testing procedures for AI systems.
  • Developing an AI risk management methodology to manage the varying levels of risks of AI applications based on ethics, safety, and legal considerations.

The DIB defines artificial intelligence as “a variety of information processing techniques and technologies used to perform a goal-oriented task and the means to reason in the pursuit of that task.” However, the DIB distinguishes the general term AI from the more specific term machine learning, in order to differentiate older, broad AI systems from more modern ones. The DIB states that AI is an “enabling capability, akin to electricity… or computers” and is neither inherently good nor bad.

The DIB further notes that AI is distinct from autonomy: autonomous systems may, but do not necessarily, make use of AI. To illustrate this distinction, the DIB cites autonomous weapons, that is, weapons that can be used without human operators, clarifying that autonomous weapons fall under different guidelines than AI systems.

In laying out the ethics principles, the DIB recognizes that AI technologies are relatively new and constantly evolving. The DIB also recognizes that AI can change how war is conducted and calls for ethical guidance to shape how the DOD uses AI. Given the novelty of AI technology and the risks associated with its use, the DIB states that it is crucial to establish norms for AI use in a military context. The DIB also states that a key motivation for the DOD to invest in the ethical use of AI is to gain a “competitive military advantage.” Both the DIB and the DOD note that many of the US’s technologically enabled military competitors are authoritarian, and that their development of AI is “inconsistent with the legal, ethical, and moral norms expected by democratic countries.”

The DIB developed these principles to align with the DOD’s existing ethics frameworks, which are themselves based on the US Constitution, US laws governing the conduct of war, and international treaties on the conduct of war. The DIB gives an example of how AI fits into the laws and ethics of war followed by the DOD: AI-enabled weapons would be legally required not to cause “unnecessary suffering” and to pose a low risk of harming civilians. The DIB states that AI developers and operators should apply the ethics principles proactively to prevent dangerous incidents involving AI technology.
