Guidance for Regulation of Artificial Intelligence Applications (Draft Memorandum)

Policy Details

Last Action
Draft memorandum issued
Date of Last Action
Jan 7, 2020
Date Introduced
Jan 7, 2020
Publication Date
Jan 16, 2020

SciPol Summary

On January 7, 2020, the White House Office of Science and Technology Policy (OSTP) released draft guidelines for federal agencies overseeing non-federal entities’ deployment of artificial intelligence (AI) applications. Comments on the draft can be submitted at Regulations.gov until March 13, 2020.

OSTP proposes ten guiding principles for regulatory and non-regulatory approaches to narrow AI applications—AI technology that addresses a specific task such as image recognition, language translation, self-driving vehicles, or machine learning.

  1. Public Trust in AI: agencies should “promote reliable, robust and trustworthy AI applications.” While the European Union has defined the term “trustworthy AI,” OSTP’s current draft offers no specific definition.
  2. Public Participation: the public should have opportunities to “provide information and participate in all stages of the rulemaking process.”
  3. Scientific Integrity and Information Quality: policies should be based on “verifiable evidence” that is clearly communicated to the public.
  4. Risk Assessment and Management: agencies should only address risks that “present the possibility of unacceptable harm or harm that has expected costs greater than expected benefits.”
  5. Benefits and Costs: agencies should consider “potential benefits and costs of employing AI” compared to business-as-usual.
  6. Flexibility: “rigid, design-based regulations that attempt to prescribe technical specifications of AI applications” are ineffective given the rapid changes in AI technology. The international competitiveness of US firms should also be considered in rulemaking.
  7. Fairness and Non-Discrimination: agencies should consider “issues of fairness and non-discrimination with respect to outcomes” of AI applications.
  8. Disclosure and Transparency: “transparency and disclosure can increase public trust… in AI.” Still, agencies should refrain from imposing additional disclosure requirements where existing regulation is sufficient.
  9. Safety and Security: “agencies should be mindful of any potential safety and security risk, as well as the risk of possible malicious deployment and use of AI applications.”
  10. Interagency Coordination: “agencies should coordinate… to ensure consistency and predictability of AI-related policies.”

OSTP also suggests the use of non-regulatory approaches such as sector-specific guidance or frameworks, pilot programs and experiments, and voluntary consensus standards when the costs of regulation outweigh its benefits. To accelerate innovation in AI, OSTP calls for increasing “public access to government data.”

Central to the draft is preventing regulation from stifling innovation and growth in the AI sector: “Agencies must avoid a precautionary approach that holds AI systems to such an impossibly high standard that society cannot enjoy their benefits.” In this sense, the draft reflects Executive Order 13859 of February 2019, which called for the publication of these guidelines, and the American AI Initiative, both of which place AI at the forefront of economic and national security objectives.

The ten principles will not apply to federal agencies’ own use of AI but rather to their rulemaking efforts. For example, as of January 2020 the Food and Drug Administration (FDA) is looking to regulate the use of AI by medical device manufacturers. Under the OSTP framework, the FDA would need to accompany any such regulation with an outline of how the new rule meets the OSTP guidelines.

In practice, OSTP does not have the power to enforce compliance. Yet, according to R. David Edelman, director of the Project on Technology, the Economy, and National Security at the Massachusetts Institute of Technology, this framework is “a very reasonable attempt to build some quality control into our AI policy.” Dissonant regulatory efforts have emerged in response to the rapid uptake of AI technology; through this policy, OSTP aims to nudge them toward convergence on common guidelines.

The promise of higher profitability, by an average of 38% according to Accenture, has pushed companies to incorporate AI into various aspects of their businesses. AI software revenues are projected to grow from $14.7 billion in 2019 to $118.6 billion in 2025. However, the integration of AI into consumer technology, finance, healthcare, and education raises ethical concerns. Data privacy and control, cybersecurity, potential infringement of civil liberties, and job loss due to automation are among the problems that OSTP’s “light-touch regulatory approach,” as US Chief Technology Officer Michael Kratsios characterized it, does not directly address.