In March 2018, the Government Accountability Office (GAO) published “Artificial Intelligence – Emerging Opportunities, Challenges, and Implications” following its July 2017 forum on Artificial Intelligence (AI). This forum convened AI experts and stakeholders from industry, academia, government, and nonprofits to consider the impacts and policy implications of AI in the cybersecurity, transportation, criminal justice, and financial sectors. Following the GAO’s report, a summary testimony of the Forum’s findings was presented before the Subcommittees on Research and Technology, and Energy within the House Committee on Science, Space, and Technology.
The GAO’s report to Congress provides a brief overview of the forum’s findings by focusing on the following three topics:
- AI’s definition and evolution over time;
- Future opportunities, challenges, and risks of AI;
- Research priorities and policy implications of AI advancements.
AI’s Definition and Evolution Over Time
Since AI’s inception at Dartmouth College in 1956, the technology’s capabilities, and attempts to define them, have grown exponentially. Rather than establishing a unified definition of AI, the GAO report instead cites a description of the technology’s evolution in three waves, each distinguished by the technology’s ability to perceive, learn, abstract, and reason through the challenges and data presented to it.
- In the first wave of AI, the technology automates actions or services pre-programmed by humans, using data that humans have made readable to the AI system (e.g., logistical planners and tax preparation services).
- In the second wave of AI, the technology translates visual, auditory, and semantic information from its surroundings into a format that the AI system can perceive and respond to, through processes known as machine learning, with limited human oversight.
- In the third wave of AI, which the GAO forum indicates we are just entering, the technology can adapt its operations to new contexts and objectives without human oversight while also explaining how it has adapted to meet these changes.
Future Opportunities and Challenges of AI
To shed light on the spectrum of opportunities, challenges, and risks of AI, the GAO forum considered the impacts of AI in the cybersecurity, transportation, criminal justice, and financial sectors. Overall, the GAO report finds that while AI has the capability to improve safety, justice, and security in these sectors, the technology can also undermine these advances if it is used maliciously or without proper oversight. The GAO’s report also provides the following summary of AI’s opportunities and challenges:
- Improved economic outcomes and productivity: like other technological advancements in the past, AI will improve the rate and efficiency of production. However, the report also notes that measuring AI’s impact will be difficult, as no mechanisms currently exist to measure it accurately.
- Improved or assisted human decision making: AI enables its users to discover trends or abnormalities hidden within enormous, diverse datasets. Policymakers can use AI systems to create data-driven policy, though the validation and potential programmed bias of such systems are not yet well understood.
- Improved problem solving: current progress in AI research promises increasing applications of the technology to society’s challenges while also minimizing regulatory oversight burdens for both the government and those being regulated.
- Barriers to data collection and sharing: AI systems that draw on dissimilar data sources may face challenges accessing and integrating data that varies in regulatory accessibility, completeness, and overall quality.
- Limited access to computing resources and human capital: developers, researchers, and implementers in governmental organizations and agencies may have difficulty obtaining and funding the computing power and talent-intensive needs of AI systems.
- Legal and regulatory hurdles: the rapid advancement and application of AI systems have in some ways outpaced the regulatory framework governing how these systems should be used effectively and safely across their numerous applications. New technological expertise within the government will be needed to ensure that AI policy remains up to date and appropriate for the technology.
- Developing ethical, explainable, and acceptable AI applications: as AI systems enhance, and increasingly surpass, human capabilities, it will be important that the actions and decisions derived from these systems can be held as accountable as the human decision-makers they are assisting or replacing.
Cross-cutting Policy and Research Considerations for AI
In response to the emerging opportunities and challenges of AI, the GAO forum identified a list of policy and research areas that ought to be considered across the government wherever AI systems are used and researched. The forum’s policy and research considerations include:
- Incentivizing improved data collection, sharing, and labeling – To improve the efficiency and safety of AI applications, federal agencies are advised to implement standardized data collection, sharing, and labeling programs, such as those implemented by MITRE, the National Institute of Standards and Technology, and the Office of Science and Technology Policy’s Subcommittee on Machine Learning and Artificial Intelligence. Such programs should protect the privacy and intellectual property of contributors while allowing for more accurate outcomes in the technology’s use.
- Improving AI safety and security – New regulatory standards are called for to ensure that the costs and liabilities of cybersecurity and AI use are shared more equitably among AI system users, developers, and manufacturers.
- Updating the current regulatory framework for AI – The capabilities and nature of AI systems undermine many current regulatory approaches to privacy, liability, and evaluation where AI systems are being implemented. Federal agencies will need to explore new regulatory approaches while cultivating AI expertise to vigilantly evaluate and improve AI regulation as the technology evolves.
- Defining and assessing acceptable risks and ethical decision making for AI – Federal agencies using AI systems will need to create standardized benchmarks of AI system performance, derived from the perspectives of multiple fields of expertise, including economics, philosophy, ethics, and law, to test and evaluate the degree of risk and ethical implementation of AI systems.
- Establishing regulatory sandboxes – As new regulatory approaches are considered for AI systems, the government will need to develop regulatory “safe havens” that protect participating stakeholders from risk and liability, allowing for more robust participation in and evaluation of the new approaches.
- Understanding AI’s impact on the Nation’s employment and establishing improved job training and readiness programs – The federal government will need to establish more comprehensive data collection to better assess the impact of AI systems on individual and overall employment, as well as to understand which job sectors will need retraining and what new job skills will need to be taught.
- Exploring computational ethics and explainable AI – As AI systems are further developed and used in more contexts, the government and regulatory stakeholders will have to remain vigilant about new ethical considerations for AI’s use and the technologies that enable it, such as machine learning, big data, and high-performance computing systems.