Preparing for the Future of Artificial Intelligence (Agency Report)
The report “Preparing for the Future of Artificial Intelligence”, prepared by the National Science and Technology Council (NSTC), surveys the current state of artificial intelligence (AI) research, including current and potential applications, and identifies questions that progress in AI raises for society and public policy. The report also makes recommendations for further AI-related actions by federal agencies. In the context of this report, AI refers to computerized systems that are capable of rationally solving complex real-world problems or taking appropriate actions to achieve a set of goals.
To address the current AI landscape and the challenges AI presents, the report makes 23 recommendations organized into eight topic areas. These recommendations name specific actions federal agencies and other government entities could take that are intended to expand AI research and application, investigate AI’s economic and social impact, and monitor its safety, security, and fairness. The recommendations within each topic area are as follows:
Applications of AI for Public Good – To foster beneficial applications of AI in both public and private sectors:
- Private and public institutions are encouraged to investigate approaches to responsibly leverage AI and machine learning to benefit society; and
- The federal government should prioritize open training data (i.e., datasets used to discover potential predictive relationships in machine learning applications) to accelerate AI research and promote open data standards and best practices in Federal agencies.
AI in the Federal Government – To promote the use of AI in government to serve the public faster, more effectively, and at lower cost, the federal government should:
- Improve the capacity of key agencies to apply AI to their missions; and
- Develop a community of practice for AI practitioners across agencies to work together, share standards, and include AI opportunities in federal training programs when appropriate.
AI and Regulation – While developing and adapting regulatory policy regarding AI, agencies should:
- Draw on appropriate technical expertise at the senior level;
- Use the full range of personnel assignment and exchange models to foster a federal workforce with diverse perspectives on the current state of AI;
- Work with industry and researchers through the Department of Transportation to increase sharing of data for safety, research, and other purposes;
- Invest in the development and implementation of an advanced and automated air traffic management system that fully accommodates both piloted and autonomous (i.e., self-flying) aircraft; and
- Continue developing an evolving regulatory framework to enable the safe integration of fully automated (i.e., self-driving or driverless) vehicles and autonomous aircraft into the transportation system.
Research and Workforce – To support basic research (i.e., research intended to expand knowledge or interest in a scientific question) and applications of AI to benefit the public good and develop a skilled and diverse workforce:
- The NSTC Subcommittee on Machine Learning and Artificial Intelligence (MLAI) should monitor developments in AI and report status updates regularly to senior administration leadership, especially with regard to domestic and international milestones;
- The government should monitor the state of AI milestones in other countries;
- Industry should update the government on general progress of AI in industry;
- The federal government should prioritize basic and long-term AI research; and
- The NSTC Subcommittee on MLAI and the Networking and Information Technology Research and Development (NITRD) program should work with the NSTC Committee on Science, Technology, Engineering, and Math Education (CoSTEM) to initiate a study of the AI workforce pipeline and develop actions that ensure appropriate increases in its size, quality, and diversity.
AI, Automation and the Economy – To understand the potential impacts of AI on the economy and put policies and institutions in place to support the benefits of AI while mitigating the costs:
- The Executive Office of the President should publish a follow-up report (“Artificial Intelligence, Automation and the Economy”) by the end of 2016 to investigate the effects of AI and automation on the U.S. job market and to outline recommended economic policy.
Fairness, Safety and Governance – To monitor the safety and fairness of AI applications for public protection:
- Federal agencies using AI-based systems to make, or provide decision support for, consequential decisions about individuals should ensure the efficacy and fairness of those systems through evidence-based verification and validation;
- Federal agencies providing grants to state and local governments for the application of AI-based systems that will make consequential decisions about individuals should ensure that AI-based products or services purchased with federal funding produce results in a sufficiently transparent fashion and are supported by evidence of efficacy and fairness;
- Educational institutions should include ethics, security, privacy, and safety as integral parts of their AI curriculum; and
- AI professionals and safety professionals should collaborate toward developing a mature field of AI safety engineering.
Global Considerations and Security – While developing AI policy with regard to international relations, cybersecurity, and defense, the federal government should:
- Develop a government-wide strategy on international engagement related to AI and develop a list of AI topic areas that need international engagement and monitoring;
- Deepen its engagement with key international stakeholders to exchange information and facilitate collaboration on AI research and development;
- Ensure that agencies’ plans and strategies account for the mutual influence of AI and cybersecurity; and
- Develop a single, government-wide policy on autonomous and semi-autonomous weapons that is consistent with international humanitarian law.
The Report was developed by the NSTC’s Subcommittee on Machine Learning and Artificial Intelligence, which was chartered in May 2016. The Report was reviewed by the NSTC Committee on Technology.
OSTP led a series of public outreach activities to engage with experts and the general public to acquire information for the report. The events included:
- AI, Law, and Policy (May 24, 2016);
- AI for Social Good (June 7, 2016);
- Future of AI: Emerging Topics and Societal Benefit at the Global Entrepreneurship Summit (June 23, 2016);
- AI Technology, Safety, and Control (June 28, 2016); and
- Social and Economic Impacts of AI (July 7, 2016).
In June 2016, the Office of Science and Technology Policy (OSTP) published a Request for Information (RFI) to solicit feedback on overarching questions and proposed solutions in emerging AI research. The submitted comments were published by OSTP on September 6, 2016.
There is currently no universally agreed-upon definition of AI. As quoted in Stanford University’s 100-year study of AI, Nils J. Nilsson defines AI research as “activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment.”
Here, intelligence is understood as a measure of a machine’s ability to successfully achieve an intended goal. Like humans, machines exhibit varying levels of intelligence, subject to the machine’s design and training. However, there are different perspectives on how to define and categorize AI. In 2009, a foundational textbook classified AI into four categories:
- Ones that think like humans;
- Ones that think rationally;
- Ones that act like humans; and
- Ones that act rationally.
Most of the progress seen in AI has been considered “narrow,” addressing specific problem domains like playing games, driving cars, or recognizing faces in images. In recent years, AI applications have surpassed human abilities in some narrow tasks, and rapid progress is expected to continue, opening up new opportunities in critical areas such as health, education, energy, and the environment. This is in contrast to “general” AI, which would replicate intelligent behavior equal to or surpassing human abilities across the full range of cognitive tasks. Experts involved with the NSTC Committee on Technology believe that it will take decades before society advances to artificial “general” intelligence.
According to Stanford University’s 100-year study of AI, by 2010, advances in three key areas of technology intersected to increase the promise of AI in the US economy:
- Big data: large quantities of structured and unstructured data amassed from e-commerce, business, science, government, and social media on a daily basis;
- Increasingly powerful computers: greater storage and parallel processing of big data; and
- Machine learning: using increased access to big data as raw material, increasingly powerful computers can be taught to automatically improve their performance on tasks by observing relevant data via statistical modeling.
Key AI applications include the following:
- Machine learning is the basis for many of the recent advances in AI. Machine learning is a method of data analysis that attempts to find structure (or a pattern) within a data set without human intervention. Machine learning systems search through data for patterns and adjust program actions accordingly, a process known as training the system. To perform this process, an algorithm is given a training set (or teaching set) of data, from which it builds a model used to answer a question. For example, for a driverless car, a programmer could provide a teaching set of images tagged either “pedestrian” or “not pedestrian.” The programmer could then show the computer a series of new photos, which it could categorize as pedestrians or non-pedestrians. Machine learning would then continue to independently add to the teaching set: every identified image, right or wrong, expands the teaching set, and the program effectively gets “smarter” and better at completing its task over time.
- Machine learning algorithms are often categorized as supervised or unsupervised. In supervised learning, the system is presented with example inputs along with desired outputs, and the system tries to derive a general rule that maps inputs to outputs. In unsupervised learning, no desired outputs are given and the system is left to find patterns independently.
- Deep learning is a subfield of machine learning. Unlike traditional machine learning algorithms, which are linear, deep learning utilizes multiple units (or neurons) stacked in a hierarchy of increasing complexity and abstraction, inspired by the structure of the human brain. Deep learning systems consist of multiple layers, and each layer consists of multiple units. Each unit combines a set of input values to produce an output value, which is in turn passed to other units downstream. Deep learning enables the recognition of extremely complex, precise patterns in data.
- Advances in AI will bring the possibility of autonomy in a variety of systems. Autonomy is the ability of a system to operate and adapt to changing circumstances without human control. It also includes systems that can diagnose and repair faults in their own operation such as identifying and fixing security vulnerabilities.
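The supervised/unsupervised distinction described above can be made concrete with a toy sketch. The example below is illustrative only and not from the report: it uses a nearest-neighbor rule for supervised learning (labeled “pedestrian”/“not pedestrian” examples, echoing the driverless-car illustration, with images reduced to hypothetical 2-D feature points) and a simple two-cluster grouping for unsupervised learning, using only the Python standard library.

```python
# Toy contrast of supervised vs. unsupervised learning on 2-D feature points.
# The points, labels, and feature values are invented for illustration.
import math

def nearest_neighbor(train, query):
    """Supervised: labeled examples (point, label) map a new input to a label."""
    point, label = min(train, key=lambda ex: math.dist(ex[0], query))
    return label

def two_means(points, iters=10):
    """Unsupervised: no labels are given; the system groups the data on its own."""
    c0, c1 = points[0], points[-1]  # crude initial centroids
    for _ in range(iters):
        g0 = [p for p in points if math.dist(p, c0) <= math.dist(p, c1)]
        g1 = [p for p in points if math.dist(p, c0) > math.dist(p, c1)]
        c0 = tuple(sum(x) / len(g0) for x in zip(*g0))
        c1 = tuple(sum(x) / len(g1) for x in zip(*g1))
    return g0, g1

# Supervised: a teaching set of tagged examples.
train = [((0, 0), "not pedestrian"), ((0, 1), "not pedestrian"),
         ((5, 5), "pedestrian"), ((5, 6), "pedestrian")]
print(nearest_neighbor(train, (4, 5)))  # -> pedestrian

# Unsupervised: the same kind of points, but with no tags at all.
points = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
g0, g1 = two_means(points)
print(sorted(g0), sorted(g1))
```

A real system would use far richer features and models, but the contrast is the same: the supervised rule needs the tags, while the clustering step discovers the two groups without them.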
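The layered structure described above for deep learning can also be sketched in a few lines: each unit combines its input values into one output value, a layer is several units reading the same inputs, and stacking layers passes each layer’s outputs downstream. The weights below are arbitrary illustrative numbers (a trained system would learn them), and the code uses only the Python standard library.

```python
# Toy forward pass through a stacked network (illustrative only).
import math

def unit(inputs, weights, bias):
    """One unit: combine a set of input values into a single output value."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid nonlinearity

def layer(inputs, weight_rows, biases):
    """A layer is several units reading the same inputs."""
    return [unit(inputs, w, b) for w, b in zip(weight_rows, biases)]

def network(x, layers):
    """Stack layers: each layer's outputs become the next layer's inputs."""
    for weight_rows, biases in layers:
        x = layer(x, weight_rows, biases)
    return x

# Arbitrary weights for a 2 -> 3 -> 1 network; real values would come from training.
layers = [
    ([[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]], [0.0, 0.1, -0.1]),  # hidden layer
    ([[1.0, -1.0, 0.5]], [0.2]),                                 # output layer
]
print(network([1.0, 2.0], layers))  # a single value between 0 and 1
```

The “increasing complexity and abstraction” the report describes comes from adding more such layers, so each successive layer combines patterns detected by the one before it.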
Important areas of AI research:
- AI researcher John McCarthy of Stanford University describes AI research and development as comprising both theory and experimentation. AI theory includes contemplating the ways in which one defines the field of research itself as well as how to integrate AI with human notions of rationality, morality, and ethics. AI experimentation involves attempting to mimic human and animal physiology and psychology in machines as well as problem solving for actions outside the scope of biological organisms.
- Experimental research in artificial intelligence includes several key areas that mimic human behaviors, including reasoning, knowledge representation, planning, natural language processing, perception, and generalized intelligence.
- Reasoning includes performing sophisticated mental tasks that people can do (e.g., play chess, solve math problems).
- Knowledge representation is information about real-world objects the AI can use to solve various problems. Knowledge in this context is usable information about a domain, and the representation is the form of the knowledge used by the AI.
- Planning and navigation includes processes related to how a robot moves from one place to another. This includes identifying safe and efficient paths, dealing with relevant objects (e.g., doors), and manipulating physical objects.
- Natural language processing includes interpreting and delivering audible speech to and from users.
- Perception research includes improving the capability of computer systems to use sensors to detect and perceive data in a manner that replicates humans’ use of senses to acquire and synthesize information from the world around them.
- Ultimately, success in the discrete AI research domains could be combined to achieve generalized intelligence, or a fully autonomous “thinking” robot with advanced abilities such as emotional intelligence, creativity, intuition, and morality.
Vincent Conitzer, Ph.D. is the Kimberly J. Jenkins University Professor of New Technologies and Professor of Computer Science, Professor of Economics, and Professor of Philosophy at Duke University. His research focuses on artificial intelligence and economic theory, social choice, and mechanism design. He raised questions concerning the rapid development of AI technologies:
“Artificial intelligence researchers have made rapid progress in recent years. The resulting capabilities allow us to make the world a better place, but they have also led to a broad variety of concerns. How should autonomous vehicles be designed and regulated? Will AI cause massive technological unemployment? Will weapons systems become increasingly autonomous, and should autonomous weapons be banned? Is there perhaps even a chance that AI will end up broadly superseding human capabilities, making us obsolete at best and extinct at worst?”
He commented on the impact of AI on the job market in the article “Today’s Artificial Intelligence Does Not Justify Basic Income”.
Cynthia Rudin, Ph.D. is an Associate Professor of Computer Science and Electrical and Computer Engineering at Duke University, with secondary appointments in the Statistics and Mathematics departments. She directs the Prediction Analysis Lab. Her interests are in machine learning, data mining, applied statistics, and knowledge discovery (big data). She has a particular interest in machine learning models that are interpretable to human experts. She raised ethical concerns regarding the expanded application of AI in government and industry:
“I think any industrial firm that wants to use AI tools in the judicial system or in other important ways should be forced to undergo a careful evaluation and test against standard methods.… I think it is highly unethical for the government to be using that when there are publicly available tools that are transparent for the same purpose that have been tested.”
"There are issues surrounding what data sources can be used ethically for what purposes. For instance, companies can infer private information about people. Predictions of sensitive information can be powerful/dangerous.”
Endorsements & Opposition
- IBM responded to the RFI in a response letter that supported some of the recommendations. “AI systems are augmenting human intelligence and will ultimately transform our personal and professional lives. Its benefits far outweigh its risks. And with the right policies and support, those benefits can be realized sooner.” IBM suggested that policy makers should focus on “developing progressive social and economic policies to deploy AI systems for broad public good”, “developing progressive education and workforce programs for future generations”, and “investing in a long-range interdisciplinary research program for advancing the science and design of AI systems”.
- The Center for Democracy and Technology (CDT) responded to the RFI in a letter, saying it is optimistic about AI and its future positive impacts. The CDT also provided recommendations that resonate with the report, including that the government should promote and invest in a diverse workforce to prevent bias in AI algorithms and provide an economic safety net in the event of disruption in the labor market.
- Andrew Critch from the Machine Intelligence Research Institute (MIRI) focused on the safety of AI-based systems: “when we develop powerful reasoning systems deserving the name artificial general intelligence (AGI), we will need value alignment and/or control techniques that stand up to powerful optimization processes yielding what might appear as creative or clever ways for the machine to work around our constraints. Therefore, in training the scientists who will eventually develop it, more emphasis is needed on a security mindset: namely, to really know that a system will be secure, you need to search creatively for ways in which it might fail.… In cybersecurity, it is common to devote a large fraction of R&D time toward actually trying to break into one’s own security system, as a way of finding loopholes.”
- Some responses to the report raised ethical concerns about AI’s application, specifically artificial general intelligence. Manuel Beltran from Boeing made the distinction between narrow AI and general AI and pointed out that “the most pressing, fundamental questions in AI research, common to most or all scientific fields include the questions of ethics in pursuing an AGI. While the benefits of narrow AI are self-evident and should not be impeded, an AGI has dubious benefits and ominous consequences. There needs to be long term engagement on the ethical implications of an AGI, human brain emulation, and performance enhancing brain implants.”