In its white paper on artificial intelligence, the European Commission argues that AI will be a key enabler of European policy objectives in the near future and warrants a comprehensive strategy to ensure European competitiveness in this emerging domain. Against a backdrop of fragmentary national policies and Europe’s comparative weakness in consumer technologies, the Commission proposes a unified framework for allocating investments, pooling resources, and regulating AI companies operating within the European Union (EU), with the aim of encouraging scientific research and industrial development.
The Commission hopes this approach can augment the EU’s strengths in business-to-business applications of AI while maintaining a strong commitment to EU values and to protecting individual rights.
To improve European competitiveness in AI, the document outlines six actions:
- Revising the Commission's Coordinated Plan on AI by the end of 2020, with the objective of attracting 20 billion euros annually in AI funding over the following ten years. The Commission solicits comments from stakeholders and interested parties ahead of the planned revision.
- Creating centers to coordinate institutions conducting research on AI, in order to facilitate the translation of research and development efforts into industrial applications and to retain top researchers.
- Establishing and supporting networks of universities with strengths in AI-related fields to train workers in AI.
- Ensuring that each member state has at least one Digital Innovation Hub specializing in AI, where small and medium-sized AI enterprises can network, access research institutions, and receive financial advice.
- Setting up a public-private partnership to coordinate research efforts in AI, data, and robotics in the context of the Horizon Europe research and innovation framework.
- Initiating dialogues around how best to adopt AI technologies into public sector processes, especially concerning healthcare, rural administration, and public service operations.
The white paper also outlines a proposed regulatory framework for AI. It defines high-risk AI applications as those deployed in sectors like healthcare, transportation, and energy that can pose significant societal risk, as well as applications that pose significant risks to individual health, property, privacy, fairness, and other individual and institutional rights. The Commission recommends further coordinated regulatory development among member states for various aspects of high-risk AI applications, including:
- Training data, to ensure that personal data used to train AI systems remains secure, and that systems designed to make high-stakes decisions do not reflect discriminatory biases as a result of incomplete or non-representative training data;
- Record keeping on the development of AI systems, to ensure that the decisions involved in the system's design can be audited at a later date;
- System documentation, to ensure that users are aware that they are interacting with an AI system, as well as of its limitations and proper use;
- Robustness and accuracy, to ensure that AI systems can be trusted in their predictions and cannot be tampered with;
- Human oversight, to ensure appropriate safeguards during the operation of an AI system;
- Biometric identification, to protect privacy during the gathering and use of biometric data such as images for facial recognition.
The document also proposes a voluntary labeling scheme for AI applications that are not high-risk.