
European Union artificial intelligence policy

Artificial intelligence (AI) is technology that displays human-like behaviour. It involves software that uses machine learning and other techniques to process large amounts of data gathered from sensors, digital sources or other inputs, and to generate behaviour based on models built from those data. Such software allows machines to take decisions that would normally be taken by humans.

In response to the European Union (EU) leaders’ call for an EU approach to AI, the European Commission put forward a strategy in 2018 to make the most of the opportunities offered by AI while addressing the new challenges it brings. This EU-wide approach had three aims:

  • to increase public and private investment in AI;
  • to prepare for socio-economic changes; and
  • to ensure an appropriate ethical and legal framework.

This was followed, in 2021, by a more comprehensive package on AI involving:

  • a communication on fostering an EU approach to AI;
  • a coordinated plan with EU Member States to boost excellence in AI by joining forces on AI policy and investment; and
  • a proposal for a regulation laying down harmonised rules on artificial intelligence, addressing the risks of specific uses of AI and categorising them into four levels: unacceptable risk, high risk, limited risk and minimal risk.

Further legislative proposals are planned. These will seek to revise some of the EU’s sectoral safety legislation (e.g. on machinery and on general product safety) and address liability issues related to new technologies.

Through the Digital Europe and Horizon Europe programmes, the Commission plans to invest €1 billion per year in AI. It will mobilise additional investment from the private sector and the Member States to reach an annual investment volume of €20 billion over the next decade.
