EVERYTHING YOU NEED TO KNOW ABOUT THE EU ARTIFICIAL INTELLIGENCE ACT


On April 21, 2021, the European Commission published its long-awaited draft regulation establishing harmonized rules for artificial intelligence (the AI Act). As of this writing, the Act is not yet in force and has been revised repeatedly.

The aim of the Act is to create a uniform legal framework for the development, commercialization and use of artificial intelligence systems. AI is increasingly being used to make important decisions about people’s lives, and until now there has been little supervision of its actions, which can lead to serious consequences such as unlawful arrests or discrimination.

Definition of artificial intelligence

The AI Act defines an “artificial intelligence system” as software that meets the following two conditions:

  1. It is developed using at least one of the techniques and approaches listed in Annex I of the AI Act (e.g. machine learning), and
  2. it is capable of producing outputs such as content, predictions, recommendations, or decisions, for a given set of human-defined objectives, that influence the environment it interacts with.

The proposed definition of artificial intelligence is broad and technology-neutral, so it can cover a wide range of systems.

Risk-based approach

The European Commission has proposed to subdivide artificial intelligence systems according to the risk they may pose to humans:

  1. Unacceptable risk – systems considered a clear threat to EU citizens are banned outright; this category includes systems that use subliminal techniques beyond a person’s awareness and social scoring systems used by public authorities (comparable to China’s Social Credit System), while AI developed exclusively for military purposes falls outside the Act’s scope altogether;
  2. High risk – AI systems that may affect people’s safety or fundamental rights must be strictly regulated, e.g. technology used in critical infrastructures (like transportation), public and private services, migration, asylum and border control management (e.g. verifying the authenticity of travel documents);
  3. Limited risk – systems that may have some impact on society; transparency requirements apply, e.g. users must be informed that they are interacting with a machine (like chatbots);
  4. Minimal risk – all other systems, for which there are no additional obligations, e.g. AI-enabled video games; the vast majority of AI systems currently in use in the EU fall into this category.
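For illustration only, the four tiers above can be modeled as a simple enumeration. The tier names come from the proposal; the example mapping below is a hypothetical sketch based on the descriptions in this article, not the Act’s actual classification logic.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers proposed in the draft AI Act."""
    UNACCEPTABLE = "banned outright"
    HIGH = "strictly regulated; conformity assessment required"
    LIMITED = "transparency obligations apply"
    MINIMAL = "no additional obligations"

# Hypothetical examples, drawn from the descriptions above.
EXAMPLES = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "border-control document verification": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "AI-enabled video game": RiskTier.MINIMAL,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.name} ({tier.value})")
```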

High risk systems

High-risk AI systems may be placed on the market only if they meet certain requirements:

  1. they are equipped with a risk management system;
  2. the quality criteria for the data used to train these systems are met;
  3. technical documentation on this system is prepared and kept up to date;
  4. transparency of AI operations is ensured;
  5. most importantly – humans must oversee the system, which should be equipped with tools that allow a human to stop it or override its decisions.
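The five requirements above amount to a compliance checklist that must be fully satisfied before conformity assessment. A minimal sketch of that idea, assuming a simple boolean flag per requirement (the field names are illustrative, not the Act’s terminology):

```python
from dataclasses import dataclass, fields

@dataclass
class HighRiskCompliance:
    """Checklist for placing a high-risk AI system on the market,
    per the draft AI Act (illustrative sketch, not legal advice)."""
    risk_management_system: bool = False
    training_data_quality: bool = False
    technical_documentation: bool = False
    transparency: bool = False
    human_oversight: bool = False

    def ready_for_conformity_assessment(self) -> bool:
        # All requirements must be met before assessment can begin.
        return all(getattr(self, f.name) for f in fields(self))

draft = HighRiskCompliance(risk_management_system=True,
                           technical_documentation=True)
print(draft.ready_for_conformity_assessment())  # prints: False
```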

A high-risk AI system must undergo a conformity assessment before it can be placed on the market. Only after passing this assessment does it receive the CE marking.

Some systems must also be registered in the European Commission database.

Such a system is also subject to continuous monitoring, and any non-conformity must be reported to the national authorities.

Other systems

The basic principles of transparency will also apply to other systems. Users must be notified when they interact with an artificial intelligence system, such as a chatbot, or are exposed to machine-generated content (deepfakes).

There will be some exceptions to this obligation, for example, when the fact of interaction with AI is obvious due to the circumstances and context of use.

New authorities

The AI Act establishes new authorities: a European Artificial Intelligence Board, which will include representatives of the relevant regulatory authorities of each member state as well as the European Data Protection Supervisor, and will be responsible for providing advice. EU member states will also be required to designate national authorities to enforce the AI Act, with the power to impose fines.

Major challenges

The law is intended to protect citizens from the harmful side effects of artificial intelligence by ensuring an appropriate level of oversight and accountability, with the goal of fostering AI that earns public trust. While the proposed regulation applies only in the European Union, it may serve as a model for countries outside the EU when drafting future regulations.

At this stage, some of the law’s requirements may be technically impossible to meet. For example, the requirement that datasets be error-free is impractical: the datasets used to train AI systems are enormous and effectively impossible for humans to verify. Moreover, the neural networks underlying many AI systems are so complex that even their creators do not fully understand how a system reaches certain conclusions.

Amendments to the Act

The Act is still under discussion, and many countries are proposing amendments. More than 300 submissions were received at the public consultation stage from, among others, non-governmental organizations, industry stakeholders and the academic community, indicating a high level of interest in the proposed AI regulation.

In January 2022, France proposed changes to risk management systems, data governance, technical documentation, record keeping, transparency and provision of information to users, human oversight, accuracy, robustness, and cybersecurity.

The Czech Republic, in turn, proposed in July 2022 a narrower definition of AI, a revised and shortened list of high-risk systems, and a stronger role for the AI Board.

The final wording of the Act has yet to be determined, and it will likely be several more years before companies must comply. To become legally binding, the Act must pass through the ordinary EU legislative procedure, in which the Council and the European Parliament consider and approve the proposed regulation.

Once adopted, the AI Act will enter into force twenty days after its publication in the Official Journal. However, the draft provides for a transition period of 24 months before its provisions become applicable.

The AI Act will be directly applicable in all EU countries and will not require transposition into the local laws of member states.

Author: Agata Konieczna

Credits: Robot icons created by Freepik – Flaticon

Risk icons created by mynamepong – Flaticon

Meeting icons created by juicy_fish – Flaticon

Challenges icons created by Flat Icons – Flaticon

Law icons created by noomtah – Flaticon
