The AI Act: A Milestone in Technology Regulation

The European Union is introducing groundbreaking regulations on artificial intelligence, known as the AI Act. These new rules aim to ensure the safe development and use of AI technology while minimizing the associated risks. This article looks at what the AI Act entails, the changes it brings, and who will be affected by the new regulations.

What is the AI Act?

The AI Act is a new regulation that establishes rules for governing artificial intelligence. The goal is to promote the safe and ethical use of AI systems. The AI Act provides a unified definition of an AI system, describing it as a machine-based system designed to operate with varying levels of autonomy and adaptability, capable of generating, from the input data it receives, outputs such as predictions, recommendations, or decisions that can influence physical or virtual environments.

To better understand this definition, let’s break it down into key components:

  • Machine-based system: AI operates on mechanisms and algorithms. The definition does not exclude AI created using analog or quantum technology or even AI developed entirely without any software component.
  • Autonomy: AI can function independently, making decisions without continuous human intervention.
  • Adaptability: AI systems can learn and adjust their operations in response to new data, making them dynamic rather than static.
  • Inference: AI analyzes input data to generate outputs like predictions or decisions.

By not strictly defining “artificial intelligence,” legislators avoid engaging in the ongoing scientific debate about its essence. This approach sidesteps the difficulty of capturing such a contested concept in a normative act.
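
To make these definitional elements easier to keep in mind, the minimal Python sketch below restates them as a simple checklist. The class, its field names, and the boolean check are an informal illustration only, not a legal test defined by the Act.

```python
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    """Illustrative checklist mirroring the definitional elements above.
    This is a reading aid, not the legal definition in the AI Act."""
    machine_based: bool    # runs on machines (the definition is technology-neutral)
    autonomy: bool         # operates with some independence from continuous human intervention
    adaptability: bool     # may adjust its behaviour in response to new data
    infers_outputs: bool   # derives predictions, recommendations or decisions from input data

    def matches_definition(self) -> bool:
        # A system plausibly falls under the definition when it is machine-based,
        # shows some autonomy and generates outputs by inference; adaptability is
        # described as a possible, not a mandatory, property.
        return self.machine_based and self.autonomy and self.infers_outputs

# Example: a recommendation engine that learns from user behaviour
recommender = AISystemProfile(machine_based=True, autonomy=True,
                              adaptability=True, infers_outputs=True)
print(recommender.matches_definition())  # True
```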

Classification of AI by Risk

A crucial aspect of the AI Act is the classification of AI systems based on the level of risk they pose to users. The regulation introduces three main risk categories (a brief illustrative sketch follows the lists below):

  • Unacceptable risk systems: These include the prohibited practices considered most dangerous and banned outright, such as systems using subliminal techniques to manipulate behavior.
  • High risk systems: These cover systems that significantly impact safety and fundamental rights, such as employee surveillance systems.
  • Minimal risk or no risk systems: These systems are considered safe, as they do not pose significant risks; the Act imposes only a few, largely voluntary, duties on them.

Additionally, the AI Act distinguishes between:

  • General-purpose AI models: Trained on large datasets, capable of generating various content and performing a wide range of tasks.
  • General-purpose AI models with systemic risk: These can pose risks related to severe failures, significant impacts on public health and safety, or the spread of illegal content.
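
To visualize the tiered approach, the toy sketch below walks through the risk categories in order. The two boolean flags are hypothetical simplifications: in the regulation itself, prohibited practices and high-risk uses are spelled out in detailed lists, so a real assessment is considerably more involved.

```python
from enum import Enum

class RiskCategory(Enum):
    UNACCEPTABLE = "prohibited practice - banned outright"
    HIGH = "high risk - strict obligations apply"
    MINIMAL = "minimal or no risk - few, mostly voluntary duties"

def classify(uses_subliminal_manipulation: bool, listed_high_risk_use: bool) -> RiskCategory:
    """Toy decision flow following the order of the categories above.
    The flags are stand-ins for the detailed criteria in the Act."""
    if uses_subliminal_manipulation:
        return RiskCategory.UNACCEPTABLE
    if listed_high_risk_use:
        return RiskCategory.HIGH
    return RiskCategory.MINIMAL

# e.g. an employee surveillance system would land in the high-risk tier
print(classify(False, True))  # RiskCategory.HIGH
```
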
Who Will Be Affected by the New Regulations?

The AI Act targets a broad audience, including providers, importers, distributors and users of AI systems. Notably, the regulations also cover entities outside the European Union if their AI systems’ outputs are used within the EU.

Penalties for Non-compliance

Companies that fail to comply with the new regulations face severe penalties. For instance, engaging in prohibited practices can result in fines of up to 35 million EUR, while breaches of other provisions can incur fines of up to 15 million EUR. These substantial penalties highlight the European Union’s commitment to the safe and ethical development of artificial intelligence.
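
As a rough illustration of how the caps work in practice, the sketch below also factors in the turnover-based ceilings that accompany the fixed amounts (commonly cited as 7% of worldwide annual turnover for prohibited practices and 3% for other infringements, whichever is higher); verify the exact percentages against the text of the regulation before relying on them.

```python
def max_fine_eur(violation: str, annual_turnover_eur: float) -> float:
    """Upper bound of the fine: the higher of a fixed amount and a share of
    worldwide annual turnover. The percentages reflect commonly cited values
    and should be checked against the regulation itself."""
    caps = {
        "prohibited_practice": (35_000_000, 0.07),
        "other_violation": (15_000_000, 0.03),
    }
    fixed_cap, turnover_share = caps[violation]
    return max(fixed_cap, turnover_share * annual_turnover_eur)

# For a company with 2 billion EUR turnover, 7% (140 million EUR) exceeds the fixed 35 million EUR cap
print(max_fine_eur("prohibited_practice", 2_000_000_000))  # 140000000.0
```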

Implementation Timeline

The AI Act came into force on August 1, 2024. Although the Artificial Intelligence Act is officially in effect, the application of its individual provisions is staggered. As a general rule, the regulations will apply starting on August 2, 2026. Other key dates include the following (a short sketch after the list illustrates them):

  • February 2, 2025: Provisions concerning prohibited practices begin to apply.
  • August 2, 2025: Provisions related to general-purpose AI models take effect.
  • August 2, 2026: Provisions regarding high-risk AI systems become operative.
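
The short sketch below turns these dates into a simple lookup, purely as an illustration of how a compliance team might check which provision groups already apply on a given day.

```python
from datetime import date

# Key application dates listed above (the Act itself entered into force on August 1, 2024)
MILESTONES = {
    date(2025, 2, 2): "prohibited-practice provisions apply",
    date(2025, 8, 2): "general-purpose AI model provisions apply",
    date(2026, 8, 2): "general rule: remaining provisions, including high-risk systems, apply",
}

def applicable_milestones(today: date) -> list[str]:
    """Return the milestone descriptions already applicable on the given date."""
    return [label for d, label in sorted(MILESTONES.items()) if d <= today]

print(applicable_milestones(date(2025, 9, 1)))
# ['prohibited-practice provisions apply', 'general-purpose AI model provisions apply']
```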

Companies need to start preparing now to comply with these new regulations and avoid hefty fines.

Conclusion

The AI Act represents a significant step toward ensuring the safe and ethical development of artificial intelligence within the European Union. By defining what qualifies as an AI system, classifying systems by risk level, and introducing stringent penalties for non-compliance, the new regulations aim to foster a secure environment for AI technology. Technology companies must take immediate action to align with the upcoming changes.

If you have any questions or need assistance preparing for these regulations, feel free to reach out.

The full version of the Act is available at this link.

Author: Agata Konieczna, PhD

