The European Union's AI Act: A Summary

Understanding the impact of this regulation on the AI industry
August 26, 2024 by Gastón Raúl Valdés

The European Union's AI Act outlines specific criteria that determine whether an AI system falls under its scope, including criteria related to fine-tuning or training AI models. 

Here's a summary of the key points: 


Risk-Based Approach

The EU AI Act categorizes AI systems based on the level of risk they pose: 


  • Unacceptable Risk: These AI systems are prohibited because they pose a clear threat to safety, livelihoods, and rights. Examples include AI systems for social scoring by governments. 
  • High Risk: These AI systems are subject to strict requirements because they significantly impact safety or fundamental rights. They must meet criteria related to data quality, documentation, transparency, human oversight, and robustness. 
  • Limited Risk: These AI systems must comply with transparency obligations, such as informing users that they are interacting with an AI system. 
  • Minimal or No Risk: Most AI systems fall into this category and are subject to the fewest requirements. 

A note on fine-tuning and training: if an AI model is being fine-tuned or trained to perform tasks that fall under the "high-risk" category (e.g., biometric identification, critical infrastructure management), it must comply with the requirements specified for high-risk AI systems. A minimal illustrative sketch of these tiers follows below. 
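
For readers who think in code, here is a minimal, purely illustrative Python sketch of the four risk tiers and a hypothetical lookup from an example use case to a tier. The use-case names and the classify_use_case helper are our own assumptions for illustration; a real assessment must follow the Act's text and annexes.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict requirements apply
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # few or no requirements

# Hypothetical mapping of example use cases to tiers, for illustration only;
# the real classification must follow the Act's annexes and legal advice.
EXAMPLE_USE_CASES = {
    "government_social_scoring": RiskTier.UNACCEPTABLE,
    "biometric_identification": RiskTier.HIGH,
    "critical_infrastructure_management": RiskTier.HIGH,
    "customer_support_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify_use_case(use_case: str) -> RiskTier:
    """Return the illustrative tier for a known example use case."""
    return EXAMPLE_USE_CASES.get(use_case, RiskTier.MINIMAL)

# A model fine-tuned for a high-risk task inherits the high-risk obligations.
print(classify_use_case("biometric_identification"))  # RiskTier.HIGH
```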

 

Scope of AI Systems 

The regulation applies to all AI systems that are: 

  • Developed, deployed, or used within the EU. 
  • Developed outside the EU but affecting people within the EU. 

  

Obligations for Providers 


  • Data Management: Providers must ensure high-quality training, validation, and testing data to avoid bias and ensure accuracy. 
  • Documentation: Detailed documentation must be maintained to prove compliance with the regulation. 
  • Transparency: Users must be informed that they are interacting with an AI system, especially if the system falls into the high-risk category. 
  • Human Oversight: High-risk AI systems must have mechanisms for human oversight to ensure they can be overridden or controlled if necessary. 
  • Monitoring and Reporting: Continuous monitoring and reporting mechanisms must be in place to ensure ongoing compliance (a minimal checklist sketch follows this list). 
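
As a rough way to track these obligations internally, here is a minimal Python sketch of a self-assessment checklist mirroring the five points above. The ProviderObligations class and its field names are our own shorthand, not terms defined by the Act, and such a checklist does not replace a proper conformity assessment.

```python
from dataclasses import dataclass, fields

@dataclass
class ProviderObligations:
    """Illustrative checklist mirroring the provider obligations above.

    Field names are our own shorthand, not legal terms from the Act.
    """
    data_management: bool = False       # curated training/validation/test data
    documentation: bool = False         # technical documentation maintained
    transparency: bool = False          # users informed they interact with AI
    human_oversight: bool = False       # override/control mechanisms in place
    monitoring_reporting: bool = False  # ongoing monitoring and reporting

    def outstanding(self) -> list[str]:
        """List the obligations that are not yet covered."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

# Example: a provider that has addressed data and documentation so far.
status = ProviderObligations(data_management=True, documentation=True)
print(status.outstanding())
# ['transparency', 'human_oversight', 'monitoring_reporting']
```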

  

Exemptions and Flexibility 

Certain AI systems used for military purposes or developed by public authorities for law enforcement and public safety might have different requirements or exemptions. 

The Act also provides for regulatory sandboxes to allow innovation while still ensuring safety and compliance.

  

Timeline and Implementation 

The AI Act entered into force on August 1, 2024, and will be enforced in full by August 2027, but local implementation in the different EU countries is still under discussion. Companies and developers should stay informed and prepare for potential compliance requirements as their country's implementation evolves. 

Some milestones in the application timeline are worth highlighting (a small date-lookup sketch follows the list): 

  • February 2nd, 2025: Chapters I and II of the Act start to apply (per Article 113); these chapters include the ban on certain AI practices. 
  • August 2nd, 2025: The following rules start to apply: 
    • Chapter III, Section 4: Related to Notified bodies. 
    • Chapter V: Related to GPAI models. 
    • Chapter VII: Related to Governance. 
    • Article 78: Related to Confidentiality. 
    • Articles 99 and 100: Related to Penalties. 
  • August 2nd, 2026: The remainder of the AI Act starts to apply, except Article 6(1). 
  • August 2nd, 2027: Article 6(1) and the corresponding obligations in the Regulation start to apply. 
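
For teams that want to track these dates programmatically, here is a minimal, illustrative Python sketch that records the milestones above and reports which ones already apply on a given date. The short labels are our own summaries, not official provision titles.

```python
from datetime import date

# Application milestones of the AI Act, as summarized above (labels are ours).
MILESTONES = {
    date(2025, 2, 2): "Chapters I and II apply (including the ban on certain AI practices)",
    date(2025, 8, 2): "Notified bodies, GPAI models, governance, confidentiality, penalties",
    date(2026, 8, 2): "Remainder of the Act applies, except Article 6(1)",
    date(2027, 8, 2): "Article 6(1) and its corresponding obligations apply",
}

def provisions_in_force(as_of: date) -> list[str]:
    """Return the milestone summaries whose application date has passed."""
    return [label for day, label in sorted(MILESTONES.items()) if day <= as_of]

print(provisions_in_force(date(2026, 1, 1)))
# ['Chapters I and II apply (including the ban on certain AI practices)',
#  'Notified bodies, GPAI models, governance, confidentiality, penalties']
```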

 

Penalties for Non-Compliance 

The AI Act enforces a tiered penalty system; in each tier the cap is the fixed amount or the percentage of global annual turnover, whichever is higher (a short worked example follows the list): 

  • Unacceptable Risk Violations (prohibited practices): Fines up to €35 million or 7% of global annual turnover. 
  • High-Risk Systems and Other Non-Compliance: Fines up to €15 million or 3% of global annual turnover. 
  • Supplying Incorrect or Misleading Information to Authorities: Fines up to €7.5 million or 1% of global annual turnover. 
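
To make the "whichever is higher" rule concrete, here is a small illustrative calculation in Python. The tier labels are our own shorthand, and this sketch is not legal guidance.

```python
# Illustrative maximum-fine calculation: the cap for each tier is the fixed
# amount or the percentage of global annual turnover, whichever is higher.
PENALTY_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),      # €35M or 7%
    "high_risk_non_compliance": (15_000_000, 0.03),  # €15M or 3%
    "misleading_information": (7_500_000, 0.01),     # €7.5M or 1%
}

def max_fine(tier: str, global_annual_turnover_eur: float) -> float:
    """Return the higher of the fixed cap and the turnover-based cap."""
    fixed_cap, turnover_pct = PENALTY_TIERS[tier]
    return max(fixed_cap, turnover_pct * global_annual_turnover_eur)

# Example: a company with €2 billion in global annual turnover.
print(f"{max_fine('prohibited_practices', 2_000_000_000):,.0f}")  # 140,000,000
```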


Our recommendations 


Start planning: Even if the timeline seems far in the future and there appear to be a few years to plan and adopt the changes, depending on your case there might not be. Rolling out changes and implementing new processes can also turn out to be more time-consuming and complex than it first appears. Early planning avoids non-compliance scenarios that could harm your reputation and finances. 

Determine your risk category: If your company's AI systems could fall into any of these categories, we recommend an in-depth review to determine your actual classification and whether you can be ranked as limited or minimal-to-no risk. 

Keep your users informed: Be ready to let your users know whether this regulation applies to you and, if so, how that might impact them. 

 

More details can be found at https://artificialintelligenceact.eu/high-level-summary/ 

The complete implementation timeline can be found at https://artificialintelligenceact.eu/implementation-timeline/ 

 

Credits:
Writer: Luis Vinay  
Collaborator: Ben Rodriguez 
Editor: Gastón Valdés 
Technical reviewer: Gastón Valdés 
Illustrator: Luis Vinay 

