Understanding EU AI Act Risk Categories

The European Union (EU) AI Act is the first comprehensive legislative framework for regulating artificial intelligence. It aims to ensure that AI systems are developed and used in a manner that protects fundamental human rights and is consistent with the principles of trustworthy AI.

AI is transforming industries and societies, presenting both opportunities and challenges. Recognizing this, and the need for a structured approach to managing the potential risks to fundamental human rights, the Council of the EU approved the final text after long negotiations, and the act was published in the EU Official Journal on July 12, 2024. The act classifies AI systems into different risk categories to safeguard the public interest while promoting innovation.

What Are the Risk Categories in the EU AI Act?

The EU AI Act divides AI systems into three main categories for compliance purposes: Unacceptable Risk, High Risk, and General Purpose AI (GPAI). Originally, Limited and Minimal Risk were separate categories, but the final version consolidates them into general guidelines with minimal obligations.

This classification is designed to address the varying degrees of potential harm and ethical concern associated with different AI applications. The initial versions of the text defined the following risk levels:

  1. Unacceptable Risk – AI systems that are deemed too dangerous or unethical to be allowed.
  2. High Risk – AI applications that have significant implications for individual rights and safety.
  3. Limited or Minimal Risk – AI systems that pose little to no risk and are subject to minimal regulation.

In addition to these categories, the act also elaborates on the requirements of a separate category named ‘General Purpose AI (GPAI),’ which is discussed later.
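To make the triage concrete, here is a minimal Python sketch of how an organization might label systems in an internal AI inventory. The RiskCategory enum and classify_system helper are illustrative assumptions rather than anything defined in the act, and a real classification always requires legal review of Article 5, Article 6, and Annex III.

```python
from enum import Enum


class RiskCategory(Enum):
    """Risk tiers discussed in this article, plus the GPAI track."""
    UNACCEPTABLE = "unacceptable"              # prohibited practices (Article 5)
    HIGH = "high"                              # Annex III uses / safety components
    GPAI = "general_purpose"                   # general purpose AI models
    LIMITED_OR_MINIMAL = "limited_or_minimal"  # at most transparency duties


def classify_system(is_prohibited_practice: bool,
                    is_annex_iii_or_safety_component: bool,
                    is_general_purpose: bool) -> RiskCategory:
    """First-pass triage for an internal AI inventory.

    This only orders the questions; it is not a legal determination.
    """
    if is_prohibited_practice:
        return RiskCategory.UNACCEPTABLE
    if is_annex_iii_or_safety_component:
        return RiskCategory.HIGH
    if is_general_purpose:
        return RiskCategory.GPAI
    return RiskCategory.LIMITED_OR_MINIMAL


print(classify_system(False, True, False))  # RiskCategory.HIGH
```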

1. Unacceptable Risk

AI applications with unacceptable risk are those that are too harmful or unethical to deploy. These applications are prohibited under the AI Act.

Article 5 outlines ‘Prohibited Artificial Intelligence Practices’. The EU AI Act bans these systems to protect individuals and society from potential harm. Examples of AI systems that fall into this category include:

  • AI systems that manipulate human behavior through subliminal techniques, influencing decisions without the user’s awareness.
  • Systems that exploit the vulnerabilities of certain groups, such as children, older adults, or those with disabilities.
  • Government use of AI for social scoring, where individuals are evaluated based on their behavior in a way that affects their access to services and opportunities.

These prohibited practices face the shortest compliance deadline: the act’s bans apply just six months after the regulation enters into force.
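As a rough illustration of how a team might operationalize this ban, the sketch below runs a first-pass screening against the Article 5 red flags listed above. The flag names are hypothetical, and such a check supplements rather than replaces legal review.

```python
# Hypothetical pre-deployment screening against Article 5 red flags.
# Flag names are illustrative, not official terminology from the act.
PROHIBITED_PRACTICE_FLAGS = {
    "subliminal_manipulation": "Manipulates behavior via subliminal techniques",
    "exploits_vulnerable_groups": "Exploits children, older adults, or persons with disabilities",
    "social_scoring": "Scores individuals in ways that affect access to services",
}


def screen_for_prohibited_practices(system_flags: dict) -> list:
    """Return human-readable findings for every red flag that is raised."""
    return [
        description
        for flag, description in PROHIBITED_PRACTICE_FLAGS.items()
        if system_flags.get(flag, False)
    ]


findings = screen_for_prohibited_practices({"social_scoring": True})
if findings:
    print("Deployment blocked:", "; ".join(findings))
```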

2. High Risk

AI systems that significantly impact individuals’ rights, health, or safety are classified as High-risk and are subject to strict regulations and oversight.

High-risk AI applications can potentially affect crucial aspects of health, safety, or fundamental rights and therefore require stringent regulatory measures. AI systems used as safety components, and those listed in Annex III of the act (such as biometric systems), fall within the high-risk category. Systems in this category may need to undergo a ‘conformity assessment,’ which can be carried out either internally or externally by a conformity assessment body (notified body). Article 31 outlines the requirements relating to notified bodies.

Annex III and Article 43 should be consulted to determine whether an application falls under High Risk and whether a conformity assessment is needed. The key role-specific obligations for high-risk AI systems include the following (a record-keeping sketch follows the list):

  • Providers
    • Risk Management Systems (Article 9): Providers must implement a comprehensive risk management framework to identify and mitigate risks across the AI system lifecycle; frameworks such as the NIST AI RMF can serve as a reference.
    • Transparency, Record Keeping, Data Governance, and Documentation (Articles 10-13): Providers are responsible for maintaining thorough documentation on system functionality, data sources, and decision-making processes.
    • Accuracy, Robustness, and Cybersecurity (Article 15): Providers need to design high-risk systems with security-by-design, ensuring accuracy, resilience, and cybersecurity.
    • Conformity Assessment (Articles 31 and 43): For applications listed in Annex III, providers must conduct a conformity assessment—internally or via a notified body—to verify compliance.
  • Deployers
    • Human Oversight (Article 14): Deployers must establish mechanisms to allow human intervention, enabling users to oversee and intervene in AI decisions as needed.
    • Application-Specific Documentation: Deployers should document how the AI is applied in specific contexts, ensuring transparent use and relevant risk management.
    • Risk Management Adaptation: Deployers are responsible for adapting the provider’s risk management framework to fit the deployment context, regularly auditing and updating strategies.
  • Distributors
    • Verification of Compliance: Distributors must ensure high-risk AI products comply with transparency and oversight requirements set by providers and deployers.
    • End-User Transparency: Distributors should maintain records of provider and deployer compliance measures and ensure end-users are aware they are interacting with AI.
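To give a feel for the record-keeping these obligations imply, here is a minimal sketch of a compliance record a provider might maintain per high-risk system. The field names are assumptions that map loosely to Articles 9–15 and 43; they are not an official schema from the act.

```python
from dataclasses import dataclass, field


@dataclass
class HighRiskSystemRecord:
    """Illustrative compliance record for one high-risk AI system."""
    system_name: str
    risk_management_plan: str                    # Article 9: risks and mitigations
    data_sources: list = field(default_factory=list)            # Article 10
    technical_documentation: str = ""            # Articles 11-13
    human_oversight_mechanism: str = ""          # Article 14
    cybersecurity_measures: list = field(default_factory=list)  # Article 15
    conformity_assessment_done: bool = False     # Articles 31 and 43
    assessed_by_notified_body: bool = False      # external assessment, if required


record = HighRiskSystemRecord(
    system_name="cv-screening-model",  # hypothetical example system
    risk_management_plan="Bias audit per release; human review fallback",
)
print(record.conformity_assessment_done)  # False until the assessment is done
```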

Requirements for high-risk AI systems will apply 24 to 36 months after the regulation enters into force (36 months for systems that require a conformity assessment).
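These phased deadlines can be made concrete with a short date calculation. The sketch below assumes the act entered into force on August 1, 2024 (twenty days after its July 12, 2024 publication) and applies the 6-, 12-, 24-, and 36-month offsets mentioned in this article; consult Article 113 of the act for the exact statutory application dates.

```python
from datetime import date


def add_months(d: date, months: int) -> date:
    """Add whole calendar months to a date (the day of month is kept)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)


ENTRY_INTO_FORCE = date(2024, 8, 1)  # twenty days after the July 12, 2024 publication

milestones = {
    "Prohibited practices banned (6 months)": add_months(ENTRY_INTO_FORCE, 6),
    "GPAI obligations apply (12 months)": add_months(ENTRY_INTO_FORCE, 12),
    "Most high-risk requirements apply (24 months)": add_months(ENTRY_INTO_FORCE, 24),
    "High-risk with conformity assessment (36 months)": add_months(ENTRY_INTO_FORCE, 36),
}

for label, deadline in milestones.items():
    print(f"{label}: {deadline:%B %d, %Y}")
```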

3. General Purpose AI (GPAI)

The EU AI Act also addresses General Purpose AI (GPAI) systems: versatile AI systems applicable across a wide range of domains, such as language models and generative AI. Due to their broad applicability, GPAI systems have specific compliance obligations based on their deployment context, with particular attention to risk management and transparency. Providers of GPAI models must meet the requirements of Article 53 within 12 months of the act entering into force, including obligations for documentation, data governance, and human oversight when the models are integrated into high-stakes applications. The role-specific requirements below summarize these duties; a short documentation sketch follows them.

Role-Specific Requirements for General Purpose AI (GPAI)

  • Providers
    • Risk and Impact Assessment: Conduct risk assessments across different potential uses, especially in sensitive contexts.
    • Documentation: Maintain clear documentation covering system capabilities, intended uses, and data governance.
    • Data Governance: Ensure ethical, secure data use is adaptable for various downstream applications.
  • Deployers
    • Context-Specific Compliance: Tailor compliance for each use case, especially in high-stakes applications.
    • User Transparency: Inform users when interacting with AI, explaining its role in decisions that affect them.
    • Human Oversight: Implement oversight to allow human intervention when necessary.
  • Distributors
    • Use Case Transparency: Ensure end-users understand the AI system’s purpose and limitations.
    • Compliance Records: Keep records verifying adherence to provider and deployer standards, especially for regulated uses.
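As an illustration of the documentation theme that runs through these requirements, here is a hypothetical stub of the model documentation a GPAI provider might share with downstream deployers. The structure and field names are assumptions loosely inspired by Article 53, not an official template.

```python
# Hypothetical GPAI model documentation stub for downstream deployers.
# Every name and value here is illustrative.
gpai_model_card = {
    "model_name": "example-general-model",
    "capabilities": ["text generation", "summarization"],
    "intended_uses": ["drafting assistance", "search"],
    "known_limitations": ["may produce inaccurate or fabricated output"],
    "training_data_summary": "Public web text; see data governance policy",
    "downstream_guidance": {
        "high_stakes_use": "Requires human oversight and a context-specific risk assessment",
        "user_disclosure": "Deployers must tell users they are interacting with AI",
    },
}

for section, content in gpai_model_card.items():
    print(f"{section}: {content}")
```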

4. Limited or Minimal Risk

In the final version of the EU AI Act, limited and minimal-risk AI systems face minimal regulatory obligations. These systems are generally unregulated, except where transparency obligations apply: users need to be informed that they are interacting with an AI system or viewing AI-generated content. Examples of limited-risk AI applications include chatbots and AI systems that generate deepfakes; spam filters are a typical minimal-risk example.

Limited or minimal-risk AI systems generally have obligations to:

  • Provide transparency and clear disclosure: Developers and deployers should inform users that they are interacting with an AI system, for example through explicit notifications or labels (see the sketch after this list).
  • Register in the EU Database: Some AI systems considered to pose limited or minimal risk still need to be registered in the EU database according to Article 71.
  • Provide AI Literacy: Article 4 of the EU AI Act requires providers and deployers of AI systems to ensure a sufficient level of AI literacy for the staff and operators of the system.
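As a concrete example of the disclosure obligation, here is a minimal sketch of a chatbot greeting that tells users up front they are talking to an AI. The wording and the open_chat_session function are illustrative assumptions, not prescribed by the act.

```python
AI_DISCLOSURE = "You are chatting with an automated AI assistant, not a human agent."


def open_chat_session(user_name: str) -> str:
    """Start a chat session with an explicit AI disclosure up front."""
    return f"Hi {user_name}! {AI_DISCLOSURE} How can I help you today?"


print(open_chat_session("Alex"))
```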

Although minimal-risk AI systems are largely unregulated, it is still important to follow best practices (especially when required by other laws), such as:

  • Ensuring data privacy: Protecting user data and maintaining confidentiality.
  • Mitigating biases: Addressing potential biases in AI algorithms to ensure fair and unbiased outcomes.

Why is the EU AI Act Important?

The EU AI Act is crucial for fostering trust and safety in AI technologies while promoting innovation.

Here are some key reasons why the act is both important and impactful:

  • Protecting fundamental rights: By classifying AI systems based on risk, the act ensures that applications impacting safety, privacy, and fundamental rights are subject to stringent oversight.
  • Promoting transparency: The act mandates transparency measures, especially for limited and high-risk AI systems, ensuring users are informed about AI interactions.
  • Encouraging innovation: By focusing regulatory efforts on higher-risk categories, the act allows minimal-risk AI systems to thrive with fewer constraints, fostering an environment conducive to innovation.
  • Establishing accountability: The act holds developers and businesses accountable for the ethical deployment of AI technologies, promoting responsible AI practices.
  • Global leadership: The EU AI Act positions the European Union as a leader in AI governance, setting standards that could influence global AI policies.

Overall, the EU AI Act aims to build a trustworthy AI ecosystem where safety and innovation coexist, benefiting individuals, businesses, and society at large.

How Does the EU AI Act Impact Businesses?

As we have seen, businesses must meet specific requirements based on their AI systems’ risk category to comply with the EU AI Act. Here are some examples of how these risk categories can impact businesses:

  1. High-Risk AI Systems
    • Risk Management Systems: Businesses must implement comprehensive risk management frameworks to identify and mitigate potential risks.
    • Transparency and Documentation: Detailed documentation on the AI system’s functionality, data sources, and decision-making processes must be maintained and made accessible.
    • Human Oversight: Mechanisms for human intervention must be established.
    • Conformity Assessments: Assessments are required to ensure ongoing compliance with regulatory standards.
    • Accuracy, Robustness, and Cybersecurity: High-risk systems require security by design and compliance with applicable cybersecurity and safety standards.
  2. General Purpose AI (GPAI)
    • Role-Based Compliance: Businesses deploying GPAI systems must align with role-specific requirements, especially when GPAI is used in high-stakes applications.
    • Transparency and Risk Management: Similar to high-risk systems, GPAI requires transparency in documentation and risk management, with a focus on ensuring these versatile AI models are safely integrated into diverse applications.
    • Human Oversight: GPAI systems must include mechanisms that allow for human intervention, particularly in scenarios where the AI may impact user rights or safety.
  3. Limited or Minimal Risk AI Systems
    • Transparency: Businesses must inform users when they are interacting with an AI system and ensure transparency about the AI system’s operation.
    • Best Practices: While limited or minimal-risk AI systems are largely unregulated, businesses are encouraged to follow best practices in AI development, such as ensuring data privacy and mitigating biases.

Examples of Business Adjustments and Strategies

  • AI in Recruitment: A company using AI for hiring must ensure the system is unbiased and transparent and inform applicants about how their data is used and assessed.
  • Customer Service Chatbots: Businesses deploying chatbots should clearly inform users that they are interacting with an AI, enhancing trust and user experience.
  • Spam Filtering: Companies utilizing AI for spam detection can operate with fewer restrictions but should still adhere to data protection regulations to maintain user trust.

By following the EU AI Act, businesses can ensure compliance, build customer trust, and support responsible AI innovation.

Conclusion

For businesses, compliance with the EU AI Act means implementing necessary measures based on the risk level of the application. High-risk systems demand rigorous oversight and transparency, while limited-risk systems require clear user disclosures. Minimal-risk systems face the least regulation, fostering an environment conducive to innovation. By following these requirements, businesses can develop and use AI responsibly, gain user trust, and contribute to a secure AI ecosystem.

Contact us today to learn more!