Understanding EU AI Act Risk Categories

The European Union (EU) AI Act is the first comprehensive legislative framework of its kind. It aims to ensure that AI systems are developed and utilized in a manner that protects fundamental human rights and is consistent with the principles of trustworthy AI.

AI is changing industries and societies, presenting both opportunities and challenges. Recognizing the need for a structured approach to managing the resulting risks to fundamental rights, the Council of the EU approved the final text after long negotiations, and the act was published in the EU Official Journal on July 12, 2024. The act classifies AI systems into risk categories to safeguard the public interest while promoting innovation.

What Are the Risk Categories in the EU AI Act?

The EU AI Act divides AI systems into four risk categories: unacceptable, high, limited, and minimal risk.

This classification is designed to address the varying degrees of potential harm and ethical concerns associated with different AI applications. The four categories are:

  1. Unacceptable Risk – AI systems that are deemed too dangerous or unethical to be allowed.
  2. High Risk – AI applications that have significant implications for individual rights and safety.
  3. Limited Risk – Systems that require transparency and some level of oversight.
  4. Minimal Risk – AI systems that pose little to no risk and are subject to minimal regulation.

Each category has specific criteria and regulatory requirements.
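
To make the tiers concrete, the sketch below shows how an organization might encode them in an internal triage helper. This is a minimal illustration, not a legal classifier: the tier names track the act, but the keyword sets and the triage function are our own assumptions, and a real determination requires assessment against Article 5, Annex III, and Article 43.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices (Article 5)
    HIGH = "high"                  # strict obligations (Annex III)
    LIMITED = "limited"            # transparency duties
    MINIMAL = "minimal"            # largely unregulated

# Hypothetical keyword sets for a first-pass triage only.
PROHIBITED_USES = {"social_scoring", "subliminal_manipulation"}
ANNEX_III_AREAS = {"biometric_id", "critical_infrastructure",
                   "education", "employment", "law_enforcement"}
TRANSPARENCY_USES = {"chatbot", "deepfake_generation"}

def triage(use_case: str) -> RiskTier:
    """Rough first-pass classification of an AI use case."""
    if use_case in PROHIBITED_USES:
        return RiskTier.UNACCEPTABLE
    if use_case in ANNEX_III_AREAS:
        return RiskTier.HIGH
    if use_case in TRANSPARENCY_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(triage("employment"))  # RiskTier.HIGH
```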

1. Unacceptable Risk

AI applications with unacceptable risk are those that are too harmful or unethical to deploy. These applications are prohibited under the AI Act.

Article 5 outlines ‘Prohibited Artificial Intelligence Practices’. The EU AI Act bans these systems to protect individuals and society from potential harm. Examples of AI systems that fall into this category include:

  • AI systems that manipulate human behavior through subliminal techniques that can influence decisions without the user’s awareness.
  • Systems that exploit the vulnerabilities of certain groups, such as children, elderly people, or those with disabilities.
  • Government use of AI for social scoring, where individuals are evaluated based on their behavior, affecting their access to services and opportunities.

These prohibitions take effect on a shorter timeline than the rest of the act: they apply just six months after the regulation’s entry into force, by which point any such practices must be terminated.

2. High Risk

AI systems that significantly impact individuals’ rights, health, or safety are classified as high risk and are subject to strict regulations and oversight.

High-risk AI applications have the potential to affect crucial aspects of health, safety, or fundamental rights and therefore require stringent regulatory measures. AI systems used as safety components, as well as those listed in Annex III of the act (such as biometric systems), fall within the high-risk category. Providers of systems in this category may need to conduct a ‘conformity assessment’, which can be performed either internally or externally by a conformity assessment body (a notified body). Article 31 outlines the requirements relating to notified bodies.

Annex III and Article 43 should be consulted to determine whether an application falls under high risk and whether a conformity assessment is needed. Examples of high-risk AI applications include:

  • AI in critical infrastructure: Systems used in areas such as electricity, water supply, and transportation, where failures can have severe consequences.
  • AI in educational settings: Technologies that impact access to education and learning opportunities, such as automated grading systems.
  • AI in employment: Tools used in hiring processes, performance evaluations, and promotions, which can significantly affect people’s careers.
  • AI in law enforcement and biometric identification: Applications used for facial recognition, predictive policing, border control, and other law enforcement activities, where accuracy and fairness are paramount.

For these high-risk systems, the EU AI Act requires:

  • Risk management systems: Article 9 requires companies to implement a comprehensive risk management framework for high-risk AI systems. The NIST AI RMF is one framework to consider for structuring that work.
  • Transparency, record keeping, data governance, and documentation: A broad set of obligations detailed in Articles 10-13, covering data governance, technical documentation, automatic event logging, and the information that must be provided to users.
  • Human oversight: Article 14 stipulates that mechanisms must be designed and developed to ensure that human intervention remains possible and effective in overseeing AI decisions and actions.
  • Accuracy, Robustness, and Cybersecurity: Article 15 can be interpreted as a security-by-design requirement. It requires high-risk AI systems to be designed and developed in such a way that they can achieve an appropriate level of accuracy, robustness, and cybersecurity.
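
The act sets these obligations without prescribing an implementation. As a hedged sketch of how record keeping (Article 12) and human oversight (Article 14) might surface in engineering practice, the code below wraps a scoring call with an audit log and a low-confidence escalation hook. The threshold value, function names, and reviewer queue are all assumptions made for illustration.

```python
import json
import logging
import time
from typing import Callable

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

REVIEW_THRESHOLD = 0.8  # assumed value; the act does not fix a number

def predict_with_oversight(
    predict: Callable[[dict], float],
    features: dict,
    escalate: Callable[[dict], None],
) -> dict:
    """Score a case, keep an auditable record of the event, and route
    low-confidence decisions to a human reviewer."""
    score = predict(features)
    record = {
        "timestamp": time.time(),
        "input": features,
        "score": score,
        "needs_human_review": score < REVIEW_THRESHOLD,
    }
    audit_log.info(json.dumps(record))  # persistent event log
    if record["needs_human_review"]:
        escalate(record)  # hand off to a human-in-the-loop queue
    return record

# Toy usage with stand-in functions.
predict_with_oversight(
    predict=lambda f: 0.65,
    features={"applicant_id": 42},
    escalate=lambda r: print("queued for human review:", r["input"]),
)
```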

Requirements for high-risk AI systems will apply 24 to 36 months after the regulation comes into force (36 months for systems that require a conformity assessment).

It is worth mentioning that the EU AI Act also addresses general-purpose AI (GPAI): AI that can be utilized in a wide range of applications and integrated into various downstream systems (such as generative image, voice, and text generation tools). Article 53 outlines the requirements, which must be met 12 months after the regulation comes into force.

3. Limited Risk

AI applications with limited risk require transparency measures. Users need to be informed that they are interacting with an AI system.

Limited-risk AI systems are those that do not significantly impact individuals’ rights and safety but still require some degree of regulatory oversight. Examples of limited-risk AI applications include:

  • Chatbots: Automated systems that simulate human conversation in customer service or other interactions.
  • AI systems generating deepfakes: Tools that create realistic but synthetic media content, provided they disclose their nature to users.

To comply with the EU AI Act, limited-risk AI systems must:

  • Provide clear disclosure: Inform users that they are interacting with an AI system. This can be done through explicit notifications or labels.
  • Ensure transparency: Offer information on the purpose and capabilities of the AI system, allowing users to understand its limitations and potential biases.
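
A disclosure can be as simple as a fixed notice emitted before the first exchange. The sketch below illustrates the idea; the message wording and session structure are our own assumptions, since the act prescribes the obligation to inform, not a particular format.

```python
# Assumed disclosure text; the act requires informing the user,
# not this specific wording.
AI_DISCLOSURE = "You are chatting with an automated assistant, not a human agent."

def start_chat_session(user_id: str) -> list[dict]:
    """Open a session whose first message is an explicit AI disclosure."""
    return [{"role": "notice", "user_id": user_id, "text": AI_DISCLOSURE}]

session = start_chat_session("u-123")
print(session[0]["text"])
```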

4. Minimal Risk

AI systems posing minimal or no risk to individuals’ rights and safety are categorized as minimal risk. These systems are largely unregulated under the AI Act.

Minimal-risk AI applications are those that have negligible impact on people’s lives and do not pose significant ethical or safety concerns. Examples of minimal-risk AI systems include:

  • Spam filters: AI tools that identify and filter out unsolicited and harmful email content.
  • Personalized content recommendations: Systems that suggest movies, music, or products based on user preferences and behavior.

Although these systems are largely unregulated under the AI Act, it is still important to follow best practices, especially where other laws require them, such as:

  • Ensuring data privacy: Protecting user data and maintaining confidentiality.
  • Mitigating biases: Addressing potential biases in AI algorithms to ensure fair and unbiased outcomes.
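
As one example of what a lightweight bias check could look like, the sketch below computes a demographic parity gap over binary decisions grouped by a protected attribute. The metric choice and data shape are illustrative assumptions; this is one of many possible fairness checks, not something the act mandates for minimal-risk systems.

```python
from collections import defaultdict

def demographic_parity_gap(decisions: list[tuple[str, bool]]) -> float:
    """Difference between the highest and lowest positive-outcome rates
    across groups; 0.0 means all groups are approved at the same rate."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        positives[group] += approved
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Toy data: (group label, approved?)
sample = [("a", True), ("a", True), ("a", False),
          ("b", True), ("b", False), ("b", False)]
print(round(demographic_parity_gap(sample), 3))  # 0.333
```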

By allowing minimal regulation for these low-risk applications, the EU AI Act encourages innovation and the widespread adoption of AI technologies without imposing unnecessary constraints.

Why is the EU AI Act Important?

The EU AI Act is crucial for fostering trust and safety in AI technologies while promoting innovation.

Here are some key reasons why the act is both important and impactful:

  • Protecting fundamental rights: By classifying AI systems based on risk, the act ensures that applications impacting safety, privacy, and fundamental rights are subject to stringent oversight.
  • Promoting transparency: The act mandates transparency measures, especially for limited and high-risk AI systems, ensuring that users are informed about AI interactions.
  • Encouraging innovation: By focusing regulatory efforts on higher-risk categories, the act allows minimal-risk AI systems to thrive with fewer constraints, fostering an environment conducive to innovation.
  • Establishing accountability: The act holds developers and businesses accountable for the ethical deployment of AI technologies, promoting responsible AI practices.
  • Global leadership: The EU AI Act positions the European Union as a leader in AI governance, setting standards that could influence global AI policies.

Overall, the EU AI Act aims to build a trustworthy AI ecosystem where safety and innovation coexist, benefiting individuals, businesses, and society at large.

How Does the EU AI Act Impact Businesses?

As we have seen so far, businesses must meet specific requirements based on their AI systems’ risk category to comply with the EU AI Act. Here are some examples of how these risk categories can impact businesses:

  1. High-Risk AI Systems
    • Risk Management Systems: Businesses must implement comprehensive risk management frameworks to identify and mitigate potential risks.
    • Transparency and Documentation: Detailed documentation on the AI system’s functionality, data sources, and decision-making processes must be maintained and made accessible.
    • Human Oversight: Mechanisms for human intervention must be established.
    • Conformity Assessments: Assessments are required to ensure ongoing compliance with regulatory standards.
    • Accuracy, Robustness, and Cybersecurity: Systems must be secure by design and comply with applicable cybersecurity and safety standards.
  2. Limited Risk AI Systems
    • Transparency: Businesses must inform users when they are interacting with an AI system and ensure transparency about the AI system’s operation.
  3. Minimal Risk AI Systems
    • While minimal-risk AI systems are largely unregulated, businesses are encouraged to follow best practices in AI development, such as ensuring data privacy and mitigating biases.

Examples of Business Adjustments and Strategies

  • AI in Recruitment: A company using AI for hiring must ensure the system is unbiased and transparent and inform applicants about how their data is used and assessed.
  • Customer Service Chatbots: Businesses deploying chatbots should clearly inform users that they are interacting with an AI, enhancing trust and user experience.
  • Spam Filtering: Companies utilizing AI for spam detection can operate with fewer restrictions but should still adhere to data protection regulations to maintain user trust.

By following the EU AI Act, businesses can ensure compliance, build customer trust, and support responsible AI innovation.

Conclusion

Understanding the risk categories defined by the EU AI Act is essential for businesses and developers working with AI technologies. The EU AI Act categorizes AI systems into unacceptable risk, high risk, limited risk, and minimal risk, each with specific regulatory requirements to ensure safety and transparency. This structured approach balances innovation with the need to protect fundamental rights.

For businesses, compliance with the EU AI Act means implementing necessary measures based on the risk level of the application. High-risk systems demand rigorous oversight and transparency, while limited-risk systems require clear user disclosures. Minimal-risk systems face the least regulation, fostering an environment conducive to innovation. By following these requirements, businesses can develop and use AI responsibly, gain user trust, and contribute to a secure AI ecosystem.

Contact us today to learn more!