The European Union’s Artificial Intelligence Act (the EU AI Act) is set to bring significant changes across organizations, affecting data leaders and everyone involved in integrating AI tools into business processes. Compliance with this new legislation will require a collective effort to ensure that AI systems meet its stringent governance, transparency, and oversight standards.
The EU AI Act’s risk categories explained
The EU AI Act classifies AI systems according to the level of risk they pose. The Act’s regulatory framework defines four levels of risk:
Unacceptable risk
Unacceptable risk is the highest level of risk. This tier covers eight types of AI applications that are considered incompatible with EU values and fundamental rights. These are applications related to:
- Subliminal manipulation: Materially distorting a person’s behavior without their awareness in a way that causes or is likely to cause them harm.
- Exploitation of the vulnerabilities of persons resulting in harmful behavior: This includes vulnerabilities due to a person’s social or economic situation, age, or physical or mental ability. For instance, a voice-assisted toy that could encourage children to do dangerous things.
- Biometric categorization of persons based on sensitive characteristics: This includes gender, ethnicity, political orientation, religion, sexual orientation, and philosophical beliefs.
- General purpose social scoring: Using AI systems to rate individuals based on their personal characteristics, social behavior, and activities, such as online purchases or social media interactions.
- Real-time remote biometric identification (in public spaces): The use of real-time remote biometric identification systems in publicly accessible spaces is banned, subject only to narrow law-enforcement exceptions.
- Assessing the emotional state of a person: This prohibition applies to AI systems used in the workplace or in education. Emotion recognition may still be permitted as a high-risk application where it serves a safety purpose.
- Predictive policing: Assessing the risk of a person committing a future crime based solely on personal traits.
- Scraping facial images: Creating or expanding databases with untargeted scraping of facial images available on the internet or from video surveillance footage.
High risk
AI systems identified as high-risk include AI technology used in:
- Critical infrastructure that could put the life and health of citizens at risk (e.g., transport systems)
- Educational or vocational training that may determine a person’s access to education and professional path (e.g., the scoring of exams)
- Safety components of products (e.g., AI applications in robot-assisted surgery)
- Employment, management of workers, and access to self-employment (e.g., CV-sorting software for recruitment procedures)
- Essential private and public services (e.g., credit scoring that denies citizens the opportunity to obtain a loan)
- Law enforcement that may interfere with people’s fundamental rights (e.g., evaluating the reliability of evidence)
- Migration, asylum, and border control management (e.g., automated examination of visa applications)
- Administration of justice and democratic processes (e.g., AI solutions for searching court rulings)
High-risk AI systems are subject to strict obligations before they can be put on the market, including:
- Adequate risk assessment and mitigation systems
- High quality of the datasets feeding the system to minimize risks and discriminatory outcomes
- Logging of activity to ensure traceability of results (see the sketch after this list)
- Detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance
- Clear and adequate information to the deployer
- Appropriate human oversight measures to minimize risk
- High level of robustness, security, and accuracy
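To make the logging obligation above more concrete, here is a minimal, hypothetical sketch of how a deployer might record each automated decision of a high-risk system for later traceability. The Act does not prescribe any particular record format or storage mechanism; the `log_decision` function, its field names, and the JSON-lines file below are illustrative assumptions only.

```python
import json
import uuid
from datetime import datetime, timezone

# Hypothetical audit log: the Act requires traceability of results,
# but this record format and storage choice are illustrative only.
AUDIT_LOG_PATH = "high_risk_audit.jsonl"

def log_decision(model_id: str, model_version: str,
                 inputs: dict, output: dict, operator: str) -> str:
    """Append one traceable record per automated decision."""
    record = {
        "event_id": str(uuid.uuid4()),          # unique reference for later review
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,          # ties the output to an exact model
        "inputs": inputs,                        # what the system was asked
        "output": output,                        # what it answered
        "operator": operator,                    # who ran it (supports human oversight)
    }
    with open(AUDIT_LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["event_id"]

# Example: logging one (hypothetical) credit-scoring decision
event_id = log_decision(
    model_id="credit-scoring",
    model_version="2.4.1",
    inputs={"applicant_id": "A-1042", "income_band": "C"},
    output={"score": 640, "decision": "refer_to_human"},
    operator="analyst@example.com",
)
```

An append-only log like this makes it possible to reconstruct, for any individual decision, which model version produced it and under whose supervision, which is the practical core of the traceability requirement.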
Limited risk
Limited risk refers to the risks associated with a lack of transparency in AI usage. The EU AI Act introduces specific transparency obligations to ensure that humans are informed when necessary, fostering trust.
For instance, when using AI systems such as chatbots, people should be made aware that they are interacting with a machine so they can make an informed decision to continue or step back.
Providers also have to ensure that AI-generated content is identifiable. In addition, AI-generated text published to inform the public on matters of public interest must be labeled as artificially generated; the same applies to audio and video content constituting deepfakes.
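As a simple illustration of this labeling obligation, the hypothetical helper below attaches both a machine-readable record and a human-readable disclosure to generated text. The Act does not mandate this exact mechanism; the `LabeledContent` structure and its disclosure wording are assumptions made for the sketch.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical disclosure wrapper: the Act requires AI-generated content
# to be identifiable, but this particular structure is illustrative only.
@dataclass
class LabeledContent:
    body: str
    generator: str  # which system produced the content
    generated_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    disclosure: str = "This content was generated by an AI system."

    def render(self) -> str:
        """Human-readable output with the disclosure attached."""
        return f"{self.body}\n\n[{self.disclosure} Source: {self.generator}]"

# Example: labeling a hypothetical AI-written public-interest summary
article = LabeledContent(
    body="Municipal budget summary ...",
    generator="newsroom-llm-v3",  # hypothetical model name
)
print(article.render())
```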
Minimal or no risk
The EU AI Act allows the free use of minimal-risk AI, including applications such as AI-enabled video games or spam filters. Most AI systems currently used in the EU fall into this category.
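To tie the four tiers together, the sketch below models them as a simple enumeration with a lookup from example use cases to tiers. The mapping mirrors the examples in this article; it is an illustrative assumption, not an official classification tool, and a real assessment must follow the Act's own criteria.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # allowed, with strict obligations
    LIMITED = "limited"            # transparency obligations apply
    MINIMAL = "minimal"            # free use

# Illustrative mapping based on the examples above; not a legal assessment.
EXAMPLE_CLASSIFICATION = {
    "general_purpose_social_scoring": RiskTier.UNACCEPTABLE,
    "exam_scoring": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def tier_for(use_case: str) -> RiskTier:
    """Look up an example use case; unknown cases need a real assessment."""
    try:
        return EXAMPLE_CLASSIFICATION[use_case]
    except KeyError:
        raise ValueError(f"No illustrative tier recorded for {use_case!r}")

print(tier_for("customer_chatbot"))  # RiskTier.LIMITED
```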
Conclusion
The EU AI Act establishes a comprehensive, risk-based framework for regulating artificial intelligence in the European Union, aiming to protect fundamental rights while fostering innovation. By classifying AI systems into four risk levels (unacceptable, high, limited, and minimal) based on their potential impact on individuals and society, the Act seeks to mitigate harm while allowing safe and ethical AI to flourish.
High-risk AI systems face stringent requirements to ensure safety and transparency, while limited and minimal-risk applications are subject to lighter or no restrictions, promoting accessibility and technological progress.
As AI advances, the EU AI Act is poised to play a crucial role in shaping responsible AI deployment within Europe and possibly influencing global standards.
—
Are you interested in learning more about using your data as an asset to achieve higher data governance and quality levels? Book a demo today to get started on your organization’s journey to complete data lifecycle management with DataGalaxy!