EU AI Act Annex III
Category of the EU AI Act covering AI systems with significant impact on health, safety, or fundamental rights: biometric identification, critical infrastructure, education, employment (recruitment, evaluation), access to essential services (credit scoring, insurance, social benefits), law enforcement, migration, justice. Strict obligations: risk management, data governance, technical documentation, automatic logging, human oversight (HITL), accuracy and cybersecurity, conformity assessment, CE marking. Most obligations apply from 2 August 2026; high-risk systems embedded in regulated products have until 2 August 2027 at the latest.
Under Article 6(2), AI systems listed in Annex III of the EU AI Act (Regulation (EU) 2024/1689) are classified as high-risk because they pose a significant risk to the health, safety, or fundamental rights of natural persons. The category is the cornerstone of the regulation's risk-based approach.
Annex III enumerates eight domains: biometrics (remote biometric identification, emotion recognition); management of critical infrastructure (transport, water, energy); education and vocational training (admission, scoring, monitoring of cheating); employment (recruitment, evaluation, allocation of tasks, monitoring); access to essential private and public services (credit scoring, social benefits, emergency triage, life and health insurance pricing); law enforcement; migration, asylum, and border control; and administration of justice and democratic processes.
Obligations on providers of high-risk AI are structured by Chapter III, Section 2 of the Act: a risk management system (Article 9), data and data governance (Article 10), technical documentation (Article 11), automatic record-keeping (Article 12), transparency and provision of information to deployers (Article 13), effective human oversight (Article 14, the HITL requirement), and accuracy, robustness, and cybersecurity (Article 15). Providers must conduct a conformity assessment, draw up an EU declaration of conformity, and affix the CE marking before placing the system on the market.
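To make the record-keeping obligation concrete, here is a minimal sketch of an automatic event log for a high-risk system, assuming an append-only JSON-lines file; the field names, the example system, and the storage choice are illustrative assumptions, not something prescribed by Article 12.

```python
# Minimal sketch of automatic record-keeping (Article 12); fields and storage
# are illustrative assumptions, not requirements set out in the Act.
import json
import uuid
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from pathlib import Path

@dataclass
class InferenceEvent:
    event_id: str                # unique identifier for traceability
    timestamp_utc: str           # when the system produced the output
    system_version: str          # model and pipeline version actually used
    input_reference: str         # pointer to the input data, not the data itself
    output_summary: str          # decision or score produced by the system
    human_reviewer: str | None   # who validated the output, if anyone (Article 14)

def log_event(event: InferenceEvent, log_path: Path = Path("ai_act_events.jsonl")) -> None:
    """Append one event to the log; logs must be retained per deployer obligations."""
    with log_path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(event)) + "\n")

# Hypothetical credit-scoring example.
log_event(InferenceEvent(
    event_id=str(uuid.uuid4()),
    timestamp_utc=datetime.now(timezone.utc).isoformat(),
    system_version="credit-scoring-model@2.3.1",
    input_reference="application/2024-118342",
    output_summary="score=612, decision=refer_to_analyst",
    human_reviewer=None,  # filled in once an analyst reviews the case
))
```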
Deployers (users) carry their own obligations: use in accordance with the instructions for use, monitoring of operation, log retention, informing affected workers, and, for certain public-sector and essential-service uses, a fundamental rights impact assessment (Article 27).
The European Commission's proposal for an AI Regulation was published in April 2021. After nearly three years of negotiations, concluded in trilogues between the Council, Parliament, and Commission, the final text was adopted on 13 March 2024 and published in the Official Journal of the European Union on 12 July 2024 as Regulation (EU) 2024/1689.
The Act entered into force on 1 August 2024 with a phased application: prohibited practices apply from 2 February 2025, general-purpose AI model rules from 2 August 2025, the bulk of obligations (including most high-risk requirements) from 2 August 2026, and high-risk systems integrated into products already covered by sectoral legislation (machinery, medical devices, toys, lifts) by 2 August 2027 at the latest.
Governance is handled by the European AI Office within the Commission, supported by a European Artificial Intelligence Board, a scientific panel of independent experts, and national market surveillance authorities. The AI Act applies extraterritorially: providers placing AI on the EU market must comply regardless of where they are established, as must providers and deployers in third countries whenever the system's output is used in the EU.
For an executive, classifying an AI system as high-risk turns the project from a simple tool selection into a compliance program with documentary, organizational, and technical requirements. Penalties under Article 99 reach up to €35 million or 7% of worldwide annual turnover (whichever is higher) for prohibited practices, and up to €15 million or 3% for breaches of high-risk obligations.
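As an illustration of how the Article 99 ceilings combine a fixed amount with a turnover percentage, the sketch below applies the "whichever is higher" rule to a hypothetical turnover figure.

```python
# Illustrative computation of Article 99 penalty ceilings; turnover is hypothetical.
def penalty_ceiling_eur(worldwide_turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    # For undertakings, the ceiling is the higher of the fixed amount and the
    # percentage of total worldwide annual turnover for the preceding year.
    return max(fixed_cap_eur, pct * worldwide_turnover_eur)

turnover = 2_000_000_000  # hypothetical €2bn worldwide annual turnover
print(penalty_ceiling_eur(turnover, 35_000_000, 0.07))  # prohibited practices: 140,000,000.0
print(penalty_ceiling_eur(turnover, 15_000_000, 0.03))  # high-risk breaches:    60,000,000.0
```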
The total cost of an AI initiative must therefore include conformity costs (quality system, documentation, logging, human oversight, audits) and ongoing maintenance. A POC delivered quickly can become blocked before production if compliance is bolted on at the end. The most exposed sectors in the EU market are banking and insurance (credit scoring, claims assessment), healthcare (diagnostic aids, triage), HR (CV screening, evaluation), and industry (critical-infrastructure components).
The right reflex: qualify the AI Act risk category at the scoping stage, not at the production-readiness review.
For any AI engagement that may fall under the high-risk category, we start with an AI Act risk qualification at scoping. A short matrix crosses Annex III with the client context and the functional scope, producing a clear category and an estimate of the documentation effort.
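As an illustration of what such a matrix can look like in practice, here is a minimal sketch that crosses the Annex III domains with two context questions; the domain labels, the questions, and the Article 6(3) shortcut are simplified assumptions for a first triage, not legal advice.

```python
# Simplified scoping-stage triage against Annex III; a first pass only.
ANNEX_III_DOMAINS = {
    "biometrics": "remote identification, emotion recognition",
    "critical_infrastructure": "safety components for transport, water, energy",
    "education": "admission, scoring, exam monitoring",
    "employment": "recruitment, evaluation, task allocation, monitoring",
    "essential_services": "credit scoring, benefits, emergency triage, insurance pricing",
    "law_enforcement": "risk assessment, evidence evaluation",
    "migration_border": "asylum, visa, border control",
    "justice_democracy": "judicial assistance, democratic processes",
}

def qualify(domain: str | None, output_influences_decision: bool, purely_preparatory: bool) -> str:
    """Coarse first pass; the result is confirmed with legal and compliance teams."""
    if domain is None or domain not in ANNEX_III_DOMAINS:
        return "outside Annex III - check transparency and GPAI obligations instead"
    if purely_preparatory and not output_influences_decision:
        return "Annex III domain but possibly exempt (Article 6(3)) - document the analysis"
    return "presumed high-risk - budget for Articles 9-15 and the conformity assessment"

# Example: a CV-screening tool whose output feeds hiring decisions.
print(qualify("employment", output_influences_decision=True, purely_preparatory=False))
```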
When the system is high-risk, we build compliance in by design: a HITL framework sized to the use case (pre-decision validation, post-decision audit, or hybrid), an event log usable for audit, training data traceability per Article 10, and living technical documentation versioned alongside the code. We engage with the client's legal and compliance teams from day one, not at production readiness.
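As a sketch of the pre-decision variant of such a HITL framework, assuming hypothetical names, the gate below routes every AI proposal through a human before the decision takes effect; in a real deployment the review outcome would also feed the Article 12 event log.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Proposal:
    case_id: str
    ai_decision: str   # what the model proposes
    confidence: float  # model confidence, used only for routing here

def decide(proposal: Proposal, human_review: Callable[[Proposal], str],
           auto_threshold: float = 1.01) -> str:
    """Pre-decision validation: with auto_threshold above 1.0, every case is
    reviewed by a human; lowering it shifts toward post-decision audit."""
    if proposal.confidence >= auto_threshold:
        return proposal.ai_decision        # automatic path, unused in strict mode
    return human_review(proposal)          # the human confirms or overrides

# Example: the reviewer overrides an automated rejection and routes it to an analyst.
outcome = decide(
    Proposal(case_id="application/2024-118342", ai_decision="reject", confidence=0.87),
    human_review=lambda p: "refer_to_analyst",
)
print(outcome)  # -> refer_to_analyst
```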
Our principle: a high-risk AI system delivered in six months without compliance scaffolding is six months of rework before production. We prefer slightly longer scoping and a calm production launch.