Guillermo Llopis

Is My AI System High-Risk? How to Classify Under the EU AI Act

The EU AI Act's biggest enforcement date (August 2, 2026) is now less than six months away. That's when the full requirements for "high-risk" AI systems become enforceable, including mandatory risk management, technical documentation (Annex IV), conformity assessment, human oversight, and post-market monitoring. Fines for non-compliance: up to €15M or 3% of global annual turnover, whichever is higher.
The tricky part is figuring out whether your system counts as high-risk. The Commission was supposed to publish classification guidelines by February 2, but missed that deadline. So here's a practical breakdown based on the regulation itself.
Two pathways to high-risk:

  1. Product safety (Annex I): Your AI is a safety component of a regulated product (medical devices, machinery, vehicles). Applies from August 2027.
  2. Standalone (Annex III): Your AI operates in one of eight domains: biometrics, critical infrastructure, education, employment, essential services (credit scoring, insurance), law enforcement, migration/borders, or justice/democracy. Applies from August 2026.

The exception most people miss, Article 6(3):

Not every system touching these domains is automatically high-risk. If your AI performs a narrow procedural task (like extracting contact details from CVs without ranking candidates), improves the result of a completed human activity, or only flags patterns for human review, it may qualify for an exception.

BUT: if your system profiles individuals in any way (evaluating performance, predicting behavior, assessing personality), it's always high-risk, regardless of the exception conditions. A CV parser that extracts dates = not high-risk. A CV parser that ranks candidates = high-risk.

If you claim the exception, you must document it (Article 6(4)) before placing the system on the market. No documentation = non-compliance, even if the system genuinely qualifies. (There's a sketch of this decision logic at the end of the post.)

Conformity assessment:

For most Annex III systems, you can self-assess (internal control). Third-party assessment is required only for biometric identification systems, or if you haven't applied the harmonized standards.

Common mistakes I see:

  - Classifying after development instead of during design
  - Assuming non-EU companies are exempt (the Act applies where the system is used, not where it's built)
  - Treating classification as one-time (it must be reassessed when the intended purpose changes)
  - Underestimating what counts as "profiling"

Spain's AESIA has already published 16 practical guidance documents if you want something concrete to work from while waiting for the Commission's delayed guidelines.

I wrote a more detailed breakdown with examples of what is and isn't high-risk here: https://annexa.eu/blog/is-my-ai-system-high-risk/
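To make the Annex III logic concrete, here's a minimal Python sketch of the decision flow described above (Articles 6(2)-6(4)). The `AISystem` fields and the `classify` function are my own naming for illustration, not anything defined by the Act or an official tool; treat it as a self-check, not legal advice.

```python
from dataclasses import dataclass

# The eight Annex III domains listed above.
ANNEX_III_DOMAINS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration_borders",
    "justice_democracy",
}

@dataclass
class AISystem:
    # All field names are hypothetical, chosen for this sketch.
    domain: str                          # Annex III domain, or "" if none
    profiles_individuals: bool           # evaluates/predicts/assesses people
    narrow_procedural_task: bool         # e.g. extracting dates from CVs
    improves_completed_human_work: bool  # improves an already-finished human activity
    only_flags_for_human_review: bool    # detects patterns, leaves decisions to humans
    exception_documented: bool           # Article 6(4) assessment on file

def classify(system: AISystem) -> str:
    if system.domain not in ANNEX_III_DOMAINS:
        return "outside Annex III (check the Annex I product-safety path separately)"
    # Profiling individuals overrides every exception condition.
    if system.profiles_individuals:
        return "high-risk"
    # Article 6(3) exception conditions (any one may qualify).
    if (system.narrow_procedural_task
            or system.improves_completed_human_work
            or system.only_flags_for_human_review):
        if not system.exception_documented:
            return "non-compliant: exception claimed without Article 6(4) documentation"
        return "not high-risk (documented Article 6(3) exception)"
    return "high-risk"

# The two CV parsers from the post:
extractor = AISystem("employment", False, True, False, False, True)
ranker = AISystem("employment", True, False, False, False, False)
print(classify(extractor))  # not high-risk (documented Article 6(3) exception)
print(classify(ranker))     # high-risk
```

The profiling check deliberately comes before the exception checks: as noted above, profiling makes an Annex III system high-risk no matter which exception conditions would otherwise apply.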
