The August 2026 deadline
The EU AI Act (Regulation 2024/1689) entered into force on August 1, 2024. It is the world's first comprehensive, binding AI regulation. The compliance timeline is phased:
| Date | Milestone |
|---|---|
| Feb 2, 2025 | Prohibitions on unacceptable-risk AI practices take effect (social scoring, manipulative AI, untargeted facial scraping) |
| Aug 2, 2025 | Governance provisions apply. GPAI model obligations take effect. Member states designate national competent authorities. |
| Aug 2, 2026 | Full compliance for high-risk AI systems. Articles 9, 12, 13, 14 and all Chapter III requirements. |
| Aug 2, 2027 | Extended deadline for AI embedded in regulated products (medical devices, vehicles). |
If your AI agents operate in a high-risk domain (Annex III), you have until August 2, 2026 to implement risk management, record-keeping, transparency, and human oversight controls.
What applies to AI agents
AI agents are not a separate regulatory category in the EU AI Act. They qualify as "AI systems" under Article 3(1) because they operate with "varying levels of autonomy" and "exhibit adaptiveness after deployment." The regulatory profile depends on what the agent does, not its internal architecture.
An agent screening job applications triggers Annex III high-risk classification. An agent summarizing meeting notes triggers only Article 50 transparency obligations. The classification is domain-based.
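The domain-based classification above can be sketched as a simple lookup. This is an illustrative sketch only, not legal advice: the function name and domain labels are hypothetical, and the Annex III list is abridged to the examples this article gives.

```python
# Hypothetical sketch: Annex III list abridged to the article's examples.
ANNEX_III_DOMAINS = {
    "employment", "credit_scoring", "healthcare_triage",
    "critical_infrastructure", "education_admissions",
    "law_enforcement", "migration_border_control",
}

def classify_agent(domain: str) -> str:
    """Return the risk profile implied by the agent's operating domain."""
    if domain in ANNEX_III_DOMAINS:
        return "high-risk"            # full Chapter III obligations apply
    return "transparency-only"        # Article 50 disclosure obligations

print(classify_agent("employment"))          # high-risk
print(classify_agent("meeting_summaries"))   # transparency-only
```

The point of the sketch is that nothing about the agent's internals appears in the decision: only the domain it operates in.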
Article 9: Risk management
Article 9 requires a "continuous iterative process" throughout the AI system's entire lifecycle. For autonomous agents, this means:
- Identify risks from the agent's external actions, data flows, connected systems, and affected persons.
- Evaluate under intended use AND foreseeable misuse. What happens if the agent is prompt-injected? What if it receives adversarial input?
- Adopt mitigation measures. Article 9(5) specifically requires measures that "eliminate or reduce identified risks through design."
- Test before deployment with predefined metrics aligned to the intended purpose.
Pre-dispatch policy enforcement is risk reduction by design. Instead of hoping the agent behaves correctly, every action is evaluated against risk rules before execution. This is exactly what Article 9(5) describes.
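Pre-dispatch evaluation can be sketched in a few lines. This is a minimal illustration under assumed names (`evaluate`, `RULES` and the tag vocabulary are hypothetical), not any particular product's API:

```python
def evaluate(action_tags: set, rules: list) -> tuple:
    """Pre-dispatch gate: first matching rule wins; no match falls back to deny."""
    for rule in rules:
        if rule["risk_tags"] & action_tags:          # any tag overlap matches
            return rule["decision"], rule["id"], rule["reason"]
    return "deny", "default-deny", "no matching rule (fail-closed default)"

RULES = [
    {"id": "deny-destructive", "risk_tags": {"destructive"},
     "decision": "deny", "reason": "Art. 9(5): eliminate risk through design"},
    {"id": "approve-writes", "risk_tags": {"write"},
     "decision": "require_approval", "reason": "Art. 14: human oversight"},
]

decision, rule_id, reason = evaluate({"destructive", "write"}, RULES)
# First match wins: the destructive-action rule denies before the write rule fires.
```

Rule ordering is itself a risk-management decision: the most restrictive rules go first so a risky action can never be rescued by a laxer rule further down.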
Article 12: Record-keeping
Article 12(1): "High-risk AI systems shall technically allow for the automatic recording of events (logs) over the lifetime of the system."
For autonomous agents, this means logging:
- Every tool call, API invocation, and external action
- The policy decision for each action (allow, deny, require approval)
- The policy version and rule that produced each decision
- Approval records: who approved, when, under which policy snapshot
- Execution outcomes: success, failure, timeout, quarantine
A critical compliance gap identified by researchers: "high-risk agentic systems with untraceable behavioral drift cannot currently satisfy the essential requirements of the AI Act." Without versioned snapshots of operational state and automated drift detection, you cannot demonstrate compliance.
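One way to make such logs tamper-evident is to chain each record to its predecessor by hash, so any later edit breaks the chain. A minimal sketch, assuming an in-memory list stands in for durable storage (the `append_log` name and record shape are hypothetical):

```python
import hashlib
import json
import time

def append_log(log: list, event: dict) -> dict:
    """Append an audit record chained to its predecessor by hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {
        "ts": time.time(),
        "event": event,                 # tool call, decision, approval, outcome
        "policy_version": event.get("policy_version"),
        "prev_hash": prev_hash,
    }
    # Hash the record contents (before the hash field is added).
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    log.append(record)
    return record

log = []
append_log(log, {"type": "tool_call", "tool": "send_email",
                 "decision": "require_approval", "rule_id": "approve-writes",
                 "policy_version": "v1"})
append_log(log, {"type": "approval", "approver": "alice", "policy_version": "v1"})
```

Because every record carries the policy version that produced it, the chain doubles as the versioned snapshot of operational state that the drift-detection argument above calls for.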
Article 13: Transparency
Article 13(1) requires systems to be "sufficiently transparent" for deployers to "interpret outputs appropriately." For agents, this means:
- Each policy decision surfaces the matched rule ID, reason, and constraints applied
- Deployers can inspect why any action was allowed or denied
- When agents affect third parties (sending emails, posting content, calling APIs), transparency obligations extend to all affected individuals
The arXiv compliance paper notes that "fewer than 10% of AI agent developers report external safety evaluations." Transparency without structured decision records is not verifiable.
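A structured decision record makes the transparency obligation mechanical: the same fields that drove the decision are rendered back to the deployer. A sketch, assuming a hypothetical record shape (the `explain` function and field names are illustrative):

```python
import json

def explain(decision: dict) -> str:
    """Render a policy decision as a deployer-facing explanation."""
    return (
        f"Action '{decision['action']}' was {decision['outcome']} "
        f"by rule '{decision['rule_id']}' ({decision['reason']}); "
        f"constraints: {json.dumps(decision.get('constraints', {}))}"
    )

print(explain({
    "action": "http.get",
    "outcome": "allowed",
    "rule_id": "constrain-external-api-calls",
    "reason": "risk-bounded execution",
    "constraints": {"max_runtime_seconds": 30},
}))
```

Nothing in the explanation is generated after the fact; it is a projection of the record that already exists for Article 12 purposes, which is what makes it verifiable.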
Article 14: Human oversight
This is the most misunderstood requirement. A system prompt saying "ask before acting" is NOT Article 14 compliance.
Article 14 requires that high-risk systems "can be effectively overseen by natural persons during the period in which they are in use." Specifically, oversight persons must be able to:
- (a) Understand capacities and limitations, monitor operation
- (b) Remain aware of automation bias
- (c) Correctly interpret system output
- (d) Decide to disregard or override output
- (e) Intervene or stop the system safely (kill-switch)
For autonomous agents, this translates to three technical requirements:
- Risk-tiered approval workflows that block execution until a human reviews and approves. The approval is bound to the policy snapshot and action hash at submission time.
- A kill-switch: the ability to deny all agent actions immediately via a single policy change. Article 14(4)(e) explicitly requires the ability to "intervene or stop the system safely."
- External enforcement: oversight mechanisms must be external constraints, not internal instructions. The gate must live outside the model's reasoning loop, where no prompt can bypass it.
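The three requirements above can be sketched as a single external gate. This is an illustrative sketch (the `PolicyGate` class and its method names are hypothetical, and real deployments would persist state rather than hold it in memory):

```python
class PolicyGate:
    """External approval gate: agent actions pass through here, never around it."""

    def __init__(self):
        self.kill_switch = False      # flip to deny everything at once
        self.pending = {}             # action_hash -> policy snapshot at submission

    def submit(self, action_hash: str, policy_snapshot: str) -> str:
        if self.kill_switch:
            return "deny"             # Art. 14(4)(e): stop the system safely
        self.pending[action_hash] = policy_snapshot
        return "awaiting_approval"

    def approve(self, action_hash: str, current_snapshot: str) -> str:
        snap = self.pending.pop(action_hash, None)
        # Approval is bound to the snapshot seen at submission; stale ones fail.
        if snap is None or snap != current_snapshot:
            return "deny"
        return "execute"

gate = PolicyGate()
gate.submit("abc123", "policy-v1")    # awaiting_approval
gate.kill_switch = True               # one change denies all new actions
gate.submit("def456", "policy-v1")    # deny
```

Because the gate sits outside the model, a prompt injection can at worst cause the agent to *submit* a bad action; it cannot cause the action to execute.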
Stanford/Berkeley research found that only 37-40% of enterprises have true containment controls (kill-switch capability), despite 58-59% reporting monitoring and oversight.
Penalties
| Violation | Maximum fine (whichever is higher) |
|---|---|
| Prohibited AI practices (social scoring, manipulative AI) | EUR 35M or 7% global turnover |
| High-risk system obligations (Articles 9, 12, 13, 14) | EUR 15M or 3% global turnover |
| Supplying incorrect information to authorities | EUR 7.5M or 1% global turnover |
For SMEs and startups, whichever amount is lower applies. Each EU member state must designate at least one national competent authority with investigation and enforcement powers.
Global regulatory landscape
The EU is not alone. Six jurisdictions now have binding or near-binding AI agent regulations:
| Jurisdiction | Regulation | Status | Scope |
|---|---|---|---|
| EU | AI Act (2024/1689) | Binding. High-risk deadline: Aug 2, 2026 | Risk-based. Agents classified by domain, not architecture. |
| US (Federal) | NIST AI RMF + Agent Standards Initiative | Voluntary. Agent Interoperability Profile: Q4 2026 | Four functions: Govern, Map, Measure, Manage. SP 800-53 overlays for agents planned. |
| US (State) | Colorado AI Act (SB 205) | Binding. Enforcement: June 30, 2026 | Algorithmic discrimination. Annual impact assessments. $20K per violation per consumer. |
| Singapore | Model AI Governance Framework for Agentic AI | Voluntary. Published Jan 22, 2026 | World's first agentic AI framework. Four dimensions: risk bounding, accountability, technical controls, user responsibility. |
| China | Generative AI Measures + Labeling Rules | Binding. 748 services filed by Dec 2025 | Mandatory filing, content labeling, 50+ standards in development. |
| International | ISO/IEC 42001:2023 | Certification standard. 15 bodies accredited | AI Management System standard. Covers governance, ethics, risk, transparency. |
Notable: Canada's AIDA (AI and Data Act) died on the order paper in January 2025. The UK has no binding AI legislation and the AI Safety Institute was rebranded to the AI Security Institute in February 2025, shifting focus to national security. The US revoked Executive Order 14110 in January 2025 and is pursuing a "minimally burdensome" federal framework.
Technical compliance mapping
How each regulatory requirement maps to specific technical controls:
| Requirement | Technical control | Implementation |
|---|---|---|
| Article 9: Risk management | Pre-dispatch policy evaluation | Every agent action evaluated against risk rules before execution. Risk tags, capability scoping, and environment restrictions defined in policy-as-code. |
| Article 12: Record-keeping | Immutable audit trails | Every decision logged with policy version, timestamp, actor identity, action details, and outcome. Queryable for incident review and regulatory audit. |
| Article 13: Transparency | Decision explainability | Each policy decision surfaces the matched rule ID, reason, and constraints applied. Deployers can inspect why any action was allowed or denied. |
| Article 14: Human oversight | Approval workflows | Risk-tiered approval gates that block execution until a human approves. Kill-switch capability via deny-all policy. Override via policy bypass with audit binding. |
| Article 14(4)(e): Intervention | Fail-closed mode | When the governance layer is unavailable, all agent actions are blocked (fail-closed). Configurable per environment. |
| Singapore: Risk bounding | Constraint enforcement | Runtime limits, network allowlists, tool capability scoping, output safety quarantine. Agents operate within defined boundaries. |
| NIST: Monitoring | Real-time governance metrics | Policy decision distribution (allow/deny/approve rates), approval latency, drift detection, fail-open alerts. |
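The fail-closed behavior in the mapping above is worth spelling out, because its opposite (fail-open) is the default in most ad-hoc integrations. A minimal sketch under assumed names (`dispatch` and its callback signature are hypothetical):

```python
def dispatch(action, evaluate_policy, execute):
    """Fail-closed dispatch: if the governance layer errors out, block the action."""
    try:
        decision = evaluate_policy(action)
    except Exception:
        # Governance layer unreachable: block rather than let the action through.
        return {"status": "blocked", "reason": "governance layer unavailable"}
    if decision != "allow":
        return {"status": "blocked", "reason": decision}
    return {"status": "executed", "result": execute(action)}

def broken_policy(action):
    raise ConnectionError("policy service unreachable")

print(dispatch("delete_file", broken_policy, lambda a: "done"))
# blocked: the outage denies the action instead of waving it through
```

The design choice is the `except` branch: an outage in the control plane degrades availability, never safety.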
Here is what a compliance-ready policy configuration looks like:
```yaml
version: v1
rules:
  # Article 14: Human oversight for high-risk actions
  - id: require-approval-production-writes
    match:
      risk_tags: ["write"]
      labels:
        environment: production
    decision: require_approval
    reason: "EU AI Act Art. 14: production writes require human oversight"

  # Article 9: Risk management for destructive operations
  - id: deny-destructive-actions
    match:
      risk_tags: ["destructive"]
    decision: deny
    reason: "EU AI Act Art. 9: destructive operations blocked by policy"

  # Singapore framework: Risk bounding via constraints
  - id: constrain-external-api-calls
    match:
      risk_tags: ["network"]
    decision: allow_with_constraints
    constraints:
      max_runtime_seconds: 30
      network_allowlist: ["api.internal.com"]
    reason: "Risk-bounded execution per Singapore Agentic AI Framework"
```

Industry data
- 7% have fully embedded AI governance (Trustmarque 2025)
- 97% of AI-related breaches lacked proper access controls (IBM 2025)
- $492M in AI governance spending projected for 2026 (Gartner)
- 40% of enterprise apps will feature AI agents by 2026 (Gartner)
- 63% of breached organizations lack formal AI governance policies (IBM)
- 37% have kill-switch capability for AI agents (Stanford/Berkeley)
The gap is clear: 93% of organizations use AI, 40% will have AI agents by 2026, but only 7% have fully embedded governance. AI governance spending is projected at $492 million in 2026 and over $1 billion by 2030. Organizations with governance platforms are 3.4x more likely to achieve high governance effectiveness (Gartner).
Compliance checklist
For teams with AI agents in high-risk domains, before August 2, 2026:

- Identify which Annex III domain each agent touches and document its intended purpose.
- Implement pre-dispatch policy evaluation for every agent action (Article 9).
- Log every tool call, policy decision, approval, and execution outcome with the policy version that produced it (Article 12).
- Surface the matched rule ID, reason, and constraints for every decision (Article 13).
- Add risk-tiered approval workflows and a kill-switch via a deny-all policy change (Article 14).
- Verify fail-closed behavior: when the governance layer is unavailable, agent actions must be blocked.
What top resources cover vs miss
| Source | Strong coverage | Missing piece |
|---|---|---|
| EU AI Act Official Text | Full regulation text with article-by-article navigation. Strong legal reference. | No mapping to technical controls for AI agents. No implementation guidance for engineering teams. |
| arXiv: AI Agents Under EU Law | Deep compliance architecture analysis. Identifies behavioral drift as a compliance blocker. | Academic framing. No actionable implementation checklist or tooling recommendations. |
| Gartner: AI Governance Market Forecast | Market sizing and adoption data. Governance spending projected at $492M in 2026. | Analyst-level. No regulatory mapping or technical controls for engineering teams. |
Frequently asked questions
Does the EU AI Act apply to AI agents?

Yes. AI agents qualify as AI systems under Article 3(1) because they operate with "varying levels of autonomy" and "exhibit adaptiveness after deployment." The regulatory classification depends on what the agent does (its domain), not its internal architecture.

What is the August 2, 2026 deadline?

August 2, 2026 is the deadline for full compliance of high-risk AI systems, including Articles 9 (risk management), 12 (record-keeping), 13 (transparency), and 14 (human oversight). Prohibited practices were already enforceable as of February 2, 2025.

Which AI agents count as high-risk?

Agents operating in Annex III domains are automatically high-risk: employment/recruiting, credit scoring, healthcare triage, critical infrastructure, education admissions, law enforcement, and migration/border control. Agents in other domains may still trigger obligations under GDPR, NIS2, or sector-specific regulations.

Is a system prompt telling the agent to ask before acting enough for Article 14?

No. Article 14 requires oversight mechanisms as external constraints, not internal instructions. A system prompt can be ignored, hallucinated past, or bypassed by prompt injection. Compliance requires API-level enforcement: pre-dispatch policy gates, approval workflows, and kill-switch capability outside the model's reasoning loop.

What does Article 12 record-keeping require?

Article 12 requires automatic event logging covering: every decision and its outcome, risk-relevant situations, and operational monitoring data. For AI agents, this means logging every tool call, policy evaluation, approval decision, and execution result with the policy version that produced each decision.

What are the penalties for non-compliance?

Up to EUR 35 million or 7% of global annual turnover for prohibited practices. Up to EUR 15 million or 3% for high-risk system violations (Articles 9-17). Up to EUR 7.5 million or 1% for supplying incorrect information to authorities.

Does the US have binding AI agent regulation?

Not at the federal level. Executive Order 14110 was revoked in January 2025. NIST's AI Agent Standards Initiative (February 2026) is voluntary. However, Colorado's AI Act (SB 205) is binding at the state level with enforcement starting June 30, 2026, and fines up to $20,000 per violation per affected consumer.

What is Singapore's Model AI Governance Framework for Agentic AI?

Published January 22, 2026, it is the world's first governance framework specifically for agentic AI. It covers four dimensions: assessing and bounding risks upfront, making humans meaningfully accountable, implementing technical controls (least-privilege, sandboxing, guardrails), and enabling end-user responsibility. It is voluntary, but organizations remain legally accountable for their agents' actions.

How does Cordum support compliance?

Cordum provides the technical infrastructure for Articles 9, 12, 13, and 14: pre-dispatch policy enforcement (risk management), immutable audit trails (record-keeping), decision explainability (transparency), and approval workflows with kill-switch capability (human oversight). Every agent action is evaluated against versioned policy rules before execution, and every decision is logged with evidence.

Do I need a separate compliance program for each jurisdiction?

No. The EU AI Act, NIST AI RMF, Singapore framework, and ISO 42001 share common requirements: risk assessment before execution, human oversight at defined checkpoints, audit trails with decision evidence, and transparency about how decisions are made. A governance layer that implements these controls satisfies overlapping requirements across jurisdictions.
Next step
Start with your highest-risk agent. Identify which Annex III domain it touches. Add a single pre-dispatch policy rule that requires approval for its most dangerous action. Verify the approval gate works. Then expand.