
Human Oversight Requirements

How to implement human oversight for AI systems under the EU AI Act.


What is human oversight?

Human oversight means that humans can understand, monitor, and, when necessary, intervene in an AI system's operation and its outcomes.

Under the EU AI Act, human oversight is a cornerstone requirement for high-risk AI systems. The regulation recognizes that AI should augment human decision-making, not replace it in high-stakes situations.

Key principle: The human must be genuinely capable of intervening, not just formally designated. Oversight must be meaningful, not theatrical.

What Article 14 requires

Article 14 of the EU AI Act specifies the human oversight capabilities below.

Understanding

Natural persons overseeing the system must be able to:

  • Properly understand the AI system’s capacities and limitations
  • Interpret the AI’s output correctly
  • Decide when and how to use the AI system

Monitoring

The system must allow humans to:

  • Monitor the AI’s operation
  • Detect anomalies, dysfunctions, and unexpected performance
  • Remain aware of automation bias

Intervention

Humans must be able to:

  • Decide not to use the AI system’s output
  • Override the AI’s output or recommendations
  • Interrupt or stop the AI system’s operation

Escalation

There must be mechanisms to:

  • Flag concerns about AI performance
  • Report issues to appropriate parties
  • Escalate decisions to higher authority when needed

How to implement

Step 1: Define oversight roles

Identify who will provide oversight:

  • What competencies do they need?
  • What authority do they have?
  • What training will they receive?

Example roles:

  • Primary user (day-to-day oversight)
  • Manager (escalation point)
  • AI governance team (policy oversight)
  • Technical team (system monitoring)

Step 2: Design oversight interfaces

The AI system must support oversight:

  • Clear display of AI recommendations
  • Explanation of AI reasoning (where feasible)
  • Easy mechanisms to override or reject
  • Audit logs of human decisions
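As a concrete sketch of the last two interface requirements, an override mechanism can be paired with its audit trail so that every human decision, and the rationale for any deviation, is captured. This is a minimal illustration, not a prescribed design; the `OversightRecord` fields and the `record_decision` helper are hypothetical names.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical record of one human decision about an AI recommendation.
@dataclass
class OversightRecord:
    case_id: str
    ai_recommendation: str   # what the AI proposed
    ai_confidence: float     # model score shown to the reviewer
    reviewer: str            # who exercised oversight
    action: str              # "accept" | "override" | "reject"
    rationale: str           # required whenever the human deviates
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

audit_log: list[OversightRecord] = []

def record_decision(case_id, recommendation, confidence,
                    reviewer, action, rationale=""):
    """Append a human decision to the audit trail.

    Overrides and rejections must carry a rationale, so the log can
    later show *why* the human deviated from the AI.
    """
    if action in ("override", "reject") and not rationale:
        raise ValueError("Overrides and rejections require a rationale")
    entry = OversightRecord(case_id, recommendation, confidence,
                            reviewer, action, rationale)
    audit_log.append(entry)
    return entry
```

Requiring a rationale at override time, rather than reconstructing it later, is what makes the resulting log usable as evidence that oversight was meaningful.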

Step 3: Establish processes

Document how oversight works:

  • When must humans review AI output?
  • How do humans override AI decisions?
  • When should concerns be escalated?
  • How are incidents reported?

Step 4: Train oversight personnel

Ensure oversight personnel can actually provide oversight. Training should cover:

  • AI system capabilities and limitations
  • Common error patterns
  • Proper interpretation of outputs
  • Override and escalation procedures

Step 5: Monitor and improve

Oversight is ongoing:

  • Track override rates and reasons
  • Review escalated concerns
  • Update training as needed
  • Adapt processes based on experience
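Tracking override rates and reasons can be as simple as summarizing the audit trail. The sketch below assumes each logged decision is a dict with an "action" key and, for deviations, a "rationale" key; the function name and dict shape are illustrative.

```python
from collections import Counter

def override_metrics(log):
    """Summarize oversight activity from a list of decision records.

    A rising override rate can signal model drift; a near-zero rate
    over many decisions can signal rubber-stamping. Both warrant review.
    """
    total = len(log)
    overrides = [r for r in log if r["action"] in ("override", "reject")]
    rate = len(overrides) / total if total else 0.0
    reasons = Counter(r.get("rationale", "unspecified") for r in overrides)
    return {
        "total": total,
        "override_rate": rate,
        "top_reasons": reasons.most_common(3),  # most frequent deviation reasons
    }
```

Reviewing the top deviation reasons periodically is one way to turn raw override data into the process and training updates the step describes.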

Common patterns

Human-in-the-loop

Human reviews and approves every AI decision before it takes effect.

Best for:

  • High-stakes individual decisions
  • Novel situations
  • Learning phase for new AI systems

Example: An HR specialist reviews the AI's screening recommendation before any candidate is rejected.
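The defining property of human-in-the-loop is that the AI's output has no effect until a human confirms it. A minimal sketch, assuming a hypothetical `model` callable returning a recommendation and score, and a `reviewer_approves` callback standing in for the human decision:

```python
def screen_candidate(candidate, model, reviewer_approves):
    """Human-in-the-loop gate: the AI may recommend rejection,
    but only a human decision makes it final.
    """
    recommendation, score = model(candidate)
    if recommendation == "advance":
        return "advance"            # favorable outcome may proceed
    # High-stakes outcome (rejection) always requires human sign-off.
    if reviewer_approves(candidate, recommendation, score):
        return "reject"
    return "advance"                # the human overrode the AI
```

Note that the gate is asymmetric: only the adverse outcome is blocked on human approval, which keeps reviewer workload proportional to the stakes.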

Human-on-the-loop

Human monitors AI operation and can intervene when needed.

Best for:

  • High-volume, lower-stakes decisions
  • Systems with good track records
  • Situations where speed matters

Example: Fraud detection alerts humans to suspicious patterns for investigation.
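In human-on-the-loop, the AI acts autonomously on routine cases while humans watch an alert stream and retain the power to halt the system. The sketch below is illustrative, assuming a hypothetical `score_fn`, alert threshold, and kill switch:

```python
import queue

review_queue: "queue.Queue" = queue.Queue()

def process_transaction(txn, score_fn, threshold=0.8,
                        kill_switch=lambda: False):
    """Human-on-the-loop: the AI handles routine cases, alerts a
    human for anomalies, and a kill switch can halt it entirely.
    """
    if kill_switch():
        review_queue.put(("halted", txn))  # human has stopped automation
        return "held"
    score = score_fn(txn)
    if score >= threshold:
        review_queue.put(("suspicious", txn))  # alert a human investigator
        return "held"
    return "approved"  # routine case handled autonomously
```

The kill switch corresponds to Article 14's requirement that humans be able to interrupt or stop the system, not merely observe it.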

Human-over-the-loop

Human sets policies and reviews aggregate performance; AI operates within boundaries.

Best for:

  • Very high volume operations
  • Well-understood, stable processes
  • Situations where policies can be clearly defined

Example: Content moderation with human review of edge cases and policy updates.
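Human-over-the-loop can be sketched as the AI operating strictly inside human-set policy bounds, with anything ambiguous escalated. The `classifier` and the two policy thresholds below are hypothetical names for illustration:

```python
def moderate(item, classifier, policy):
    """Human-over-the-loop: the AI decides routine cases within
    human-set policy bounds; edge cases go to human review, and
    humans adjust the thresholds from aggregate outcomes rather
    than per-item decisions.
    """
    p = classifier(item)  # violation probability from the model
    if p >= policy["remove_above"]:
        return "removed"          # clearly inside the policy boundary
    if p <= policy["allow_below"]:
        return "allowed"
    return "escalated"            # edge case: human review required
```

Here the human control surface is the policy dict itself: tightening or loosening the thresholds is how aggregate review translates into changed system behavior.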

Documentation

You must document your human oversight approach:

Policy documentation

  • Roles and responsibilities
  • Decision authority
  • Escalation paths
  • Training requirements

Technical documentation

  • System capabilities for oversight
  • Interface specifications
  • Logging and audit trails

Operational documentation

  • Standard operating procedures
  • Override guidelines
  • Incident response procedures

Records

  • Training records
  • Override decisions and rationale
  • Incident reports
  • Performance reviews

The key insight: Human oversight must be genuine, not just compliant. A human who rubber-stamps every AI decision isn’t providing meaningful oversight. Design for humans who can truly understand, question, and when necessary, override AI systems.