01. Risk Classification: Where Does Your LLM Fall?
The EU AI Act groups AI systems into four risk tiers. The tier determines your obligations. Most LLM applications in healthcare, legal, finance, HR, and critical infrastructure fall into high-risk -- which triggers the most demanding requirements.
| Risk Level | LLM Application Examples | Obligations |
|---|---|---|
| Prohibited | Social scoring systems, real-time biometric surveillance in public, manipulative AI that causes harm | Banned outright in the EU |
| High Risk | Medical diagnosis assistants, AI in hiring/screening, credit scoring AI, LLM-powered legal advice tools, AI in education scoring, remote biometric authentication | Conformity assessment, full technical documentation, audit trails, human oversight, EU database registration |
| Limited Risk | Chatbots, AI content generators (text, image, audio), customer service AI | Transparency obligations (disclose AI interaction), data governance requirements |
| Minimal Risk | AI recommendation systems, spam filters, personal productivity tools | No mandatory requirements -- code of conduct encouraged |
If your LLM application touches patient care, medical decisions, legal advice, HR screening, or financial decisions -- you are high-risk under Annex III. Do not assume you are limited risk based on the chatbot exception.
The classification depends on the sector (Annex III) and the intended use. A general-purpose chatbot that gets fine-tuned for medical triage becomes high-risk the moment it is deployed for that purpose. Start your classification analysis before your LLM goes into production.
02. High-Risk AI: What the Act Requires
If your LLM application is classified as high-risk, the EU AI Act mandates a comprehensive set of requirements before you can place it on the EU market. These are not optional best practices -- they are legal obligations.
Risk Management System (Article 9)
You must implement a documented risk management system that identifies known risks, analyzes potential risks, evaluates risk severity, and defines mitigation measures. For LLM applications, this means documenting:
- What could go wrong if the model produces incorrect, harmful, or discriminatory outputs
- Failure modes in the RAG pipeline, vector database, or prompt injection attacks
- Mitigation controls and residual risk acceptance documentation
Data Governance (Article 10)
Training data and input data must be subject to data governance practices covering data collection, labeling, processing, fitness for purpose, and bias detection. For fine-tuned or RAG-powered LLMs:
- Document the provenance and licensing of all training/fine-tuning data
- Implement bias testing across protected characteristics (gender, ethnicity, age, disability)
- Define data retention, access controls, and lineage tracking
Technical Documentation (Article 11)
Every high-risk AI system must have technical documentation that is kept up to date and sufficient to demonstrate compliance. This is the core audit artifact -- regulators and notified bodies will request it. See full documentation requirements below.
Human Oversight (Article 14)
High-risk AI must be designed to allow human oversight -- enabling humans to understand, correctly use, and override AI outputs. For LLM applications:
- Document the human-in-the-loop decision points and escalation procedures
- Implement override mechanisms with clear audit logs
- Define what outputs require human review before downstream action
Accuracy, Robustness, and Cybersecurity (Article 15)
Systems must achieve appropriate accuracy, robustness, and cybersecurity standards. For LLMs this means:
- Published accuracy and performance metrics with defined evaluation procedures
- Documented handling of out-of-distribution inputs
- Defense against adversarial prompts, injection attacks, and model extraction
Conformity Assessment (Articles 43-48)
High-risk systems must undergo a conformity assessment -- either by a notified body (for certain categories) or self-assessment (for others). The assessment must be completed before market deployment, and results must be registered in the EU AI database.
03. Mandatory Technical Documentation
Article 11 requires a technical file containing everything needed to demonstrate compliance. Regulators and auditors will ask for this. For LLM applications, it must include:
Incomplete or missing technical documentation is the most common enforcement trigger. Even a fully compliant system can be fined if documentation is absent or outdated.
04. Audit Trail Requirements for LLM Applications
Article 12 requires that high-risk AI systems be designed to log events automatically -- providing the evidence trail regulators, auditors, and notified bodies need to verify compliance. For LLM applications, every call must produce a tamper-resistant, complete record.
What Must Be Logged (LLM-Specific)
Retention and Access Controls
The Act does not specify a single retention period -- it references what is appropriate given the risk and the system purpose. For healthcare and legal AI, regulators expect minimum 5-year retention. For financial AI, industry-specific regulations often mandate 7 years.
Access to audit logs must be restricted to authorized personnel with documented roles. Logs must be tamper-evident (append-only or cryptographic integrity verification) and available for inspection upon request within a reasonable time period.
# SentinelGate audit event -- what gets logged on every LLM call { // Compliance metadata "event_type": "proxy_request", "event_id": "evt_01JXM...", "occurred_at": "2026-05-03T10:22:14.331Z", // System identification "api_key_id": "key_...abc", "policy_version": "v3.2", "model": "gpt-4o", // Policy evaluation "input_policy_result": "allow", "output_policy_result": "allow", "pii_detected": false, "jailbreak_score": 0.02, "human_review_triggered": false, // Operational metadata "latency_ms": 387, "tokens_in": 1423, "tokens_out": 891, "cost_usd": 0.0042, // EU AI Act compliance fields "eu_ai_risk_category": "high_risk_healthcare", "data_region": "EU-WEST", "retention_policy_id": "healthcare_7yr" }
Embedding audit logging into your application code creates gaps, inconsistency, and coverage blind spots. A policy-layer proxy like SentinelGate automatically captures every LLM call with the complete schema -- no application code changes required.
05. Compliant AI Governance with SentinelGate
Building EU AI Act compliance from scratch means stitching together policy enforcement, audit logging, PII detection, and human oversight -- for every LLM call, in real time. SentinelGate handles all of it in a single proxy layer.
+ How SentinelGate Covers EU AI Act Requirements
One base URL change routes all LLM traffic through SentinelGate policy layer. Every request and response is logged, evaluated, and governed -- with the compliance evidence you need for audit.
Integration: Three Lines of Code
Add SentinelGate to any existing LLM application without modifying your application code. Point your base URL at the SentinelGate proxy -- all other configuration stays the same.
# No code changes -- just change the base URL # Before (your current code) OPENAI_API_BASE=https://api.openai.com/v1 # After (SentinelGate governance layer) OPENAI_API_BASE=https://gateway.sentinelgate.polsia.app/v1 # Everything else stays identical OPENAI_API_KEY=your_sentinel_gate_key
# Python (OpenAI SDK) from openai import OpenAI client = OpenAI( api_key="your_sentinel_gate_key", base_url="https://gateway.sentinelgate.polsia.app/v1" # SentinelGate proxy ) response = client.chat.completions.create( model="gpt-4o", messages=[{"role": "user", "content": "What is the diagnosis for..."}] ) # Audit log, policy enforcement, PII redaction -- all applied automatically
On every call, SentinelGate: (1) evaluates input against your configured policies, (2) routes the request to the upstream model, (3) evaluates the output for PII leakage and policy violations, (4) logs the complete audit event to your account, and (5) triggers human review if a policy threshold is breached.
All audit data is retained per your configured policy and available for export as compliance reports in SOC 2, GDPR Article 30, or EU AI Act format -- on demand.
Get your free API key -- audit-ready in 5 minutes
No credit card required. No application code changes. Route your LLM traffic through SentinelGate and have a complete compliance audit trail before your next sprint ends.
06. Full Compliance Checklist
Use this checklist to assess your current compliance posture. Every item that shows Not started or Partial is a gap between now and August 2026 enforcement.
Risk Classification
Technical Documentation (Article 11)
Audit Trail and Logging (Article 12)
Human Oversight (Article 14)
Conformity and Registration
Building a complete compliance framework takes time -- risk analysis, documentation, audit trail implementation, conformity assessment. If you have not started, the enforcement date is not a buffer -- it is a deadline.