EU AI Act Compliance for Self-Hosted AI Systems
The EU AI Act (in force since August 2024) classifies AI systems into 4 risk classes. High-risk systems face mandatory requirements from August 2026; Moltbot automates 4 of the 7 mandatory requirements.
4 Risk Classes — Classification
Prohibited (unacceptable risk): cannot be deployed in the EU.
High-risk: mandatory requirements — risk management, data governance, technical documentation, human oversight, accuracy, robustness, cybersecurity.
Limited risk: transparency obligations — must disclose AI interaction and label synthetic content.
Minimal risk: no mandatory requirements; voluntary codes of conduct encouraged.
High-Risk: 7 Mandatory Requirements
1. Risk Management System: Continuous risk management throughout the lifecycle. Identify, analyze and evaluate risks. Test residual risks before deployment.
2. Data Governance: Training, validation and testing data must be relevant, representative and as free of errors as possible. Document data sources, characteristics and potential biases.
3. Technical Documentation: Complete technical documentation before placing on the market. General description, design specs, training methodology, performance metrics.
4. Record-Keeping: Automatic event logging over the system's lifetime. Traceability of AI decisions. Retention period appropriate to the intended purpose.
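The automatic event logging and traceability duty above can be sketched as an append-only, hash-chained audit log, where each entry embeds the hash of the previous one so later tampering is detectable. This is a minimal illustration, not a certified record-keeping implementation; the `AuditLog` class and its field names are hypothetical, not part of any product or standard.

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only audit log with hash chaining: each entry stores the
    hash of the previous entry, so altering or removing any earlier
    record breaks the chain and is detected by verify()."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value for the first entry

    def record(self, event: str, detail: dict) -> dict:
        """Append one event with a timestamp and a chained hash."""
        entry = {
            "ts": time.time(),
            "event": event,
            "detail": detail,
            "prev": self._last_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)
        self._last_hash = entry["hash"]
        return entry

    def verify(self) -> bool:
        """Recompute the whole chain; False means an entry was
        altered, removed, or reordered after being written."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

In practice the chain would be persisted to append-only storage and periodically anchored (e.g. the latest hash written to a separate system) so the retention requirement survives the host being compromised.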
5. Transparency: Deployers must receive sufficient information to use the system correctly. User-facing documentation is mandatory.
6. Human Oversight: Technical measures enabling human oversight. Ability to interrupt, stop or override the AI system. Monitor for anomalies and risks.
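The interrupt/stop/override capability above can be sketched as a gate that every AI-proposed action passes through: a human operator can halt the whole system, and an approval hook can veto individual actions before execution. A minimal sketch only; the `OversightGate` class and its return strings are hypothetical.

```python
import threading

class OversightGate:
    """Human-oversight wrapper: the AI proposes actions, but a human
    can pause the whole system or veto individual actions before
    anything is executed."""

    def __init__(self):
        self._halted = threading.Event()

    def halt(self):
        """Operator control: stop all AI-initiated actions."""
        self._halted.set()

    def resume(self):
        """Operator control: allow actions again."""
        self._halted.clear()

    def execute(self, action: str, approve) -> str:
        """Run an action only if the system is not halted and the
        approval hook (a human review or a policy check) permits it."""
        if self._halted.is_set():
            return "blocked: system halted by operator"
        if not approve(action):
            return f"vetoed: {action}"
        return f"executed: {action}"
```

The design choice to check the halt flag before the approval hook means the operator's stop always wins, even over an auto-approving policy.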
7. Accuracy, Robustness and Cybersecurity: Appropriate accuracy for the intended purpose. Resilience to errors, faults and adversarial inputs. Cybersecurity measures throughout the lifecycle.
Timeline
- August 2024: EU AI Act enters into force
- February 2025: prohibitions on unacceptable-risk AI apply
- August 2025: obligations for general-purpose AI (GPAI) models apply
- August 2026: high-risk AI system requirements fully applicable
- August 2027: high-risk AI embedded in regulated products (medical devices, machinery)
Frequently Asked Questions
When does the EU AI Act apply?
The EU AI Act entered into force August 2024. Key dates: February 2025 — prohibited AI systems banned. August 2025 — GPAI model obligations apply. August 2026 — high-risk AI system requirements fully applicable. August 2027 — high-risk AI in regulated products (medical devices, machinery). Self-hosted AI systems deployed in the EU that qualify as high-risk must comply by August 2026.
Is my self-hosted Moltbot/AI agent system high-risk?
Most internal IT management AI (security monitoring, log analysis, infrastructure automation) falls under 'minimal risk' and carries no mandatory requirements. Your system is high-risk if it makes or significantly influences decisions about hiring, credit or insurance, education admission, public benefits, law enforcement, border control, or medical treatment. Self-hosted AI for DevOps, security, and monitoring is generally minimal risk.
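The triage in this answer can be sketched as a small checklist function. This is illustrative only: `HIGH_RISK_DOMAINS` is a simplified stand-in for the Annex III use cases listed above, and the function is not a substitute for legal review.

```python
# Simplified, illustrative stand-in for the Annex III high-risk domains
# mentioned above; the real list is longer and legally nuanced.
HIGH_RISK_DOMAINS = {
    "hiring", "credit", "insurance", "education_admission",
    "public_benefits", "law_enforcement", "border_control",
    "medical_treatment",
}

def risk_class(influences: set, prohibited: bool = False,
               interacts_with_humans: bool = False) -> str:
    """Rough triage of an AI system's risk class from the decision
    domains it makes or significantly influences. Checks are ordered
    from most to least severe, mirroring the Act's structure."""
    if prohibited:
        return "prohibited"
    if influences & HIGH_RISK_DOMAINS:
        return "high-risk"
    if interacts_with_humans:
        return "limited-risk (transparency duties)"
    return "minimal-risk"
```

For example, a log-analysis agent maps to minimal risk, while the same agent wired into a hiring pipeline becomes high-risk, which matches the guidance above.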
What is required for high-risk AI systems?
Seven mandatory requirements: 1) Risk management system (ongoing). 2) Data governance documentation. 3) Technical documentation before deployment. 4) Automatic event logging with tamper-proof records. 5) Transparency information for deployers and users. 6) Human oversight mechanisms (ability to override/stop). 7) Appropriate accuracy, robustness, and cybersecurity measures. Moltbot automates requirements 1, 4, 6, and 7.
How does the EU AI Act relate to GDPR?
The EU AI Act and GDPR are complementary, not alternatives. If your AI processes personal data: both apply simultaneously. GDPR covers data protection (lawful basis, data subject rights, breach notification). The AI Act covers AI system safety, transparency, and human oversight. For high-risk AI processing personal data: data protection impact assessment (DPIA) under GDPR + conformity assessment under AI Act are both required.