Notice: This is a compliance guide, not legal advice.

EU AI Act Compliance for Self-Hosted AI Systems

The EU AI Act, in force since August 2024, classifies AI systems into four risk classes. High-risk systems face seven mandatory requirements from August 2026; Moltbot automates four of the seven.

- 4 risk classes
- Aug 2026: high-risk compliance deadline
- 7 high-risk requirements
- 4 of 7 automated with Moltbot

4 Risk Classes — Classification

Unacceptable Risk (PROHIBITED: cannot deploy)

- Social scoring by governments
- Real-time biometric surveillance in public spaces
- Emotion recognition at work or school
- Subliminal manipulation
High Risk

Mandatory: risk management, data governance, technical docs, human oversight, accuracy, robustness, cybersecurity

- AI in medical devices
- Critical infrastructure management
- Education and vocational training
- Employment and HR decisions
- Law enforcement
- Border control
- Administration of justice
Limited Risk

Transparency: must disclose AI interaction, label synthetic content

- Chatbots
- Deepfake generators
- AI-generated content
Minimal Risk

No mandatory requirements — voluntary codes of conduct encouraged

- AI-powered games
- Spam filters
- AI recommendations

High-Risk: 7 Mandatory Requirements

Art. 9: Risk Management System
Automatable

Continuous risk management throughout lifecycle. Identify, analyze and evaluate risks. Test residual risks before deployment.

Art. 10: Data Governance
Manual

Training, validation and testing data: relevant, representative, free of errors. Document data sources, characteristics and potential biases.

Art. 11: Technical Documentation
Manual

Before placing on market: complete technical documentation. General description, design specs, training methodology, performance metrics.

Art. 12: Record-Keeping / Logging
Automatable

Automatic event logging for lifespan. Traceability of AI decisions. Retention period appropriate to intended purpose.
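Article 12's "traceability of AI decisions" is often implemented as an append-only, tamper-evident log. A minimal sketch of one common approach, hash-chained records, is below; the `HashChainedLog` class and its field names are illustrative assumptions, not Moltbot's actual API.

```python
import hashlib
import json
import time

class HashChainedLog:
    """Append-only event log: each record stores the hash of the
    previous record, so altering any past entry breaks the chain."""

    GENESIS = "0" * 64

    def __init__(self):
        self._records = []
        self._last_hash = self.GENESIS

    def append(self, event: dict) -> dict:
        record = {
            "ts": time.time(),
            "event": event,
            "prev_hash": self._last_hash,
        }
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        record["hash"] = digest
        self._last_hash = digest
        self._records.append(record)
        return record

    def verify(self) -> bool:
        """Recompute every digest and check the chain links up."""
        prev = self.GENESIS
        for rec in self._records:
            body = {k: rec[k] for k in ("ts", "event", "prev_hash")}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if rec["prev_hash"] != prev or recomputed != rec["hash"]:
                return False
            prev = rec["hash"]
        return True

log = HashChainedLog()
log.append({"decision": "alert_suppressed", "model": "classifier-v2"})
log.append({"decision": "ticket_escalated", "model": "classifier-v2"})
```

In practice the chain head would also be anchored externally (e.g. to a write-once store) so that truncating the whole log is detectable, and retention would be set per the system's intended purpose.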

Art. 13: Transparency &amp; Information
Manual

Deployers must receive sufficient information to use correctly. User-facing documentation mandatory.

Art. 14: Human Oversight
Automatable

Technical measures enabling human oversight. Ability to interrupt, stop or override AI system. Monitor for anomalies and risks.
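The interrupt/override capability Article 14 calls for can be sketched as a gate that every AI action passes through: operators can halt the system entirely or hold sensitive actions for approval. The `OversightGate` class below is an illustrative assumption, not Moltbot's implementation.

```python
import threading

class OversightGate:
    """Routes AI actions through a human-controllable gate:
    operators can stop the system or require approval for
    actions flagged as sensitive."""

    def __init__(self):
        self._running = threading.Event()
        self._running.set()      # system starts enabled
        self._pending = []       # actions awaiting human approval

    def stop(self):
        """Art. 14: ability to interrupt or stop the system."""
        self._running.clear()

    def resume(self):
        self._running.set()

    def execute(self, action, sensitive=False):
        if not self._running.is_set():
            raise RuntimeError("system halted by human operator")
        if sensitive:
            self._pending.append(action)   # queue for human review
            return "pending_approval"
        return action()

    def approve_next(self):
        """Human operator releases the oldest queued action."""
        return self._pending.pop(0)()

gate = OversightGate()
gate.execute(lambda: "restart_service")                  # runs directly
gate.execute(lambda: "rotate_credentials", sensitive=True)  # queued
gate.approve_next()                                      # human releases it
```

The same pattern scales to asynchronous deployments by replacing the in-memory queue with a durable approval workflow.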

Art. 15: Accuracy, Robustness, Cybersecurity
Automatable

Appropriate accuracy for intended purpose. Resilience to errors, faults and adversarial inputs. Cybersecurity measures throughout lifecycle.

Timeline

- Aug 2024: EU AI Act entered into force (active)
- Feb 2025: Prohibited AI systems banned (Art. 5) (active)
- Aug 2025: GPAI model obligations (GPT-4 class)
- Aug 2026: High-risk AI requirements fully applicable
- Aug 2027: High-risk AI in regulated products (medical devices, machinery)

Frequently Asked Questions

When does the EU AI Act apply?

The EU AI Act entered into force August 2024. Key dates: February 2025 — prohibited AI systems banned. August 2025 — GPAI model obligations apply. August 2026 — high-risk AI system requirements fully applicable. August 2027 — high-risk AI in regulated products (medical devices, machinery). Self-hosted AI systems deployed in the EU that qualify as high-risk must comply by August 2026.

Is my self-hosted Moltbot/AI agent system high-risk?

Most internal IT management AI (security monitoring, log analysis, infrastructure automation) falls under 'minimal risk' — no mandatory requirements. You are high-risk if your AI system makes or significantly influences decisions about: hiring, credit/insurance, education admission, benefits, law enforcement, border control, or medical treatment. Self-hosted AI for DevOps, security, and monitoring is generally minimal risk.
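The triage logic described above can be sketched as a simple lookup against the categories this guide names. This is an illustrative helper only, not a legal determination; the use-case labels and tier names are assumptions for the example.

```python
# Categories as named in this guide (illustrative, non-exhaustive).
PROHIBITED = {
    "social_scoring", "public_realtime_biometrics",
    "workplace_emotion_recognition", "subliminal_manipulation",
}
HIGH_RISK = {
    "hiring", "credit_scoring", "insurance", "education_admission",
    "benefits", "law_enforcement", "border_control", "medical_treatment",
}
LIMITED_RISK = {"chatbot", "deepfake_generation", "synthetic_content"}

def classify(use_cases: set) -> str:
    """Return the strictest applicable EU AI Act risk tier
    for a set of use-case labels."""
    if use_cases & PROHIBITED:
        return "unacceptable"
    if use_cases & HIGH_RISK:
        return "high"
    if use_cases & LIMITED_RISK:
        return "limited"
    return "minimal"

classify({"log_analysis", "security_monitoring"})  # "minimal"
classify({"chatbot", "hiring"})                    # "high"
```

Note the ordering: a system touching any prohibited or high-risk use case takes that stricter tier even if it also does minimal-risk work.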

What is required for high-risk AI systems?

Seven mandatory requirements: 1) Risk management system (ongoing). 2) Data governance documentation. 3) Technical documentation before deployment. 4) Automatic event logging with tamper-proof records. 5) Transparency information for deployers and users. 6) Human oversight mechanisms (ability to override/stop). 7) Appropriate accuracy, robustness, and cybersecurity measures. Moltbot automates requirements 1, 4, 6, and 7.

How does the EU AI Act relate to GDPR?

The EU AI Act and GDPR are complementary, not alternatives. If your AI processes personal data: both apply simultaneously. GDPR covers data protection (lawful basis, data subject rights, breach notification). The AI Act covers AI system safety, transparency, and human oversight. For high-risk AI processing personal data: data protection impact assessment (DPIA) under GDPR + conformity assessment under AI Act are both required.
