
EU AI Act Compliance for Self-Hosted AI Systems

The EU AI Act (in force since August 2024) classifies AI systems into 4 risk classes. From August 2026, high-risk systems must meet mandatory requirements; Moltbot automates 4 of the 7.

4 risk classes
High-risk deadline: August 2026
7 high-risk requirements
4 of 7 automated with Moltbot

Classification: The 4 Risk Classes

Unacceptable Risk

PROHIBITED: cannot deploy

Social scoring by governments; real-time biometric surveillance in public; emotion recognition at work/school; subliminal manipulation
High Risk

Mandatory: risk management, data governance, technical docs, human oversight, accuracy, robustness, cybersecurity

AI in medical devices; critical infrastructure management; education/vocational training; employment/HR decisions; law enforcement; border control; administration of justice
Limited Risk

Transparency: must disclose AI interaction and label synthetic content

Chatbots; deepfake generators; AI-generated content
Minimal Risk

No mandatory requirements; voluntary codes of conduct are encouraged

AI-powered games; spam filters; AI recommendations
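The four classes above amount to a triage decision procedure. The sketch below expresses it as a lookup, using the examples from this page as illustrative use-case tags; the Act's actual annexes are far more detailed, so treat the keyword sets as assumptions, not an official taxonomy.

```python
from enum import Enum

class RiskClass(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"

# Illustrative use-case tags per class, taken from the examples above.
PROHIBITED_USES = {"social_scoring", "public_biometric_surveillance",
                   "emotion_recognition_workplace", "subliminal_manipulation"}
HIGH_RISK_USES = {"medical_device", "critical_infrastructure", "education",
                  "employment", "law_enforcement", "border_control", "justice"}
LIMITED_RISK_USES = {"chatbot", "deepfake_generator", "synthetic_content"}

def classify(use_case: str) -> RiskClass:
    """Map a use-case tag to its EU AI Act risk class (triage sketch only)."""
    if use_case in PROHIBITED_USES:
        return RiskClass.UNACCEPTABLE
    if use_case in HIGH_RISK_USES:
        return RiskClass.HIGH
    if use_case in LIMITED_RISK_USES:
        return RiskClass.LIMITED
    return RiskClass.MINIMAL  # e.g. spam filters, DevOps monitoring

print(classify("employment").value)   # high-risk
print(classify("spam_filter").value)  # minimal-risk
```

Anything not matching a listed tag falls through to minimal risk, which mirrors the Act's default for systems outside the enumerated categories.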

High-Risk: 7 Mandatory Requirements

Art. 9: Risk Management System
Automatable

Continuous risk management throughout lifecycle. Identify, analyze and evaluate risks. Test residual risks before deployment.

Art. 10: Data Governance
Manual

Training, validation and testing data: relevant, representative, free of errors. Document data sources, characteristics and potential biases.

Art. 11: Technical Documentation
Manual

Before placing on market: complete technical documentation. General description, design specs, training methodology, performance metrics.

Art. 12: Record-Keeping / Logging
Automatable

Automatic event logging over the system's lifetime. Traceability of AI decisions. Retention period appropriate to the intended purpose.
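One common way to make such logs tamper-evident is a hash chain: each entry includes the hash of its predecessor, so any retroactive edit breaks verification. A minimal sketch, assuming a simple in-memory log (the field names are illustrative, not mandated by the Act):

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only event log; each entry hashes the previous one,
    so any retroactive edit breaks the chain (tamper evidence)."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []
        self._last_hash = self.GENESIS

    def record(self, event: str, detail: dict) -> dict:
        entry = {
            "ts": time.time(),
            "event": event,
            "detail": detail,
            "prev": self._last_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the whole chain; False means some entry was altered."""
        prev = self.GENESIS
        for e in self.entries:
            if e["prev"] != prev:
                return False
            body = {k: v for k, v in e.items() if k != "hash"}
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.record("decision", {"model": "demo", "output": "approve"})
assert log.verify()
log.entries[0]["detail"]["output"] = "deny"  # simulated tampering
assert not log.verify()
```

A production system would additionally persist entries to append-only storage and anchor the chain head externally; the sketch only shows the integrity mechanism.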

Art. 13: Transparency & Information
Manual

Deployers must receive sufficient information to use the system correctly. User-facing documentation is mandatory.

Art. 14: Human Oversight
Automatable

Technical measures enabling human oversight. Ability to interrupt, stop or override AI system. Monitor for anomalies and risks.
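The interrupt/override requirement can be implemented as a gate in front of the agent's actions: a kill switch plus an approval hook for sensitive operations. A minimal sketch; the `requires_approval` policy, action names, and return strings are all illustrative assumptions:

```python
import threading

class OversightGate:
    """Human-oversight wrapper: sensitive actions need operator approval,
    and the whole system can be stopped by an operator at any time."""

    def __init__(self, requires_approval):
        self._stopped = threading.Event()
        self._requires_approval = requires_approval  # predicate on action names

    def stop(self):
        """Operator kill switch (the Art. 14 ability to stop the system)."""
        self._stopped.set()

    def execute(self, action: str, run, approve) -> str:
        """Run `action` via `run`, unless stopped or rejected by `approve`."""
        if self._stopped.is_set():
            return "blocked: system stopped by operator"
        if self._requires_approval(action) and not approve(action):
            return "blocked: operator rejected " + action
        return run(action)

# Policy: anything starting with "delete" needs a human sign-off.
gate = OversightGate(requires_approval=lambda a: a.startswith("delete"))
run = lambda a: "executed " + a

print(gate.execute("read_logs", run, approve=lambda a: False))  # executed read_logs
print(gate.execute("delete_db", run, approve=lambda a: False))  # blocked: operator rejected delete_db
gate.stop()
print(gate.execute("read_logs", run, approve=lambda a: True))   # blocked: system stopped by operator
```

In a real deployment the `approve` callback would route to a human (ticket, chat prompt, dashboard) rather than a lambda; the point is that every action path passes through a human-controllable gate.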

Art. 15: Accuracy, Robustness & Cybersecurity
Automatable

Appropriate accuracy for intended purpose. Resilience to errors, faults and adversarial inputs. Cybersecurity measures throughout lifecycle.

Timeline

Aug 2024: EU AI Act entered into force (active)
Feb 2025: Prohibited AI systems banned (Art. 5) (active)
Aug 2025: GPAI model obligations (GPT-4 class)
Aug 2026: High-risk AI requirements fully applicable
Aug 2027: High-risk AI in regulated products (medical devices, machinery)

Frequently Asked Questions

When does the EU AI Act apply?

The EU AI Act entered into force August 2024. Key dates: February 2025 — prohibited AI systems banned. August 2025 — GPAI model obligations apply. August 2026 — high-risk AI system requirements fully applicable. August 2027 — high-risk AI in regulated products (medical devices, machinery). Self-hosted AI systems deployed in the EU that qualify as high-risk must comply by August 2026.

Is my self-hosted Moltbot/AI agent system high-risk?

Most internal IT management AI (security monitoring, log analysis, infrastructure automation) falls under 'minimal risk' — no mandatory requirements. You are high-risk if your AI system makes or significantly influences decisions about: hiring, credit/insurance, education admission, benefits, law enforcement, border control, or medical treatment. Self-hosted AI for DevOps, security, and monitoring is generally minimal risk.

What is required for high-risk AI systems?

Seven mandatory requirements: 1) Risk management system (ongoing). 2) Data governance documentation. 3) Technical documentation before deployment. 4) Automatic event logging with tamper-proof records. 5) Transparency information for deployers and users. 6) Human oversight mechanisms (ability to override/stop). 7) Appropriate accuracy, robustness, and cybersecurity measures. Moltbot automates requirements 1, 4, 6, and 7.

How does the EU AI Act relate to GDPR?

The EU AI Act and GDPR are complementary, not alternatives. If your AI processes personal data: both apply simultaneously. GDPR covers data protection (lawful basis, data subject rights, breach notification). The AI Act covers AI system safety, transparency, and human oversight. For high-risk AI processing personal data: data protection impact assessment (DPIA) under GDPR + conformity assessment under AI Act are both required.
