Master the world's first comprehensive AI regulation with confidence
The EU AI Act fundamentally changes how we handle artificial intelligence. Here you'll learn everything about risk categories, compliance requirements, and German implementation. From prohibitions to best practices – so you can design your AI strategy to be legally compliant and future-proof.
The EU AI Act classifies AI systems into four risk categories, each carrying different requirements and prohibitions. This classification determines which compliance measures you must take for your AI applications.
Unacceptable Risk – Status: Banned since 2 February 2025. Penalty: Up to €35 million or 7% of global annual turnover, whichever is higher.
High Risk – Status: Regulated from 2 August 2026. Requirements: Risk assessment, quality management, human oversight.
Limited Risk – Status: Regulated from 2 August 2026. Requirements: Transparency obligations, user information.
Minimal Risk – Status: No additional obligations. Requirements: Voluntary codes of conduct recommended.
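For an AI-system inventory, the four-tier model above can be captured in a simple lookup structure. A minimal sketch (the enum and dictionary names are our own illustration, paraphrasing the Act's tiers; this is not legal advice):

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned since 2 February 2025
    HIGH = "high"                  # regulated from 2 August 2026
    LIMITED = "limited"            # transparency duties from 2 August 2026
    MINIMAL = "minimal"            # no additional obligations

# Compliance measures per tier, as summarized in the text above.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited – must be shut down"],
    RiskTier.HIGH: ["risk assessment", "quality management", "human oversight"],
    RiskTier.LIMITED: ["transparency obligations", "user information"],
    RiskTier.MINIMAL: ["voluntary codes of conduct (recommended)"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the compliance measures attached to a risk tier."""
    return OBLIGATIONS[tier]
```

Tagging each system in your inventory with such a tier makes it straightforward to report which compliance measures still need implementing.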
Foundation models like ChatGPT, GPT-4, or Claude fall under a special category of the AI Act. These General Purpose AI (GPAI) systems have specific compliance requirements you should know about.
GPAI obligations have been active since 2 August 2025. If you use or develop foundation models, these requirements must already be fully implemented. The EU AI Office oversees compliance at the European level.
The EU AI Act is being implemented gradually. Here you can see all important milestones so you're prepared in time and don't miss any deadlines.
| Date | Milestone | Status | What You Should Consider |
|---|---|---|---|
| 1 August 2024 | AI Act comes into force | Active | Initial orientation and inventory of your AI systems |
| 2 February 2025 | Prohibitions become effective | Active | Check immediately: Are you using prohibited AI systems? |
| 2 August 2025 | GPAI obligations active | Active | Foundation model compliance must be implemented now |
| 2 February 2026 | Commission Guidelines (Art. 6) | Delayed | EC missed its legal deadline. Guidelines on high-risk AI classification expected March/April 2026. Creates legal uncertainty for businesses. |
| 2 August 2026 | Full Applicability | Urgent | Only ~5 months away! All AI systems must be compliant – unless the Digital Omnibus introduces a delay. |
| Nov. 2025 – 2026 | Digital Omnibus (Proposal) | In Process | EC proposes delay: High-risk AI (Annex III) pushed to Dec. 2027 at latest, Annex I to Aug. 2028. Not yet law – monitor the legislative process! |
| 2 August 2027 | Legacy System Compliance | Future | Older GPAI models must also be compliant (potentially extended by Digital Omnibus) |
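The fixed milestones in the table can be checked programmatically against today's date. A minimal sketch (dates taken from the table above; the Digital Omnibus proposal is deliberately omitted because it is not yet law):

```python
from datetime import date

# Key EU AI Act milestones, from the timeline above.
MILESTONES = {
    date(2024, 8, 1): "AI Act comes into force",
    date(2025, 2, 2): "Prohibitions effective",
    date(2025, 8, 2): "GPAI obligations active",
    date(2026, 8, 2): "Full applicability",
    date(2027, 8, 2): "Legacy GPAI compliance",
}

def active_milestones(today: date) -> list[str]:
    """Milestones whose deadline has already passed, oldest first."""
    return [label for deadline, label in sorted(MILESTONES.items())
            if deadline <= today]
```

Running such a check in your compliance dashboard helps ensure no deadline is silently crossed.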
Prohibitions are in force. Social scoring, manipulative AI, and real-time biometrics must be shut down. Inventory of all AI systems is mandatory.
GPAI compliance is running. Technical documentation, training data summaries, and copyright compliance for foundation models are now binding.
All high-risk AI systems must be compliant. Implement risk management, quality assurance, technical documentation, and human oversight. Note: The Digital Omnibus could extend high-risk deadlines – but nothing is decided yet. Plan based on current law!
Depending on whether you develop, operate, or supervise AI systems, you have different obligations. Here you'll find an overview of your specific compliance requirements.
Providers (Developers) – Your Core Obligations: Conformity assessment, risk assessment, technical documentation, and data quality assurance. For high-risk systems, additionally a quality management system and post-market surveillance.
Deployers (Operators) – Your Core Obligations: Human oversight, system monitoring, record-keeping, and ensuring AI competence among personnel. You must monitor data inputs and correctly interpret outputs. Public bodies must additionally conduct a Fundamental Rights Impact Assessment (FRIA) before deploying high-risk AI.
Supervisory Authorities – Your Core Obligations: Market surveillance, compliance monitoring, enforcement measures, and providing guidance. Special responsibility in cross-border cooperation.
GPAI Model Providers – Your Core Obligations: Training data summaries, copyright compliance, content labeling, and systemic risk assessment. Abuse prevention and user guidelines are essential.
As operators of critical infrastructures (KRITIS), energy providers are subject to the strictest rules of the AI Act. AI systems used for control, operation, and safety of energy networks are explicitly classified as high-risk applications. Additionally, the NIS2 Directive imposes parallel cybersecurity requirements for critical infrastructure operators that must be addressed in an integrated compliance strategy.
AI systems that control power flows in real time, distribute loads, or respond to fluctuations from renewable energies are high-risk applications. They require the highest levels of reliability and transparency in their decision-making and must satisfy the N-1 security criterion for grid operation.
Systems that predict energy demand are critical for grid stability and pricing. Erroneous forecasts can have serious consequences, which is why high requirements apply to data quality and model validation.
AI for predicting failures in critical components (e.g. transformers, turbines) is classified as high-risk. Reliability must be demonstrated through rigorous testing and continuous monitoring.
Algorithms controlling virtual power plants (VPPs), microgrids, and smart grids must ensure safety, fairness, and data protection. Pipeline integrity monitoring also falls into this category.
The use of these systems requires compliance with a strict catalog of obligations.
The AI Act affects different economic sectors to varying degrees. Here you'll learn what special challenges and opportunities arise for your sector.
Risk: High Risk. Challenges: Overlap with Medical Device Regulation, double obligations, patient safety. Particularly affected: Diagnostic AI, robot-assisted surgery.
Risk: High Risk. Challenges: Credit scoring bias, algorithmic transparency, BaFin supervision. Integration into MaRisk compliance and discrimination prevention required.
Risk: High Risk. Challenges: Ethics of autonomous driving, safety-critical decisions, liability issues. German ethics guidelines: Protection of human life takes priority.
Risk: High Risk/Prohibited. Challenges: Balancing fundamental rights, limited transparency. Real-time biometrics mostly prohibited, judicial approvals required.
Risk: High Risk. Challenges: As part of critical infrastructure (KRITIS), highest requirements apply to reliability and cybersecurity. AI systems for grid control must be robust and transparent.
Germany is taking a pioneering role in AI regulation, pursuing a dual strategy: implementing EU requirements while strengthening its position as an innovation hub. Here you'll learn how the German government is implementing the EU AI Act and what additional initiatives are relevant for you.
For companies in Germany, this means: In addition to pure compliance with the EU AI Act, there are numerous funding opportunities and support programs to advance AI innovations.
A €32 million budget for AI quality standards and SME innovation. If you're an SME, you can benefit from consulting and funding.
Germany must provide sandboxes by August 2026. You can test innovative AI in controlled environments – even free of charge for SMEs.
"AI for the Common Good" - if your AI solves social problems, you can benefit from expert consulting and funding.
German AI quality standards can give you an international competitive advantage – "Trusted AI Made in Germany".
Germany's federal structure can lead to different interpretations in the 16 federal states. The federal government is working to create uniform standards.
Germany has already established regulatory sandboxes in various sectors. This gives your company the opportunity to test innovative AI solutions in a safe legal framework before full regulation takes effect.
Requirement: Every EU member state must have at least one AI regulatory sandbox by 2 August 2026
Benefits for Companies: Testing innovative AI solutions in a safe legal framework before full regulation takes effect – free of charge for SMEs.
AI compliance is not just a legal obligation but a strategic competitive advantage. Companies that become compliant early position themselves as trustworthy AI providers in the global market.
As a compliance-first company, you gain trust with customers and partners. "EU AI Act compliant" becomes a quality seal for your AI products.
EU standards often become global benchmarks. Early compliance prepares you for international expansion and opens new markets.
Regulatory Sandboxes enable risk-free innovation. You can develop groundbreaking AI solutions without taking compliance risks.
Proactive compliance protects you from existential penalties of up to €35 million and shields your reputation from damage caused by violations.
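The penalty ceiling mentioned above is turnover-dependent: for the most serious violations, the cap is the higher of €35 million and 7% of total worldwide annual turnover. A minimal sketch of that arithmetic (illustrative only, not legal advice):

```python
def max_fine_eur(global_annual_turnover_eur: int) -> int:
    """Upper bound for the most serious EU AI Act fines:
    the higher of EUR 35 million and 7% of worldwide annual turnover."""
    return max(35_000_000, global_annual_turnover_eur * 7 // 100)
```

So a company with €1 billion in turnover faces a cap of €70 million, while a smaller company is still exposed to the full €35 million floor – which is why the text calls these penalties potentially existential.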