The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework for artificial intelligence, adopted by the European Parliament in March 2024 and in force since August 2024. It takes a risk-based approach, categorizing AI systems into four risk levels with corresponding regulatory requirements.
Risk categories include: Unacceptable Risk (banned — e.g., social scoring and real-time remote biometric identification in publicly accessible spaces, subject to narrow law-enforcement exceptions), High Risk (strict requirements — AI in hiring, credit scoring, law enforcement, healthcare), Limited Risk (transparency obligations — chatbots, deepfakes), and Minimal Risk (no specific requirements — spam filters, AI in video games).
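The four-tier taxonomy above can be sketched as a simple lookup structure. This is an illustrative sketch only: the `RiskTier` enum, the `EXAMPLE_CLASSIFICATIONS` mapping, and the `tier_for` helper are hypothetical names, and the mapping covers only the examples named in this article — real classification requires legal analysis of the Act's annexes.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"   # banned outright
    HIGH = "high"                   # strict conformity requirements
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # no specific requirements

# Non-exhaustive, illustrative mapping of the example use cases
# mentioned above to their tiers (hypothetical helper data).
EXAMPLE_CLASSIFICATIONS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "real-time remote biometric identification": RiskTier.UNACCEPTABLE,
    "hiring": RiskTier.HIGH,
    "credit scoring": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "deepfake generation": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def tier_for(use_case: str) -> RiskTier:
    """Look up the illustrative tier for one of the example use cases."""
    return EXAMPLE_CLASSIFICATIONS[use_case]
```

A mapping like this is useful for triage dashboards or inventory tooling, but the tier assigned to a production system must come from a legal assessment, not a keyword lookup.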
High-risk AI systems must meet requirements including: risk management systems, data governance, technical documentation, record-keeping, transparency and information provision, human oversight, accuracy and robustness, and cybersecurity measures.
Key timelines: prohibitions on unacceptable-risk AI apply from February 2025, provisions for general-purpose AI from August 2025, and the full regulation applies from August 2026. Non-compliance penalties can reach €35 million or 7% of global annual turnover, whichever is higher.
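The penalty ceiling works as a "whichever is higher" rule, so the fixed €35 million floor dominates for smaller firms and the 7% turnover figure dominates for large ones. A minimal sketch of that arithmetic (the function name is a hypothetical label, not terminology from the Act):

```python
def max_penalty_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound on fines for the most serious violations under the
    EU AI Act: EUR 35 million or 7% of global annual turnover,
    whichever is higher. Illustrative only; actual fines are set by
    regulators case by case."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)
```

For example, a firm with €100 million in turnover faces a €35 million ceiling (the fixed floor applies, since 7% would be only €7 million), while a firm with €1 billion in turnover faces a €70 million ceiling.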
The Act applies to any organization placing AI systems on the EU market or deploying them within the EU, regardless of where the organization is based, making it critical for global enterprises.
