The European Union has taken a significant step forward in regulating artificial intelligence (AI), with its member states approving the final text of the AI Act. Thierry Breton, Commissioner for Internal Market, heralded the development as a historic, global first on the social media platform X, marking the culmination of political agreements reached in December 2023. The AI Act introduces a risk-based approach to governing AI applications, addressing critical areas such as governmental use of AI in biometric surveillance and the regulation of AI systems like ChatGPT, and establishing transparency requirements prior to market entry.
The journey to this milestone began with the transformation of the December political consensus into a final text, leading to the February 2 vote in Coreper, the committee of permanent representatives from all 27 EU member states. The Act also responds to growing concerns around deepfakes: sophisticated, AI-generated videos that blur the line between reality and fiction on social media and challenge the integrity of public discourse.
Margrethe Vestager, Executive Vice-President for a Europe Fit for the Digital Age, underscored the Act’s guiding principle: the higher the risk an AI system poses, the greater the responsibility placed on its developers. This is particularly pertinent in sensitive applications such as job selection or educational admissions, which the AI Act classifies as high-risk scenarios.
The path to approval saw significant developments, including France dropping its objections and Germany endorsing the Act after a compromise was reached. This paves the way for the AI Act to move toward formal legislation, with a crucial EU lawmaker committee vote scheduled for February 13, followed by a European Parliament vote expected in the spring. The Act is anticipated to come into full effect in 2026, with certain provisions taking effect sooner.
In preparation for the AI Act’s enforcement, the European Commission is setting up an AI Office to oversee compliance for foundation models deemed to carry systemic risk. The Commission is also proactively supporting the EU’s AI development community, including enhancing the EU’s supercomputing capabilities to facilitate the training of generative AI models, demonstrating a comprehensive approach to fostering innovation while ensuring ethical and secure AI use across the Union.