What's happened
The EU has introduced a voluntary code of practice for AI companies, focusing on transparency, copyright protections, and safety. Enforcement of the broader AI Act begins in August 2026, with potential fines of up to 7% of global revenue. Industry pushback continues, and some companies are lobbying for delays.
What's behind the headline?
The EU's approach reflects a cautious but assertive stance on AI regulation, balancing innovation with safety. The voluntary code gives companies time to prepare, but the looming 2026 enforcement deadline raises the stakes. Industry resistance, especially from US tech giants, highlights the tension between regulation and competitiveness: some companies are pressing for postponement, arguing that overreach could stifle innovation, which leaves the regulation's timeline uncertain and raises the risk of regulatory fragmentation. The focus on transparency and copyright protections could set global standards, and the emphasis on environmental and safety monitoring signals a comprehensive strategy, but effectiveness will ultimately depend on enforcement and industry compliance.
What the papers say
Ars Technica reports that the EU's proposed rules, still under review, will initially be voluntary and focus on transparency, copyright, and safety; they would require companies such as Google, Meta, and OpenAI to disclose details of their training data and energy consumption, with enforcement beginning in August 2026. The New York Times highlights that the broader AI Act will impose fines of up to 7% of global revenue for violations, though industry resistance persists and some companies are urging delays. Politico notes that industry leaders criticize the regulations as potentially harmful to Europe's AI competitiveness and advocate postponement. TechCrunch emphasizes that major firms have been lobbying against the regulation, citing concerns about overregulation and the need for a more innovation-friendly approach. Overall, the sources depict cautious regulation amid significant industry pushback, with the EU trying to balance safety and innovation.
How we got here
The EU's AI regulation efforts stem from last year's AI Act, which covers general-purpose AI such as chatbots. The new code complements that legislation, emphasizing transparency, copyright, and safety. Industry leaders have urged delays, citing competitiveness concerns, while the EU insists on timely implementation to ensure AI is deployed safely.
Go deeper
More on these topics
- The European Union is a political and economic union of 27 member states located primarily in Europe. Its members have a combined area of 4,233,255.3 km² and an estimated total population of about 447 million.
- Henna Maria Virkkunen is a Finnish politician who has served as Executive Vice-President of the European Commission for Technological Sovereignty, Security and Democracy since 1 December 2024.
- The Artificial Intelligence Act is an EU regulation, proposed by the European Commission and adopted last year, that introduces a common regulatory and legal framework for artificial intelligence, classifying systems by risk and addressing harms such as algorithmic discrimination.
- Facebook, Inc. (now Meta Platforms) is an American social media conglomerate based in Menlo Park, California. It was founded by Mark Zuckerberg along with his fellow Harvard College roommates and students Eduardo Saverin, Andrew McCollum, Dustin Moskovitz, and Chris Hughes.