News Digest: What the EU’s Delay of High-Risk AI Rules Means for the Future of AI Policy
November 20, 2025
The European Commission’s decision to postpone enforcement of its most stringent “high-risk AI” rules from August 2026 to December 2027 marks one of the most consequential shifts in the EU’s technology regulation landscape in years. The move reflects intense industry pressure, internal regulatory bottlenecks, and a wider recalibration of Europe’s digital strategy.
For Big Tech, the delay is a win, though with caveats. Companies get 16 more months to build compliance pipelines, set technical standards, and establish conformity processes — areas where they claimed they lacked time and clarity. At the same time, proposed revisions to the GDPR and the Data Act may give them broader access to EU data for AI training, something they have lobbied for heavily. The delay helps U.S. firms maintain innovation velocity while Europe refines its rules, and it may encourage more AI research labs to operate in the EU rather than avoid it.
Privacy and civil-rights advocates, however, warn that the delay could sap momentum behind algorithmic accountability, especially with respect to Big Tech, and may allow risky systems (e.g., facial recognition and AI-driven credit scoring) to proliferate without oversight until December 2027.
For EU regulators, the delay is also likely to bring Brussels closer to the U.S. administration, which is already revising its own AI guidelines. The postponement of the high-risk AI rules is not simply a regulatory delay: it is a strategic reset. Europe is reassessing how to balance innovation and competitiveness against consumer protection and geopolitical influence. The next two years will be decisive.