
The European Union is preparing to delay the rollout of its high-risk artificial intelligence rules until 2027 following strong opposition from major technology companies. The shift marks a significant change in the timeline for Europe’s landmark AI regulation, which had been expected to take effect much sooner.
EU officials say the delay comes after industry leaders warned that the rules were too restrictive and could slow innovation. Companies argued that several obligations were unclear and that compliance costs would be far higher than originally estimated. These concerns prompted lawmakers to reconsider the pace of implementation.
The high-risk category covers systems used in sectors such as healthcare, transport, security and public administration. These technologies must meet strict transparency, testing and oversight standards. Industry groups say they need more time to adapt their development pipelines and safety procedures to meet these requirements.
Brussels had initially planned for the rules to start applying gradually as early as 2025. After months of meetings, consultations and political pressure, the Commission now plans to establish a transition period that extends to 2027 to give developers and regulators additional preparation time.
European policymakers insist the delay does not weaken their commitment to trustworthy AI. Instead, they say the extended timeline will help avoid rushed implementation and ensure that rules are clear, enforceable and technologically realistic. Officials also emphasise that the EU continues to lead globally in setting standards for responsible AI governance.
Critics of the postponement warn that a longer transition could leave gaps in oversight. Civil society groups argue that without early safeguards, the rapid expansion of AI tools could increase risks related to privacy, discrimination and public safety. They are urging Brussels to maintain strong interim monitoring measures.
Tech companies, however, view the revised schedule as a chance to align innovation goals with regulatory expectations. Many firms hope the extra time will allow for better collaboration with EU authorities and for more practical guidance on how to meet compliance demands.
Member states are now coordinating on next steps and preparing national-level adjustments to support the new timeline. The EU is expected to release updated guidance later this year, outlining how the transition will work and which obligations will apply before 2027.
As artificial intelligence becomes increasingly central to Europe’s economy, the outcome of this delay will shape how the region balances innovation with safety. The coming years will be crucial in defining Europe’s role in the global AI landscape and ensuring that new technologies serve the public interest without compromising trust or security.
