CEN-CENELEC’s decision to accelerate the drafting of harmonised standards under the EU AI Act has sparked debate in Brussels and beyond. A process designed for consensus has entered crisis mode, revealing that the EU’s delegated model of technical governance struggles to strike the balance between speed and legitimacy that once defined it.
Last month, CEN and CENELEC’s Technical Boards adopted an “exceptional package of measures” to accelerate the delivery of harmonised standards in fulfilment of the Standardisation Request in support of the AI Act. The decision authorises (i) direct publication of standards after a positive Enquiry vote, skipping the Formal Vote stage; and (ii) the formation of a small drafting group of already active experts to finalise delayed texts before circulating them to working groups and public Enquiry. An Enquiry, in this context, is the stage at which a draft standard is released for public consultation and national members vote on whether it is broadly acceptable. CEN-CENELEC describe these as ‘temporary’ and lawful steps to ensure standards are available by the extended Q4 2026 deadline, while reaffirming consensus and inclusiveness as core principles.
Only days later, ISO/IEC committees suspended collaboration on several streams, viewing Europe’s move as a pause in, or circumvention of, the consensus-based process on which standard-setting rests. Senior figures within CEN-CENELEC’s Joint Technical Committee 21 for artificial intelligence (JTC 21) acknowledged internal dissent and requested a reconsideration of the exceptionality measure invoked as a shield against political pressure. The controversy is part of a wider debate that began earlier this year, when experts urged the Commission not to delay enforcement of the AI Act, warning that delays could signal a loss of nerve and invite geopolitical pressure. That stance, co-authored by JTC 21’s Chair, who defends the exceptionality measure, frames standards as integral to a values-based strategy rather than a bureaucratic afterthought.
What is happening is the result of a collision of logics. On the one hand, the regulatory clock is ticking toward the August 2026 full application of the AI Act, adding pressure to have harmonised standards ready in time. On the other, CEN and CENELEC are expected to work through deliberation, public enquiry and broad participation; exceptionality displaces that consensus-based legitimacy.
This tension comes against a legal background that makes standardisation a critical piece of AI regulation. The AI Act’s compliance architecture relies on a presumption of conformity through harmonised standards (Article 40). When standards lag or fail, the Commission can adopt Common Specifications by implementing act (Article 41), an ‘exceptional fallback solution’ that is itself meant to be temporary and to sunset once standards are ready (Recital 121).
Policy and regulatory implications of fast-tracking standardisation
“Fast-tracking” in this context means compressing classic procedural steps and concentrating drafting power in a small expert circle to hit the Q4 2026 target. This choice matters more than it may seem, for several reasons.
First, the presumption of conformity is at stake. If standards are delayed, providers (especially SMEs) face uncertainty and costly compliance assessments. However, if standards are rushed, the result may be thin and underlegitimised technical norms that offer formal compliance without substantive trust.
Second, the EU’s international standing is also on the line. After the “consensus pause”, ISO and IEC’s reluctance to continue parallel development may produce normative divergence precisely when geopolitical competition over AI governance is intensifying. In this light, what began as an internal efficiency measure risks eroding Europe’s credibility in global standardisation fora. If Europe proceeds with a divergent set of standards, or does so under contested legitimacy, it may lose leverage in global standard-setting, transferring de facto rulemaking to alternative hubs such as the US and China.
Third, the institutional model underpinning the EU’s New Legislative Framework (NLF) is being stress-tested. That approach assumes private standardisation can translate public-law goals into technical detail. The AI Act appears to have stretched this logic to its limit, as transforming fundamental-rights obligations into verifiable technical metrics carries concrete legal consequences: it determines the scope of the presumption of conformity, the validity of compliance assessments and the enforceability of rights protections within the EU’s product-safety framework. It is true that the AI Act contains no quantifiable fairness metrics and that standardisation must fill that gap; but such metrics are inherently contested, because turning fundamental-rights obligations into technical standards involves epistemic work that cannot be fully divorced from power relations. In practice, if these metrics are poorly designed or conceptually inadequate, conformity with harmonised standards could grant legal safe harbours to AI systems that nonetheless infringe fundamental rights while, at the same time, narrowing judicial review and shifting accountability from public authorities to private standard-setting bodies. The technical definitions set by these private bodies, such as what counts as “human oversight” or “fairness testing”, would become de facto legal benchmarks outside direct democratic control.
Finally, enforcement capacity remains a critical blind spot. Even the most robust standards matter little without the infrastructure that turns them into practice: notified bodies, market-surveillance authorities and competent national regulators able to verify compliance in real time. Under the AI Act, these actors will be responsible for assessing conformity, certifying high-risk systems and ensuring post-market monitoring. Yet, as of late 2025, the institutional scaffolding is still under construction: 1) many Member States have not designated their national authorities; 2) accreditation processes for notified bodies have barely begun; and 3) the Commission’s horizontal coordination mechanisms remain embryonic. As things currently stand, most harmonised standards are unlikely to be cited in the Official Journal before 2026, leaving only a few months before key obligations enter into force. This creates a temporal asymmetry: the compliance architecture will be operational on paper but incomplete in practice. The result is that providers will face uncertainty over which standards to follow, conformity-assessment bodies will lack interpretive guidance and market-surveillance authorities will be under pressure to act without common benchmarks.
—
The constitutional meaning of exceptionality
The acceleration of AI standardisation exposes four constitutional tensions at the core of the EU’s delegated governance model.
1. Is a positive Enquiry vote equivalent to the full consensus intended in earlier standard-setting practice? While skipping the Formal Vote is legally permissible under CEN-CENELEC rules (Internal Regulations Part 2 Clause 11.2), the shift from a two-stage voting process to direct publication after Enquiry changes the normative meaning of consensus. Procedural transparency, broad stakeholder access and avenues for contestation matter as much as the technical content itself. European case law increasingly confirms the constitutional expectation that decision-making with public effects must be open to scrutiny and contestation. In this light, curtailing deliberation in the name of speed challenges the input legitimacy of delegated governance and raises proportionality concerns under EU (private) administrative law.
2. How much stakeholder input and redress are minimally acceptable to legitimise a standard? A hallmark of the EU standardisation system is the promise of multistakeholder deliberation: national standardisation bodies, industry, civil society, SMEs and consumer organisations (e.g., ANEC) must all be able to participate. However, when drafting is entrusted to a “small expert group” in order to meet deadlines, the risk of agenda-setting capture by large firms or dominant national bodies increases; the decision to accelerate dismantles that procedural ideal. Acceleration also produces internal asymmetries. SMEs, civil society and consumer groups are structurally disadvantaged in a process that rewards those with a permanent Brussels presence and technical staff. At a systemic level, the marginalisation of these actors in accelerated drafting processes also has constitutional implications. The CJEU has consistently held that EU governance must respect the principle of equality of access to decision-making procedures that produce binding effects. When only well-resourced actors can effectively participate, the standardisation process risks contravening both Article 11 TEU (the duty to maintain open dialogue with civil society) and the democratic-legitimacy aspiration of Regulation 1025/2012. In constitutional terms, this imbalance transforms standardisation from a pluralistic mechanism of co-regulation into a closed structure of privilege that undermines the legitimacy of EU secondary law relying on such privately produced norms.
3. Does acceleration of drafting compensate for, or exacerbate, downstream capacity gaps? The volume and specificity of the deliverables for high-risk systems go well beyond traditional product-safety standards. With the original deadline of 30 April 2025 passed, many work items are only expected in mid-2026 or later. This bottleneck signals a mismatch between regulatory ambition and actual standardisation capacity: consensus processes across 1000+ experts from 20+ countries cannot be instantly compressed. From a political-economy perspective, this reveals that the expectation placed on private standardisation bodies to deliver public-law outcomes at speed is structurally fragile.
4. Can Common Specifications remain exceptional? Conceived as a temporary safeguard, Common Specifications risk becoming the default if the current acceleration fails. In such a scenario, the EU may escape the legitimacy risks of private governance only by embracing its own exceptionalism, i.e. governing directly through executive acts. What was meant as a fallback option could thus evolve into a new normal of technocratic rulemaking without parliamentary oversight. This dynamic echoes long-standing concerns about executive drift in comitology and delegated acts: excessive executive rulemaking would blur the boundary between law and implementation.
The current tensions thus crystallise a broader dilemma: Europe (now) wants AI regulation that is innovation-friendly, legitimate and globally credible, yet its governance machinery can seldom deliver all three at once. By invoking exceptionality, CEN-CENELEC and the Commission have revealed both the adaptability and the fragility of the model they rely on. The coming months will show whether the Union can recalibrate without hollowing out the very principles that make its regulatory project distinctive, or whether AI will mark the point at which the consensus-based paradigm quietly gives way to something more precarious.
—
Posted by Marta Cantero Gamito, University of Tartu and EUI

