AI Regulation vs Innovation: Can Governments Regulate Without Slowing Progress?

17.03.2026


The Rise of AI and the Urgency of Regulation

Artificial intelligence is no longer a technology of the future. It diagnoses diseases in hospitals, determines loan approvals in financial institutions, influences hiring decisions in corporations, guides autonomous vehicles, and shapes the information citizens receive online. Within the span of a decade, AI has moved from the research laboratory to the heart of public life — and it has done so faster than any legal system has been designed to adapt.

This speed creates a governance dilemma that policymakers across the world are now confronting directly: can the laws they already have, built on centuries of legal tradition and refined for an analog world, actually govern the algorithmic systems now reshaping that world? Or does artificial intelligence represent a genuinely new kind of challenge, one that demands new legal paradigms rather than the stretching of old ones?

These questions are no longer theoretical. Governments, courts, regulators, and scholars are actively debating them. The European Union has enacted the world's first comprehensive AI regulation. The United States remains without federal AI legislation and is instead relying on a patchwork of sector rules, agency guidance, and state laws. China has built a layered, state-guided framework combining data law, algorithmic registration, and content governance. The United Kingdom is pursuing a principles-first strategy that deliberately defers legislation. And the Council of Europe has opened the first binding international AI treaty for signature.

Yet across all of this activity, a fundamental question persists: is what exists — or what is being built — actually enough? This article examines the global AI regulatory landscape and assesses whether current legal frameworks are adequate to govern artificial intelligence, or whether the technology's distinctive characteristics demand genuinely new legal instruments.

The Global Legal Architecture: What Exists Today

The European Union: A Landmark but Incomplete Experiment

The EU AI Act, which entered into force in August 2024 and is being applied in staggered phases through 2027, is the most comprehensive attempt to regulate AI through dedicated legislation anywhere in the world. It takes a proportionate, risk-based approach, classifying AI systems into four tiers: unacceptable risk (prohibited), high risk (heavily regulated), limited risk (transparency obligations), and minimal risk (largely unregulated).

The prohibited category covers some of the most consequential AI applications: real-time remote biometric identification in public spaces, AI systems that manipulate behavior through subliminal techniques, social scoring systems operated by public authorities, and emotion recognition in workplaces and schools. These prohibitions became enforceable in February 2025. High-risk systems — deployed in healthcare, critical infrastructure, hiring, credit scoring, and law enforcement — face stringent requirements including risk management frameworks, technical documentation, mandatory human oversight, and pre-market conformity assessments. For General Purpose AI (GPAI) models such as large language models, new transparency and safety obligations became applicable in August 2025. Penalties for non-compliance can reach €35 million or 7% of global annual turnover, whichever is higher.
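To make the Act's tiering and penalty logic concrete, here is a minimal Python sketch of how a compliance team might encode the four tiers and the maximum-fine ceiling. The mapping of use cases to tiers and all names in the snippet are illustrative assumptions; the Act's actual annexes are far more detailed and context-dependent.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"           # e.g. social scoring by public authorities
    HIGH = "heavily regulated"            # e.g. hiring, credit scoring, law enforcement
    LIMITED = "transparency obligations"  # e.g. chatbots that must disclose they are AI
    MINIMAL = "largely unregulated"       # e.g. spam filters

# Illustrative, simplified mapping of use cases to tiers.
USE_CASE_TIERS = {
    "social_scoring_by_public_authority": RiskTier.UNACCEPTABLE,
    "workplace_emotion_recognition": RiskTier.UNACCEPTABLE,
    "cv_screening": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Ceiling for the most serious violations: EUR 35 million or 7% of
    global annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# A firm with EUR 2 billion in turnover faces a ceiling of EUR 140 million.
assert max_fine_eur(2_000_000_000) == 140_000_000
```

The point of the sketch is only that the Act's obligations attach to the deployment context of a system, not to the underlying technology as such.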

The EU AI Act is a genuine achievement. But it is also already under pressure. In November 2025, the European Commission proposed a "Digital Omnibus on AI" package — a set of simplification amendments acknowledging implementation difficulties, including delays in member state designation of competent authorities and a lack of harmonized technical standards. The proposed changes include extended deadlines and reduced documentation burdens for SMEs, suggesting that the regulatory ambition of the Act has run ahead of the institutional capacity needed to enforce it.

The United States: Fragmented by Design

The United States has no federal AI law. What it has instead is a complex mosaic: sector-specific regulation by agencies such as the FDA (medical AI), the FTC (consumer protection), and the NHTSA (autonomous vehicles); a voluntary AI Risk Management Framework published by NIST; executive orders from successive administrations with sharply different orientations; and an explosion of state-level legislation. As of 2025, more than 500 AI-related bills had been introduced across 42 states, covering facial recognition, deepfakes, hiring algorithms, and biometric data.

Despite the hundreds of AI-related bills introduced in Congress since the 115th Congress (2017–2019), fewer than 30 had been enacted as of mid-2025, and most of these were components of broader appropriations or defense legislation rather than standalone AI governance measures. The current Trump administration's stance is explicitly deregulatory: executive orders aimed at removing barriers to AI development have signaled that federal preemption, rather than comprehensive regulation, may be the federal government's primary contribution to AI governance. A proposed 10-year moratorium on state-level AI regulations, embedded in 2025 reconciliation legislation, would, if enacted, eliminate even the patchwork of state protections currently in place.

The result is what legal scholars describe as overlapping jurisdiction, regulatory ambiguity, and uneven accountability. In a system that produces no binding national standards, enforcement depends on whether a victim can fit their injury into the existing doctrines of tort, contract, consumer protection, or employment discrimination law — categories not designed with algorithmic systems in mind.

China: Comprehensive but State-Directed

China has built one of the world's most extensive AI regulatory systems, combining its Cybersecurity Law (CSL), Data Security Law, and Personal Information Protection Law with AI-specific instruments governing algorithms, generative AI services, deep synthesis technologies, and the labeling of AI-generated content. The Interim Measures for Generative AI Services (2023) require providers to register their models, submit to security assessments, and ensure content compliance. CSL amendments in force as of early 2026 formally bring AI into national law for the first time.

China's framework is dense and enforceable. Thousands of algorithm filings have been approved; hundreds of generative AI platforms are registered. But the system is oriented primarily toward state control and social stability rather than individual rights protection — a distinction that matters enormously when assessing its broader adequacy as a governance model.

The United Kingdom and Beyond: Principles Before Legislation

The UK has deliberately chosen a principles-based, sector-led approach, relying on existing regulators to apply five cross-sectoral principles (safety, transparency, fairness, accountability, and contestability) within their respective domains. A comprehensive UK AI Bill has been deferred to at least 2026. The approach positions the UK between the EU's prescriptive model and the US's voluntary one — prioritizing regulatory agility but risking legal ambiguity.

Elsewhere, South Korea finalized its AI Framework Act in January 2025, strengthening transparency and safety requirements while including promotional measures for research and development. Canada's proposed Artificial Intelligence and Data Act remains in legislative limbo following the prorogation of Parliament in early 2025. Japan maintains a largely voluntary governance framework. Singapore has updated its Model AI Governance Framework, focusing on ethical guidance. And the Council of Europe, in September 2024, opened for signature the world's first legally binding international AI treaty, establishing baseline obligations for human oversight and transparency while remaining technology-neutral.

The landscape is active, diverse, and fragmentary. Whether that diversity represents a healthy pluralism or a dangerous governance gap is the central question.

Three Philosophies, Three Problems

Behind the technical differences in regulatory approach lie fundamentally different answers to the question of what regulation is for.

The EU frames AI governance as a rights protection exercise. Regulation exists to safeguard fundamental rights, enforce democratic accountability, and ensure that the benefits of AI do not come at the cost of human dignity or autonomy. This produces comprehensive, ex ante rules that apply before systems are deployed — a precautionary logic that prioritizes preventing harm.

The US frames AI governance primarily through the lens of market competition and innovation incentives. Regulation is viewed with suspicion when it constrains economic dynamism, and liability law is treated as a more appropriate mechanism than pre-market authorization for managing technological risk. This produces reactive, post-hoc governance that addresses harms after they materialize — a corrective logic that prioritizes flexibility.

China frames AI governance through the lens of national development and social stability. Regulation exists to support state objectives, prevent disruption, and ensure that AI infrastructure serves the country's strategic interests. This produces proactive, dense, and rapidly deployable rules — but ones that embed state authority as a structural feature of the technology ecosystem.

Each philosophy produces its own blind spots. The EU's approach risks regulatory over-reach that slows innovation and imposes disproportionate compliance costs on smaller actors. The US approach risks leaving significant populations unprotected against algorithmic harms for which no adequate remedy exists. China's approach risks building a governance architecture that serves state power rather than human welfare. Understanding these trade-offs is essential to evaluating whether existing frameworks — taken together — are sufficient.

The Case for Existing Frameworks: What Defenders Argue

There is a credible argument that the existing legal toolkit, properly deployed, can address many of the risks that AI poses — at least for now.

Existing law already reaches much AI behavior. Data protection regimes such as the GDPR and CCPA, consumer protection statutes, product liability frameworks, employment discrimination law, and competition law all cover conduct that AI systems may perform or enable. When an AI hiring tool discriminates against protected groups, employment law applies. When an AI system misrepresents a product, consumer protection law applies. When a facial recognition database is assembled without consent, data protection law applies. The legal instruments are not purpose-built for AI, but they are not silent either.

Flexibility preserves space for adaptation. The history of technology law suggests that rigid, technology-specific legislation ages poorly as technologies evolve in ways legislators did not anticipate. Principles-based regulation and adaptive liability frameworks can keep pace with change more effectively than statute books that fix requirements for systems that may not resemble the AI of a decade from now. The UK's sector-led approach, whatever its limitations, reflects a genuine insight: good governance of AI may require regulatory agility more than it requires comprehensive legislation.

Voluntary frameworks and standards are filling gaps. In the absence of binding legislation, organizations such as NIST, ISO, and OECD have produced frameworks and principles that are increasingly adopted as de facto standards. The NIST AI Risk Management Framework has gained substantial international traction. ISO/IEC 42001 for AI management systems provides practical guidance. These instruments cannot enforce compliance, but they establish normative baselines and support organizational governance in ways that may be more responsive to technological change than law.

New regulations are coming. Any assessment of whether current frameworks are sufficient must account for the regulatory momentum already in train. The EU AI Act, the Council of Europe AI Convention, South Korea's AI Framework Act, and China's evolving corpus of AI rules represent substantial additions to the global regulatory architecture. Declaring existing frameworks insufficient may underestimate the degree to which the governance gap is already being closed.

The Case Against: Where Existing Frameworks Fall Short

The arguments for adequacy are real but incomplete. There are structural reasons why existing legal frameworks, even when augmented by new AI-specific regulations, may be inadequate to govern AI systems as they currently exist and are likely to develop.

The liability gap is fundamental. Traditional liability doctrine assigns responsibility to identifiable human or corporate actors who make specific decisions. AI systems, particularly deep learning models that learn from data rather than following explicit instructions, make decisions through processes that their designers may not fully understand or predict. As one analysis has observed, AI's "complexity, modification through updates or self-learning, and limited predictability make it more difficult to determine what went wrong and who should bear liability if it does." Fault-based negligence struggles with algorithmic behavior that could not be foreseen. Even product liability doctrine, which does not require proof of fault, presumes a clear causal "defect," a concept that may not capture systemic AI failures that emerge from the interaction of data, model architecture, and deployment context.

The practical consequences of this gap are already visible. In the US, a recent case that achieved nationwide class certification involved an AI screening system that rejected job applicants on the basis of age, at scale, illustrating how a single biased algorithm can multiply discrimination across hundreds of employers and thousands of applicants simultaneously. Yet the structure of legal liability makes it unclear who bears ultimate responsibility: the vendor who built the system, the employer who deployed it, or neither. Meanwhile, 88% of AI vendors in one analysis were found to impose liability caps limiting their exposure to monthly subscription fees, effectively transferring accountability risk to deployers who cannot audit the systems they are using.
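To see why such caps matter, a deliberately simplified calculation helps; the subscription fee, cap period, and damages figure below are hypothetical illustrations, not numbers drawn from any cited case or contract.

```python
def vendor_exposure_eur(monthly_fee_eur: float, cap_months: int = 12) -> float:
    """Hypothetical cap clause: vendor liability limited to the fees
    paid over a fixed period, as in the contracts described above."""
    return monthly_fee_eur * cap_months

claimed_damages_eur = 2_000_000                    # e.g. a class-wide discrimination claim
capped_recovery_eur = vendor_exposure_eur(5_000)   # EUR 60,000 cap

# The vendor bears 3% of the alleged harm; the rest falls on the
# deployer or goes uncompensated.
print(f"{capped_recovery_eur / claimed_damages_eur:.0%}")  # 3%
```

Under these assumptions, the actor best placed to fix the system bears a small fraction of the harm it can cause, which is precisely the accountability transfer described above.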

Regulatory fragmentation creates arbitrage. AI systems operate across borders; data flows internationally; a model trained in one jurisdiction is deployed in another. Where regulatory requirements diverge — as they currently do dramatically between the EU, US, and China — companies can exploit gaps between systems. A company subject to strict EU rules may structure its operations to minimize EU exposure, or may offer products that would be prohibited in Europe in less regulated markets. The absence of international coordination turns regulatory diversity from a feature into a vulnerability.

AI introduces genuinely novel risks without legal antecedents. Some of what AI makes possible falls entirely outside existing legal categories. Consider large-scale synthetic media capable of fabricating the words and likeness of real individuals; autonomous decision-making systems operating at speeds and scales that preclude meaningful human review; or AI systems that influence public opinion through personalized content feeds in ways that cannot be attributed to any identifiable act of editorial choice. These are not risks that tort law, contract law, or consumer protection statutes were designed to address. Amazon's experimental AI hiring tool, widely documented to have systematically discriminated against female applicants because of bias in its training data, illustrates the problem: existing employment discrimination law can address the outcome but not the structural mechanism that produced it.

Enforcement capacity lags ambition. Even where AI-specific regulation exists, the institutional capacity to enforce it may not. Persistent gaps have been identified in enforceability, proportionality, and auditability under even the EU's flagship AI Act. Small and medium-sized enterprises face compliance asymmetries — substantial obligations but limited resources to meet them. National AI authorities in EU member states were supposed to be designated by August 2025; many are still in the process of becoming operational. The EU AI Office, which handles GPAI oversight, is a new and still-developing institution. Regulation without enforcement is aspiration rather than governance.

The Innovation-Regulation Dilemma

No honest assessment of AI legal frameworks can avoid the political tension that shapes them: the regulation of transformative technology always involves trade-offs between enabling innovation and preventing harm, and reasonable people disagree about where to strike that balance.

The EU AI Act has attracted significant criticism from the technology industry and some member state governments on the grounds that its compliance requirements — estimated at an average of €29,277 per AI product in an EU impact assessment — impose disproportionate costs, particularly on smaller developers. The proposed Digital Omnibus amendments, seeking to simplify implementation, reflect this pressure. At the same time, critics from the opposite direction argue that the Act's prohibitions do not go far enough and that high-risk requirements are too easily circumvented.

In the United States, the Trump administration's deregulatory posture reflects a genuine policy view that innovation primacy should govern AI governance, and that premature regulation risks ceding technological leadership to competitors. The proposed 10-year moratorium on state AI laws — which would, if enacted, replace emerging state accountability mechanisms with no accountability mechanism at all — illustrates how the innovation argument can be used to justify regulatory vacuums rather than regulatory balance.

The evidence from other technology sectors suggests that this dilemma is not resolvable through a single formula. The GDPR, widely viewed as having imposed excessive burdens on businesses at its introduction, is now broadly accepted as having set standards that improved data practices globally — while also creating a "Brussels Effect" through which European privacy norms were adopted by companies serving global markets. A similar dynamic may eventually emerge from the EU AI Act, if it survives its implementation challenges.

What is clear is that treating regulation and innovation as inherently opposed is a false binary. The quality of regulation determines whether it enables or stifles innovation; poor regulation can harm both. The goal of AI governance should not be to minimize regulation but to build regulatory systems capable of distinguishing harms worth preventing from innovation worth enabling.

The Global Governance Gap

Perhaps the most fundamental inadequacy of current legal frameworks is structural rather than substantive: AI is a global technology, but its governance remains resolutely national.

AI systems are trained on data sourced from across the world, deployed in multiple jurisdictions simultaneously, and capable of producing effects — on democratic processes, labor markets, public discourse, and individual rights — that cross borders as easily as data packets. Yet the regulatory instruments designed to govern these effects are national laws, drafted within national legal traditions, enforced by national authorities, and subject to national political pressures. The result is a global governance gap: a space between the transnational reality of AI and the territorial limits of law.

International soft-law frameworks — the OECD AI Principles, the G7 Hiroshima AI Process, UNESCO's AI Ethics Recommendation — provide normative guidance but lack enforcement mechanisms. The Council of Europe AI Convention is legally binding for signatories but technology-neutral and thus limited in its prescriptive reach. The UN has adopted resolutions on AI but has not established enforceable governance mechanisms. Meanwhile, the US and China, the two dominant AI powers, are engaged in explicit competition to set global norms — a competition whose outcome will depend not only on the quality of their respective governance proposals but on their geopolitical influence.

As scholars have noted, the borderless nature of AI presents challenges that necessitate global regulatory coordination. Without it, diverging national policies create inconsistencies in safety standards, ethical considerations, and liability rules — and provide opportunities for regulatory arbitrage that can undermine the effectiveness of even well-designed national frameworks.

Enough for Today, But Not for Tomorrow

The most honest answer to the question posed in this article's title is that current legal frameworks are partially sufficient — and that their sufficiency is declining.

For the risks that AI currently poses in its most common forms — algorithmic bias in hiring or lending, privacy violations through data misuse, consumer manipulation through personalized content — existing frameworks, when properly applied and augmented by new AI-specific regulations, provide a meaningful baseline of protection. The EU AI Act's prohibitions, the GDPR's data rights, employment discrimination law, and consumer protection statutes collectively address a substantial range of current AI harms.

But the baseline is eroding. AI capabilities are advancing faster than regulatory systems are adapting. The emergence of autonomous agentic AI systems capable of taking consequential actions without meaningful human oversight; the development of AI models with capabilities approaching or exceeding human expertise in specialized domains; the integration of AI into military systems and critical infrastructure — these developments are moving governance challenges beyond the reach of frameworks designed for a simpler technological environment. The responsibility gap, the enforcement gap, and the international coordination gap are all widening, not closing.

There is an uncomfortable asymmetry at the heart of AI governance: the systems being regulated are designed to improve with scale and iteration; the regulatory institutions governing them are not. Addressing this asymmetry is the defining challenge of AI law and policy for the next decade.

Toward Better Frameworks: Recommendations for the Future

Current legal frameworks do not need to be abandoned; they need to be adapted, augmented, and coordinated. The following recommendations reflect a growing convergence in academic and policy literature on what more adequate AI governance would require.

International governance mechanisms. The fragmented international landscape demands coordination mechanisms with real teeth. The Council of Europe AI Convention provides a template for how binding international obligations on AI governance could be structured without prescribing specific technologies. A broader multilateral framework — drawing on the institutional models of the International Atomic Energy Agency or the Financial Stability Board — could establish common baseline standards for AI safety, mandatory incident reporting, and mutual recognition of conformity assessments across jurisdictions. Achieving such a framework will require political will that is currently absent, but the institutional groundwork can and should begin now.

Adaptive regulatory mechanisms. Regulatory systems designed for static technologies are poorly suited to AI, which evolves continuously through retraining, fine-tuning, and deployment in new contexts. Regulatory sandboxes — already mandated under the EU AI Act and piloted in the UK — provide one mechanism for enabling controlled experimentation while maintaining oversight. More fundamentally, regulatory frameworks need to build in review cycles, sunset clauses, and institutional mandates for continuous assessment that allow rules to keep pace with capability changes.

Mandatory transparency and disclosure. Opacity is one of the core challenges of AI governance: without visibility into how AI systems work, regulators cannot assess risk, individuals cannot exercise rights, and accountability cannot be established. Mandatory disclosure requirements — for training data sources, model architectures, known failure modes, and deployment contexts — should be a baseline expectation for all high-stakes AI systems, not a feature available only to regulators with the resources to demand it. The EU AI Act's training data summary requirement for GPAI models represents a step in this direction; it needs to become a global standard.
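As a sketch of what baseline disclosure could look like if made machine-readable, the snippet below defines a hypothetical record covering the categories just listed. The field names and example values are assumptions for illustration, not a schema prescribed by the AI Act or by any standards body.

```python
from dataclasses import dataclass, field

@dataclass
class ModelDisclosure:
    """Hypothetical disclosure record for a high-stakes AI system,
    mirroring the categories discussed above."""
    model_name: str
    provider: str
    training_data_sources: list[str]   # provenance summaries, not raw data
    architecture_summary: str          # high-level description, not trade secrets
    known_failure_modes: list[str]     # documented error patterns and biases
    deployment_contexts: list[str]
    prohibited_uses: list[str] = field(default_factory=list)

example = ModelDisclosure(
    model_name="resume-ranker-v3",
    provider="ExampleCorp",
    training_data_sources=["internal hiring records 2015-2023, anonymized"],
    architecture_summary="gradient-boosted ranking model over parsed CV features",
    known_failure_modes=["lower scores for non-traditional career paths"],
    deployment_contexts=["first-pass CV triage with mandatory human review"],
    prohibited_uses=["fully automated rejection without human oversight"],
)
```

A registry of records along these lines would give regulators, auditors, and affected individuals a common starting point without requiring disclosure of model weights or trade secrets.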

Fit-for-purpose AI liability regimes. The liability gap documented in this article cannot be closed by stretching traditional tort doctrine. New liability instruments — including strict liability for high-risk AI applications, mandatory insurance requirements, shared liability frameworks that apportion responsibility across developers and deployers, and reversal of the burden of proof where AI opacity makes it impossible for harmed individuals to demonstrate causation — are needed. Several US states have begun experimenting with such approaches; federal legislation, and eventually international coordination, should follow.

Multidisciplinary governance. Legal regulation alone is insufficient for AI governance. Technical standards, certification systems, independent auditing requirements, and ethics review processes must be integrated into the governance architecture alongside legal rules. ISO/IEC 42001, the NIST AI Risk Management Framework, and sector-specific technical standards provide building blocks; their integration with legal enforcement mechanisms is the missing link. Academic researchers, civil society organizations, and affected communities must also be incorporated into governance processes currently dominated by government and industry actors.

These recommendations do not offer a blueprint; AI governance is too contextually variable for that. But they point toward the architecture of a more adequate system — one that begins from the recognition that the current patchwork, however valuable as a starting point, was not designed for the technology it now governs, and will require sustained institutional investment to be made fit for purpose.

The question is not whether AI law needs to change. It does. The question is whether policymakers will build the governance capacity to anticipate what is coming, or whether they will continue reacting to each crisis after it has already materialized.

© 2026 Law of Tomorrow. This work is licensed for non-commercial use with attribution.
