How China's AI Policies Challenge the World

17.03.2026


The Global AI Governance Race

Artificial intelligence has become one of the defining policy questions of the twenty-first century. Within the span of a few years, it has moved from a niche research concern to a matter of active legislative urgency across every major economy. Governments are no longer asking whether to regulate AI, but how — and who will set the terms first.

The divergence in answers is striking. The European Union has positioned itself as the standard-bearer of rights-based, precautionary regulation, codifying its values into the world's first comprehensive horizontal AI law. The United States, particularly under the current Trump administration, has doubled down on an innovation-first posture, treating market competition and technological dominance as the primary organizing principles of AI policy. And China — governed by a single-party state with distinct developmental priorities — has constructed an elaborate, state-guided governance architecture that defies easy comparison with either Western model.

This divergence is not merely technical. It reflects deeper disagreements about the relationship between the state and its citizens, between security and freedom, between national interest and global norms. As AI systems increasingly underpin critical infrastructure, information environments, and economic competition, the governance choices made in Beijing, Brussels, and Washington will have consequences far beyond their borders.

Understanding China's regulatory model is therefore not optional for any serious student of global AI policy. It is, in many ways, the most consequential governance experiment currently underway — in scale, speed, and ambition — and one whose global implications remain underappreciated in Western policy discourse.

China's AI Policy Framework: A Layered Architecture

China's approach to AI governance has developed through a multi-layered architecture of laws, regulations, administrative measures, and technical standards. Rather than a single comprehensive statute, China has opted for a sectoral, issue-specific regulatory strategy that is dense, enforceable, and expanding rapidly.

The foundational layer consists of three landmark data governance laws: the Cybersecurity Law (CSL), the Data Security Law (DSL), and the Personal Information Protection Law (PIPL). Together, these create a comprehensive legal environment for data handling that shapes how AI systems can be trained, deployed, and operated in China. Cross-border data transfers, a practical necessity for multinational AI operations, face particularly stringent restrictions under this framework.

Built atop this foundation is a series of AI-specific measures targeting discrete risk domains. The Algorithm Recommendation Rules (2022) impose obligations on algorithmic transparency, fairness, and content moderation, alongside a mandatory algorithm filing requirement with the Cyberspace Administration of China (CAC). The Deep Synthesis Regulations (2023) govern the use of AI to generate or alter video, voice, text, and images online. Most significantly, the Interim Measures for Generative AI Services (2023) marked a turning point in China's regulatory posture: providers of generative AI tools accessible to the public must ensure content is lawful, register their models with regulators, display filing numbers, and comply with content governance requirements.

Since these measures came into force, the CAC has approved and registered hundreds of generative AI platforms, including DeepSeek and Baidu's Ernie Bot, demonstrating that China's regulatory machinery is actively shaping the market rather than merely observing it.

The regulatory architecture continued to deepen in 2024 and 2025. In September 2024, the National Technical Committee 260 (TC260) published the AI Safety Governance Framework 1.0, establishing a lifecycle approach to AI risk management and classifying AI risks into inherent and application-related categories. In September 2025, mandatory labelling measures for AI-generated content took effect, requiring visible AI symbols on chatbot outputs, synthetic voices, and digitally altered images — backed by a binding national technical standard. Most recently, amendments to the Cybersecurity Law that came into effect in January 2026 brought AI formally into national law for the first time, enshrining support for AI research and development alongside risk governance obligations.

The institutional backbone of this system is the Cyberspace Administration of China, which functions as the primary regulator for online AI services, algorithm registration, and content governance. The CAC's broad mandate — combined with the involvement of the Ministry of Industry and Information Technology, the Ministry of Public Security, and various standards bodies — creates an overlapping enforcement environment that is comprehensive by design.

As of April 2025, China had filed over 1.57 million AI patents, representing 38.6% of the global total. This is not a country regulating from the sidelines; it is a jurisdiction that is simultaneously one of the world's most prolific AI producers and one of its most active regulators.

The Regulatory Philosophy: Control, Development, and Stability

To understand China's AI regulation, one must understand its underlying philosophy — which differs from Western frameworks in ways that go beyond technical detail.

China's regulatory model is best described as state-centered developmental governance. AI is treated not merely as a commercial product requiring consumer protection, nor as a technology that poses potential rights violations, but as a socio-technical infrastructure of national significance, requiring central coordination to serve strategic objectives. As one authoritative framing puts it, China has built its regulatory framework through "legislation first, ethical guidance and classified governance," with the key tension being how to balance technology innovation with risk control.

Three core characteristics define this philosophy.

First, integration of innovation and control. China does not treat regulation and development as opposites. The state actively promotes AI through industrial policy — the "AI Plus" initiative, announced in August 2025, sets targets of 70% penetration for AI agents and intelligent terminals by 2027, rising to 90% by 2030. Simultaneously, regulation is used to govern the deployment of these technologies. This dual-track approach reflects a view that the state, rather than the market or civil society, is the appropriate arbiter of how innovation proceeds.

Second, social stability as a regulatory objective. Chinese AI law is distinctly attentive to the risks that AI poses to public order, national security, and social cohesion — concerns that do not appear in EU or US regulatory frameworks in the same form. Generative AI providers must ensure their outputs align with "core socialist values." Algorithmic applications with the capacity to mobilize public opinion or shape social consciousness are subject to stricter oversight. In December 2024, the Ministry of Public Security announced criminal enforcement actions against individuals who used AI tools to fabricate rumors or spread disinformation — illustrating how AI governance intersects with China's broader information control apparatus.

Third, technological sovereignty. China's regulatory choices are inseparable from its geopolitical posture. The requirement that cross-border data transfers receive government approval, the algorithm registration system, and the state's review of generative AI models before public deployment all serve a dual function: they govern risk and they secure domestic control over AI infrastructure. China's stated ambition to achieve global AI leadership by 2030 — articulated in the 2017 New Generation AI Development Plan — provides the strategic context in which all regulatory decisions are made.

This philosophy produces a governance model that is proactive, dense, and capable of rapid deployment — but also one that embeds state oversight as a structural feature of the AI ecosystem, not an external constraint upon it.

Comparative Perspectives: EU, US, and China

The three major regulatory models now competing for global influence embody fundamentally different answers to the same question: what is AI governance for?

The European Union has produced the world's first comprehensive horizontal AI law. The EU AI Act, which entered into force in August 2024 and will be fully applicable by 2026–2027, takes a risk-tiered approach. AI systems are classified as posing unacceptable, high, limited, or minimal risk, with escalating obligations at each level. Unacceptable-risk applications — including government social scoring systems and certain real-time biometric surveillance tools — are prohibited outright. High-risk systems, deployed in healthcare, hiring, or critical infrastructure, must satisfy stringent requirements around data governance, transparency, human oversight, and conformity assessment. As one European Parliament member observed in a widely cited remark, regulation is not merely about rules but about expressing values. The EU AI Act is, above all, a values project — an assertion that AI development in Europe must be accountable to fundamental rights frameworks.

The United States, by contrast, has no comparable federal AI statute. Regulation proceeds through sector-specific agencies — the FDA for medical AI, the FTC for consumer protection, the NHTSA for autonomous vehicles — with voluntary frameworks like NIST's AI Risk Management Framework providing broader guidance. At the state level, the picture is active but fragmented: over 500 AI-related bills were introduced across 42 states in 2025 alone, covering facial recognition, hiring algorithms, and deepfakes. The Trump administration's current posture is overtly innovation-first, with the newly established White House Office of Strategic AI Innovation charged with maximizing US global AI leadership, and federal agencies directed to prioritize enforcement only in cases of clear harm. Where the EU asserts regulatory leadership as a form of normative power, the US relies on its technological and commercial dominance to set de facto standards.

China occupies a third position distinct from both. Its framework combines the regulatory ambition of the EU — dense, enforceable, comprehensive — with the developmental orientation of industrial policy, and overlays it with the state's particular interest in information control and social stability. Where the EU frames AI regulation primarily through the lens of individual rights, and the US frames it through the lens of market competition, China frames it through the lens of national governance. Centralized decision-making enables rapid regulatory responses; the absence of multi-party deliberation eliminates friction that slows European and American processes.

One practical consequence of this difference is pace. China has issued as many national AI requirements in the first half of 2025 as it did in the preceding three years. No democratic regulatory system — with its consultations, legislative procedures, and judicial review — can match that tempo. Whether this represents an advantage or a vulnerability depends on one's prior assumptions about what governance is for.

How China's AI Policy Challenges the World

China's AI governance model poses a challenge to the international order in at least four distinct ways.

As an alternative model. Western AI policy discourse has long assumed, often implicitly, that liberal democratic governance frameworks represent the appropriate template for AI regulation globally. China's emergence as a prolific regulator disrupts that assumption. A state can regulate AI rapidly, comprehensively, and in ways that shape market behavior without adopting rights-based frameworks or multi-stakeholder deliberation. China offers a third model, and its existence complicates efforts to build global consensus around any single governance paradigm.

Through international standard-setting. China has been systematically expanding its presence in international standard-developing organizations, including the International Telecommunication Union, the International Organization for Standardization, and the International Electrotechnical Commission. Researchers have linked this growing presence to a broader effort to advance China's normative vision of digital governance. China does not yet dominate these organizations, and evidence of undue influence is limited. But its expanding footprint could allow it to blunt efforts to encode democratic norms into global technical standards, even without actively promoting its own alternatives.

Through commercial and technological influence. China's policy of producing free or cheap open-weight AI models means that companies around the world are increasingly building services on Chinese AI. When developers adopt Chinese models, they inherit the governance assumptions embedded in those systems. Technology transfer under Belt and Road Initiative digital cooperation agreements extends China's AI governance practices into developing economies, creating dependencies that may shape future regulatory orientations.

Through its Global AI Governance Action Plan. In July 2025, China announced a 13-point Global AI Governance Action Plan at the World AI Conference in Shanghai. Published just days after the US released its own AI Action Plan, China's document positions the United Nations as the principal venue for global AI governance, frames rulemaking around "bridging the digital divide" and "inclusive benefit-sharing," and calls for standardization processes in which developing countries have a meaningful voice. In a pointed formulation, China's Global AI Governance Initiative declares opposition to "drawing ideological lines or forming exclusive groups to obstruct other countries from developing AI" — a thinly veiled critique of US semiconductor export controls. The Action Plan is simultaneously a policy document and a geopolitical positioning exercise, seeking to claim the mantle of "AI for Good" for a governance model that serves China's own interests.

Comparative Evaluation

China's regulatory model attracts substantial criticism from scholars, civil society organizations, and foreign governments — criticisms that are important to take seriously.

The most fundamental critique concerns freedom of expression. China's requirement that AI outputs align with "core socialist values" is not a technical safety standard — it is a content restriction that extends the state's existing information control apparatus into AI-generated content. When generative AI must comply with politically defined content norms, the technology becomes an extension of state ideology rather than a neutral infrastructure. This is a qualitatively different kind of AI governance than that practiced in liberal democracies.

Transparency is limited. While China's algorithm registration system and model filing requirements are genuinely rigorous, they are administered by and disclosed to regulatory authorities rather than to the public, civil society, or independent auditors. The EU AI Act, by contrast, mandates public disclosures, independent conformity assessments, and civil society access to information. The absence of external oversight mechanisms in China's system means that regulatory compliance cannot be independently verified.

Surveillance normalization. Some scholars argue that China's AI governance framework actively normalizes the use of advanced algorithms in surveillance operations. While the EU AI Act imposes strict restrictions on real-time biometric surveillance in public spaces, China's framework does not prohibit such applications, and the state is itself a major deployer of AI surveillance technologies. This represents not merely a regulatory gap but a deliberate policy choice that is difficult to reconcile with the human rights principles advanced by the OECD, UNESCO, and the Council of Europe.

Challenges for foreign companies. The combination of mandatory algorithm registration, pre-deployment security assessments for generative models, strict data localization requirements, and content governance obligations creates a compliance environment that is uniquely burdensome for foreign AI companies. The requirement that AI systems capable of influencing public opinion register with the CAC — combined with rules classifying training data under broad "important data" categories that cannot leave China without government approval — creates structural advantages for domestic incumbents.

Limited bottom-up feedback. Domestic research has noted that China's governance model produces inconsistencies, with strong central directives coexisting with uneven regional implementation and limited mechanisms for feedback from those most affected by AI deployments. The absence of meaningful civil society input into regulatory design is not merely a democratic deficit — it is a functional limitation that may reduce the quality and adaptability of governance over time.

On the other hand, a balanced analysis requires acknowledging what China's model does well — and several scholars and policy observers have made this case compellingly.

Regulatory speed and coherence. Fragmented regulatory systems — whether the EU's member-state patchwork or the US's federal-state dissonance — struggle to keep pace with AI development. China's centralized system can identify a risk, design a response, consult (in a limited form), and implement a rule more quickly than any democratic counterpart. In a technology environment where regulatory lag is itself a governance failure, this matters.

Practical enforceability. China's regulatory framework is not merely aspirational. The algorithm registration system has resulted in thousands of active filings; the CAC has approved and registered hundreds of generative AI platforms. Pre-deployment security assessments are conducted and enforced. Penalties for non-compliance are real and include business suspension and criminal liability. Compared to the EU AI Act, whose full provisions will not apply until 2026–2027, China's system has achieved operational density that Western frameworks have not yet matched.

Development-oriented governance. For the Global South, China's framing of AI governance as infrastructure development rather than risk management offers a potentially attractive alternative. China's Action Plan explicitly supports developing countries in establishing AI training centers, provides for technology sharing under open-weight model policies, and frames global AI governance around equitable access rather than regulatory compliance. Whether these are genuine commitments or rhetorical positioning is contested — but the framing resonates in regions that have felt marginalized from Western-led governance conversations.

Precedent value. China has demonstrated that states can mandate AI content labelling, require pre-deployment safety assessments, and register algorithmic systems at scale. Whatever one thinks of the political context, the technical and administrative feasibility of these measures is no longer theoretical. China has served as a real-world test bed for regulatory instruments that other jurisdictions may eventually adopt in modified form.

Strict, But Not Always Visionary: A Final Reflection

China's AI governance system is, by any reasonable measure, among the most extensive in the world. It is dense, enforceable, rapidly evolving, and backed by state capacity that democratic systems cannot easily replicate. It represents a coherent — if politically distinct — answer to the question of how a state should govern transformative technology.

And yet, there is a reasonable case that China's approach is strict without being normatively visionary in the sense that the EU aspires to be. China's AI regulation is primarily oriented toward domestic control and technological competitiveness. Its international governance proposals are sophisticated positioning exercises — but they do not offer a compelling positive vision for how AI should serve humanity beyond the interests of states. The Global AI Governance Action Plan's commitment to "AI for Good" and "bridging the digital divide" is framed in the language of multilateralism, but the underlying logic is one of competing national interests, not shared global governance.

The EU AI Act, whatever its practical limitations, represents a genuine attempt to articulate universal principles — fundamental rights, democratic accountability, human oversight — and encode them in binding legal form. China's framework articulates national priorities and encodes state interests. These are different projects, and conflating them obscures what is genuinely at stake in the governance competition.

What China's model does illuminate is a trajectory that the global AI governance system may increasingly follow, regardless of normative preference: the combination of state-driven innovation incentives with tighter oversight mechanisms, deployed at speed, through centralized institutions. As AI capabilities grow and their societal impacts deepen, the pressure on democratic governments to adopt more active governance postures — whether through mandatory registration, pre-deployment assessments, or content standards — will intensify. The question is not whether states will assume greater control over AI infrastructure, but on what normative basis they will do so.

Understanding China's regulatory choices — their logic, their limitations, and their global influence — is therefore essential preparation for anticipating the next phase of AI governance. The governance race is not only about who builds the best models; it is also, increasingly, about whose regulatory architecture shapes the world in which those models operate.

© 2026 Law of Tomorrow. This work is licensed for non-commercial use with attribution.
