Law of Tomorrow: AI Policy Tracker
Law of Tomorrow · AI Policy Intelligence · Last reviewed: March 2026 (reflects publicly available sources)
AI Governance Intelligence · 2025–2026


A structured map of the current AI legal and regulatory landscape across key jurisdictions worldwide. This tracker organises enacted legislation, national AI strategies, executive directives, and regulatory guidance to help policymakers, legal practitioners, researchers, and organisations understand where the law stands today — and where it is heading.

Covers: Binding law · National AI strategy · Regulatory guidance · Soft law · Sector-specific frameworks · Intergovernmental instruments | 19 jurisdictions & international frameworks featured

19 jurisdictions tracked · 6 binding AI laws in force · 5 international frameworks
Quick Overview

The Global AI Policy Landscape at a Glance

How the featured jurisdictions break down by regulatory posture, legal form, and enforcement readiness.

  • 6 binding AI-specific laws in force: EU AI Act, China sectoral rules, South Korea Basic AI Act, Japan AI Promotion Act, Singapore PDPA sector rules
  • 8 primarily soft-law / strategy-led: USA, UK, Canada, Hong Kong, India, UAE, Australia, African Union
  • 5 active international frameworks: Council of Europe Convention, OECD Principles, G7 Hiroshima Process, UN Global Digital Compact, UNESCO Recommendation
  • 5 draft AI laws under development: Türkiye, Canada (AIDA), Brazil, India (Digital India Act), Australia

Most Active Policy Leaders (2024–2026)

🇪🇺 European Union 🇨🇳 China 🇺🇸 United States 🇬🇧 United Kingdom 🇸🇬 Singapore 🇹🇷 Türkiye 🇨🇦 Canada 🇰🇷 South Korea 🇯🇵 Japan 🇦🇪 UAE
Global Framework

AI Policy World Map

Map of tracked jurisdictions by policy status. Each marker corresponds to a full jurisdiction profile below.

Legend: Binding Law in Force · Active Strategy / Soft Law · Draft Legislation · Intergovernmental / Emerging

Jurisdiction Tracker

Country & Region Profiles

Each jurisdiction profile below includes key instruments, binding status, approach notes, and direct official source links, organised by region and regulatory posture.

🇺🇸
United States
Federal Framework · Sector-Specific Regulation
Strategy-Led

The United States leads globally in total AI policy volume but has no single federal AI act. Governance is distributed across federal agencies, with the NIST AI Risk Management Framework as the primary voluntary backbone. The Biden Executive Order on AI (Oct 2023) directed agency-level risk assessments; the Trump administration (Jan 2025) revoked much of this and issued an AI Action Plan prioritising competitiveness and deregulation. A December 2025 executive order sought to establish a national AI policy framework and preempt conflicting state laws. Sector-specific rules in finance, healthcare, and critical infrastructure remain the main binding layer. States are filling the federal gap: Colorado’s AI Act (effective June 2026), California SB 53 (effective January 2026), and Texas TRAIGA (effective September 2025) are among the most significant.

Key Policy Instruments
  • Executive Order 14110 on Safe, Secure, and Trustworthy AI (Biden, Oct 2023) — partially revoked 2025 · Exec. Order
  • Presidential Action on AI (Trump, Jan 2025) — Removing Barriers to American AI Leadership · Exec. Order
  • EO on Ensuring a National Policy Framework for AI (Trump, Dec 2025) — state preemption · Exec. Order
  • NIST AI Risk Management Framework (AI RMF 1.0, 2023) — voluntary, widely adopted · Guidance
  • NIST Generative AI Profile (NIST AI 600-1, 2024) · Guidance
  • National AI Initiative Act 2020 — statutory basis for federal AI coordination · Law
  • Colorado AI Act (SB 24-205) effective June 2026 — algorithmic discrimination protections · State Law
  • Texas TRAIGA — effective September 2025; California SB 53 — effective January 2026 · State Law
  • FTC, CFPB, SEC sector guidance on AI in consumer-facing applications · Sector Guidance
Current Approach

A voluntary national framework (NIST AI RMF), sector-specific binding rules, and executive-order-driven federal agency mandates. No unified federal AI Act. The 2025 Trump AI Action Plan prioritises US competitiveness and deregulation. State legislation is increasingly filling the federal legislative gap.

What to Watch: Federal preemption debate over state AI laws following Dec 2025 EO. Colorado and California AI act enforcement from 2026. FTC AI enforcement actions. NIST framework revision cycles.
🇪🇺
European Union
Supranational · Horizontal Binding Regulation
In Force / Phasing

The EU AI Act (Regulation 2024/1689) entered into force on 1 August 2024 and is the world’s first comprehensive AI law. It applies to providers and deployers of AI systems in the EU regardless of provider location, giving it significant extraterritorial scope. The regulation classifies AI systems across four risk tiers: unacceptable-risk (prohibited from August 2025), high-risk (strict conformity assessment requirements), limited-risk (transparency obligations), and minimal-risk (no specific obligations). General-purpose AI models face transparency and copyright obligations, with additional systemic-risk requirements for the most powerful models. Fines reach €35 million or 7% of global annual turnover. The GPAI Code of Practice was finalised in August 2025. The Digital Omnibus Package (November 2025) proposed simplification amendments including a delayed high-risk enforcement timeline to as late as December 2027 to allow standards to be finalised.
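The penalty ceiling works on a greater-of basis: the fixed amount or the turnover percentage, whichever is higher. A minimal sketch of that arithmetic (the function name and turnover figures are illustrative; the €35 million / 7% tier applies to the most serious, prohibited-practice violations, with lower tiers for other breaches):

```python
def ai_act_max_fine(annual_turnover_eur: float) -> float:
    """Upper bound of an EU AI Act fine for the most serious
    (prohibited-practice) violations: EUR 35 million or 7% of
    global annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * annual_turnover_eur)

# For a provider with EUR 1bn global turnover, the 7% prong governs:
print(ai_act_max_fine(1_000_000_000))   # 70000000.0
# For a smaller provider, the fixed EUR 35m ceiling governs:
print(ai_act_max_fine(50_000_000))      # 35000000.0
```

The greater-of structure means the ceiling scales with firm size rather than capping at a fixed amount, mirroring the GDPR's penalty design.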

Key Policy Instruments
  • EU AI Act (Regulation 2024/1689) — in force August 2024, full application phased to 2026–2027 · Binding Law
  • GPAI Code of Practice — finalised August 2025, applies to general-purpose AI providers · Binding Standard
  • Digital Omnibus Package (Nov 2025) — proposed simplification amendments, delays for SMCs · Draft Amendment
  • EU AI Liability Directive (proposed) — civil liability for AI-related harm · Draft Law
  • GDPR — applies to AI personal data processing · Binding Law
  • EU AI Office — supervisory body for GPAI and systemic-risk enforcement · Regulator
  • Coordinated Plan on Artificial Intelligence (2021, updated 2024) · Strategy
Current Approach

Comprehensive horizontal risk-based regulation. Sets the global legislative benchmark. Enforced by national market surveillance authorities and the EU AI Office.

What to Watch: Digital Omnibus Package outcome and revised high-risk deadlines (possibly December 2027). EU AI Liability Directive progress. EU AI Office first enforcement actions. GPAI systemic-risk threshold implementation.
🇬🇧
United Kingdom
Sector-Regulator Model · Pro-Innovation
Strategy + Guidance

The UK deliberately avoided a single AI Act, instead tasking existing regulators — FCA (financial services), ICO (data/privacy), CMA (competition), MHRA (medicines) — to apply five cross-cutting AI principles within their domains: safety and security; transparency and explainability; fairness; accountability and governance; contestability and redress. The AI Safety Institute (AISI) evaluates frontier models for systemic risks. The 2025 AI Action Plan commits significant national compute investment and embeds AI across public services. The Data (Use and Access) Act 2025 updated data governance including AI training data exemptions. As of late 2025, the Government was considering converting AISI into a legal entity with binding powers, and signalled possible AI liability legislation.

Key Policy Instruments
  • AI Regulation White Paper: A Pro-Innovation Approach to AI Regulation (2023) · Policy
  • AI Action Plan 2025 — compute investment, public sector AI adoption · Strategy
  • AI Safety Institute (AISI) — frontier model evaluation, Bletchley Declaration signatory · Regulator Body
  • Data (Use and Access) Act 2025 — updated data governance including AI training exemptions · Binding Law
  • ICO Guidance on AI and Data Protection (2024 refresh) · Guidance
  • FCA AI Discussion Paper (DP5/22) and ongoing Dear CEO letters on AI governance · Sector Guidance
  • Online Safety Act 2023 — AI-generated content and online harm · Binding Law
Current Approach

Principles-based, sector-led, pro-innovation. No dedicated AI Act yet. Coordination through the Digital Regulation Cooperation Forum (DRCF). Internationally active through AISI, G7, and G20 AI governance forums.

What to Watch: Whether the UK enacts primary AI legislation in 2026. AISI gaining binding statutory powers. FCA AI governance enforcement for financial firms. UK-EU AI governance cooperation post-Brexit.
🇹🇷
Türkiye
National Strategy Active · Draft Legislation
Strategy + Draft Law

Türkiye is among the world’s most prolific AI policy producers by OECD count, with 36 documented AI initiatives as of 2024, placing it in the global top 10. The National Artificial Intelligence Strategy 2021–2025 sets targets across six axes: R&D and innovation, qualified workforce, data and infrastructure, regulatory adaptation, international cooperation, and public sector adoption. KVKK (under Law No. 6698) has published guidance on automated decision-making relevant to AI. BTK oversees digital platforms including AI-enabled services. Public Sector Generative AI Use Guidelines were issued in 2024. A draft AI Act reflecting EU AI Act principles is under parliamentary consideration. Türkiye’s EU candidacy creates a strong incentive for regulatory convergence.

Key Policy Instruments
  • National Artificial Intelligence Strategy 2021–2025 (Ulusal Yapay Zeka Stratejisi) · Strategy
  • Personal Data Protection Law No. 6698 (KVKK) — automated processing provisions · Binding Law
  • KVKK Guidance on Automated Decision-Making under Article 11 · Guidance
  • BTK Digital Platform Regulations — covers AI-enabled content services · Regulation
  • Public Sector Generative AI Use Guidelines (2024) — Presidential Digital Transformation Office · Guidance
  • Draft AI Act (under preparation, EU-aligned, as of 2025) · Draft
Current Approach

Strategy-driven with growing regulatory infrastructure. KVKK provides partial AI coverage through data protection law; BTK covers digital platforms. The draft AI Act, aligned with EU standards, would provide dedicated coverage once enacted.

What to Watch: Parliamentary progress of the draft AI Act. Implementation of 2021–2025 National Strategy deliverables. KVKK interpretive guidance on AI systems processing personal data. EU AI Act alignment during Türkiye’s accession process.
🇨🇳
China
Binding Sectoral Rules · National AI Strategy
Multiple Rules In Force

China has enacted multiple binding AI-specific administrative regulations targeting distinct application layers. Algorithmic recommendation services (2022), deep synthesis/deepfake technology (2023), and generative AI services (2023) are all regulated with specific compliance obligations, including mandatory pre-market security assessments filed with the Cyberspace Administration of China (CAC). Measures for Labeling AI-Generated and Synthetic Content (September 2025) require clear disclosure across all digital platforms. An amended Cybersecurity Law explicitly referencing AI became enforceable on 1 January 2026, adding AI security review and data localisation requirements. A draft regulation targeting Anthropomorphic AI was published for consultation in December 2025. China’s national AI strategy targets global AI leadership by 2030. The Global AI Governance Action Plan (July 2025) sets out China’s vision for international AI norms.

Key Policy Instruments
  • Measures for the Management of Algorithmic Recommendations (2022) · Binding Regulation
  • Measures for the Management of Deep Synthesis Internet Information Services (2023) · Binding Regulation
  • Interim Measures for Generative Artificial Intelligence Services (Aug 2023) · Binding Regulation
  • Measures for Labeling AI-Generated and Synthetic Content (Sep 2025) · Binding Regulation
  • Cybersecurity Law (amended, effective Jan 2026) — AI security reviews and data localisation · Binding Law
  • Draft Regulation on Anthropomorphic AI — AI Companions (Dec 2025 consultation) · Draft
  • Global AI Governance Action Plan (Jul 2025) · Strategy
  • New Generation AI Development Plan (2017, 2030 roadmap) · Strategy
  • Personal Information Protection Law (PIPL, 2021) · Binding Law
Current Approach

Layer-by-layer enforceable regulations for each AI application category, combined with pre-market security assessment requirements and mandatory content labelling. Primary regulator: Cyberspace Administration of China (CAC). Simultaneously a major AI technology developer with state-backed industrial policy.

What to Watch: Comprehensive AI Law development expected to consolidate existing regulations. Finalisation of the Anthropomorphic AI regulation. China’s AI standardisation efforts at ISO/IEC level. CAC enforcement actions against generative AI providers.
🇯🇵
Japan
Innovation-First Approach · Light-Touch Regulation
AI Promotion Act In Force

Japan enacted the AI Promotion Act in May 2025, a landmark but notably light-touch regulation. Rather than imposing rigid, punitive mandates, the Act encourages companies to cooperate with government safety measures and empowers the government to publicly name companies that use AI to violate human rights (‘name and shame’ mechanism). This ‘innovation-first’ approach prioritises adoption over strict safety guarantees, contrasting with the EU model. The Act is backed by earlier voluntary frameworks: Social Principles of Human-Centric AI (2019) and AI Guidelines for Business Version 1.0 (April 2024). Japan also signed the Council of Europe AI Convention as the only Asian observer state. The Japan AI Safety Institute (JAISI) was established to conduct AI safety evaluation research. Japan leads through the G7 Hiroshima AI Process under its 2023 G7 presidency.

Key Policy Instruments
  • AI Promotion Act (enacted May 2025) — ‘name and shame’, cooperation duties, no criminal penalties · Binding Law
  • AI Guidelines for Business Version 1.0 (Apr 2024) — risk-based voluntary framework · Guidance
  • Social Principles of Human-Centric AI (2019) — overarching national AI ethics vision · Strategy
  • Japan AI Safety Institute (JAISI) — safety evaluation and international cooperation · Regulator Body
  • G7 Hiroshima AI Process Comprehensive Policy Framework (co-led, 2023) · International
  • Act on the Protection of Personal Information (APPI) — privacy framework covering AI data · Binding Law
Current Approach

Voluntary-led governance transitioning to a principles-based statutory framework. Non-punitive but non-trivial: reputational pressure and cooperation duties motivate compliance. Japan leads internationally through the G7 Hiroshima AI Process and the Council of Europe AI Convention.

What to Watch: Whether Japan enacts more prescriptive AI legislation targeting foundation model developers. JAISI’s international alignment with UK AISI and EU AI Office. Japan’s copyright law amendments permitting AI training on protected works.
🇰🇷
South Korea
First Comprehensive AI Law in Asia
Basic AI Act In Force (Jan 2026)

South Korea became the first country in Asia to enact comprehensive AI legislation with its Basic AI Act (AI Framework Act), finalised in January 2025 and entering into force on 22 January 2026. The Act establishes a risk-based regulatory framework targeting ‘high-impact AI’ systems including generative AI, with requirements for transparency, safety, and fairness. It creates a national AI control tower, a dedicated AI Safety Institute, and a government AI committee. South Korea is also investing heavily in AI infrastructure, announcing the world’s highest-capacity AI data centre and launching the AI Open Innovation Hub platform. The country’s existing Act on Promotion of Information and Communications Network Utilization and Information Protection, the Personal Information Protection Act (PIPA), and sector rules in finance also apply to AI.

Key Policy Instruments
  • Basic AI Act (AI Framework Act) — enacted Jan 2025, in force 22 January 2026 · Binding Law
  • National AI Control Tower — cross-ministry AI governance coordination · Gov. Body
  • South Korea AI Safety Institute — model safety evaluation and research · Regulator Body
  • Personal Information Protection Act (PIPA) — covers AI training data and profiling · Binding Law
  • AI Open Innovation Hub — national AI development support platform · National Initiative
  • Act on Promotion of Information and Communications Network Utilization (ICTNU) · Binding Law
Current Approach

Risk-based framework law modelled partly on EU principles but with a promotion-first orientation. First in Asia to have a comprehensive AI Act. Combines regulatory oversight with substantial state investment in AI infrastructure and R&D support.

What to Watch: Enforcement of the Basic AI Act from January 2026. High-impact AI system classification guidance. South Korea’s AI Safety Institute operational capacity. Interaction between the Basic AI Act and sector-specific financial and telecom AI rules.
🇸🇬
Singapore / ASEAN
Model Governance Framework · Sector Binding Rules
Governance Framework Active

Singapore has built the most complete voluntary AI governance ecosystem globally. The Model AI Governance Framework (2nd edition, 2020) provides granular guidance on human oversight, decision-making explainability, and responsible AI deployment. The AI Verify testing toolkit enables organisations to make verifiable responsible-AI claims against international standards. The Monetary Authority of Singapore (MAS) has issued enforceable expectations for AI use in financial services through its Technology Risk Management Guidelines. AI Singapore 2.0 (2024) commits S$1 billion in national compute investment. In 2025, Singapore introduced an Expanded Global AI Assurance Sandbox enabling real-world testing of AI applications. Singapore leads ASEAN’s regional AI governance agenda through Project Moonshot, the Expanded Guide on AI Governance and Ethics for Generative AI (January 2025), and the ASEAN Responsible AI Roadmap 2025–2030.

Key Policy Instruments
  • Model AI Governance Framework for Private Sector (2nd ed., PDPC, 2020) · Framework
  • AI Singapore 2.0 — National AI Strategy (2024) — S$1bn compute investment · Strategy
  • MAS Technology Risk Management Guidelines — AI obligations for financial services · Sector Guidance
  • AI Verify Testing Framework and Toolkit (IMDA / AI Verify Foundation) · Governance Tool
  • Expanded Global AI Assurance Sandbox (2025) — real-world AI testing environment · Governance Tool
  • Personal Data Protection Act (PDPA) — covers AI personal data processing · Binding Law
  • ASEAN Responsible AI Roadmap 2025–2030 · Regional Strategy
  • Expanded Guide on AI Governance & Ethics — Generative AI (ASEAN, Jan 2025) · Regional Guidance
Current Approach

Voluntary model framework supplemented by sector-specific binding guidance. AI Verify toolkit enables verifiable responsible AI behaviour, widely used by multinationals. Singapore drives ASEAN-wide AI interoperability and governance alignment.

What to Watch: Whether Singapore moves from voluntary to mandatory requirements. AI Verify adoption across ASEAN. MAS AI governance expectations evolution. ASEAN AI Roadmap deliverables through 2030.
🇮🇳
India
Soft-Law Approach · Draft Digital India Act
Strategy + Emerging Framework

India has adopted a ‘sandbox-to-regulation’ model for AI governance, encouraging innovation while tightening oversight in specific high-harm domains. The Ministry of Electronics and Information Technology (MeitY) issued AI content labelling requirements in 2024, mandating labels on AI-generated content and government approval before launching AI models likely to produce unreliable content. The National Strategy for AI (2018, updated) positions India as an ‘AI garage’ for emerging economies. In late 2025, the Government unveiled the India AI Governance Guidelines to steer safe, inclusive, and responsible adoption. The Digital Personal Data Protection Act (DPDPA) 2023 has been enacted but is yet to come fully into effect. The comprehensive Digital India Act, which would address AI-generated content governance, remains in the drafting and consultation phase. India’s National AI Programme (IndiaAI Mission) allocates ₹10,372 crore for AI infrastructure through 2028.

Key Policy Instruments
  • National Strategy for AI (NITI Aayog, 2018) — positioned India as ‘AI garage’ · Strategy
  • MeitY AI Content Labelling Requirements (2024) — mandatory labels on AI-generated content · Sector Rule
  • India AI Governance Guidelines (late 2025) — safe and inclusive AI adoption framework · Guidance
  • Digital Personal Data Protection Act (DPDPA) 2023 — enacted, implementation phased · Binding Law
  • Digital India Act (draft, consultation phase) — would include AI content governance · Draft
  • IndiaAI Mission ₹10,372 crore allocation — national AI compute and infrastructure · National Initiative
  • Reserve Bank of India (RBI) and TRAI sector guidance on AI risk mitigation · Sector Guidance
Current Approach

Soft-law first, hard law where harm is evident. MeitY plays primary coordinating role for AI policy. Sector-specific regulators (RBI, TRAI, SEBI) have issued AI guidance in their domains. Comprehensive AI governance framework is expected to emerge through the Digital India Act once enacted.

What to Watch: Digital India Act passage and AI governance provisions. DPDPA implementation rules. MeitY follow-up on AI labelling enforcement. IndiaAI Mission compute infrastructure progress. OECD and G20 AI principles alignment.
🇦🇺
Australia
Safe-by-Design · Voluntary AI Safety Standard
Strategy + Voluntary Standard

Australia has no AI-specific law as of early 2026 but is actively developing a regulatory framework. The Australian Government released a Voluntary AI Safety Standard (2025) outlining 10 guardrails for responsible AI use, particularly for organisations deploying AI in high-risk settings such as health, credit, and employment. The Privacy Act (amended) now requires disclosures about Automated Decision-Making Technology (ADMT) affecting individuals. The Online Safety Codes, developed under the Online Safety Act 2021, impose mandatory safety obligations on platforms including AI-generated content obligations (new age-restricted codes from December 2025). Australia participates in the International Network of AI Safety Institutes (alongside UK, US, EU, Japan, and others). The government has signalled intent to mandate the AI Safety Standard guardrails for high-risk AI uses by mid-2026.

Key Policy Instruments
  • Voluntary AI Safety Standard (2025) — 10 guardrails for responsible AI, phasing to mandatory · Voluntary Standard
  • Privacy Act (amended) — ADMT disclosure requirements for automated decision-making · Binding Law
  • Online Safety Act 2021 and Online Safety Codes (updated Dec 2025) — AI content obligations · Binding Law
  • National AI Centre — government AI adoption and guidance body · Gov. Body
  • Australia’s National Artificial Intelligence Strategy — foundational policy framework · Strategy
  • International Network of AI Safety Institutes membership (2024) · International
Current Approach

Voluntary-led with binding sector rules in online safety and privacy. Government has flagged mandatory guardrails for high-risk AI by mid-2026. Australia is positioning itself as a ‘trusted AI’ jurisdiction through AI Safety Institute cooperation.

What to Watch: Whether the Voluntary AI Safety Standard becomes mandatory for high-risk uses. Privacy Act ADMT disclosure enforcement. Australia’s AI regulatory framework consultation and legislation timeline. Participation in international AI Safety Institute cooperation.
🇨🇦
Canada
Proposed AI Act · Active Strategy
Bill Under Consideration

Canada is advancing the Artificial Intelligence and Data Act (AIDA), introduced as Part 3 of Bill C-27 in June 2022. AIDA would establish obligations for ‘high-impact’ AI systems, require harm risk mitigation, mandate transparency disclosures to individuals affected by AI decisions, and create a new AI and Data Commissioner. Bill C-27 remained under parliamentary scrutiny as of 2025. In parallel, the Directive on Automated Decision-Making (updated 2023) applies to federal government AI use on a tiered basis. Canada established the AI and Data Standardization Collaborative in 2025 to develop multistakeholder AI standards. The Pan-Canadian AI Strategy supports a globally recognised academic and research ecosystem. The Voluntary Code of Conduct on Responsible Development of Advanced Generative AI (2023) provides an interim soft-law bridge.

Key Policy Instruments
  • Artificial Intelligence and Data Act (AIDA) — Bill C-27 (2022, under parliamentary review) · Draft Law
  • Directive on Automated Decision-Making (Treasury Board, updated 2023) — federal government only · Binding Directive
  • AI and Data Standardization Collaborative (2025) — multistakeholder AI standards development · National Initiative
  • Pan-Canadian AI Strategy (Phase 1 & 2) — research and talent investment · Strategy
  • Personal Information Protection and Electronic Documents Act (PIPEDA) · Binding Law
  • Canadian Voluntary Code of Conduct on Responsible Development of Advanced Generative AI (2023) · Voluntary Code
Current Approach

Federal government AI use is already regulated through the Directive on ADM. Private sector AI regulation awaits AIDA enactment. PIPEDA and OPC guidance provide interim coverage.

What to Watch: AIDA final form and Senate passage. Whether C-27 is split to advance privacy modernisation separately. OPC enforcement actions. AI and Data Standardization Collaborative outputs.
🇧🇷
Brazil
Latin America’s Binding AI Framework
Bill Under Legislative Review

Brazil leads Latin America in AI regulation with Bill No. 2338/2023, which received Senate approval in December 2024 and continued through the legislative process in 2025, including a dedicated committee and public hearings. The bill is closely aligned with the EU AI Act’s risk-based approach, banning ‘excessive risk’ AI systems and establishing strict liability provisions. It proposes a dedicated regulatory authority to oversee compliance and promote innovation, and aligns with Brazil’s General Data Protection Law (LGPD). Core measures are projected to take effect in 2026 once enacted. Brazil has also committed to investing BRL 4 billion in domestic AI capabilities through its AI investment plan (EBIA), and is an active participant in G20 and UN AI governance discussions.

Key Policy Instruments
  • Bill No. 2338/2023 — Senate approved Dec 2024, under further legislative review in 2025 · Draft Law
  • Brazilian Artificial Intelligence Strategy (EBIA) — BRL 4bn investment commitment · Strategy
  • General Data Protection Law (LGPD) — covers AI processing of personal data · Binding Law
  • National Council for Artificial Intelligence (CNAIAE) — advisory governance body · Gov. Body
Current Approach

EU-inspired risk-based approach adapted for Brazil’s domestic context. Risk tiers, strict liability for high-risk AI, and fundamental rights protection are core principles. Regulatory authority to be created upon enactment.

What to Watch: Final passage of Bill 2338/2023 and regulatory authority establishment. Interaction between the AI Bill and the LGPD. Brazil’s G20 AI governance role. Timeline for BRL 4bn AI investment implementation.
🇦🇪
United Arab Emirates
National AI Strategy 2031 · Innovation Hub
Strategy + Institutional Framework

The UAE has steadily built a national AI framework since appointing the world’s first Minister of State for Artificial Intelligence in 2017. The UAE National AI Strategy 2031 outlines goals for integrating AI across education, transportation, health, and other sectors, targeting global AI leadership. The UAE does not yet have dedicated standalone AI legislation, but has established institutional mechanisms including the Artificial Intelligence and Advanced Technology Council (AIATC), the world’s first AI-enabled Regulatory Intelligence Office (April 2025), and the UAE AI Seal to attract ethical AI businesses. Stargate UAE, a 5GW AI data centre initiative in Abu Dhabi (phased from 2026), reflects the country’s strategic AI infrastructure ambitions. In June 2025, the UAE announced that a National AI System will serve as an advisory member of the Cabinet and federal entities from January 2026, a global first for AI in government decision-making.

Key Policy Instruments
  • UAE National AI Strategy 2031 — comprehensive AI integration across sectors · Strategy
  • Artificial Intelligence and Advanced Technology Council (AIATC) — national AI governance body · Gov. Body
  • AI-Enabled Regulatory Intelligence Office (April 2025) — AI-driven law drafting and monitoring · Innovation
  • UAE AI Ethics Guidelines — transparency, accountability, and fairness principles · Guidance
  • UAE AI Procurement Guidelines — government AI system acquisition standards · Guidance
  • DIFC AI Licence — Dubai International Financial Centre AI company licensing · Sector Rule
  • National AI System as Cabinet Advisory Member (from January 2026) · National Initiative
Current Approach

Innovation-first, institution-led approach. No standalone AI law but an expanding institutional and ethical governance framework. UAE positions AI as a critical national asset and strategic economic driver under Vision 2031 and UAE Centennial 2071.

What to Watch: Launch of the National AI System as Cabinet advisory member (January 2026). Progress on DIFC and broader AI licensing frameworks. Stargate UAE data centre build-out. Whether UAE introduces standalone AI legislation as part of Vision 2031 goals.
🇭🇰
Hong Kong SAR
Guidance-Led · Financial Sector Rules
Guidance Active

Hong Kong has taken a principles-based, guidance-led approach to AI governance, relying primarily on existing regulatory frameworks. The Hong Kong Monetary Authority (HKMA) has been the most active regulator, issuing circulars and supervisory guidance requiring banks and financial institutions to maintain responsible AI governance, manage model risk, and ensure explainability of AI-driven credit and risk decisions. The Office of the Privacy Commissioner for Personal Data (PCPD) published a Model Personal Data Protection Framework for AI in June 2024, covering procurement, implementation, and use. The 2023 Policy Address includes AI as a strategic development focus. No standalone AI law was in force as of early 2026.

Key Policy Instruments
  • HKMA Circular on Use of AI (financial institutions) — model risk governance · Sector Guidance
  • HKMA Supervisory Policy Manual (SA-1: Technology Risk Management) · Sector Guidance
  • Personal Data (Privacy) Ordinance (PDPO, Cap. 486) — data processing governance · Binding Law
  • PCPD: Guidance on Ethical Development and Use of AI (2021) · Guidance
  • PCPD: AI Model Personal Data Protection Framework (June 2024) · Guidance
  • SFC Circular on Technology Risk in Asset Management — covers algorithmic trading · Sector Guidance
Current Approach

Primarily sector-regulator-driven with HKMA as the most active AI governance body. PDPO provides data-processing coverage. No horizontal AI act. Government focus is on positioning Hong Kong as an AI innovation hub while maintaining financial sector safeguards.

What to Watch: Whether HK introduces a dedicated AI governance ordinance. HKMA AI governance expectations for banks. PDPO amendments to address generative AI. Interaction with Mainland China’s AI regulations for cross-boundary services.
🌎
African Union
Intergovernmental · Continental Strategy
Continental Strategy

The African Union adopted the Continental Artificial Intelligence Strategy in February 2024 — a landmark intergovernmental framework establishing AI governance principles for all 55 AU member states. The strategy focuses on data sovereignty, digital infrastructure, AI skills development, and active African participation in global AI governance forums. The AU Data Policy Framework (2022) establishes data governance principles applicable to AI training data. The Continental Health Data Governance Framework (2025) extends these to healthcare. Individual member states have varying regulatory maturity: South Africa’s National AI Policy Framework (2024), Rwanda’s National AI Policy (2023), Kenya’s National AI Strategy (2023), Egypt’s National Open Data Policy (Sep 2025), and Nigeria’s National AI Strategy (August 2024 draft) are among the more active developments. Cameroon launched a 7-pillar National AI Strategy in July 2025 with ambitions to become Central Africa’s AI hub by 2040.

Key Policy Instruments
  • African Union Continental AI Strategy (adopted Feb 2024) — 55 member states · Continental Strategy
  • AU Data Policy Framework (2022) — data governance principles for AI context · Framework
  • Malabo Convention on Data Protection (2014, growing ratifications 2023–) — data law basis · Treaty
  • Continental Health Data Governance Framework (2025) · Framework
  • Smart Africa Alliance AI workstream — multi-country AI governance coordination · Intergovt. Body
  • South Africa: National AI Policy Framework (2024); Rwanda: National AI Policy (2023) · Member States
  • Egypt: National Open Data Policy (Sep 2025); Nigeria: National AI Strategy (draft Aug 2024) · Member States
  • Cameroon: National AI Strategy (Jul 2025) — 7-pillar plan, AI hub by 2040 target · Member State
Current Approach

Intergovernmental strategy-setting with voluntary member state implementation. The Continental AI Strategy provides the normative framework and political impetus for national legislation. The AU explicitly frames AI governance within development goals, food security, healthcare, and economic inclusion.

What to Watch: Member state adoption of national AI laws aligned with the Continental Strategy. Malabo Convention ratification progress. AU engagement in UN AI governance processes. South Africa and Nigeria developing more detailed national AI legislation.
🇪🇺
Council of Europe — AI Convention
First Legally Binding International AI Treaty
Adopted & Open for Ratification

The Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (CETS No. 225) was formally adopted by the Committee of Ministers on 17 May 2024. It is the first international legally binding treaty specifically designed to ensure AI activities are consistent with human rights, democracy, and the rule of law. The Convention applies across the entire lifecycle of AI systems and requires signatories to adopt domestic legislation consistent with its principles. It is intentionally ‘open’: beyond the 46 CoE member states, it is available for signature by non-European states and international organisations, reflecting AI governance’s global nature. The EU AI Act is intended to work in harmony with the Convention, with the Act providing detailed market regulations and the Convention providing overarching human rights obligations. The US, UK, EU, and other key players are signatories. Japan participated in drafting as the only Asian observer state.

Key Policy Instruments
  • Framework Convention on AI (CETS No. 225) — adopted 17 May 2024; open for ratification · International Treaty
  • Key principles: human dignity, non-discrimination, privacy, transparency, accountability, reliability · Core Principles
  • Remedies and procedural rights — right to challenge AI decisions, complaint mechanisms · Rights Framework
  • Interoperability mechanism — risk assessments from one jurisdiction potentially recognised in others · Governance Tool
  • Committee on Artificial Intelligence (CAI) — drafting and monitoring body · Intergovt. Body
Current Approach

First legally binding international AI treaty. Signatories commit to adopting domestic legislation conforming to the Convention. The open treaty structure allows global participation. Focuses on AI lifecycle, human rights compliance, and remedy mechanisms rather than technical market standards.

What to Watch: Ratification progress across member states. How the Convention interacts with the EU AI Act in practice. Whether major non-European states ratify. Implementation timelines for domestic conforming legislation.
🌐
OECD — AI Principles & Policy Observatory
Global Standards · 38 Member Countries
Revised Principles Active (2024)

The OECD AI Principles, first adopted in 2019 and revised in May 2024, provide the foundational intergovernmental standards for trustworthy AI across the 38 OECD member countries and further adherent governments, and have also been adopted by the G20. The Principles address transparency, accountability, robustness, safety and security, and human-centred values and fairness. The OECD AI Policy Observatory (oecd.ai) is the primary global database tracking over 1,000 AI policy initiatives across 70+ countries. In 2024, the OECD introduced the G7 Hiroshima Process Reporting Framework — the first global mechanism for organisations to voluntarily report how they implement the G7 Code of Conduct for advanced AI systems. The OECD also published Due Diligence Guidance for Responsible AI (February 2026), providing operational guidance for organisations implementing AI due diligence. The Global Partnership on AI (GPAI), with 44 member countries, operates under OECD auspices.

Key Policy Instruments
  • OECD AI Principles — first adopted 2019, revised May 2024 — 38 OECD member countries plus further adherents · International Standard
  • G7 Hiroshima Process Reporting Framework (2024) — first global AI risk management reporting mechanism · Governance Tool
  • Due Diligence Guidance for Responsible AI (Feb 2026) — operational implementation guidance · Guidance
  • OECD AI Policy Observatory (oecd.ai) — global AI policy tracking database, 70+ countries · Research Tool
  • Global Partnership on AI (GPAI) — 44 member countries, multistakeholder AI research · International Body
Current Approach

Non-binding but highly influential normative framework. OECD AI Principles have directly shaped the EU AI Act, US NIST RMF, South Korea’s Basic AI Act, Council of Europe Convention, and over 40 national AI strategies. The Reporting Framework is the closest thing to a global voluntary AI risk management disclosure standard.

What to Watch: OECD AI Principles further revision cycles as AI capabilities advance. GPAI programming under OECD auspices. Due Diligence Guidance adoption by businesses. Reporting Framework participation growth.
🌐
G7 Hiroshima AI Process
International AI Governance · Voluntary Code of Conduct
Comprehensive Policy Framework Active

The G7 Hiroshima AI Process was launched under Japan’s 2023 G7 presidency to promote safe, secure, and trustworthy AI, particularly for generative and advanced AI systems. The process produced the Hiroshima Process Comprehensive Policy Framework (endorsed December 2023), comprising International Guiding Principles on AI and a voluntary International Code of Conduct for Organisations Developing Advanced AI Systems. The Code of Conduct covers risk identification and mitigation throughout the AI lifecycle, incident reporting, transparency, and cooperation with governments. In 2024, under Italy’s G7 presidency, the G7 supported the OECD in developing a Reporting Framework for voluntary disclosure of Hiroshima Code of Conduct compliance. The Hiroshima AI Process Friends Group, supported by 49 countries and regions, aims to extend these principles beyond the G7. Major AI developers including Amazon, Anthropic, Google, Microsoft, and OpenAI have committed to the inaugural reporting cycle.

Key Policy Instruments
  • Hiroshima AI Process Comprehensive Policy Framework — endorsed Dec 2023 by G7 Digital & Tech Ministers · International Framework
  • International Guiding Principles on AI — voluntary, 11 principles for advanced AI systems · International Standard
  • International Code of Conduct for Organisations Developing Advanced AI Systems — voluntary · Voluntary Code
  • G7 Hiroshima AI Process Reporting Framework (OECD, 2024) — voluntary disclosure mechanism · Governance Tool
  • Hiroshima AI Process Friends Group — 49 countries and regions supporting adoption · International Initiative
Current Approach

Voluntary international framework targeting advanced AI developers. Sets the tone for responsible AI development globally without binding legal obligations. Backed by the world’s largest AI developers and by G7 governments. Feeds into OECD, UN, and Council of Europe AI governance work.

What to Watch: Uptake of the Reporting Framework beyond G7 companies. Hiroshima AI Process Friends Group expansion. Whether the voluntary principles transition to binding standards in any jurisdiction. G7 2026 AI Process priorities.
🇺🇳
United Nations — Global Digital Compact
Multilateral Consensus · Global AI Governance
GDC Adopted Sep 2024

The United Nations adopted the Global Digital Compact (GDC) in September 2024 at the Summit of the Future, establishing a multilateral framework for digital cooperation including AI governance. The GDC aims to bridge the global AI divide through capacity building, data governance, and principles for safe and beneficial AI. It facilitated the establishment of an Independent International Scientific Panel on AI and annual Global Dialogues on AI Governance (first in Geneva, 2026). The UNESCO Recommendation on the Ethics of Artificial Intelligence (2021), adopted by all 193 UNESCO member states, provides the broadest international normative foundation for ethical AI, addressing algorithmic transparency, accountability, and the right to an effective remedy against harmful AI. The UN General Assembly also passed resolutions in 2024 affirming the importance of safe, secure, and trustworthy AI.

Key Policy Instruments
  • UN Global Digital Compact (GDC) — adopted Sep 2024 at the Summit of the Future · International Framework
  • UNESCO Recommendation on the Ethics of Artificial Intelligence (2021) — 193 member states · International Standard
  • UN General Assembly Resolution on Safe, Secure, and Trustworthy AI (Mar 2024) · International Resolution
  • Independent International Scientific Panel on AI — established under the GDC · Intergovt. Body
  • Annual Global Dialogues on AI Governance (first dialogue, Geneva 2026) · International Process
  • Resolution for AI Scientific Panel (Aug 2025) · International Resolution
Current Approach

Multilateral consensus framework. UN instruments are generally non-binding on member states but carry significant normative weight and shape national policy agendas. The GDC and the UNESCO Recommendation represent the broadest available global consensus on AI ethics and governance principles.

What to Watch: First Global Dialogue on AI Governance (Geneva 2026) outcomes. Independent Scientific Panel on AI initial reports. How the GDC’s AI provisions interact with the Council of Europe Convention and regional AI laws. UNESCO Recommendation implementation across member states.
Direct Access

Official Document Library

Curated links to primary official sources. Secondary/reference sources are labelled separately.

🇪🇺 European Union

Binding Law · Strategy · Regulator

🇺🇸 United States

Executive Orders · Frameworks

🇬🇧 United Kingdom

Policy Papers · Regulator Guidance

🇹🇷 Türkiye

National Strategy · Data Authority

🇯🇵 Japan

AI Promotion Act · Guidelines

🇰🇷 South Korea

Basic AI Act · National Initiative

🇸🇬 Singapore / ASEAN

Model Framework · Financial Sector

🇧🇷 Brazil

Draft AI Bill · LGPD

🇦🇪 UAE

National AI Strategy 2031

🇨🇦 Canada

Draft Legislation · Directive

🌎 African Union

Continental Strategy · Treaty

🌐 International Reference Sources

Cross-Jurisdictional · Research Databases
Side-by-Side Comparison

Jurisdiction Comparison Matrix

A structured comparison across featured jurisdictions on the primary dimensions of AI legal and regulatory architecture.

Jurisdiction | AI-Specific Law? | Binding Regulation? | National Strategy? | Risk-Based Approach? | Transparency Obligations? | Current Status
🇪🇺 European Union | ✔ AI Act 2024 | ✔ In Force | ✔ Coordinated Plan | ✔ 4-tier risk model | ✔ Mandatory | Phased In
🇨🇳 China | ✔ Sectoral regulations | ✔ Multiple in force | ✔ AI 2030 Plan | ◐ Application-specific | ✔ Labelling required | In Force
🇰🇷 South Korea | ✔ Basic AI Act (Jan 2026) | ✔ In Force Jan 2026 | ✔ National AI Plan | ✔ Risk-based framework | ✔ Transparency required | In Force
🇯🇵 Japan | ✔ AI Promotion Act (May 2025) | ◐ No penalties, cooperation duties | ✔ AI Strategy Council | ◐ Principles-based, voluntary | ◐ Guidelines only | In Force (Light-Touch)
🇺🇸 United States | ✖ No federal AI Act | ◐ Sector rules + state laws | ✔ EO + AI Action Plan | ◐ NIST RMF voluntary | ◐ Sector-specific | Strategy-Led
🇬🇧 United Kingdom | ✖ No dedicated AI Act | ◐ Sector rules; Online Safety Act | ✔ AI Action Plan 2025 | ✔ Principles-based | ◐ Sector-specific | Pro-Innovation
🇸🇬 Singapore | ✖ No AI-specific law | ◐ MAS sector rules; PDPA | ✔ AI Singapore 2.0 | ✔ Model Framework | ✔ Framework obligs. | Framework Active
🇹🇷 Türkiye | ◐ Draft law in preparation | ◐ KVKK / BTK partial | ✔ National AI Strategy 2021–2025 | ◐ Draft EU alignment | ◐ KVKK automated ADM rules | Strategy + Draft
🇨🇦 Canada | ◐ AIDA (Bill C-27) pending | ◐ Directive on ADM (federal only) | ✔ Pan-Canadian AI Strategy | ◐ High-impact AI concept | ◐ Voluntary code only | Bill Pending
🇮🇳 India | ✖ No AI-specific law | ◐ MeitY content labelling rules | ✔ National AI Strategy | ◐ Sandbox-to-regulation model | ◐ MeitY labelling rules | Strategy + Emerging
🇦🇺 Australia | ✖ No AI-specific law yet | ◐ Privacy Act ADMT; Online Safety Codes | ✔ National AI Strategy | ◐ Voluntary AI Safety Standard | ◐ Privacy ADMT disclosures | Voluntary Standard
🇧🇷 Brazil | ◐ Bill 2338/2023 — pending enactment | ◐ LGPD covers AI data processing | ✔ EBIA AI Investment Plan | ◐ Risk-based (draft) | ◐ LGPD transparency obligs. | Bill Under Review
🇦🇪 UAE | ✖ No standalone AI law | ◐ DIFC AI Licence; sector rules | ✔ National AI Strategy 2031 | ✖ Ethics-based, not risk-tiered | ◐ AI Ethics Guidelines | Strategy + Institution
🇭🇰 Hong Kong SAR | ✖ No AI-specific law | ◐ HKMA / SFC sector guidance | ◐ Policy Address commitments | ✖ Not formalised | ◐ Financial sector only | Guidance-Led
🌎 African Union | ✖ No binding AI law (continental) | ◐ Malabo Convention (data) | ✔ Continental AI Strategy 2024 | ✖ Not yet formalised | ◐ Strategy principles only | Strategy Adopted

✔ = Yes / In place  |  ◐ = Partial / Sector-specific / Voluntary  |  ✖ = Not yet in place  |  Status reflects publicly available information as of March 2026.

Methodology & Disclaimer

How This Tracker Was Built & Its Limitations

Scope and Research Approach: This tracker was built by reviewing publicly available AI policy, regulatory, and legal materials from official government portals, regulatory authority websites, parliamentary record systems, intergovernmental body publications, and established secondary research sources including the OECD AI Policy Observatory, IAPP Global AI Law and Policy Tracker, White & Case Global AI Regulatory Tracker, and comparative law research publications. Live site content from lawoftomorrow.com was also incorporated into the research synthesis.

Classification of Instruments: We apply four primary classifications — binding law/regulation (enacted and enforceable), adopted but phasing in (legally in force but with transitional timelines), draft/proposal (under legislative or consultation process), and strategic guidance only (government strategy documents, white papers, voluntary frameworks). These classifications reflect the legal character of the instrument in each jurisdiction’s own legal system.

Jurisdictions and Frameworks Selected: The 14 national/regional jurisdictions and 5 international frameworks represent major regulatory archetypes: comprehensive horizontal AI law (EU, South Korea), binding sectoral law (China, Japan), strategy-and-guidance-led (US, UK, Canada, Australia, India), voluntary governance ecosystem (Singapore), institutional strategy (UAE), emerging regulatory regimes (Türkiye, Brazil), financial-sector-led guidance (Hong Kong), intergovernmental framework (African Union), and legally binding and soft-law international instruments (Council of Europe, OECD, G7, UN/UNESCO).

Source Preference: Primary sources — official government, regulator, parliament, commission, and intergovernmental body publications — are preferred and cited directly. Secondary sources are clearly labelled as reference sources.

Important Disclaimers

  • This tracker is for informational purposes only and does not constitute legal advice.
  • AI regulation is one of the fastest-evolving areas of law globally. Details change frequently.
  • Always consult official government and regulator sources for current, authoritative status.
  • Implementation timelines may shift — especially for the EU AI Act and South Korea’s Basic AI Act.
  • Content reflects publicly available information as of March 2026.
  • Sub-national variations within the EU, US, and AU are not fully captured.
  • Legal classification reflects the general nature of instruments, not jurisdiction-specific technical legal analysis.
  • Links to official documents are provided in good faith; document URLs may change.