Deepfakes and Democracy: Can Law Protect Truth in the AI Era?

17.03.2026


The Rise of Synthetic Media

In January 2024, thousands of New Hampshire voters received telephone calls in what sounded unmistakably like the voice of President Joe Biden, instructing them not to vote in the upcoming Democratic primary. The voice was not Biden's. It was an AI-generated synthetic recreation — a deepfake, commissioned by a Democratic political consultant who claimed to be raising alarms about AI misuse in elections. The creator was fined $6 million by the Federal Communications Commission and indicted on criminal charges. The episode became one of the most prominent examples of a technology that has moved from science fiction to political reality with unsettling speed.

Deepfakes — AI-generated synthetic media that can realistically fabricate or manipulate audio, images, and video — represent one of the most consequential applications of modern generative AI. The technology enables the creation of content that depicts real individuals saying things they never said, doing things they never did, and appearing in places they never were. What was once the exclusive province of well-resourced film studios with sophisticated visual effects capabilities can now be produced by individuals with consumer hardware and freely available software.

The democratic implications are profound. Elections, political communication, and public accountability all depend on a shared epistemic foundation — the broadly held assumption that what public figures say and do can be verified and trusted as the basis for political judgment. Deepfakes erode that foundation. And in eroding it, they do not merely create false information; they generate what legal scholars and media researchers have called the "liar's dividend": an environment in which the proliferation of synthetic media makes it increasingly plausible to dismiss authentic evidence as fabricated.

Can law protect truth — and by extension, democratic integrity — in this environment? This article examines the deepfake challenge in its full dimensions: the technology, the documented harms, the emerging legal responses, and the profound limits of what regulation alone can achieve.

Understanding Deepfake Technology

Deepfakes are produced primarily through deep learning techniques, most notably Generative Adversarial Networks (GANs) — a computational architecture in which two neural networks compete against each other, one generating synthetic content and one attempting to detect it as artificial. Through iterative competition, the generative network improves its outputs until they become increasingly difficult to distinguish from authentic media. More recently, diffusion models — which generate images by progressively refining noise — have overtaken GANs as the dominant approach for high-quality image synthesis, while transformer-based architectures have produced voice cloning systems of remarkable fidelity.
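
To make the adversarial dynamic concrete, the toy sketch below trains a one-dimensional generator against a discriminator until the generator's samples mimic the "real" data distribution. The use of Python with PyTorch, and every detail of the toy setup, are illustrative assumptions rather than a description of any production deepfake system; real generators operate on images, audio, and video at vastly greater scale, but the competitive training loop is the same in principle.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# "Real" data: samples from a Gaussian, standing in for authentic media.
def real_batch(n):
    return torch.randn(n, 1) * 1.25 + 4.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(5000):
    # Discriminator step: learn to score real samples 1 and fakes 0.
    real = real_batch(64)
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
              loss_fn(discriminator(fake), torch.zeros(64, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: produce samples the discriminator scores as real.
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

with torch.no_grad():
    samples = generator(torch.randn(5000, 8))
# With enough steps, the synthetic statistics approach the real ones (4.0, 1.25).
print(f"mean={samples.mean().item():.2f}, std={samples.std().item():.2f}")
```

Diffusion models replace this adversarial game with iterative denoising, but the practical consequence described above is the same: high-fidelity synthesis on commodity hardware.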

The accessibility and affordability of these tools have transformed the threat landscape. What required sophisticated expertise and computing resources to produce at professional quality a decade ago can now be generated in minutes on consumer hardware using open-source tools. Free applications and web platforms allow users to swap faces, clone voices, and alter video with minimal technical knowledge. The cost of producing convincing synthetic media has collapsed, and with it the barrier to entry for malicious use.

The quality threshold for harm, meanwhile, may be lower than commonly assumed. Research has found that deepfakes do not need to be technically sophisticated to cause political damage. Studies consistently show that humans perform poorly at identifying high-quality deepfakes — performing only slightly better than random guessing in controlled experiments. And for electoral interference, the relevant metric may not be whether a fabrication passes expert scrutiny, but whether it spreads widely enough through social media in the hours before an election to shape perceptions before corrections can catch up.

Deepfakes and Democratic Risks

The relationship between deepfakes and democratic harm is multidimensional, documented across a growing number of jurisdictions, and only partly captured by headline incidents.

The Scale of Electoral Exposure

Research by Surfshark found that since 2021, deepfake incidents have affected elections in 38 countries, exposing a combined population of 3.8 billion people. Among the 87 countries that held elections from 2023 onwards, 33 experienced documented deepfake incidents. A 2024 report by the cybersecurity firm Recorded Future documented 82 pieces of AI-generated deepfake content targeting public figures across 38 countries in a single year, with a disproportionate number focused on elections.

The incidents span every type of democratic harm. In India's 2024 general election — the world's largest democratic exercise — political parties spent an estimated $50 million on AI-generated content, and millions of voters were exposed to deepfakes mimicking politicians, celebrities, and even deceased leaders. In Slovakia, deepfake audio emerged just before elections, spreading disinformation about electoral fraud. In Taiwan's 2024 presidential election, a wave of deepfake videos and fabricated audio clips sought to discredit the ruling party's candidate, many believed to originate from foreign state actors. In Romania, the 2024 presidential election results were annulled after evidence of AI-powered foreign interference using manipulated videos — one of the first cases where documented deepfake interference had direct constitutional consequences. In Germany, the Russian-run "Storm-1516" network deployed more than 100 AI-powered websites distributing deepfake videos and disinformation targeting politicians ahead of national elections.

The Gendered Dimension of Political Deepfakes

The harms of deepfakes are not distributed evenly. In 2024, more than 30 high-profile female British politicians were targeted with deepfake technology, with their images uploaded to sexually explicit websites in the period before the UK general election. Research consistently finds that non-consensual sexual deepfakes disproportionately target women, including female political figures, creating harms that extend from personal dignity to the democratic participation of women in public life.

The Liar's Dividend: A Structural Epistemic Threat

Beyond specific incidents, deepfakes introduce what scholars call the "liar's dividend" — a structural corruption of public epistemics that may be more damaging in the long run than any individual fabrication. As synthetic media grows ubiquitous, political actors gain the ability to dismiss authentic unfavorable recordings as deepfakes. This inversion is already observable: politicians have dismissed genuine crowd imagery, medical footage, and unflattering recordings as potentially AI-generated, seeding doubt without offering evidence. The liar's dividend describes an environment in which the proliferation of false content makes it easier to deny true content — undermining not just specific pieces of evidence but the epistemic foundations of democratic accountability itself.

A Note on Proportionality

Analytical honesty requires acknowledging that the deepfake threat, while real and growing, has so far proved more contained than the most alarming predictions anticipated. Researchers at the Knight First Amendment Institute analyzed 78 documented cases of AI use in 2024 elections worldwide and found that traditional "cheap fakes" — edited or out-of-context content produced without AI — were used seven times more often than AI-generated material. In the United States, the anticipated wave of deceptive deepfakes did not disrupt the 2024 presidential election in the form most feared. As one Berkeley media researcher noted, deepfakes shaped perceptions without clearly determining outcomes. This does not reduce the urgency of governance responses, but it suggests that regulation should be calibrated to actual documented harms rather than worst-case projections.

Legal Responses to Deepfakes: A Global Survey

Governments worldwide have recognized the threat and responded with a rapidly expanding body of legislation, though the legislative landscape remains fragmented and the effectiveness of these frameworks is uncertain.

The United States: A Patchwork in Motion

The United States lacks a comprehensive federal deepfake law, and attempts to establish one have faced significant legal and political obstacles. As of mid-2025, more than 45 states had enacted some form of deepfake legislation. California's Defending Democracy from Deepfake Deception Act (2024) required platforms to block or label AI-generated political content during the 120-day period before an election — but portions were struck down in August 2025 by a federal judge who found conflicts with Section 230 of the Communications Decency Act and raised constitutional concerns about compelled labeling requirements. Minnesota's deepfake ban was challenged by X (formerly Twitter) on First Amendment and Section 230 grounds. And the 10-year moratorium on state AI laws proposed in the 2025 House reconciliation bill, stripped out by the Senate before passage, showed how precarious even these fragmented protections remain.

At the federal level, progress has been incremental. The TAKE IT DOWN Act, signed into law in May 2025, criminalizes the publication of non-consensual intimate imagery, including AI-generated deepfakes, and requires platforms to remove it on request. The DEFIANCE Act passed the US Senate unanimously in January 2026, establishing federal civil remedies for victims of non-consensual sexual deepfakes. The Protect Elections from Deceptive AI Act, introduced in March 2025, would prohibit the knowing distribution of materially deceptive AI-generated electoral content. The NO FAKES Act, introduced in April 2025, would make it unlawful to create or distribute AI-generated replicas of a person's voice or likeness without consent, with exceptions for satire, commentary, and reporting. Apart from the TAKE IT DOWN Act, none of these federal measures has yet become law.

The European Union: Transparency as Architecture

The EU has approached the deepfake challenge through a combination of instruments rather than a standalone regulation. The EU AI Act, which entered into force in August 2024, does not treat deepfake generators as high-risk systems; instead, it imposes specific transparency obligations under Article 50: those who deploy AI systems to generate or manipulate deepfake content must disclose that the media has been artificially created or manipulated. The Act also prohibits AI systems used to manipulate individuals through subliminal techniques — a provision arguably applicable to malicious deepfake campaigns. The Digital Services Act, fully applicable since February 2024, imposes systemic risk assessment and mitigation obligations on very large online platforms regarding content including synthetic media.

The AI Act's transparency obligations, including machine-readable marking of AI-generated output and visible disclosure of deepfakes, become applicable in August 2026, and the European Commission is preparing guidance and a code of practice to operationalize them. Once in effect, they will be among the most concrete synthetic media transparency requirements anywhere in the world. France has separately amended its Penal Code to criminalize non-consensual sexual deepfakes, with penalties of up to two years' imprisonment and a €60,000 fine.

China: Mandatory Labeling and State-Directed Control

China introduced its Measures for Labeling AI-Generated Synthetic Content in March 2025, entering into force on September 1, 2025. The framework requires both visible labels for consumers and invisible metadata markers embedded in AI-generated content. If a piece of content cannot be verified but is strongly suspected to be AI-generated, platforms must label it as "suspected synthetic." This two-layered approach places responsibility on both creators and platforms, establishing one of the most operationally prescriptive content labeling regimes in the world. The framework builds on China's earlier Deep Synthesis Regulations (2022) and Interim Measures for Generative AI Services (2023), which already prohibited the use of synthetic content to spread disinformation, undermine national security, or distort historical facts.
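
The two-layer idea can be illustrated in a few lines of Python using the Pillow imaging library. The label text and metadata field names below are hypothetical, not the formats the Chinese measures actually prescribe; the point is the separation between a human-visible layer and a machine-readable one.

```python
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

# Stand-in for a generated frame; a real pipeline would receive model output.
img = Image.new("RGB", (320, 180), "gray")

# Layer 1: a visible label rendered into the pixels for human viewers.
ImageDraw.Draw(img).text((10, 10), "AI-generated content", fill="white")

# Layer 2: an invisible, machine-readable marker in the file metadata.
meta = PngInfo()
meta.add_text("aigc-label", "synthetic")       # hypothetical field names,
meta.add_text("aigc-generator", "demo-model")  # not the regulated schema
img.save("labeled.png", "PNG", pnginfo=meta)

# A platform can inspect the marker layer without touching the pixels.
print(Image.open("labeled.png").text)
```

The sketch also makes the regime's structural weakness visible: the metadata layer survives only as long as every re-encoder and platform in the distribution chain chooses to preserve it.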

Other Jurisdictions

The UK's Online Safety Act 2023, implemented progressively through 2024 and 2025, criminalizes sharing non-consensual intimate deepfakes; 2025 amendments target creators directly, making the intentional creation of sexually explicit deepfakes without consent punishable by up to two years in prison. Australia passed the Criminal Code Amendment (Deepfake Sexual Material) Act in 2024, establishing federal criminal penalties for creating or distributing deepfake sexual content. South Korea amended its Act on Special Cases Concerning the Punishment of Sexual Crimes to criminalize deepfake sexual content, including its possession and viewing. Singapore is advancing content authenticity standards. Brazil's Senate approved a comprehensive AI Bill in December 2024 that includes synthetic media governance provisions under an EU-style risk-based model.

Arguments That Law Can Protect Democratic Integrity

The legal optimist's case rests on several plausible claims about the effects of well-designed regulation.

Transparency requirements create information for judgment. Mandatory labeling of AI-generated content, enforced across major platforms, gives users the opportunity to approach synthetic media with appropriate skepticism. Even imperfect labeling — which may miss sophisticated forgeries or face circumvention by malicious actors — raises the baseline cost of deception and provides affected individuals with procedural recourse they would not otherwise have. The EU AI Act's labeling requirements and China's dual-layer marking system represent concrete implementations of this logic at scale.

Election-specific rules raise the costs of electoral manipulation. Targeted prohibitions on materially deceptive AI-generated electoral content, combined with civil and criminal liability, create deterrence structures for the most consequential category of political deepfakes. The FCC's $6 million fine against the New Hampshire robocall creator, and his subsequent criminal indictment, illustrate that existing and emerging legal frameworks can produce meaningful accountability in documented cases.

Platform accountability regimes can operationalize compliance. Content labeling and moderation obligations imposed on large platforms shift responsibility toward actors with the technical capacity and legal exposure to implement them. Meta, TikTok, Google, and YouTube have all developed AI content detection and labeling systems, partly in anticipation of regulatory requirements. The combination of regulatory obligation and platform self-interest in avoiding liability may produce compliance at scale that individual-actor enforcement cannot achieve.

Legal frameworks signal social norms. Even imperfectly enforced law shapes behavior through norm articulation — communicating that the deliberate use of deepfakes to deceive voters is not merely unethical but legally sanctionable. This norm-setting function may have independent value in contexts where outright enforcement is difficult.

The Limits of Legal Regulation

The case for legal skepticism is equally substantial, rooted in structural features of the technology and the regulatory environment that no legislation can readily overcome.

The pace problem. Deepfake technology evolves faster than legislative cycles. Regulations calibrated to the capabilities and distribution mechanisms of 2023 deepfakes may be technically obsolete before their enforcement mechanisms are operational. Detection tools face a persistent cat-and-mouse dynamic: as generators improve, detectors must constantly retrain, and the generative technology consistently leads. Legal definitions of "materially deceptive" or "synthetic" content may struggle to accommodate capabilities that did not exist when the legislation was drafted.

The enforcement problem. Identifying deepfake creators — particularly those operating pseudonymously, from foreign jurisdictions, or using commercially available tools — is exceptionally difficult. Attribution, which is already hard in cybersecurity contexts, is especially challenging when the creation of harmful synthetic media requires no specialized infrastructure. A 2024 research analysis found that 46% of documented election deepfake incidents had "no known source." Where attribution is impossible, criminal or civil liability is practically unreachable.

The Section 230 and First Amendment problem. In the United States, two structural features of communications law create persistent obstacles to deepfake regulation. Section 230 of the Communications Decency Act shields platforms from liability for user-generated content — meaning that platform-focused deepfake regulations face legal challenges that may render them unenforceable. The First Amendment's broad protection of political speech, including satire, parody, and commentary, makes blanket prohibitions on synthetic political media constitutionally precarious. California's 2024 deepfake election law was struck down precisely because it ran into both obstacles. Narrowly tailored disclosure requirements that preserve space for satire and commentary face significant design challenges.

The cross-border problem. Deepfake content is created and distributed globally. A Russian-linked influence operation deploying AI-generated videos targeting German elections is, in practice, beyond the reach of German law and EU enforcement mechanisms. China's domestic labeling requirements do not constrain foreign actors, including those running influence operations with Chinese-developed open-weight models outside China's jurisdiction. National regulatory frameworks address national actors but are structurally limited in confronting foreign state-sponsored disinformation operations — some of the most consequential users of synthetic media in democratic contexts.

The labeling circumvention problem. Labeling requirements, even well-designed ones, face multiple circumvention vectors. Deepfakes distributed through private messaging apps fall outside platform moderation systems. Labels can be stripped — research found that when a deepfake video with C2PA cryptographic provenance metadata was uploaded to eight major platforms, only YouTube surfaced any label, and most platforms stripped the metadata entirely. And as one media analysis observed, the liar's dividend means that even effective labeling of some synthetic media does not restore confidence in authentic media in an environment already saturated with synthetic content.

Technology and Platforms: The Governance Layer Between Law and Reality

The gap between legal aspiration and practical effect in deepfake governance is partly filled — and partly widened — by the choices of technology companies and digital platforms.

AI detection systems have become a primary industry response. Major platforms have deployed machine learning tools to identify synthetic media, flagging or labeling suspected deepfakes in automated workflows. These tools are improving rapidly, though the cat-and-mouse dynamic between generation and detection technology means no detection system provides reliable, comprehensive coverage. Humans perform poorly at identifying high-quality deepfakes in controlled experiments; automated detection tools have variable accuracy across different types of synthetic content.
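
The shape of such a workflow can be sketched briefly. The tiny untrained classifier and the 0.8 threshold below are purely illustrative assumptions; production detectors are proprietary, far larger, and continually retrained on labeled real and synthetic media, but the score-then-flag pipeline around them looks roughly like this.

```python
import torch
import torch.nn as nn

# Stand-in classifier (untrained). A deployed detector would be periodically
# retrained as generators improve (the cat-and-mouse dynamic noted above).
detector = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),
)

FLAG_THRESHOLD = 0.8  # assumed policy knob: false positives vs. missed fakes

def moderate(frame: torch.Tensor) -> str:
    """Score one image tensor (3 x H x W) and decide on a platform action."""
    with torch.no_grad():
        score = torch.sigmoid(detector(frame.unsqueeze(0))).item()
    if score >= FLAG_THRESHOLD:
        return f"label as suspected synthetic (score={score:.2f})"
    return f"no automated action (score={score:.2f})"

print(moderate(torch.rand(3, 224, 224)))  # random pixels, untrained weights
```

Where to set such a threshold is itself a governance decision: a lower value catches more fakes at the cost of mislabeling authentic political speech.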

Content provenance and authentication represent a more promising systemic approach. The Coalition for Content Provenance and Authenticity (C2PA) has developed an open standard that enables media files to carry cryptographic metadata recording their origin, creation method, and modification history. If a file was captured by a C2PA-compliant camera or created by a C2PA-participating AI system, that provenance record travels with the file and can be verified by platforms or consumers. Companies including Adobe, Microsoft, Canon, the Associated Press, the BBC, The New York Times, and multiple AI labs have adopted or committed to the standard. The EU AI Act's content labeling requirements and its code of practice for GPAI models both reference provenance tools as implementation mechanisms.
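
The underlying mechanism can be sketched conceptually. The example below is not the C2PA manifest format or toolchain; it uses a bare Ed25519 signature (via the Python cryptography library) over a JSON claim carrying a hash of the media bytes, with illustrative field names. It shows why any modification of the file, or of the claim, after signing becomes detectable.

```python
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def make_manifest(media: bytes, creator: str, tool: str, key: Ed25519PrivateKey):
    """Bind a signed claim to the exact bytes of a media file."""
    claim = {
        "content_sha256": hashlib.sha256(media).hexdigest(),
        "creator": creator,        # illustrative fields, not the C2PA schema
        "capture_tool": tool,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    return claim, key.sign(payload)

def verify_manifest(media: bytes, claim: dict, sig: bytes, pub) -> bool:
    if hashlib.sha256(media).hexdigest() != claim["content_sha256"]:
        return False  # the media bytes were altered after signing
    try:
        pub.verify(sig, json.dumps(claim, sort_keys=True).encode())
        return True
    except InvalidSignature:
        return False  # the claim itself was tampered with

key = Ed25519PrivateKey.generate()
media = b"...example media bytes..."
claim, sig = make_manifest(media, "Example Newsroom", "camera-fw-1.0", key)

print(verify_manifest(media, claim, sig, key.public_key()))         # True
print(verify_manifest(media + b"!", claim, sig, key.public_key()))  # False
```

As the platform-upload study cited earlier suggests, the weak link is not the cryptography but distribution: if platforms strip the metadata on upload, there is nothing left to verify.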

The limitation is practical adoption: integrating C2PA into smartphones, consumer cameras, and the full distribution pipeline remains years away. And even where provenance metadata exists, platform incentives may not consistently surface it to users.

Platform voluntary commitments — most prominently the Tech Accord to Combat Deceptive Use of AI in 2024 Elections, signed by major technology companies at the 2024 Munich Security Conference and committing them to reasonable technical precautions against AI-generated electoral interference — represent another governance layer. These commitments are significant as norm-setting exercises, but their enforceability depends entirely on company willingness to comply, making them inadequate substitutes for binding regulatory frameworks.

The 2025 ITU report on AI and deepfakes called for shared global watermarking standards and provenance tools, noting that any durable solution to the synthetic media problem requires coordinated infrastructure across the global platform ecosystem — not merely voluntary adoption by individual companies.

Ethical and Democratic Considerations

Deepfake regulation raises genuine tensions between democratic goods that cannot be dissolved by technical fixes or enforcement innovations.

Freedom of expression and political satire. Political satire, parody, and commentary have been central to democratic discourse for centuries. Satirical exaggeration, fictional political scenarios, and creative commentary on public figures serve functions of critique, humor, and persuasion that contribute to democratic discourse rather than undermining it. A regulatory framework that prohibits all synthetic media involving political figures would eliminate these forms of expression along with harmful misinformation — an outcome that virtually all legal analysts regard as both unconstitutional and democratically damaging. The design challenge for deepfake regulation is distinguishing between malicious deception and protected creative expression — a line that is genuinely difficult to draw in legal text.

Censorship risks. The same tools deployed to detect and remove harmful deepfakes could be used by governments to suppress legitimate political content. China's deepfake labeling framework serves both anti-misinformation and information control objectives simultaneously, and the distinction between them is not always transparent. In less constrained democratic systems, the concentration of detection and labeling power in large platforms creates its own accountability concerns: who decides what counts as "materially deceptive" in political contexts, and on what basis?

The credibility collapse problem. A less-discussed consequence of comprehensive deepfake governance may be the creation of a two-tier media environment, in which content from well-resourced major newsrooms and verified official sources carries strong authentication credentials while content from citizen journalists, grassroots political movements, and informal information networks does not. This asymmetry could create new forms of information privilege that disadvantage democratic participation by actors outside institutional media ecosystems.

Can Law Protect Truth?

The question posed in this article's title does not admit a comfortable answer. Law cannot protect truth in the AI era in any comprehensive sense. The forces that deepfakes harness — the human propensity to respond to vivid audiovisual content, the structural incentives of algorithmic distribution platforms, the ease of synthetic media production, and the institutional difficulty of attribution and enforcement — are not amenable to legal resolution alone.

What law can do is more limited but still significant. Transparency obligations can reduce the information asymmetry between creators of synthetic media and the audiences who consume it. Criminal and civil liability can create deterrence for the most egregious forms of electoral manipulation. Platform accountability regimes can operationalize compliance at the scale necessary to affect the actual information environment. And the norm-articulating function of legislation — the declaration that certain uses of synthetic media are socially unacceptable — shapes behavior independently of enforcement.

What law cannot do is outpace the technology, enforce across borders against foreign state actors, resolve the constitutional tensions between deepfake prohibition and political free speech, or eliminate the underlying epistemic vulnerability that makes synthetic media a democratic threat: the human tendency to trust compelling visual and auditory evidence.

The challenge of protecting democratic integrity from synthetic media is therefore not primarily a legislative challenge — it is an epistemic, institutional, and cultural one. Media literacy programs that equip citizens to approach audiovisual content with appropriate skepticism; provenance standards that allow authentic content to be verified; independent journalism with the credibility and resources to correct disinformation at speed; and international cooperation among democracies to limit foreign state-sponsored information operations — these are as important as any specific legal framework.

Romania's 2024 presidential election, annulled after documented AI-powered interference, is the clearest warning of what happens when these defenses fail. It also illustrates that democratic institutions can respond — the annulment itself was a democratic act of self-protection. The resilience of democratic systems to deepfake threats will ultimately depend not on any single legal instrument, but on the depth of institutional commitment to truth, accountability, and the conditions under which free political judgment is possible.

Whether law can protect truth in the AI era depends, in the end, on whether democracies remain committed to truth as a value worth protecting — and on whether the citizens, institutions, and technology companies that comprise them treat that commitment as more than a rhetorical gesture.

© 2026 Law of Tomorrow. This work is licensed for non-commercial use with attribution.
