
Reimagining U.S. Tort Law for Deepfake Harms: Comparative Insights from China and Singapore

Huijuan Peng and Pey-Woan Lee
Published/Copyright: September 29, 2025
Journal of Tort Law

Abstract

This Article explores how U.S. tort law can respond more effectively to the distinct harms posed by deepfakes, including reputational injury, identity appropriation, and emotional distress. Traditional tort doctrines, such as defamation, the right of publicity, and intentional infliction of emotional distress (IIED), remain fragmented and ill-suited to the speed, scale, and anonymity of deepfake dissemination. Using a comparative functionalist approach, the Article analyzes how China and Singapore respond to deepfake harms through structurally divergent but functionally instructive frameworks. China’s model combines codified personality rights with intermediary obligations under a civil law regime, while Singapore adopts a hybrid approach that integrates common law torts with targeted statutory and administrative interventions. Although neither model is directly replicable in the United States, both offer valuable comparative insights to guide the reform of U.S. tort law. The Article advances an integrated governance model for U.S. tort law: reconstructing personality-based torts, repositioning tort law through conditional intermediary liability, and clarifying constitutionally grounded limits for speech-based claims. Drawing on Chinese and Singaporean legal approaches, the Article sets out a comparative reform framework that enables U.S. tort law to better address deepfake harms while safeguarding autonomy and dignity in AI-driven digital environments.

1 Introduction

Deepfakes are synthetic images, videos, or audio recordings generated through machine learning techniques, most prominently Generative Adversarial Networks (GANs).[1] These technologies can closely replicate a person’s likeness and voice, thereby blurring the line between authentic and fabricated content. As deepfake tools become increasingly sophisticated and accessible, the potential for misuse has grown significantly.[2] Deepfakes are now broadly used in four domains: pornography, political manipulation, commercial exploitation, and creative expression.[3] While deepfake applications may offer benefits, the associated risks are significant.[4] For instance, in the United States, consumers were misled by a fabricated video depicting Taylor Swift endorsing a cookware giveaway;[5] in China, a finance employee was deceived into transferring $25 million during a deepfake video call;[6] and in Singapore, deepfake fabrications have involved former President Halimah Yacob and former Prime Minister Lee Hsien Loong.[7] While public figures remain frequent targets, ordinary individuals increasingly face similar forms of harm.[8]

Although reputational, financial, and emotional injuries are well-recognized under current U.S. tort law, the emergence of deepfakes complicates both the infliction and legal redress of such injuries. The anonymous creation, algorithmic amplification, and cross-border dissemination of deepfakes often leave victims without a clearly identifiable wrongdoer or meaningful legal recourse. Traditional U.S. tort doctrines such as defamation, false light, and the right of publicity provide only partial and frequently inadequate remedies. These claims typically require plaintiffs to prove falsity, actual malice, or commercial exploitation, elements that many deepfake scenarios fail to meet.[9] Even where concrete harm is evident, perpetrators are often untraceable, and online platforms hosting such content are afforded broad immunity under Section 230 of the Communications Decency Act.[10] These doctrinal and structural constraints highlight the need to recalibrate U.S. tort law to address deepfake harms with greater precision, accountability, and institutional coherence.

While recent scholarship has begun to examine legal responses to deepfakes,[11] developments in non-Western jurisdictions remain underexplored. This Article addresses that gap through a comparative functionalist analysis of legal responses in China and Singapore.[12] China adopts a civil law framework that emphasizes codified personality rights, mandatory obligations on online platforms, and supporting administrative regulations. Singapore adopts a hybrid model that supplements common law torts with targeted statutory and institutional mechanisms. Rather than advocating direct legal transplantation, this Article distills doctrinal, institutional, and procedural insights from these jurisdictions to inform U.S. tort reform strategies. It further contributes to the literature by proposing a governance framework that integrates tort-based remedies with regulatory tools to more effectively address deepfake harms.

The remainder of this Article proceeds as follows. Section 2 outlines the doctrinal and structural limitations of U.S. tort law in addressing deepfake harms. Section 3 examines the legal responses of China and Singapore through a comparative lens, with a focus on personality rights, intermediary obligations, and institutional enforcement. Section 4 distills functional insights from this comparative analysis and articulates potential reforms for U.S. tort law. Section 5 concludes by reflecting on the broader implications for doctrinal innovation.

2 Deepfakes and the Doctrinal Challenges of U.S. Tort Law

2.1 Does Defamation Law Adequately Address Deepfakes?

U.S. defamation law protects individuals from reputational harm caused by false statements of fact, and it is traditionally divided into libel (written or visual) and slander (spoken).[13] A successful defamation claim requires proof of a false and defamatory statement, publication to a third party without privilege, fault (at least negligence), and resulting reputational harm.[14] Fabricated depictions of sexual or criminal conduct can cause serious reputational damage and have given rise to defamation claims.[15] Some scholars have argued that nonconsensual deepfake pornography should be recognized as actionable under defamation law.[16] However, courts may be reluctant to treat deepfake content as defamatory, particularly when accompanied by disclaimers or framed as satire or implausible fantasy. For instance, in Doe v. Friendfinder Network, Inc., the court dismissed a defamation claim involving a fabricated online dating profile on the basis that the content could not reasonably be understood as factual.[17] Although this case did not directly involve deepfakes, it illustrates the principle that only content reasonably perceived as fact can support a defamation claim. This judicial reluctance reflects a deeper doctrinal tension. Even though deepfakes are fabricated by design, their visual realism often obscures the distinction between fiction and fact, a distinction central to defamation liability. Under Milkovich v. Lorain Journal Co., only statements that imply objectively verifiable facts are actionable.[18] Although visually persuasive, deepfakes may not constitute actionable factual assertions, as courts ask whether a reasonable viewer would interpret the content as conveying a statement of fact. Attribution further complicates defamation claims. Anonymity and decentralization hinder many online tort actions, but these difficulties are particularly acute in defamation, where plaintiffs must identify the originator and prove the originator’s state of mind as required by the applicable fault standard. Even when publishers are identifiable, public-figure plaintiffs must satisfy the demanding New York Times Co. v. Sullivan standard of “actual malice”,[19] a constitutional safeguard that significantly limits liability.[20] In United States v. Alvarez, the U.S. Supreme Court further held that even knowingly false speech may fall within First Amendment protection.[21] Section 230 of the Communications Decency Act adds another barrier by broadly immunizing online platforms from liability for third-party content, including algorithmically amplified defamatory material.[22] As a result, defamation law often fails to provide meaningful recourse for deepfake victims, who cannot easily identify creators or hold platforms accountable.

2.2 Can Privacy Torts Address Deepfakes?

Most U.S. states recognize a right to privacy through common law or statute.[23] It is typically categorized into four torts: intrusion upon seclusion, public disclosure of private facts, false light, and appropriation of name or likeness.[24] Among these, false light and appropriation are most relevant to deepfakes, though both face doctrinal limitations. False light may be invoked when deepfakes portray individuals in fabricated or offensive contexts.[25] Plaintiffs must show public dissemination, that the content would be highly offensive to a reasonable person, and that the defendant acted with knowledge or reckless disregard as to its falsity.[26] Because such deepfakes are often created and disseminated anonymously, establishing authorship and the mental element required for false light claims can be particularly difficult. Courts often analogize this to the “actual malice” standard developed in defamation law for public-figure plaintiffs. For example, in Time, Inc. v. Hill, the U.S. Supreme Court held that false light claims involving matters of public concern require a showing of “actual malice,” aligning the standard with that applied in defamation cases.[27] This heightened standard, combined with the evidentiary challenges of anonymous online dissemination, poses substantial barriers for plaintiffs seeking to challenge expressive or fictionalized deepfakes. Moreover, courts have been reluctant to recognize false light liability where the content is unlikely to be perceived as a statement of fact. However, they have also recognized that fictionalized content may still trigger liability when it reasonably appears to convey factual assertions. For instance, in Spahn v. Julian Messner, Inc., the New York Court of Appeals upheld liability for a children’s biography that falsely attributed military honors to the plaintiff, holding that despite its fictional nature, the portrayal was sufficiently realistic to be understood as factual by reasonable readers.[28] Accordingly, deepfakes that take the form of satire, parody, or otherwise implausible content may fall outside the scope of this tort,[29] whereas realistic portrayals, especially those lacking clear disclaimers, may still support liability if a reasonable viewer would interpret them as factual.

Appropriation prohibits the unauthorized use of a person’s name or likeness for another’s benefit.[30] While image and name are commonly protected under the tort of appropriation, voice lacks uniform recognition as a protected interest, complicating efforts to pursue tort claims based on AI-generated voice replicas.[31] In some jurisdictions, appropriation has been construed narrowly, limited to commercial purposes or purposes of trade, excluding claims based on noncommercial uses. Doctrinally, appropriation originated as a privacy tort aimed at protecting dignitary interests rather than regulating commercial exploitation. In practice, many jurisdictions have moved toward a publicity-based approach, thereby creating uncertainty about the scope of protection, particularly in disputes involving voice replication. Some jurisdictions further require that the name or likeness have “intrinsic value,” restricting protection to individuals who are well-known.[32] Deepfakes created for harassment, revenge, or deception, or those targeting individuals without public recognition, often fall outside this scope.

Among the four traditional privacy torts, intrusion upon seclusion and public disclosure of private facts are generally inapplicable to most deepfake scenarios. The tort of intrusion upon seclusion requires an intentional invasion of an individual’s private physical space or private affairs.[33] However, deepfakes are typically created without accessing such private spaces and instead rely on material that is already publicly available, making it unlikely that claims will satisfy this threshold. Public disclosure of private facts applies when one gives publicity to matters concerning another’s private life that would be highly offensive to a reasonable person and are not of legitimate public concern.[34] Because deepfakes typically involve fictionalized depictions or material drawn from publicly available sources, they rarely disclose the true private facts required for liability under this tort.

2.3 Does the Right of Publicity Protect Against Deepfake Commercial Exploitation?

The right of publicity protects individuals against the unauthorized commercial exploitation of their identity, including name, likeness, and other distinctive indicia of identity.[35] Some states also extend this protection to a person’s voice.[36] Although originally rooted in the privacy-based tort of appropriation, it has evolved into a distinct, property-oriented doctrine that protects the commercial value of personal identity. It is most commonly invoked by public figures and celebrities, and in some jurisdictions, the right survives death through statutory postmortem recognition.[37] The right of publicity has taken on renewed importance in deepfake cases involving false endorsements or commercialized impersonations.[38] Most U.S. states recognize this right through statute, common law, or both.[39] However, the scope of protection varies considerably across U.S. jurisdictions.[40] Some states do not recognize this right at all, while others limit it to explicitly commercial contexts, such as advertising or merchandise, thereby leaving noncommercial uses such as deepfake pornography outside its scope.[41] In addition, a few states restrict the right to particular groups, such as professional performers, soldiers, or the deceased.[42] Courts also diverge in distinguishing between commercial exploitation and incidental use. For instance, in White v. Samsung Electronics America, Inc., the Ninth Circuit upheld a publicity claim involving a robot resembling Vanna White in a futuristic commercial.[43] In contrast, in Almeida v. Amazon.com, Inc., the Eleventh Circuit rejected a claim involving the use of a model’s image on a book cover, finding the use incidental rather than exploitative.[44]

Further doctrinal ambiguity concerns whether plaintiffs must demonstrate preexisting fame or measurable commercial value in their persona. Although many successful claims have involved celebrities, some courts require proof that the plaintiff’s identity has independent market value, while others extend protection regardless of such value.[45] This inconsistency is particularly problematic for deepfake cases involving ordinary individuals whose likenesses are used commercially without authorization. Even for well-known plaintiffs, First Amendment defenses present significant hurdles. Defendants may argue that a deepfake constitutes parody, satire, or artistic expression, thereby invoking heightened constitutional protection.[46] When deepfakes involve deceased individuals, additional challenges arise. The right of publicity is not consistently recognized after death in the United States.[47] Postmortem publicity rights allow a deceased individual’s estate to control the commercial use of their identity and derive economic benefit from it. These rights exist only in certain states and vary significantly in both duration and scope.[48] Some jurisdictions provide protection for a limited statutory period, while others allow potentially perpetual rights.[49] States such as New York have enacted statutes that explicitly recognize postmortem publicity rights and restrict the use of digital replicas of deceased individuals.[50] Nevertheless, the absence of federal legislation and the patchwork of state-level rules continue to leave substantial gaps in protection.

2.4 Can Emotional Distress Claims Capture the Harm of Deepfakes?

Intentional infliction of emotional distress (IIED) offers a potential avenue for redress in cases involving severe psychological harm resulting from extreme and outrageous conduct.[51] Unlike defamation or privacy torts, IIED does not require a false statement or communication to a third party. Instead, it centers on emotional harm rather than reputational or privacy-based injury. Courts have acknowledged that fabricated intimate or degrading imagery may rise to the level of outrageousness required for IIED liability, particularly when it targets private individuals or minors. However, the threshold remains high, and courts have generally been reluctant to extend the doctrine of IIED to digital harms absent truly exceptional circumstances.[52] For example, in Snyder v. Phelps, the U.S. Supreme Court held that even highly offensive public speech may be protected under the First Amendment.[53] Yet Catsouras v. Department of California Highway Patrol offers one of the few counterexamples: law enforcement officials disseminated graphic postmortem photographs of a deceased teenager, and the court found the conduct sufficiently outrageous to sustain an IIED claim.[54] Proving the requisite intent or recklessness also presents challenges. In addition, courts generally require more than evidence of emotional harm, and they often demand a prior relationship or knowledge of the plaintiff’s vulnerability, conditions seldom met in anonymous, algorithmically mediated deepfake scenarios.[55] Causation adds another layer of complexity, as the decentralized circulation of deepfakes often makes it difficult to trace the harm to a particular actor or specific instance of dissemination. For public figures, Hustler Magazine, Inc. v. Falwell requires proof of actual malice when IIED claims arise from parodic or satirical content.[56] Deepfake creators may similarly invoke First Amendment protections, arguing that their portrayals constitute satire, parody, or artistic expression. As a result, IIED remains largely inaccessible for most victims of deepfake-related emotional harm. The tort of negligent infliction of emotional distress (NIED) is even less viable, because most jurisdictions require either physical impact or exposure to imminent physical danger,[57] thresholds that are rarely met in cases involving psychological harm from deepfakes.

Table 1 offers a doctrinal mapping of key torts to representative deepfake scenarios. It illustrates the fragmented and inconsistent treatment of distinct deepfake harms under current U.S. tort law, underscoring the need for greater doctrinal coherence and reform.

Table 1:

Representative deepfake scenarios and doctrinal gaps in U.S. Tort Law.

| Deepfake scenario | Potential tort claim(s) | Key doctrinal limitations |
|---|---|---|
| Nonconsensual intimate deepfakes | False light, IIED | False light not recognized in all states and often requires proof that the portrayal would be highly offensive to a reasonable person; IIED requires extreme and outrageous conduct with intent or recklessness; anonymity complicates attribution and causation |
| Political impersonation deepfakes | Defamation, false light | First Amendment protections; actual malice standard for public figures; satire or parody may negate liability |
| Commercial deepfakes using likeness or voice | Right of publicity | Must involve commercial use; protection varies across states; some jurisdictions require proof of preexisting commercial value or public recognition |
| Fraudulent deepfakes used in scams (e.g., voice cloning)^a | Appropriation, IIED | Voice is not uniformly protected under appropriation; IIED requires extreme and outrageous conduct and severe emotional distress, but courts typically frame fraud as economic rather than dignitary harm, creating a doctrinal mismatch; anonymity hinders identification and attribution |
| Satirical and parody deepfakes | Defamation, IIED | Not reasonably perceived as factual; satire and parody are protected as speech; IIED requires extreme and outrageous conduct |
| Posthumous deepfakes for commercial exploitation | Postmortem right of publicity | Recognized only in some states; duration and scope vary; enforcement by estates depends on state statutes and choice-of-law rules |
Note: Table 1 compiled by the authors based on the doctrinal analysis in Sections 2.1–2.4 of this Article. ^a In fraudulent deepfakes used in scams, two categories of harm may arise: the economic loss suffered by the deceived party (typically actionable under fraud or misrepresentation), and the dignitary or identity-based harm experienced by the individual whose voice or likeness is misappropriated. This Article focuses on the latter, for which tort remedies remain fragmented and doctrinally underdeveloped.

3 Comparative Functional Insights from China and Singapore

3.1 China: Codified Personality Rights and Intermediary Obligations

China has adopted an integrated legal approach to deepfake harms by codifying personality rights and imposing layered regulatory obligations on online intermediaries.[58] Under this model, likeness, voice, reputation, and related interests are treated as part of a coherent civil framework. Courts can recognize a broad range of identity-based injuries without requiring plaintiffs to prove falsity, malice, or commercial gain. Responsibility for harm prevention is shifted from individuals to platforms.[59] The foundation of this regime lies in China’s Civil Code (2021), which establishes personality rights as a distinct category and prohibits unauthorized or manipulated use of a person’s likeness, voice, or other personal identifiers.[60] Liability under this framework is primarily determined by the infringement of a protected interest and resulting harm, without requiring plaintiffs to establish the falsity of the content or the intent of the actor. This approach more closely resembles strict liability than the fault-based standards typical in U.S. tort law. The Personal Information Protection Law (PIPL) complements this framework by classifying biometric identifiers such as facial features and voiceprints as sensitive personal data, which require separate and informed consent for processing.[61] In recent years, Chinese courts have begun to apply these provisions in deepfake-related cases. For example, the Beijing Internet Court issued a ruling that unauthorized use of a voice actor’s voice via AI constitutes a violation of personality rights, which became the first judicial recognition of voice as an independently protectable interest under Article 1023 of the Civil Code, specifically in the deepfake context.[62] In another ruling, the same court held that an AI app’s unauthorized collection and use of facial data infringed personal information rights, thereby distinguishing between portrait rights and personal information rights.[63] In a separate criminal proceeding, a man was sentenced to over seven years in prison for producing and distributing deepfake pornography. The judgment also acknowledged a violation of personal information rights, reflecting the overlap between criminal enforcement and civil data protection norms in such cases.[64]

China has also issued a series of administrative regulations that impose ex ante obligations on online platforms. The Provisions on the Administration of Deep Synthesis Internet Information Services (2023) (“Deep Synthesis Provisions”) apply to both AI developers and online platforms,[65] and represent China’s first binding regulation to directly address deepfake technologies.[66] Rather than using the term “deepfake,” the regulation adopts a more neutral expression, “deep synthesis.”[67] The Provisions on the Administration of Algorithmic Recommendation in Internet Information Services (2021) (“Algorithmic Recommendation Provisions”) require platforms to ensure transparency in content promotion and to correct mislabeled material.[68] The Interim Measures for the Management of Generative Artificial Intelligence Services (2023) (“Generative AI Measures”) impose obligations on model providers and platforms to retrain models, prevent misuse, and ensure that generated content complies with legal standards.[69] In March 2025, the Cyberspace Administration of China (CAC), along with other relevant authorities, issued the Measures for the Labeling of Artificial Intelligence-Generated Content (“Labeling Measures”), which further strengthen ex ante obligations on platforms by mandating content labeling, source traceability, and clear user notification.[70] The Labeling Measures establish a dual-tiered labeling system for AI-generated or synthesized content. Explicit labels are user-visible indicators presented through text, audio, or visual symbols within the content or its interface to signal its synthetic origin. Implicit labels are machine-readable markers embedded in the content’s data structure, such as metadata or digital watermarks, designed to support downstream verification and provenance tracking.[71] These regulations reflect a governance philosophy that treats online platforms not as passive conduits, but as active gatekeepers responsible for ex ante control within the digital ecosystem.

While China’s approach to deepfake governance demonstrates strong institutional capacity, it also exhibits certain doctrinal ambiguities and structural constraints. Civil litigation involving deepfakes remains relatively limited, as the regulatory framework emphasizes platform-based preventive compliance regimes and administrative oversight.[72] This model tends to blur the boundary between private law remedies and public law regulatory enforcement, thereby placing tort-based responses in a relatively secondary role in the legal response to deepfake harms. Moreover, doctrinal uncertainties persist, particularly in attributing fault and establishing causation in the context of deepfakes. For instance, although the unauthorized use of an individual’s likeness or voice is actionable, courts have not clearly defined the threshold at which deepfake content becomes legally actionable.[73] The Civil Code does not explicitly distinguish between “fictional simulation” and “targeted impersonation”, leaving significant interpretive discretion to courts.[74] While this flexibility allows courts to adapt to novel forms of digital harm, it also creates doctrinal uncertainty that may undermine legal predictability and hinder effective redress for victims of deepfakes. Due to regulatory risk aversion, platforms often engage in proactive and wide-ranging online content moderation. While such measures facilitate the swift removal of harmful or illegal content, they may also risk chilling legitimate expression.[75] These dynamics underscore a central trade-off in China’s approach: while it offers an integrated and preventive regulatory regime for managing deepfake harms, its emphasis on ex ante regulatory control may generate tensions with legal transparency and the expressive function of civil adjudication.[76]

3.2 Singapore: Common Law Tort Constraints and Regulatory Intervention

Singapore’s approach to deepfake harms reflects a hybrid model that combines limited tort remedies with increasingly prominent statutory and administrative interventions. Unlike China’s codified civil framework, Singapore’s response is shaped by its English common law tradition and regulatory pragmatism. The country does not recognize a general right to privacy or a right of publicity under either common law or statute.[77] Instead, individuals must rely on established torts such as defamation, breach of confidence, or passing off, none of which provides comprehensive protection against identity-based deepfake harms. The tort of malicious falsehood may apply in cases involving economic injury, such as commercial deepfake endorsements, but it requires proof of malice and actual pecuniary loss, which limits its practical reach. Unlike in the United Kingdom, Singaporean courts have not recognized the tort of misuse of private information.

In Singapore, defamation, grounded in English common law and supplemented by the Defamation Act 1957,[78] remains the most directly applicable tort for addressing deepfakes, particularly where deepfake content has been published. To succeed, plaintiffs must show that the statement refers to them, is defamatory, and has been published to a third party,[79] requirements that can be difficult to satisfy in cases involving fabricated or anonymized deepfakes. Singapore courts have recognized that malice may be inferred from selective or manipulative disclosures, a line of reasoning that may apply to deepfakes implying misrepresentation.[80] Defamation thus provides victims with a potential remedy where reputational harm is established, but it does not adequately capture the broader dignitary and emotional harms that deepfakes often inflict. Breach of confidence offers limited recourse, as deepfakes typically do not involve confidential information.[81] Passing off, traditionally confined to commercial misrepresentation, requires proof of goodwill, misrepresentation, and damage to business reputation. These elements are rarely met where personal identity is manipulated without an established commercial persona.[82] The absence of torts for misuse of private information or appropriation further narrows available remedies.[83] Legal scholars have proposed reforms, such as recognition of a right of publicity or an informational privacy tort, but these proposals remain unrealized.[84]

In light of these doctrinal limitations, Singapore’s response to deepfakes increasingly relies on statutory and administrative mechanisms. The Protection from Harassment Act (POHA) enables victims to seek protection orders and claim damages for emotional distress without needing to satisfy traditional tort thresholds.[85] The Penal Code criminalizes the non-consensual distribution of intimate images and recordings,[86] which may extend to deepfake pornography involving identifiable individuals. In 2024, Singapore’s election law was amended to prohibit the creation and distribution of deepfakes involving political candidates.[87] These amendments were first applied during the 2025 general election, when the Elections Department issued guidelines requiring political parties to disclose AI-generated campaign materials and comply with anti-manipulation rules.[88] Though framed as electoral safeguards, these measures reflect growing concern over deepfakes and a preference for ex ante regulation.

Broader digital regulation further supports this governance model. The Protection from Online Falsehoods and Manipulation Act (POFMA) authorizes government correction directions for false or misleading deepfake content,[89] while the Online Criminal Harms Act (OCHA) enables takedown orders for digital impersonation and online scams.[90] These statutes expand enforcement capacity but do not provide civil causes of action. Platform obligations are further enhanced by the Online Safety (Miscellaneous Amendments) Act 2022, which empowers the Infocomm Media Development Authority (IMDA) to mandate transparency, takedown, and reporting requirements for major platforms.[91] Under the Code of Practice for Online Safety, deepfakes are categorized as a specific risk category requiring proactive mitigation.[92] Although these obligations enhance online platform accountability, they arise under public law and do not enable private claims.

To address the growing risks posed by deepfakes, Singapore has adopted a combined strategy of legal sanctions, platform obligations, and public education.[93] This regulatory architecture reflects a hybrid model that prioritizes criminal and administrative measures, leaving tort law at the margins of deepfake governance. Civil litigation remains limited to traditional reputational harms, while identity-based or emotionally injurious deepfakes are addressed primarily through criminal prosecution or administrative regulatory mechanisms.[94] This model facilitates swift administrative action but does not provide compensatory relief or serve tort law’s normative role in articulating rights and wrongs.[95] However, this landscape is poised to evolve. In November 2024, Singapore’s Ministry of Law (MinLaw) and Ministry of Digital Development and Information (MDDI) launched a joint public consultation on legislative reforms aimed at enhancing online safety.[96] Following the consultation, the two ministries proposed a suite of measures to strengthen victim redress and improve accountability for online harms. These include the introduction of statutory torts covering impersonation, non-consensual intimate imagery, and identity-based deepfake content; the creation of an independent redress authority; and expanded obligations for platforms to prevent and respond to such harms. As of March 2025, the ministries have published a summary of consultation feedback, confirmed broad stakeholder support, and indicated their intention to proceed with the proposed measures, although no draft legislation has yet been introduced.[97] If enacted, these reforms would significantly enhance civil remedies and signal a potential reconfiguration of Singapore’s digital governance model through the integration of tort-based remedies.

4 Comparative Insights and Doctrinal Opportunities for U.S. Tort Law

The United States’ approach to deepfake regulation has begun to take shape through a combination of federal and state-level initiatives. While Congress has only recently initiated lawmaking efforts in this domain, state legislatures have responded more swiftly and expansively.[98] At the federal level, the most notable development is the enactment of the Take It Down Act in May 2025.[99] This statute is the first nationwide measure addressing AI-generated deepfake content. It requires online platforms to remove non-consensual intimate imagery (NCII) within 48 hours of receiving a verified notice. The Act authorizes the Federal Trade Commission to enforce these takedown obligations, and it also imposes criminal penalties for noncompliance.[100] While procedurally significant, the Act remains limited in scope: it applies only to NCII, provides no private right of action, and does not address impersonation or other deepfake harms beyond intimate content.

Alongside these federal efforts, state legislatures have taken the lead in developing legal responses to deepfake harms. As of mid-2025, 47 states have enacted at least one law targeting deepfakes, with California, Texas, New York, and Utah among the most active.[101] These laws vary considerably in scope but tend to fall into four overlapping categories: sexually explicit deepfakes, political manipulation, fraud, and platform accountability. Most states have criminalized the creation or distribution of non-consensual sexually explicit deepfakes and nearly 30 states have adopted disclosure or labeling requirements for AI-generated political content, especially during election periods.[102] A smaller number of states, such as California, have imposed obligations on platforms to provide disclosure mechanisms and remove reported deepfakes.[103] Despite growing legislative activity, significant gaps remain. Several states have yet to enact laws specifically addressing deepfakes,[104] and jurisdictional challenges persist due to the cross-border nature of deepfake content. The result is a fragmented regulatory landscape in which states serve as laboratories of policy experimentation, while emerging federal efforts address only specific categories of harm.[105] This patchwork raises concerns about inconsistent enforcement, forum shopping, and unequal protection for victims of deepfakes.

Table 2 provides a structural comparison of how China, Singapore, and the United States address deepfake harms, highlighting key divergences across four dimensions: doctrinal structure, intermediary obligations, procedural tools, and constitutional safeguards. These contrasts underscore the fragmented nature of the U.S. tort system and suggest areas where targeted statutory reforms could complement or enhance existing doctrines.

Table 2:

Comparative legal structures for addressing deepfake harms.

| Dimension | China | Singapore | United States (status quo) |
|---|---|---|---|
| Doctrinal structure | Codified protection of likeness, voice, and reputation under the Civil Code | No codified right to image or privacy; tort protection limited and fragmented | Fragmented common law doctrines (defamation, privacy torts, right of publicity, IIED) with limited statutory coverage^a |
| Intermediary obligations | Statutory obligations for labeling, consent, takedown, and real-name verification | Administrative enforcement under POFMA, OCHA, and the online safety framework | Broad immunity under Section 230; no general duty to monitor or remove user-generated content absent specific statutory obligations |
| Procedural tools | Combined judicial and regulatory takedown mechanisms | Fast-track orders and correction directions from administrative agencies | Limited statutory takedown process (Take It Down Act); otherwise primarily post-harm judicial remedies |
| Constitutional safeguards | Limited constitutional protection for expression; strong administrative discretion | Moderate constitutional protection for expression; limited constitutional review | Strong First Amendment protections; speech-related torts subject to strict constitutional limits |
Note: Table 2 compiled by the authors based on the analysis in Sections 2 and 3 of this Article. ^a Other doctrines, such as fraud, may apply in narrow or context-specific circumstances and may also give rise to criminal liability. Accordingly, they are not the focus of this Article, which centers on tort law responses to deepfake harms.

4.1 Reconstructing Personality-Based Torts

Deepfake technologies expose a doctrinal gap in U.S. tort law: the absence of a coherent framework for addressing the misappropriation of likeness, voice, and other fundamental aspects of personal identity and dignity. No comprehensive federal statute currently governs deepfake-related harms, and existing state-level responses remain narrow and fragmented.[106] Victims must rely on a patchwork of traditional torts, such as defamation, false light, and the right of publicity, each governed by distinct and often ill-fitting doctrinal requirements. Liability may depend on proving falsity, actual malice, or commercial use, leaving many deepfake harms beyond the effective reach of existing tort law.[107] As Benjamin Sobel argues, the core harm of deepfakes lies not in factual falsehoods or disclosures of private information, but in the synthetic manipulation of how individuals are portrayed.[108] This kind of representational distortion inflicts a distinct dignitary harm that existing tort frameworks, which primarily focus on truth, deception, or commercial use, struggle to address. Recognizing this conceptual gap underscores the need to rethink personality-based torts to better address how individuals are portrayed, even when such portrayals are neither factually false nor commercially exploited. This Article argues statutory reform alone will not close this gap. Common law doctrines should remain available to provide flexible, residual protection in contexts that legislation cannot fully anticipate.

The comparative functionalist approach adopted in this Article supports moving toward a more coherent doctrinal framework for addressing deepfake harms. China’s Civil Code provides an instructive example by consolidating protections for likeness, voice, and reputation within a unified personality rights regime.[109] U.S. tort law has traditionally addressed reputation, privacy, and publicity interests through distinct causes of action, each grounded in a separate normative rationale. This Article does not propose merging these claims into a unified personality tort. Instead, it argues that a more promising approach is to enact a targeted federal statute that directly addresses the distinct harms posed by deepfake technologies.[110] Such a statute could define deepfakes in technologically neutral terms and impose liability where reasonably foreseeable reputational, emotional, or dignitary harm occurs. To guide enforcement, the law might distinguish “benign deepfakes” (lacking intent to cause harm), “malicious deepfakes” (intended to deceive or inflict harm), and “prohibited deepfakes” such as non-consensual pornography or manipulative election content. In cases involving particularly harmful types of deepfakes, such as sexually explicit or criminal impersonation, courts could apply rebuttable presumptions of harm, shifting the evidentiary burden to defendants to disprove intent or injury. Liability should not be confined to commercial contexts, since many serious deepfake harms, including humiliation and psychological distress, arise in non-commercial settings. Extending coverage beyond commercial exploitation reinforces tort law’s normative function in protecting autonomy, dignity, and personal integrity.

Additionally, legislators might consider recognizing elements of personal identity, such as voice and likeness, as property-like interests within statutory frameworks, particularly in commercial contexts where deepfakes yield economic gain. This approach draws on the conceptual clarity and exclusivity associated with property law. As Margaret Jane Radin has argued, certain aspects of personhood warrant property-like protection because of their deep connection to individual identity and autonomy.[111] However, as Jennifer Rothman cautions, such a model must be carefully structured to avoid the risk of over-commodification, as expansive interpretations of the right of publicity may ultimately undermine individual control over personal identity.[112] Even if such statutory reforms were to introduce civil liability, they would likely operate in defined domains, targeting specific types of deepfake harms or providing clear procedural tools, rather than replacing the broader and more flexible remedial space offered by the common law. In this sense, statutory remedies and common law torts can function in a complementary manner. Legislation can set out bright-line protections and deterrents, while the common law can fill residual gaps, adapt to unforeseen fact patterns, and offer remedies where statutory coverage is partial or contested.

Given these considerations, courts could incrementally adapt existing tort doctrines to address deepfake harms. Through judicial interpretation, established claims such as defamation, the right of publicity, and intentional infliction of emotional distress (IIED) could gradually evolve to better accommodate deepfake scenarios. This gap-filling role may require easing certain doctrinal thresholds, such as the requirements of actual malice, intent, or commercial use, that have traditionally limited recovery in identity-based claims. By integrating statutory innovation with incremental common law adaptation, U.S. tort law could preserve the internal logic of existing torts while developing a layered and resilient framework capable of addressing both foreseeable and unforeseen deepfake harms.

4.2 Repositioning Tort Law through Conditional Intermediary Liability

U.S. tort law should be repositioned to account for the structural challenges that deepfakes expose, particularly the central role of online intermediaries. Traditional tort doctrine presumes identifiable actors, traceable injuries, and bilateral duty-breach relationships, assumptions that collapse in the context of anonymously created, algorithmically amplified deepfakes. In deepfake scenarios, platforms often serve as critical enablers of harm. Accordingly, tort law must evolve beyond doctrinal refinement to incorporate conditional intermediary liability models that reflect platforms’ systemic roles in risk amplification.[113] In China, platform obligations are primarily imposed on service providers through administrative regulations and sector-specific rules that require real-name verification, labeling of synthetic content, and timely removal of unlawful material upon notice or regulatory order,[114] with liability generally triggered when platforms fail to comply. Singapore adopts a centralized enforcement model: statutes such as the Protection from Online Falsehoods and Manipulation Act (POFMA) and the Online Criminal Harms Act (OCHA) empower government agencies to compel content takedowns and corrections, bypassing traditional civil litigation.[115] Singapore’s experience illustrates the importance of integrating common law tort remedies with proactive regulatory tools to ensure robust platform accountability. While the two jurisdictions differ in institutional design, they converge in treating intermediaries as active governance actors, embedding responsibilities in the legal architecture beyond purely ex post remedies.

By contrast, tort-based claims against platforms in the United States face significant doctrinal and statutory barriers. Although plaintiffs have at times invoked common law theories such as negligent publication or aiding and abetting, such claims are routinely precluded by Section 230 of the Communications Decency Act.[116] Originally enacted to promote free expression and innovation, Section 230 now creates a structural accountability gap by shielding platforms from liability for harms they are uniquely positioned to prevent or mitigate.[117] This divergence underscores the limitations of a remedial framework that imposes no affirmative duties on intermediaries, even as they function as primary gatekeepers of online content.[118] Scholars have increasingly criticized this overbroad immunity. For instance, Goldberg and Zipursky argue that Section 230(c) should not be read as granting blanket immunity to platforms. While the provision shields intermediaries functioning as passive conduits, it should not extend to those that actively recirculate harmful content.[119] Although their analysis focuses on defamation, the underlying reasoning applies to other forms of platform-amplified harm, including algorithmically disseminated deepfakes.

This Article proposes a balanced approach: conditioning intermediary immunity on reasonable content moderation efforts. Section 230 could be revised to preserve safe harbors only for platforms that implement reasonable safeguards, such as deploying deepfake detection tools, labeling manipulated content, responding promptly to credible complaints, and maintaining transparency protocols.[120] Courts could treat failure to adopt such measures in the face of foreseeable harm as a breach of duty, drawing on an analogy to product liability.[121] This approach would not impose strict liability, but rather a graduated duty of care proportional to the platform’s role in harm propagation. This strategy promotes platform responsibility while avoiding undue chilling of legitimate expression. Its implementation, however, may face significant political and institutional hurdles, including industry lobbying, First Amendment sensitivities, and federal legislative gridlock. A more pragmatic starting point may lie in state-level experimentation with tailored intermediary duties, as well as encouraging industry-led codes of conduct with the potential for judicial endorsement or enforcement. Conditional intermediary liability regimes in China and Singapore illustrate the feasibility of integrating online platform duties into the broader governance architecture. These incremental pathways could generate practical insights, build normative consensus, and lay the groundwork for future federal reform.

4.3 Navigating Constitutional Boundaries

Comparative analysis underscores a shared challenge for all legal systems confronting deepfakes: balancing accountability with robust expressive protections. Although China and Singapore do not share the United States’ First Amendment tradition, their regulatory models offer both instructive strategies and cautionary insights, illustrating how preventive controls may improve enforcement but also pose risks to legal transparency and freedom of expression. For the United States, the central challenge is to develop legal responses that respect constitutional commitments while effectively confronting the distinctive harms posed by deepfakes. In China, the governance of deepfakes emphasizes preventive regulation through administrative oversight and platform obligations, enabling swift content removal in cases involving threats to national security, infringement of reputation or privacy rights, or obscene or pornographic material, removals that often occur without judicial review.[122] Singapore also employs an interventionist approach, but operates under a more clearly articulated statutory regime. Laws such as the Protection from Online Falsehoods and Manipulation Act (POFMA) and the Online Criminal Harms Act (OCHA) empower designated authorities to mandate corrections or takedowns of manipulated or misleading content.[123] While judicial review is formally available, it is infrequently used. Both models underscore the enforcement capacity inherent in centralized administrative regulation, but also invite closer scrutiny of its transparency and the adequacy of safeguards for expressive rights.

By contrast, the United States relies on a speech regime both shaped and constrained by strong constitutional protections. The First Amendment protects even offensive or knowingly false expression, so long as it does not fall within narrowly defined and historically recognized unprotected categories such as defamation or fraud, particularly when the expression involves matters of public concern.[124] Although tort claims are technically private causes of action, they remain subject to constitutional limits when they implicate protected speech. For example, in Hustler Magazine v. Falwell, the U.S. Supreme Court rejected liability for intentional infliction of emotional distress (IIED) without proof of falsity and actual malice.[125] In United States v. Alvarez, the Court reaffirmed that even factually inaccurate speech may fall within the First Amendment’s protective scope.[126] These precedents significantly constrain tort-based remedies for deepfake harms, especially in cases involving parody, opinion, or artistic expression. Moreover, rigid application of existing tort doctrines may leave individuals harmed by deepfakes without meaningful legal recourse.[127] Courts could adopt more nuanced standards in cases involving realistic deepfakes that depict private individuals in sexually explicit, criminal, or election-related scenarios, particularly where such portrayals are invasive or infringe upon personal dignity.[128] In such narrowly defined contexts, courts might introduce a rebuttable presumption of harm or intent to cause harm, shifting the evidentiary burden to defendants. Defendants would retain the opportunity to rebut by showing substantial public interest or expressive justification. This approach would differ from China’s model of state-driven administrative oversight and from Singapore’s current regime of executive enforcement under statutory authority,[129] instead focusing on demonstrable harm while maintaining constitutional rigor.

5 Conclusions

Deepfakes expose the structural limitations of current U.S. tort law in addressing digital harms. These shortcomings are particularly evident where harmful content is amplified by algorithms, shielded by anonymity, and disseminated across borders. These dynamics underscore the need for doctrinal adaptation and targeted reform. This Article examines how China and Singapore address deepfake harms through structurally distinct yet functionally instructive models. China embeds personality rights and platform duties within a codified civil law regime, while Singapore supplements common law torts with targeted statutory and administrative interventions. These responses illustrate the value of doctrinal coherence and proactive institutional engagement in confronting identity-based digital harms. Building on this comparative analysis, this Article makes a twofold contribution to the emerging literature on deepfakes and tort law. First, it adopts a comparative functionalist lens to analyze how selected Asian jurisdictions have developed legal responses that differ institutionally but converge functionally. Second, it proposes a governance model for the United States that integrates doctrinal reform, intermediary accountability, and constitutional safeguards to address the distinct challenges posed by deepfakes.

Rather than replicating foreign regulatory models, U.S. tort law should engage critically with their institutional designs to develop a more responsive and principled framework. A reform agenda would involve three pillars. First, reconstructing personality-based torts to encompass deepfake harms involving likeness, voice, and identity. Second, introducing conditional intermediary obligations to address enforcement gaps. Third, ensuring that all interventions remain grounded in constitutional principles, particularly those set forth in the First Amendment. Tort law alone cannot resolve all the complex challenges posed by deepfakes, but it remains an essential component of a broader regulatory ecosystem. Future inquiry into how tort law interacts with adjacent legal domains, including criminal law, intellectual property law, and data protection law, will be crucial to constructing a more holistic and integrated response to deepfake harms. Comparative insights from China and Singapore highlight alternative doctrinal and institutional approaches beyond traditional Euro-American paradigms. As deepfake content circulates globally, effective legal responses will require coordinated international dialogue and cooperation. Deepfakes thus mark a critical frontier in global digital governance, testing the capacity of legal systems to adapt to technology while preserving fundamental human rights. By reimagining tort’s institutional role, courts and legislatures can better safeguard personal autonomy, human dignity, and the integrity of information in the age of AI.


Corresponding author: Huijuan Peng, PhD Candidate, Yong Pung How School of Law, Singapore Management University, Singapore, Singapore, E-mail:

Received: 2025-08-19
Accepted: 2025-09-07
Published Online: 2025-09-29

© 2025 the author(s), published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
