
Tort Liability for Failure to Age Gate: A Promising Regulatory Response to Digital Public Health Hazards

Matthew B. Lawrence, Brett Frischmann and Avi Sholkoff
Published/Copyright: August 21, 2025

Abstract

Tort liability for failure to “age gate” is a promising legal response to the public health hazards of AI, social media, sports gambling, and other digital spaces. It conditions liability for harms to minors on an app’s failure to take reasonable steps to prevent minors from gaining access or otherwise to apply appropriate governance rules, such as privacy-protective default settings or genuine parental consent. While no one legal response is a panacea, tort liability for failure to age gate carries several distinctive advantages that make it a particularly promising option at this stage of an evolving regulatory challenge. These advantages include avoiding Section 230 preemption, mitigating First Amendment barriers, fitting within existing tort doctrine, technical viability, and minimizing harms to innovation. The relative insulation of the tort system from industry manipulation compared to legislative processes, and its consistency with the need to balance prevention with access in regulating addictive technologies, are added advantages.

1 Introduction

From railroads to cars to chemicals, new inventions have brought previously unimagined benefits alongside previously unimagined harms. The question of how law can best mitigate the harms of new technologies without diminishing their benefits is never easy to answer, requiring a choice among legal actors, regulatory mechanisms, and goals.[1] That choice is made, moreover, not by one all-knowing, good-faith actor (as scholarship often assumes[2]) but by a myriad of incompletely coordinated institutional actors working under emerging, evolving, uncertain conditions and subject to industry-subsidized efforts to distort the information and political environments to favor industry interests, whether or not those align with public interests.[3]

This dynamic is now playing out around the regulation of digital spaces such as AI, social media, and sports gambling apps. Public health harms associated with these and other new technologies are prompting a range of legal actors to consider responses, from state legislatures to courts to federal agencies.[4] Such harms have also entered the popular conversation, led by books including Jonathan Haidt’s Anxious Generation and David Courtwright’s Age of Addiction alongside documentaries like The Social Dilemma.[5] Industry, for its part, has incentives to invest in shaping both the information environment and political outcomes to serve industry interests, which history suggests will mean preserving the status quo regulatory environment or shifting any changes to entrench industry market power.[6] And even without that complicating force, legal actors’ work in responding to the public health concerns around digital technologies is made more difficult by the newness of these technologies and by the distinctive, incompletely understood benefits, harms, regulatory challenges, and opportunities that come with them. Indeed, clear categories of digital technologies have not yet formed – what exactly is “social media”? – and constructing such categories is in part the work of law.[7]

This symposium contribution explores one particularly promising legal response to the public health hazards of AI, social media, sports gambling, and other digital spaces: tort liability for failure to “age gate,” that is, tort liability for harms to minors stemming from an app’s failure to take reasonable steps to prevent minors from gaining access or otherwise to apply appropriate governance rules, such as privacy-protective default settings or ensuring genuine parental consent.[8] “In simplest terms, age gating only means that age serves as the triggering condition for governance rules: if [age], then [governance rules].”[9] While no one legal response is a panacea, tort liability for failure to age gate carries several distinctive advantages that make it a particularly promising option at this stage of an evolving regulatory challenge. These include avoiding Section 230 preemption, mitigating First Amendment barriers, fitting within existing tort doctrine, technical viability, minimizing harms to innovation, the relative insulation of the tort system from industry manipulation as compared to legislative processes, and consistency with the need to balance prevention with access in regulating addictive technologies.

The contribution proceeds in three parts. It first offers background on public health concerns around digital technologies, possible legal vehicles for protecting public health in digital spaces, and the considerations relevant to the choice of regulatory vehicles. It then elaborates on the advantages of tort liability for failure to age gate as a legal tool for regulating AI, social media, sports betting, and other digital technologies, before briefly concluding.

2 The Complex Regulatory Challenge of Digital Public Health

In 2025, there is growing concern that the companies that design and operate new digital technologies are doing less than they reasonably could to mitigate the various sorts of public health harms – from distracted driving and parenting to gambling addiction to eating disorders associated with algorithmically-directed consumption of “pro-ana” (pro-anorexia) content – that come with these technologies.[10] With that concern have come a variety of efforts to use law to require these tech companies to do more. These efforts range from consumer protection and tort suits in state and federal court to enacted and proposed legislation, federal agency regulation, public school phone bans, and guidance recommending ways parents and users can themselves reduce the harms of digital technologies.[11] Of particular significance to this analysis are the lawsuits – pursuing a variety of claims – alleging children were harmed by “addictive design” or other design features of social media,[12] sports gambling,[13] and AI apps.[14] Several of these suits have survived motions to dismiss and are now in discovery.

This range of efforts and the concerns underlying them is surveyed in a developing literature that spans the fields of privacy, tort law, technology law, First Amendment law, medicine, public health, and even antitrust.[15]

From an advocacy perspective there is perhaps little difference among these potential types and sources of regulation, except insofar as some may offer greater prospects for success (however the advocate defines it) and others lesser prospects. From the perspective of a reformer seeking ways to bring law to bear to address a perceived public health crisis, each of these potential regulatory tools is another possible mechanism to solve this problem, and, so, worth exploring.[16] From an industry perspective, meanwhile, each front is another place to bring persuasive, economic, epistemic, and political power to bear either to preserve the status quo and prevent regulation or to steer regulation in a direction favorable to industry, such as locking in market share.

From a public interest perspective, however, the question of which legal actor, imposing which sort of regulation (tort, consumer protection, licensure, etc.), may be very important. Legislators or administrators are theoretically better positioned than courts to tailor regulation to prompt the right industry incentives or behavior – on a welfarist economic view, this would presumably mean forcing industry to internalize externalities without unduly stifling innovation. On the other hand, legislators and administrators may be worse positioned than courts to resist industry lobbying pressure to preserve the status quo or use regulation to create barriers to entry. And separate from the type of actor, the legal requirements they impose come with different mixes of costs and benefits – licensure is effective but administratively burdensome, liability is less administratively taxing but may be under- (or over-) enforced. And so on.

In deciding among legal tools to address a particular variety of activity, it can be useful to focus pragmatically on the distinctive features of that activity.[17] Three considerations have been raised around digital technologies that may be particularly important for the selection of regulatory vehicle in this domain. First, some credit a partial regulatory vacuum created by Section 230 of the Communications Decency Act as itself having played an essential role in online innovation.[18] From this innovation maximalist perspective, even modest additional regulation of digital technologies as they now exist might harmfully chill the experimentation necessary to develop the new digital technologies of the future. For reasons we elaborate upon below, we believe this view is significantly overstated,[19] but it is nonetheless a powerful one that looms large in any discussion of regulation of digital technologies. Those who do subscribe to it view regulation of industry behavior, especially new behavior (new and different technologies and apps), as particularly worrisome in the long term, whatever the short-term costs and benefits.

Second, there are legitimate concerns that sports gambling apps, social media, and AI can be designed in ways that foster compulsive use and addiction.[20] Indeed, former tech industry executives have suggested that some industry players knowingly employed “operant conditioning” techniques used in slot machines – like intermittent reinforcement and variable reward – to “hijack” users’ brains and thereby sustain use.[21] These efforts appear to have been successful. More than half of Americans report being “addicted” to their smartphones.[22] While prior reporting has focused on social media and gambling, similar concerns are emerging around AI chatbots. Contrast the pleas of AI chatbot users for restored access to their “loved ones” after generative AI chatbot character.ai cut off access to an early model due to concerns about harms to users, on the one hand, with a lawsuit brought against the company on behalf of a 14 year old user whose compulsive use allegedly contributed to his suicide, on the other.[23] Addictive design poses its own particular regulatory challenges, simultaneously creating industry incentives to target youth to foster the development of lifelong compulsions, increasing industry incentives to distort the information environment to discourage or impede regulation, and decreasing the effectiveness of command-and-control style regulations directed at compulsive users.[24]

Third, the newness of social media, sports betting and other gambling apps, and generative AI – and the continuing, breakneck pace of evolution in digital technologies – exacerbates the uncertainty that always surrounds new technologies. While debates linger about the exact effects of “second hand smoke,”[25] for example, the uncertainty about fundamentals that clouded regulation of tobacco in the 1960s and 1970s is long gone; cigarettes have largely remained cigarettes. The same could be said of railroads or automobiles – these developments brought uncertainty at first due to their novelty, but they became familiar with time. Several decades into the emergence of digital technologies, by contrast, they remain an area of fundamental change and evolution. Uncertainty about their costs and benefits, and uncertainty about the right regulatory approach to reap their benefits while mitigating their costs seems poised to remain a regulatory fact of life in this domain for the foreseeable future.

For example, this uncertainty underlies a debate about whether the significant evidence that some digital technologies contribute to compulsive use is sufficient to be judged as “causal” in medical parlance, “causal” for the purposes of particular legal theories, or supportive of policymaking. This uncertainty in turn necessitates not only more evidence development but careful attention to the evidentiary standards relevant to different disciplines and contexts. The evidentiary standard that medicine might demand to find clinical causation, for example, is generally more stringent than either civil liability’s “more likely than not” standard or the standard a parent or policymaker would wish to employ in deciding whether a regulation was appropriate.[26] Consider that medicine did not recognize gambling addiction as a potential mental illness until 1980 and as an actual such illness until 2013;[27] public health concerns around gambling had, of course, supported state and federal restrictions beginning more than a century earlier.[28]

A comprehensive comparative examination of potential regulatory options for addressing the public health risks of digital technologies in light of these considerations of innovation, addictiveness, and uncertainty would be useful, but such a project is beyond the scope of this symposium contribution. More modestly, in the Part that follows, we highlight several promising features of one particular regulatory option – tort liability for failure to age gate.

3 Tort Liability for Failure to Age Gate as a Promising Approach

Pending state and federal court actions alleging that “addictive design” by social media companies hurt particular children in particular ways raise a plethora of claims, from violation of consumer protection statutes to defective design claims under products liability law.[29] Among these, a few lawsuits have raised a particular claim: that social media companies tortiously failed to age gate their apps; that is, despite awareness of risks to youths and despite adopting policies forbidding youth use, the companies failed to take reasonable steps to prevent minors from using their apps or accessing harmful features.[30]

This particular sort of regulation – reliance on the tort system overseen by courts to insist apps take sufficient steps to discourage use by minor users, like a pool owner putting a fence around their pool to keep trespassing neighborhood kids from drowning – is a particularly promising one, for several reasons.

3.1 Legally Viable

To start, unlike many possible reforms, encouraging age gating through the tort system is legally viable. Tort liability for failure to age gate can readily avoid the limits of Section 230, be made consistent with constitutional limits on regulation of expression, and be made to fit with existing tort theories.

3.1.1 Section 230

For years social media platforms and other internet service providers have operated free from much state regulation due to the preemptive effect of Section 230 of the Communications Decency Act, a federal law that provides immunity from state laws that would treat internet service providers as the publishers of third-party content.[31] Absent federal legislative change, this statutory barrier stands as a substantial obstacle to legal checks. For example, in V.V. v. Meta Platforms, a family sued Snap for having allegedly recommended that their 15-year-old daughter chat with a sexual predator who went on to blackmail and abuse her.[32] The court found the case barred by Section 230, insofar as it would have based Snap’s liability on its promotion of particular content to the minor, namely, interaction with the sexual predator.

In ongoing lawsuits adjudicating various claims against social media companies for contributing to harms to minors, however, courts have begun to trace the limits of Section 230's immunizing reach in this context. In particular, multiple courts have found that state regulation of content-neutral platform activity – platform activity that does not itself discriminate among third-party content (such as by promoting or failing to censor such content) – is not barred.[33] This approach sidesteps lingering, open questions about whether Section 230 applies to “matchmaking” and personalization; whatever the answer to those questions, Section 230 does not apply to platform design choices that do not discriminate among third-party content at all.

Failure to age gate can easily fit within this safe harbor from Section 230 preemption. This approach predicates liability on the platform’s conduct – its failure to include a particular design feature – and that conduct itself is not related to content.[34] So long as the trigger for the requirement to age gate is itself content neutral, it is difficult to frame failure to age gate as doing what Section 230 forbids, namely, treating a platform as the provider of third-party content. (Section 230 is more likely to limit such a claim if the trigger for age gating is itself specific third-party content, however, like “indecent content” or “violent content.”[35]) Consistent with this analysis, multiple courts in pending cases have allowed claims targeting platforms’ parental controls and provision of access to minors to proceed to the merits, denying platforms’ motions to dismiss on Section 230 or First Amendment grounds.[36]

3.1.2 Constitutional

Focusing on a failure to age gate can also avoid constitutional objections that have hindered other regulatory approaches. In Moody v. NetChoice, the Supreme Court noted that a platform’s editorial choices in amplifying, arranging, or censoring content can themselves constitute expression protected by the First Amendment’s guarantee of freedom of speech.[37] That finding does not make it impossible to regulate such choices, but it makes doing so very difficult, because to support a restriction of expression a state must satisfy stringent legal tests of constitutional interest and fit.[38]

Tort liability for failure to age gate can avoid or satisfy constitutional objections in four ways. First, so long as the trigger for the requirement to age gate is itself not content discriminatory,[39] it is difficult to argue that, from an app’s perspective, forbidding or permitting access by minors is an expressive act – certainly such conduct is far removed from the editorial choices the Supreme Court recognized as constituting editorial expression in Moody. And even if a court were to scrutinize a content-neutral age gate under the First Amendment, that content neutrality favors intermediate rather than strict scrutiny. Second, this is especially so when age gating merely entails technical measures to make more effective a bar on use by minors already included in an app’s terms of service. One could perhaps imagine an app under certain circumstances arguing that inclusion of posts by minors is an aspect of its editorial discretion – that it intends to express an inclusiveness of all voices, including minors, for example.[40] But it is hard to imagine how an app could argue that tort liability premised on its failure reasonably to effectuate its own prohibition on minor use interfered with its expressive choices. Third, courts routinely apply generally applicable tort theories to expressive conduct without separately assessing whether tort law, as applied, impermissibly restricts expression.[41]

Fourth and perhaps most fundamentally, even where a court finds that age gating implicates the freedom of speech, states would have strong arguments that requiring age gating satisfies tests of interest and fit (especially if courts were to apply intermediate scrutiny, as the Supreme Court did to a state law requiring age gating for online obscenity in Free Speech Coalition v. Paxton[42],[43]). Courts have long recognized states’ compelling interests in protecting the health and psychological wellbeing of minors.[42] Requiring age gating tracks neatly with these existing state interests.

3.2 Fit with Existing Tort Theories

Failure to age gate also plausibly fits within existing tort theories. Discussed below is a sampling of such theories, with particular emphasis on products liability, premises liability, and attractive nuisance.

3.2.1 Products Liability

A first tort theory that might be employed to hold an app liable for failure to age gate is products liability.[44] To state a products liability claim, the item at issue must be a covered “product.” A product is tangible personal property distributed commercially for use or consumption.[45] Nontangible items can be considered products if their use is “sufficiently analogous” to the distribution and use of personal property.[46] When drafting the Third Restatement of Torts, its authors noted that courts might eventually have to assess whether digital software is a product.[47] The aim of products liability law is to incentivize manufacturers to exercise “proper care” and to deter the production and ultimate use of defectively developed goods.[48]

The applicability of products liability principles to social media, and by extension other digital technologies, is currently a central point of contention in litigation. Courts that have held social media apps to be “products” have allowed design-based claims, including claims for failure to age gate, to proceed into discovery.[49] On the other hand, other courts have found that social media apps are not properly considered “products,” and so refused to allow products liability claims to move forward.[50]

Specifically, in Patterson v. Meta Platforms,[51] plaintiffs survived a motion to dismiss on a products liability theory. The plaintiffs asserted that the social media platforms “intentionally designed their products to frustrate the exercise of parental responsibility.”[52] It is feasible, the plaintiffs asserted, for these platforms to design an application that not only makes it more difficult for children under 18 to access it but also requires explicit parental consent for access.[53] (We agree, as we elaborate below.)

The Patterson complaint notes that while this safer, age-gated product design exists, the social platforms “chose to ignore or disregard it” and maintained the defective design to earn higher profits.[54]

Similarly, in In re Social Media Adolescent Addiction Litigation, the Northern District of California found that the defendant platforms owe duties to the plaintiff users of their platforms because of their status as product makers.[55] The court analogized the allegedly defective parental controls on Snapchat, Instagram, and TikTok to tangible products such as parental locks on bottles containing prescription medicine, which protect young children. The court also acknowledged the similarities between the Lyft mobile application, which a Florida appellate court had characterized as a product, and the age-related defects plaintiffs alleged were present in the social media applications.[56]

Further, in a footnote citing the Third Restatement, the Northern District suggested courts should look to public policy factors in defining a “product.”[57] This analysis could include “the public interest in life and health,” as well as, among others, “the justice of imposing the loss on the manufacturer who created the risk and reaped the profit.”[58] As discussed previously, the Supreme Court has recognized that states have a general interest in protecting the health and well-being of children.[59] These interests often include the psychological well-being of minors, preventing drug addiction, and public health.[60] Tort liability can serve as a way for parents and family members to protect their own children. It can also serve as a way for states themselves, under parens patriae, to seek remedies for the children in their states.[61] Ultimately, for the Northern District, “The alleged defects concern children’s well-being and safeguards to ensure adequate parental oversight of their online activities. As such, they advance the public’s interest in young people’s well-being.”[62]

By contrast, in Social Media Cases, the California Superior Court held that social media platforms are not products because – using language from the Restatement – they are not “tangible; one cannot reach out and touch them.”[63] The court found that products liability is a common law doctrine focused on addressing unanticipated harms from the mass-market manufacturing of items; social media platforms are thus not tangible products because they lack well-defined, anticipated functions.[64] The court reasoned that “without the foundational element of a static product from which ordinary consumer expectations or benefits from use of the product can be discerned, there is no reasonable basis for applying the tests for whether a product is defective.”[65],[66]

3.2.2 Premises Liability

Social media apps might properly be understood as “products,” but if not, might they be considered “premises” for purposes of tort law?[65] One argument is that the answer may be yes: the essence of the business model of platforms such as Instagram or Snapchat is the invitation of users into their space, coupled with an awareness of and control over the platform that the user lacks.[67] Premises liability is a negligence claim in which a person alleges that their injuries resulted from a defendant’s failure to maintain property in a “reasonably safe condition.” Under premises liability, courts impose liability for the conduct of third parties only if that conduct is “reasonably foreseeable.”[68] To prove a premises liability claim, a plaintiff must show, broadly speaking, that the defendant knew of an artificially created danger on the property and failed to warn the plaintiff about it.[69] Premises liability can apply differently depending on the “status” of the person on the land: (a) an invitee (to whom the property owner owes the highest duty); (b) a licensee; and (c) a trespasser (owed the lowest duty).[70]

Scholars have made vigorous arguments that platforms such as Facebook and Twitter, and even the Metaverse, are similar to the physical places mentioned in the Restatement.[71] Others have posited that websites which showcase third-party content could face premises liability only if they fail to take preemptive measures against potential harm.[72]

For instance, those who upload content to these platforms could be considered “invitees,” which would subject those in charge of the platforms to more stringent requirements to protect users.[73] One example of such a site is ChatGPT.[74] Because OpenAI is theoretically not aware of every piece of content produced by the platform, but is aware of “some obvious” harmful activity, it could take steps to address that activity. If it did, those steps could potentially serve as a defense to premises liability.[75] If it did not, plaintiffs who were harmed could argue that OpenAI failed to fulfill its duty of affirmative care. As this discussion illustrates, premises liability applies differently depending on the “status” of the person on the land.[76] In Rodriguez v. Meta Platforms, Inc., a wrongful death action, the plaintiffs allege that an 11-year-old became addicted to Snapchat and Facebook and eventually committed suicide, and they argue that the child should be treated as an “invitee” for purposes of premises liability, which would require a heightened duty of care toward users. The complaint asserts claims sounding in products liability and negligence, among others. For instance, it alleges that the social media platforms “designed, manufactured, marketed, and sold products that were unreasonably dangerous because they were designed to be addictive and detrimental to mental health of children to whom the Defendants knowingly marketed their products.”[77] The complaint further alleges that the defendants failed to adequately verify the ages and identities of users on their platforms.[78] Other aspects of the complaint focus on the addictive design of these social media applications. The complaint states that Meta and Facebook deliberately designed their features to cause children’s brains to release dopamine, bringing them “euphoria.”[79] This euphoria is short lived, however: as soon as the dopamine is released, children’s minds “downregulate” the number of dopamine receptors being stimulated, and the children become dejected.[80] The complaint further asserts that addictive use of social media by children is “psychologically and neurologically analogous to addiction to internet gaming disorder.”[81]

Among other claims, the complaint in Rodriguez asserts premises liability. It alleges that because Meta compares its platforms to “physical places” in its own marketing and promotional materials in order to reap financial benefits, its platforms are similar to the brick-and-mortar businesses discussed in the Restatement. In this context, the complaint asserts that youth social media users should be considered business invitees.

3.2.3 Attractive Nuisance

A tort doctrine with particular relevance to the failure of an app that nominally forbids use by children – and knows itself to include features that can be harmful to children – is that of attractive nuisance.[82] If an owner maintains a device or machinery of an “unusually attractive nature” that attracts children, the owner appears to, at least implicitly, invite these children onto their property. As a result, these children are considered to be “rightfully on the premises.”[83] A landowner can be liable for physical harm to trespassing children caused by an artificial condition if (a) the landowner knows children are likely to trespass there, (b) the condition is one the landowner knows will involve unreasonable risk of death or serious harm to children, (c) the children, because of their youth, do not discover the condition or realize the risk of engaging with it, (d) the utility of maintaining the condition and the burden of eliminating the danger are “slight” compared to the risk to the children who will encounter it, and (e) the landowner fails to exercise reasonable care to protect the children or eliminate the danger entirely.[84]

The classic example of an “attractive nuisance” is a swimming pool, and there are numerous cases finding a swimming pool owner liable for harms to trespassing children because of their failure to properly gate their pool to prevent access.[85] The comparison here to a social media, sports gambling, or an AI company’s failure to take reasonable steps to prevent access by minors despite awareness of potential harms to minors on their app is obvious. Like a swimming pool, social media can be a place of joy, fulfillment, and community – benefits that justify even the serious risks entailed. But, also like a swimming pool, the harms of social media can be substantially mitigated by putting a sensible gate around the pool to prevent unsupervised access by minors. And, as with a person or entity building a swimming pool, a person or entity building a digital app may have insufficient incentive to take such sensible steps without the looming threat of tort liability.

In Rodriguez, along with premises liability, the plaintiffs also asserted claims under this doctrine. The plaintiffs argue that the defendant social media platforms specifically designed a product targeted toward young children but failed to provide adequate protections.[86] They allege that Instagram and Snapchat knew these harms were occurring but – using language from tort law – “failed to exercise ordinary care owed” to underage “invitees” to prevent the solicitation of sexual favors from young girls by older users.[87]

One potential barrier to attractive nuisance liability for failure to age gate arises from the term “artificial conditions” as characterized in the Restatement.[88] Examples of such conditions include “recreational structures” such as swimming pools, trampolines, or tree houses.[89] Collections of such cases emphasize the physical and artificial nature of the conditions underlying the tort. For instance, claims involving insects and animals such as dogs or horses have not proven successful because they are not considered artificial.[90] Further, “items” are generally not considered to be attractive nuisances.[91] This may create an either/or choice: if a digital space is a product, then products liability may apply instead, and vice versa.

Perhaps the most substantial challenge in applying attractive nuisance or premises liability theories to digital spaces is the lack of physical tangibility of such spaces.[92] Application of common law tort theories to digital spaces is not unprecedented, however. Take trespass to chattels, for instance.[93] Someone commits this tort when they intentionally dispossess another of their chattel or “use[] or intermeddle[]” with a chattel in another’s possession.[94] In the early 21st century, some courts applied this principle to the online context.[95] In CompuServe Inc. v. Cyber Promotions, the Southern District of Ohio expanded the tort to unsolicited bulk email.[96] It found that “electronic signals generated and sent by computer have been held to be sufficiently physically tangible” and that the plaintiff had a possessory interest in its computer systems.[97] Because the defendants’ conduct was “clearly intentional,” the court held that the elements of trespass to chattels had been satisfied.[98] As mentioned, tangibility served as the court’s key focus in finding the tort applicable to the online context. Similarly, in eBay Inc., the Northern District of California held that an entity’s use of a “software robot” that consumed eBay’s server capacity was enough to support a claim for trespass to chattels.[99]

3.2.4 Public Nuisance and Civil RICO

Public nuisance torts may also provide a basis for imposing liability for failure to age gate, especially considering the impacts of compulsive use of digital technologies by children on education and social welfare systems. Scholars have thus suggested that public nuisance might apply to social media companies, arguing that these platforms should be held to a higher duty not to harm the minds of young children.[100]

The tort of public nuisance focuses on unreasonable interference with a right common to the public.[101] Such unreasonable interference can occur when conduct is of a “continuing nature” or has “produced a permanent or long lasting effect.”[102] The remedies for public nuisance claims include injunctive relief to prevent future conduct and damages for past conduct. For damages actions, an individual must have suffered damage “different in kind” from what the general public endured.[103] Plaintiffs have used public nuisance in suits against tobacco companies and in opioid litigation.[104]

Finally, while not a tort theory, it is also worth noting that civil liability under the Racketeer Influenced and Corrupt Organizations Act (RICO) has been an effective tool for applying tort-like or tort-adjacent liability to tobacco manufacturers.[105] More recently, in Medical Marijuana, Inc. v. Horn, the Supreme Court held that civil RICO provides “a remedy for business and property loss that derives from a personal injury.”[106] This ruling raises the possibility of liability for false or fraudulent claims related to an app transmitted “by means of wire . . . communication,” such as claims about an app’s accessibility to or safety for minors.[107]

3.3 Technically Viable

Age gating is a ubiquitous feature of modern life in offline spaces. Highways, hotels, liquor stores, casinos, bars, public pools, and banks are just some of the places where minors are routinely forbidden or permitted only with adult supervision. From banking to gaming, online age gating is also ubiquitous. Many online age gates, however, are notoriously easy to circumvent. While offline age gates routinely require a government-issued ID or even a visual assessment of a person’s age, and while some online age gates today require such steps, many others require a mere attestation that the user satisfies the applicable age requirement, perhaps accompanied by an attested birth date.[108]

As one of us describes in an in-depth interdisciplinary exploration of the subject, industry’s reliance on self-attestation age gates online, despite their vulnerability to user circumvention, is sometimes defended based on the perception that more effective online age gates are not technically viable.[109] That paper also notes that the truth of this perception – the actual difficulty of implementing effective age gates online – is itself a function of the socio-legal environment, insofar as legal requirements of routinized age gating themselves foster the development of effective age gates, in turn reducing the cost of imposing such gates.

More fundamentally, however, the perception is not true. There are today technically viable options for age gating online that are much more effective than self-attestation. For example, anonymous credentialing employs a trusted and anonymous identity verifier working between a user and an app.[110] Moreover, technical and institutional capability in this space is growing, so the viability of age gating online may well increase in the years to come. Technologies under development include age estimation using biometrics and capacity testing[111] as well as age verification using traditional IDs, digital IDs, and anonymous credentials. These options are not foolproof today and never will be – just as the requirement to show an ID to purchase alcohol can be circumvented by a minor intent enough to procure a fake ID, even the most effective online age gates can potentially be circumvented by a dedicated and savvy user. Moreover, distinctive considerations in the online space complicate the analogy to physical age gating and raise additional challenges and tradeoffs to consider.[112] But, also like the requirement to show an ID to purchase alcohol, age gating online can be effective enough to meaningfully increase the difficulty of access, especially for new or potential users.
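To make the “if [age], then [governance rules]” pattern concrete, the sketch below illustrates, in Python and using entirely hypothetical names (it does not reflect any particular vendor’s or verifier’s actual API), how an app might branch on an age signal supplied by a trusted, anonymous verifier rather than on mere self-attestation, applying privacy-protective defaults and a parental-consent requirement when the user is a minor.

```python
from dataclasses import dataclass


@dataclass
class AgeSignal:
    """Hypothetical result from a trusted, anonymous age verifier.

    The verifier attests only that the user is over or under a threshold
    (and whether a parental-consent token is present); it does not reveal
    the user's identity or exact birth date to the app.
    """
    over_18: bool
    parental_consent: bool  # verified consent token present for minors


def apply_governance_rules(signal: AgeSignal) -> dict:
    """'If [age], then [governance rules]': choose settings from the age signal."""
    if signal.over_18:
        # Adult account: standard settings apply.
        return {"access": "full", "privacy_defaults": "standard"}
    if not signal.parental_consent:
        # Minor without verified parental consent: deny access (the "gate").
        return {"access": "denied", "privacy_defaults": None}
    # Minor with verified parental consent: restricted, privacy-protective defaults.
    return {
        "access": "restricted",
        "privacy_defaults": "private_by_default",
        "recommendations": "off",
        "direct_messages": "contacts_only",
    }


# Example: a minor whose parent has provided verified consent.
print(apply_governance_rules(AgeSignal(over_18=False, parental_consent=True)))
```

The design choice the sketch is meant to highlight is that the app sees only a threshold signal and a consent flag, not the user’s identity or exact birth date; this is how anonymous credentialing aims to reconcile effectiveness with the privacy concerns discussed below.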

To be sure, the design of an age gate raises subtle questions, and effectiveness is of course not the only criterion – privacy risks in particular must be considered as well.[113] But assessing whether a technically viable but costly intervention ought to have been adopted by a person or entity who caused another harm is the bread and butter of tort law.

3.4 Fit with Distinctive Challenges of Digital Technologies

3.4.1 Innovation

As discussed above, innovation is a primary concern for opponents of regulation of online spaces. Opponents of regulation argue that there is some kind of causal connection between the blanket protection from much regulation provided by Section 230 and the pace of tech innovation in recent decades. As we understand their argument, in economic terms it holds that the cost of compliance with regulatory requirements, however slight, would significantly stifle the positive externality that is learning through experimentation, because checking compliance increases the cost of experimentation for any given firm, especially when impacts are uncertain and occur at scale. From a welfarist perspective, this “cost” of regulation, if it existed, might be intolerable no matter the magnitude of regulatory benefits.

As a preliminary matter, we repeat our reservations about this causal claim. We note that it is based on supposition and intuition that are a far cry from the actual evidence of harm to individuals brought to bear in ongoing litigation and testimony before legislatures. Regulatory opponents also doubt that the evidence in support of tech harms is sufficient to justify regulation, but if we would not accept supposition and intuition in establishing the public health harms that motivate regulation, why would we accept them in establishing the innovation risks that might undercut the case for regulation?[114]

We also note Rosenquist, Morton, and Weinstein’s point that not all innovation is good. In particular, the sort of innovation that preventing addictive design may stifle – innovation to make an app more addictive, fostering compulsive use and thus time on device – “need not increase consumer welfare.”[115] It is a challenge to develop an argument, whether welfarist or deontological, that tobacco companies’ discovery that additives could increase the addictiveness of their products made the world a better place.

Whatever their merit, however, questions about the threat to innovation posed by regulation of digital technologies are certainly front and center in arguments against regulation. Here lies another advantage of tort liability for failure to age gate. Such liability is targeted at a particular aspect of app design. It alters innovation incentives mainly around the development and implementation of effective age gates. While it is debatable whether that alteration itself would be beneficial (some might see reducing the cost of such gates over time while increasing their effectiveness as a boon, while others might worry that such innovation could fuel scope creep), such liability would not impact experimentation regarding app features or functions within the gate to the same extent as less targeted regulatory approaches. It therefore does not pose the same risk of chilling experimentation associated with, for example, a consumer protection standard necessitating consideration of product risk in all aspects of product design.

3.4.2 Addictiveness

The potential for digital technologies to be designed, like a slot machine, to foster compulsion and addiction in users creates its own set of distinctive regulatory challenges. As Hemel and Ouellette note, addictiveness warps innovation incentives, as it incentivizes innovators to invest in demand creation and habit formation rather than product quality.[116] This pernicious incentive is exacerbated where a designer has market power – as platforms obtain through network effects – because the designer then captures a larger share of created demand and, relatedly, a larger share of the benefits of epistemic and political manipulation of the regulatory environment.[117]

Tort liability for failure to age gate fits well with these challenges. To start, tort law is developed almost exclusively by courts and legal advocates.[118] Despite their limitations, courts are less (or at least differently) susceptible to industry manipulation than legislatures or administrative agencies.[119] Consider tobacco. As documented in an extensive body of public health literature, Big Tobacco long succeeded in forestalling lawmaking to reduce the harms of tobacco use. Ultimately, it was tort suits that created a regulatory check, culminating in the multistate Master Settlement Agreement,[120] and that success in turn stimulated legislative responses.

Moreover, while there are many problematic aspects of addictive design that age gating does not address, age gating does target a core concern in the regulation of addictive products: that companies will target minors in order to get them hooked at a vulnerable developmental stage, creating lifelong consumers.[121] This is why restrictions on sale to minors are such a routine feature of regulation governing other products, like tobacco and alcohol.

In addition, tort liability for age gating helpfully puts the onus for limiting exposure to minors on app developers rather than parents, schools, or others without direct control. To use a drug policy analogy, tort liability is a “supply side” restriction, like forbidding the manufacture or distribution of marijuana. Alternative regulations that operate directly on users – such as school phone bans – are on this analogy a “demand side” restriction. The “iron law of prohibition” holds that, due to the reduced elasticity of demand associated with addictiveness, demand-side regulations are likely to backfire, punishing rather than preventing addiction.[122]

Relatedly, from a public health perspective the possibility that a persistent user might overcome an age gate is a feature, rather than a bug. A persistent challenge in the regulation of addictive products and services is that of segregating potential users (whose use might usefully be discouraged to prevent addiction from forming) from actively addicted users (whose use is extremely difficult, and often harmful, to restrict). Age gating achieves this goal: a cutting-edge age gate would effectively prevent use by the vast majority of underage users – preventing the formation of addiction in those users – without cutting addicted users off from the workarounds they might use to retain access.

3.5 Uncertainty

Finally, age gating cuts through much of the uncertainty about the benefits and harms of digital technologies associated with their breakneck evolution. While technology is constantly changing, human psychology is not. Our understanding of the precise reasons and milestones is improving, but minors have long been understood to be developmentally more vulnerable than adults.[123] That vulnerability has long been understood to warrant restrictions on minors’ access to a plethora of products, places, and services going back almost two centuries.[124] By requiring age gating online, courts would simply carry this familiar line into digital spaces.

4 Conclusions

The question of how to address the public health risks posed by digital technologies is a complex one implicating myriad legal, technical, and institutional considerations. We do not purport to answer it here; instead, we highlight the benefits of one particular sort of regulation across these domains. Tort liability for failure to age gate is a promising approach because it minimizes constitutional and Section 230 challenges, fits with existing tort theories, is technically viable, and constructively navigates the distinctive challenges of innovation, addictiveness, and uncertainty that complicate public health regulation of digital technologies.


Corresponding author: Matthew B. Lawrence, Professor of Law, Emory University School of Law, Atlanta, USA; and Affiliate Faculty, Petrie-Flom Center for Health Law, Health Policy, and Biotechnology, Harvard Law School, Cambridge, USA

Received: 2025-06-16
Accepted: 2025-07-03
Published Online: 2025-08-21
Published in Print: 2025-03-26

© 2025 Walter de Gruyter GmbH, Berlin/Boston
