Synthetic futures and competition law
Ioannis Lianos
Abstract
This Article presents an in-depth analysis of the challenges that competition law enforcement faces in light of the rapid advancements in AI, quantum computing, and synthetic biology. It delves into the various approaches that competition law institutions, such as competition agencies and courts, can adopt to address the uncertainties surrounding the competition impact of corporate strategies and conduct in developing and applying these new general purpose technologies. The Article focuses on the four key features of this “coming wave”: asymmetry, hyper-evolution, omni-use, and autonomy, all interconnected with the rise of complex systems that contribute to uncertainty. It explores the limitations of the ordinary risk management (ORM) approach typically followed in competition law, based on the expected utility framework. It advocates for the application of the precautionary principle as a more accurate description of the approach taken by competition authorities in this context and a more normatively adequate option for regulating threats of harm in complex systems while incorporating responsible innovation concerns. Moreover, the Article extensively examines how the precautionary principle can be seamlessly integrated into the design of competition law institutions and the substance of competition law, discussing the various containment tools used by competition authorities to address uncertainty.
Introduction
The rise of new technologies has always been a challenge for competition law enforcement, starting with the expansion of railways, [1] the development of mobile and wireless telecom networks, the growth of digital online platforms, and most recently the evolution of artificial intelligence (AI). As the fourth industrial revolution unfolds, fusing the physical, digital and biological worlds, [2] we are witnessing unparalleled changes in our economic, social and political systems propelled by generative AI and large language models (LLMs), gene-editing and synthetic biology, robotic automation, and quantum computing. [3] This technological (r)evolution gives rise to the emergence of artificial phenomena that in the interim are assessed according to the traditional scientific and disciplinary framework(s) developed to engage with the natural world. [4] However, this wave of technological developments is expected to lead to the emergence of “synthetic” systems and worlds in which humans may not be in the driver’s seat, raising (for some) the distressing prospect of a “life post-anthropocene.” [5] There are four central features of this “coming wave” [6] of interest for our study:
(a) It gives rise to significant asymmetries of power, as these new technologies have the potential to establish modern “empires” that will be quite difficult for the Westphalian state to contain, if at all. [7]
(b) It generates hyper-evolution with an important acceleration in the diffusion of general purpose technologies (GPTs) that, because of the advantages of scaling and learning, prompt concentration and allow a small number of players to control the levers of the global economy.
(c) It is characterized by omni-use, as GPTs are adapted in different settings and economic sectors, which engenders intermarket feedback loops and technological convergence (the intersection of biology and digital technology, bio-digital, being the most recent example). [8]
(d) It is driven by autonomy, to the extent that autonomous systems interact with their surrounding environment independently of human action, which has “the potential to produce [a] set of novel hard-to-predict effects,” making the forecasting of threats excessively difficult. [9]
These new “synthetic” worlds bring changes to the existing socioeconomic and institutional systems, raising novel threats of harm that may not be predicted by knowledge systems, such as neoclassical economics, which mostly focus on linear processes, assume competitive markets and human agency, and often ignore the impact of technological change. [10] They will naturally require the legal system to adapt to such higher uncertainty. The new techno-structure will require unique legal coding, to allow its seamless operation and expansion. [11] In the field of competition policy, this evolution is altering human intervention in markets, either by displacing human activity at the production level or by enabling mass personalization at the demand level. [12] Such developments are also profoundly reshaping the “operational foundation of business,” as scalable AI-driven processes lead to looser forms of economic organization and affect the way value is produced. [13] Moreover, in view of the four abovementioned features, these “synthetic” worlds are characterized by the emergence of complex systems made up of a large number of parts that do not interact in a simple or predictable way. [14]
Competition law intervention alone will be insufficient to address all the potential threats posed by this incoming wave, [15] but competition authorities have nonetheless already attempted to predict possible threats of harm. [16] As a result, regulators often take action before these harms materialize or a solid scientific consensus has emerged, largely due to the fear that absent this precautionary approach it will be difficult and costly to mitigate the possible threats. The four features of the “incoming wave” and the complexity of the system (through the operation of network effects, feedback loops and cascade effects) further intensify this perception of urgency, despite the simultaneous fueling of uncertainty.
As competition authorities increasingly focus on future risks (“future gazing”), it is important to consider how legal technologies can address potential threats of harm. This Article explores the hypothesis that the precautionary principle, a legal concept that deals with unpredictability, may play a more prominent role in guiding competition authorities’ actions, particularly in relation to emerging technologies like AI. It first examines the precautionary principle and its potential application in competition law (Part I). It then delves into the regulatory debate surrounding perceived harms to competition from these technologies and provides a critique of the accompanying economic assessments (Part II). The focus then falls on the legal technologies of precautionary action in competition law that are available to address the threats posed by AI and other emerging technologies (Part III). In conclusion, the precautionary principle offers a vital framework for competition authorities and courts to proactively address potential competition distortions posed by artificial intelligence and the coming wave of new technologies. By enabling preventive intervention before irreversible competitive harm occurs, this approach helps safeguard market dynamics and innovation while managing the unique challenges posed by the transformative impact of generative AI and, more generally, autonomous complex systems.
I The Precautionary Principle and Complexity
A Uncertainty and the Scope of Intervention of the Precautionary Principle
The precautionary principle [17] usually comes into effect in the presence of decision-making under conditions of uncertainty, particularly in circumstances of extraordinary risk and significant ignorance about the future consequences of an action. [18] The use of the precautionary principle complements, and in specific circumstances substitutes for, ordinary risk management (ORM) approaches.
ORM approaches rest on cost-benefit analysis and on neoclassical economics’ (utilitarian) expected utility framework, both of which form the foundations of the standard framework of analysis in competition economics. Cost-benefit analysis compares the present values of an action (and a counterfactual without the specific action), under the assumption of a deterministic world “in which all relevant relationships are known without error.” In reality, however, two kinds of error frequently kick in: statistical error due to “random elements in the system” not accounted for, and deficiencies in our knowledge such as biased estimates. [19] In order to deal with such errors, neoclassical economics follows an axiomatic analysis of preferences that examines the expected behavior of an “idealized individual.” Such examination supposes that individuals’ utility functions are derived from preferences over risky alternatives (lotteries or gambles), which are considered as a probability distribution over a known finite set of outcomes (the expected utility hypothesis). [20] This replaces deterministic values with expected values. [21] The expected utility hypothesis was originally formulated to be used with probabilities known ex ante (objective uncertainties, e.g., the probability that a coin may fall heads or tails). The validity of its assumptions depends on “whether it yields sufficiently accurate predictions about the class of decisions with which the hypothesis deals.” [22] One may also integrate a degree of risk aversion in the cost-benefit analysis to accommodate uncertain prospects [23] by estimating an option price (i.e., estimating the willingness to pay (WTP) for the option of future use). [24]
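In schematic terms (a stylized textbook rendering rather than a formula drawn from the sources cited above), the expected utility of an action a whose possible outcomes x_i occur with probabilities p_i is

\[ EU(a) = \sum_{i} p_i \, u(x_i), \]

and ORM selects the action with the highest expected utility. The option price mentioned above can be sketched analogously as the largest sure payment OP an individual would make to keep future use open, that is, the OP that equalizes expected utility with and without the option:

\[ \mathbb{E}\left[u(y - OP \mid \text{option})\right] = \mathbb{E}\left[u(y \mid \text{no option})\right]. \]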
However, ORM approaches present many deficiencies that may place in question their use in the context of decision-making under uncertainty. First, one might argue that probabilities are not objective in the sense of relative frequency, but rather are subjective, reflecting an agent’s personal belief in the occurrence of an event. Hence, ORM attempts to combine an individual’s personal utility function with that individual’s subjective probability distribution (the subjective expected utility hypothesis). [25]
Second, in analyzing a situation, it is essential to recognize the potential impact of uncertainty and ignorance. There are three distinct epistemic situations to consider: (i) (Knightian) risk, in which the possible outcomes of an action are known in advance, along with their relative likelihood, such that the probabilities can be expressed as relative frequencies; (ii) (Knightian) uncertainty, where there is no empirical or theoretical basis for assigning probabilities to outcomes; [26] and (iii) different degrees of ignorance, [27] where there is a lack of knowledge about the outcomes themselves and probabilities are unknown [28] (situations of “gross ignorance” or “unknown unknowns”). [29]
Third, the possible outcomes of an action (or inaction) may be characterized either by strong irreversibility, if the costs of reversing them are insurmountable, or by weak irreversibility, if those costs are modest. [30] Risks may also be systemic rather than idiosyncratic, in which case the standard techniques of ORM may not operate well, as they focus on the central tendency of the distribution and often ignore cascading effects. This is particularly the case in complex systems (e.g., network effects) that generate “fat tails” of high-damage outcomes, leading ORM generally to underestimate the costs of high-impact, low-probability (HILP) events. [31]
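A minimal numerical sketch (hypothetical distributions and parameters, offered purely for illustration) shows why measures built around the central tendency can miss HILP events when damages are fat-tailed:

```python
# Illustrative sketch only: central-tendency measures versus fat-tailed HILP losses.
import numpy as np

rng = np.random.default_rng(1)
N = 1_000_000

# Two hypothetical loss distributions with similar "typical" (median) losses:
thin_tailed = rng.lognormal(mean=0.6, sigma=0.5, size=N)   # well-behaved tail
fat_tailed = rng.pareto(a=1.1, size=N) + 1.0                # heavy Pareto tail

for name, losses in (("thin-tailed", thin_tailed), ("fat-tailed", fat_tailed)):
    print(f"{name}: median={np.median(losses):.2f}, mean={losses.mean():.2f}, "
          f"99.9th percentile={np.quantile(losses, 0.999):.1f}")

# The medians are comparable, but the fat-tailed distribution's extreme quantiles
# (the HILP events) are far larger, and its sample mean is driven by rare draws.
```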
Fourth, and relatedly, ORM approaches take a Newtonian approach [32] and use reductionist models that examine the interaction of “elemental components with defined properties” to describe the operation of a system and to estimate impacts; they assume linear changes and a stable system that naturally returns to, or is close to, its initial equilibrium point following external shocks. [33] However, in the world of complex adaptive systems, interactions are nonlinear and involve feedback loops and reciprocal dependence, thus changing dynamically over time. The system’s evolution and response to external shocks or stimuli are affected by its prior path and by hysteresis. [34] Such complex systems give rise to new features that could not be predicted from their current specifications (e.g., tipping points) [35] and are characterized by sudden, drastic, and eventually irreversible regime shifts. [36] Managing such complex systems requires new approaches based on adaptive, iterative processes that aim to reduce uncertainty over time (“groping in the dark”). [37]
In conclusion, ORM may suffer in situations of decision-theoretic uncertainty, when the possible outcomes of a potential action are known but it is difficult or impossible to establish their probabilities. The problem is more acute in the context of decision-theoretic ignorance, when there is no available knowledge of the set of possible outcomes of a potential action. Precautionary principles may support decision-making in these contexts and prevent decision-theoretic uncertainty or ignorance from resulting in inaction that harms welfare or some other value.
B The Content of the Precautionary Principle
From a practical decision-making perspective, one may distinguish between different interpretations of the precautionary principle: (i) it may provide some parameters to select a course of action given specific circumstances of decision-theoretical risk; (ii) it may set some epistemic standards to provide insights as to what one should reasonably believe under conditions of uncertainty; and (iii) it may denote procedural guidelines to express requirements for decision-making. [38]
According to Randall, there are three important elements for the operation of the precautionary principle: harm, uncertainty, and action (remedy). [39] The precautionary principle may be triggered when there is a “sufficient” level of “scientific and credible evidence” of a threat (chance) of harm requiring some precautionary response. [40] A “weak” precautionary principle suggests that, in the presence of serious risks, uncertainty is normally not sufficient to justify inaction. [41] Alternatively, a “strong” precautionary principle imposes a “de minimis condition” beyond which the principle is triggered, [42] meaning it is “determinative,” as regulators are “required to act on it.” [43]
Harm refers to a threat of harm (chance of harm), that is, “an indication of impending harm or a signal correlated with future harm.” [44] This expands the situations of uncertainty over harm beyond Knightian uncertainty to include situations of gross ignorance or “unknown unknowns.” Uncertainty relates to the operational concept of evidence and concerns knowledge regarding the unpredictability of outcomes and likelihoods as well as, potentially, “the failure to know everything that is knowable.” [45] Uncertainty can be understood in three ways: the ‘decision-theoretic’ perspective refers to the absence of empirical evidence of outcomes; ‘scientific uncertainty’ points to the absence of a predictive model; [46] and ‘axiological uncertainty’ focuses on the lack of value assumptions. [47]
In this context, the precautionary principle may be used to adjust standards of evidence and provide, for instance, more weight to evidence suggesting a causal link between an activity and threats of serious and irreversible harm than one would give to evidence suggesting less dangerous or beneficial effects. This may also lead to a reconsideration of type I and type II errors, leading to an easier rejection of the null hypothesis (that there is no connection between the event characterized as a threat and a specific bad outcome), thus denoting a preference for type I errors over type II errors. [48] Another option would be to adopt scientifically sound and simplified “precautionary defaults” [49] to deal with regulatory decisions in the face of insufficient information—these could take the form of presumptions that may be triggered by certain events/criteria, or alternatively by cautious or pessimistic assumptions considered when interpreting the available evidence. [50]
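A simple numerical illustration (hypothetical figures, not drawn from any of the sources cited) of the trade-off just described, in which relaxing the evidentiary threshold accepts more type I errors (false alarms) in exchange for fewer type II errors (missed threats):

```python
# Illustrative sketch only: relaxing the evidentiary threshold trades type I for type II errors.
from scipy.stats import norm

# Suppose a noisy harm indicator is N(0, 1) if there is no threat (the null hypothesis)
# and N(1.5, 1) if the threat is real (hypothetical effect size).
effect_if_threat = 1.5

for alpha in (0.05, 0.20):                   # tolerated type I error (false-alarm) rate
    critical_value = norm.ppf(1 - alpha)     # reject the null above this observed value
    type_ii = norm.cdf(critical_value - effect_if_threat)  # chance of missing a real threat
    print(f"alpha={alpha:.2f}: critical value={critical_value:.2f}, type II error={type_ii:.2f}")

# Moving alpha from 0.05 to 0.20 roughly halves the probability of missing a real threat,
# the kind of re-weighting of error costs that a precautionary default encodes.
```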
Finally, remedial actions under the precautionary principle tend to be stronger than those prescribed by ORM, as the principle’s implementation aims proactively to avoid or mitigate the underlying threat of harm and to tailor the response to that threat. Precautionary remedies develop in a stepwise, sequential, iterative process so as to generate regulatory learning about the threat of harm. [51] Hence, one should assess the seriousness of a threat, its potential for harm, and the reversibility of that harmful outcome, before proceeding to “reasonable” measures. [52] Steel conceptualizes the principle of precaution as a “meta-rule” which “imposes general constraints on how […] decisions are made,” [53] a decision rule [54] “that selects among concrete policy options,” and an epistemic rule “requiring that a high standard of evidence be satisfied before a new technology is accepted as safe.” [55]
Steel also observed that the precautionary principle is relevant when a decision involves “a trade-off between short-term gain […] against a harm that is uncertain or spatially or temporally distant.” [56] He observes that such decisions involve, first, a “meta-precautionary principle” restricting the sorts of rules used, so as to avoid the paralysis resulting from scientific uncertainty. Second, he emphasizes that any precautionary measure adopted should be proportional to the plausibility and severity of the threat. [57] It follows from the above that the principle may intervene in a wide set of circumstances, not just those involving unquantifiable probabilities. [58]
Having determined the conceptual contours of the precautionary principle, we now turn to the question whether, when compared to the traditional ORM approach, it might better explain the action of competition authorities with respect to markets for novel technologies such as AI.
II Perceived Threats of Competition Harm of AI and the Technologies of the “Incoming Wave”
Competition authorities have been criticized for their slow and inadequate handling of the challenges posed by the digital economy and the Big Data revolution, often intervening only once digital markets have tipped. [59] Some courts, including the Court of Justice of the European Union (CJEU), have been quick to highlight the need for a precautionary approach in competition law to respond to digital developments, [60] although this suggestion has been ignored by competition authorities. This historical failure to grasp the technological changes transforming the competition landscape has evidently prompted competition authorities to become more proactive in recent years. Significant breakthroughs on the AI front (machine learning, large language models) have been immediately met with scrutiny from competition authorities, which have published a number of reports that identify threats of harm and wrestle with possible remedies. [61]
This analysis explores various AI-related concerns through the lens of precautionary intervention, drawing upon Randall’s threefold classification of threats. Rather than attempt an exhaustive examination, I focus on establishing the evidential foundation of these concerns and analyzing the nature of identified threats. Randall distinguishes between (a) novel threats, typically emerging from new technologies, which can be predicted and prevented before they materialize; (b) threats arising from “business-as-usual” practices that involve ongoing exploitation, where cumulative stress factors and regime shifts may eventually cause harm—these are particularly challenging to address as they stem from complex matrices of stressors, making both elimination and remediation costly; and (c) threats that, while novel, only appear harmful once widely dispersed (based on ex post knowledge), but may be relatively simpler to remediate as they can be attributed to a single agent or factor. [62]
This framework provides a valuable analytical lens for examining the evolution of competition authorities’ responses to AI-related challenges. Initially, these authorities, along with academia, concentrated on algorithmic collusion/coordination—a classic example of a type (a) threat that could be anticipated and eventually addressed proactively. However, as AI adoption has expanded across the economy, attention has shifted to exploitation concerns regarding consumers and trade partners due to corporate extraction strategies and the imposition of unfair terms on business or end-users of these novel technologies, which align with Randall’s type (c) threats. Most recently, the focus has turned to three interrelated concerns: the high economic concentration within various segments of the AI stack, the widespread deployment of algorithms throughout the economy, and the inherent characteristics of these technologies as potential sources of exploitation—primarily falling under Randall’s type (b) classification.
A Algorithmic Coordination [63]
Since the publication of an open letter by 70 scientists calling for more research on the societal impacts of artificial intelligence technologies, [64] and US v. Topkins [65] in which the U.S. DOJ examined the use of complex pricing algorithms for the first time, [66] the possibility of collusion by algorithms (and autonomous algorithmic collusion) has become a topic of intense policy debate. [67] The language game of competition law has so far only involved humans and their firms, but with the advent of AI is now faced with the introduction of computers/algorithms as new “players” in the game. [68] Online retailers use software programs to monitor the prices of their competitors [69] and adjust their own prices in response. Simultaneously, consumers may also benefit from the use of algorithms through reduced search and transaction costs and personalized product recommendations. [70]
As public authorities began to explore the threat of algorithms “offering opportunities to firms to achieve collusive outcomes in novel ways,” [71] the economic literature [72] soon distinguished three possibilities of algorithmic collusion: (a) conventional collusion enabled by preprogrammed pricing algorithms that use strategies to facilitate collusion, (b) collusion facilitated by third-party pricing providers, e.g., software companies that supply competing firms with the same or similar algorithms, and (c) algorithmic collusion facilitated solely through coordination by sophisticated pricing algorithms, without explicit communication from humans. [73] For our purposes, the focal point is the kind of evidence relied upon for this economic consensus to slowly emerge.
The first wave of scientific evidence analyzed simple algorithms (playing as rational players with limited memory and reasoning capacity) [74] to assess collusion by oligopolies in the framework of noncooperative repeated games (reflecting the reality that interactions between players in a market usually occur repeatedly over time). Several contributions by Ariel Rubinstein, Itzhak Gilboa or Ehud Kalai have used finite automata, considered as very simple types of algorithms, to model bounded rationality. [75] In 2015, Bruno Salcedo found that when four conditions were met simultaneously, namely that (1) firms set prices through algorithms that can respond to market conditions, (2) these algorithms are fixed in the short run, (3) they can be decoded by the rival, and (4) they can be revised over time, every long-run equilibrium of the game leads to monopolistic, or collusive, profits. [76]
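To make the intuition behind these conditions concrete, the following minimal sketch (with hypothetical price levels, not taken from any of the papers cited) shows a preprogrammed “grim trigger” pricing rule of the kind studied in this repeated-game literature: it responds to market conditions, is fixed in the short run, and can be decoded by a rival observing past prices.

```python
# Illustrative sketch only; the price levels are assumptions chosen for exposition.
COLLUSIVE_PRICE = 10.0    # price the rule tries to sustain (assumed)
COMPETITIVE_PRICE = 6.0   # punishment price close to marginal cost (assumed)

def next_price(rival_price_history):
    """Charge the collusive price so long as the rival has never undercut it;
    otherwise revert permanently to the competitive price (the 'punishment')."""
    if any(p < COLLUSIVE_PRICE for p in rival_price_history):
        return COMPETITIVE_PRICE
    return COLLUSIVE_PRICE
```

A rival that decodes such a rule understands that undercutting triggers a lasting price war, which is what allows supra-competitive prices to be sustained without any human communication.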
The second type of referenced scientific evidence pertained to computer simulated experiments where pricing algorithms in controlled (synthetic) environments were analyzed regarding their ability to sustain collusive strategies and their speed of convergence to above-competitive prices. Substantial attention was devoted in the economics literature to reinforcement machine learning, including Q-learning algorithms, where agents learn autonomously through trial-and-error interaction with their environment. [77] Emilio Calvano and others [78] studied experimentally the behavior of algorithms powered by Q-learning in a workhorse oligopoly model of repeated price competition and found that the algorithms consistently learned to charge supra-competitive prices, without communicating with one another or such strategies being preprogrammed in their design. [79] However, as Timo Klein noted, many of these results were either not robust to small fluctuations in the payoff function, or did not seem to be based on equilibrium behavior. [80]
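The flavor of these simulation experiments can be conveyed by a deliberately simplified sketch (all parameter values are assumptions chosen for illustration; it is not a replication of the studies cited above): two Q-learning agents repeatedly set prices on a discrete grid, observe last period’s prices, and update their value estimates from realized profits.

```python
# Illustrative sketch only: two Q-learning pricing agents in a repeated duopoly.
import numpy as np

rng = np.random.default_rng(0)

prices = np.linspace(1.0, 2.0, 5)      # discrete price grid (assumed)
n = len(prices)
cost, a, mu = 1.0, 2.0, 0.25           # marginal cost and logit demand parameters (assumed)

def profit(p_own, p_rival):
    """One-period profit of the firm charging p_own under a simple logit demand."""
    u = np.exp((a - np.array([p_own, p_rival])) / mu)
    share_own = u[0] / (u.sum() + 1.0)  # "+ 1.0" represents the outside option
    return (p_own - cost) * share_own

# Q[firm][last price index of firm 0, last price index of firm 1, own action]
Q = [np.zeros((n, n, n)), np.zeros((n, n, n))]
alpha, delta = 0.1, 0.95               # learning rate and discount factor (assumed)
state = (int(rng.integers(n)), int(rng.integers(n)))

for t in range(200_000):
    eps = np.exp(-1e-4 * t)            # decaying exploration rate
    acts = [int(rng.integers(n)) if rng.random() < eps else int(Q[i][state].argmax())
            for i in (0, 1)]
    rewards = [profit(prices[acts[0]], prices[acts[1]]),
               profit(prices[acts[1]], prices[acts[0]])]
    new_state = (acts[0], acts[1])
    for i in (0, 1):
        target = rewards[i] + delta * Q[i][new_state].max()
        Q[i][state][acts[i]] += alpha * (target - Q[i][state][acts[i]])
    state = new_state

print("prices the agents settle on:", prices[state[0]], prices[state[1]])
```

In the published experiments, agents of this kind typically settle on prices above the competitive benchmark; the sketch merely illustrates the mechanics of the learning loop rather than the robustness of that result.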
The third type of evidence, empirical work on the use of algorithms and the risk of collusion, has been comparatively rare. [81] In a seminal paper, Assad et al. explored the use of algorithmic pricing in the German retail gasoline market and concluded that widespread adoption could facilitate collusive behavior. Irrespective of the type of learning algorithm used, adoption made deviations from collusive conduct easier to detect and punish, thus making supra-competitive prices easier to sustain. [82]
It can be concluded that at the time the debate on algorithmic collusion took off in the mid-2010s, the few existing theoretical models and experimental studies revealed that it was possible for firms using pricing algorithms to reach and sustain collusive outcomes, but there was no consensus [83] as to the nature or level of the threat to market competition. Writing in 2020, Schrepel argued that “algorithmic collusion is a fundamentally unimportant subject for antitrust and competition law,” noting both the lack of significant empirical evidence and relevant cases brought by the competition authorities in the EU and the U.S. [84] Later in 2023, Assad et al. called for further research, noting that “we are in the very early stages of both academic and applied research on pricing algorithms and collusion.” [85]
However, some emerging economic literature raises more important and distinct concerns regarding algorithmic collusion through large language model (LLM) pricing agents, using simulations as an additional source of scientific evidence about algorithms. [86] The authors find that algorithms pretrained on very large datasets but without explicit instructions learn to play optimally by experience [87] and have more “discretion” as to the possible interpretation of their prompts. As a result, the LLM becomes “a randomized, ever-evolving ‘black box’ whose ‘intentions’ are opaque and largely uninterpretable, even to its users.” [88] The authors conclude that “it is conceivable that LLM-based pricing algorithms might behave in a collusive manner despite a lack of any such intention by their users,” even if the textual instructions they receive are “innocuous.” [89] These developments prompt us to critically examine the behavioral assumptions underlying our economic models. While traditional economics relies on predictions based on the rational “homo economicus,” we must now grapple with how algorithmic decisionmakers—what has been called “homo silicus” [90]—operate under fundamentally different parameters and constraints. [91] This shift challenges our established theoretical frameworks and demands new analytical approaches to understand and predict collusive market behavior. Despite the lack of a solid evidential basis and the ensuing scientific uncertainty, public authorities have not succumbed to inaction: the threats posed by algorithmic collusion are being tackled by new legislation (see Part III). [92]
B Unilateral Exploitative Conduct
While competition authorities are examining threats of algorithmic and AI exploitation, including those posed by generative AI, many of these concerns represent enhanced versions of conventional antitrust issues. Traditional industrial organization models of monopolistic behavior, particularly regarding excessive pricing and other forms of exploitative conduct (e.g., unfair trading terms), remain relevant but must now account for how AI’s sophisticated capabilities amplify these risks. This novel technology does not fundamentally alter the nature of these anticompetitive practices but rather intensifies their potential impact and reach, requiring a recalibration of traditional antitrust frameworks. For instance, the recent FTC Report on the growth of generative AI focuses on three ‘building blocks’—data, talent, and computational resources—and highlights the threats of harm arising from the concentration of the AI stack. [93] The Report also notes the potential for anticompetitive behavior by cloud-service providers and the increased likelihood that higher demand for server chips (needed to train AI) will be matched by “exorbitant data egress fees.” [94] Alternatively, on the less conventional side, the exploitation of customers and business users is possible through personalized pricing, exploitative tying, or by simply knowing more about the customers of competitors. [95] Exploitation may thus affect both the price paid for a digital service/product and/or some non-price parameter of competition such as quality (e.g., privacy). [96]
Parallel concerns have been expressed about the practice of behavioral pricing or personalized price discrimination, as sellers may be able to charge different prices depending upon a buyer’s search history, or “digital shadow.” [97] Tantamount to first-degree price discrimination, behavioral pricing has prompted calls for intervention, [98] and it has been conceded that the manipulation achieved by in-depth AI-leveraged knowledge of the individual consumer’s behavior will be more intense and lead to purchases that reduce consumer welfare. [99] “Price targeting,” as observed in various markets, [100] enables the producer to charge a specific consumer as much as his/her WTP, reducing the available income of that consumer to make other purchases. This entails a decrease in consumer welfare compared to the counterfactual of uniform marginal cost pricing, and it could enable the producer to capture the entire consumer surplus. [101] However, for consumers whose WTP is lower than the counterfactual uniform price, ‘personalized pricing’ may allow them to purchase specific products that they would otherwise be unable to afford. ‘Personalized pricing’ may therefore have ambiguous welfare effects, depending on the market structure and the tradeoff between the market ‘appropriation’ effect on consumers with high WTP versus the ‘market expansion’ effect on consumers with low WTP. [102] Additional conventional competition concerns addressed in the economic literature were that such AI-based discrimination may discourage consumer search, ultimately leading to suboptimal matching of consumers to products and aggregate higher prices for consumers. [103]
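A stylized numerical example (hypothetical willingness-to-pay figures and costs, used only to illustrate the trade-off just described) shows how personalized pricing can expand the market while appropriating consumer surplus:

```python
# Illustrative sketch only: the 'appropriation' versus 'market expansion' trade-off.
wtps = [10, 8, 5]   # willingness to pay of three consumers (assumed)
cost = 2            # marginal cost (assumed)

def uniform_outcome(price):
    """Profit, consumer surplus and number of consumers served at a single uniform price."""
    served = [w for w in wtps if w >= price]
    profit = (price - cost) * len(served)
    consumer_surplus = sum(w - price for w in served)
    return profit, consumer_surplus, len(served)

# Best uniform price: try each WTP as a candidate price and keep the most profitable.
best_price = max(wtps, key=lambda p: uniform_outcome(p)[0])
u_profit, u_cs, u_served = uniform_outcome(best_price)

# Perfect personalized pricing: every consumer is charged exactly his or her WTP.
p_profit = sum(w - cost for w in wtps)
p_cs, p_served = 0, len(wtps)

print(f"uniform price {best_price}: profit={u_profit}, consumer surplus={u_cs}, served={u_served}")
print(f"personalized pricing:  profit={p_profit}, consumer surplus={p_cs}, served={p_served}")

# Here the best uniform price (8) serves two consumers and leaves surplus 2 with the
# high-WTP buyer; personalized pricing serves all three (market expansion) but extracts
# the entire surplus (appropriation), so total welfare rises while consumer surplus falls.
```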
Reports by competition authorities have overlooked the more systemic risks of AI being widely dispersed to different economic and social activities (cumulative effect), hinted at by the academic literature. Indeed, the potential for AI to offer “a vast psychological audit, discovering and representing the desires of society,” [104] raises the risks of large-scale manipulation by powerful economic actors. Personalized pricing also presents fairness considerations (value ethics) because of both the lack of transparency and the exposure of sensitive personal data. [105] Generative AI developers may also have an incentive to demand unfair conditions for access, such as rights over content created by the AI or information uploaded by end or business users—again a concern that integrates broader concerns about fairness and responsible innovation. [106]
C Exclusionary AI-Related Theories of Harm
Exclusionary concerns also largely rely on conventional economic models of anticompetitive foreclosure and exclusion, transposed to the context of AI. Only minimal attention has been devoted to differentiating these models and adapting them to the specificities of LLMs and machine learning.
It has been alleged that algorithms may allow companies to undertake predatory pricing and supra-competitive selective pricing measures, [107] eliminating competitors from the market in the process. [108] For example, Uber collected data on drivers working for both it and Lyft and offered them targeted benefits to work exclusively for Uber, thus raising its competitor’s costs. [109]
Another concern commonly expressed is that major tech corporations, which already wield significant control over critical digital infrastructure and markets, could consolidate their dominance into the emerging AI sector: their entrenched advantages—from vast data repositories to established cloud platforms—could enable them to shape AI technology trajectories to the detriment of fair, open and effective competition. [110] Competition authorities have focused on the concentration at the level of the public cloud infrastructure, as well as the existence of partnerships between cloud service providers and AI foundation model providers. [111] Demand has emerged for a comprehensive review of M&A transactions and the scope of merger regulation concerning partnerships across the digital economy. [112] Simultaneously, enforcers have raised concerns about economic actors’ paired access to privileged data and unique algorithms, to the extent that they may produce a snowball effect whereby a large user base allows easier access to new training data that subsequently enables significant improvement to the models and ultimately attracts an even larger user base. [113] The potential for AI and LLMs to reduce interoperability between datasets or services and place rivals at a competitive disadvantage [114] is reminiscent of existing economic thought. Despite this, it should be noted that using user data to further refine the model in question will create a strong first-mover advantage because AI feedback loops will have a larger exclusionary potential than traditional data feedback loops. [115]
Barriers to the acquisition of publicly available data are a well-known concern in the context of the broader digital economy. [116] Presently, foundation model (FM) developers may gain access to new data either by drawing on their proprietary resources to use data they have already harvested in their business activity, or by purchasing data from third-party providers such as publishers and image repositories. [117] Such agreements for the sale, licensing, or sharing of user-generated content, particularly in community-driven platforms, [118] have raised the specter of exclusionary concerns for some competition regulators. [119] While early foundation models like Llama 2 and Stable Diffusion relied solely on publicly accessible data for training, this approach may soon hit its limits. As noted by the CMA’s AI Report, “in future it could be more challenging for FM developers to improve on model performance by increasing the scale of training data because freely available data may be fully exploited (ie there is no new data that models could be trained upon) or grow at a slower rate.” [120] The potential for LLMs to collapse when trained on recursively generated data means that synthetic data may not be used as a cheaper training data substitute by FM developers [121] and that access to real-world data is an essential ingredient for the success of new LLMs. [122] The well-established literature on the benefits of open access models for social welfare [123] has prompted discussion on open-source vs closed-source LLMs in competition authorities’ reports. [124] Both the CMA and FTC acknowledge the risk that open-source models may suddenly become closed, prompting consumer inertia [125] and locking in customers thereafter. [126]
The substantial investment in distributed computing systems, AI accelerator chips and GPUs, [127] coupled with the scaling laws observed when larger models integrate more data and training parameters to perform better than smaller models, [128] has sharpened competition authorities’ focus on access to computing power. Scale may mean that “FM development may exhibit economies of scale, as initial high model development costs (pre-training, fine-tuning) can be spread over a larger customer base,” thus conferring an additional advantage on large players. [129] This concern is motivated by conventional threats of dominance and foreclosure, especially in light of the concentration of AI chip production in the hands of U.S.-based Nvidia. [130]
The necessity of access to significant cloud computing capabilities also creates a concern for FM developers due to the high expense of in-house development and the restriction of external cloud service provision to only AWS, Azure and GCP, and specialized providers such as CoreWeave. [131] The limited availability of cloud service providers creates barriers to switching providers or choosing to use multiple providers at the same time, through measures such as complex tariff structures, egress fees and a low level of interoperability. [132] There is an incentive for cloud computing firms, which are active in several markets, to integrate AI in other products [133] and enter into one-way or two-way exclusivity agreements with FM developers to restrict access to FMs to only their cloud service in order to gain from a multimarket presence. Although it has also been noted that “once a FM completes its pre-training or fine-tuning, its performance level is essentially fixed, with the number of users having no immediate direct impact on user experience,” [134] nonetheless access to large volumes of feedback from different categories of users could enable multiproduct and service firms to improve their FMs beyond a standard achievable by smaller firms. As the CMA notes, “the greater the feedback effects, the quicker firms will be able to make their downstream FM services better, giving these firms a competitive advantage.” [135]
An additional concern highlighted by the reports is vertical or quasi-vertical integration, with the presence of some firms in two or more stages of the AI value chain raising traditional concerns of leveraging and anticompetitive foreclosure. [136] There is an increasing risk of further entrenchment of market power through partnerships between the main AI players and chip manufacturers (especially Nvidia), which may have ambiguous effects from a consumer welfare perspective. [137] When referring to the relationship between Microsoft and OpenAI, a recent OECD report observed that while powerful partnerships in the sector may not raise competition issues at the moment, they have the potential to be seriously deleterious in the future. [138] Indeed, as the CMA acknowledges in its AI Foundation Models Report, “(s)everal FM developers, such as Microsoft, Amazon and Google, own key infrastructure for producing and distributing FMs such as data centres, servers and data repositories.” [139] This enables FM developers and their Big Tech partners, which are present in a range of user-facing markets where FM technology can be integrated (e.g., online shopping, search, supply of software), to control various stages of the AI development and deployment process. Partnerships may adopt exclusionary strategies (e.g., restricting access to their FMs by companies outside their ecosystem, refusing to license their leading AI models, giving preferential treatment to their own downstream generative AI at the cost of competing downstream services [140]) as well as exploitative strategies (e.g., imposing exorbitant charges for the use of these FMs, introducing exploitative bundling, [141] tying generative AI applications to existing products to “reduce the value of competitors’ standalone generative AI offerings”). [142] A recent FTC report also highlights how M&A activity by major companies may encourage the purchase of critical applications and cutting off access to core products, as well as buying out rivals in the market in lieu of offering better services. [143]
Access to qualified AI experts and specialized financial backing remains essential for firms, and the prevalence of noncompete clauses that restrict the ability of workers to move to rivals may exacerbate the barriers to entry. [144] As acknowledged by the OECD, “(t)he expertise required to develop a foundation model includes the necessary AI based techniques, as well as the talent to progress techniques to derive the right outcomes.” [145] This reflects the concern over monopsony in labor markets originally highlighted in Joan Robinson’s IO models. [146]
The ‘ecosystemic’ theories of harm most recently advanced in academic writing are also touched upon in some reports, [147] particular attention being devoted to the possibility that ecosystem “stickiness” and customer lock-in may be reinforced by the use of AI. Some reports highlight that FMs may often integrate into existing digital ecosystems (e.g., mobile platforms, search engines, productivity software), providing the controlling players with the capacity to manipulate integration rules, by “funneling users toward their own generative AI products instead of their competitors’ products” [148] and ultimately limiting consumer choice. [149]
However, it is also widely recognized by these reports that AI may lead to improvements in existing products and services and enhancements in customer convenience, and enable entirely new solutions that address unmet needs of consumers and businesses. [150] In conclusion, reports by competition authorities observe both the potential disruptive impact of AI developments and the possibility that they may reinforce existing dominant positions, noting that it is “impossible to accurately assess what the impact on competition will be from potential new FM products and services.” [151] This Article only focuses on uncertainty stemming from competition-related threats of harm and does not deal with the broader set of threats of harm engendered by the use of AI and more generally of the technologies of the “coming wave” on the democratic process, labor’s share of the economy, environmental costs, or even existential threats, which may also call for a precautionary approach. [152]
III The Precautionary Principle, Innovation and Competition Law: The Legal Technologies of Precautionary Action
Having considered the threats of harm envisaged by competition authorities, we proceed to a normative discussion of the potential justifications for use of the precautionary principle in the context of ongoing scientific uncertainty. A particular effort will be made to address criticisms often put forward by opponents of the precautionary principle that its implementation inherently stifles innovation. Finally, we provide a descriptive account of the various forms of precautionary intervention available to competition authorities, noting how these have been used so far to address threats of harm generated by the technologies of the “incoming wave.”
A A Synergetic Approach to the Interaction Between the Precautionary Principle and Innovation: The Responsible/Sustainable Innovation Framework
Justifications and normative grounds for the precautionary principle vary from the failure of ordinary risk management (ORM) approaches, [153] through the ignorance of decisionmakers, to high-impact, low-probability (HILP) events [154] and the creation of an illusion of control, [155] as well as a desire to consider moral “secondary effects” or “social amplifications.” [156] Precautionary principles have nevertheless been criticized for their conceptual incoherence. Sunstein has argued that a “strong precautionary principle” would advocate for action “even if the supporting evidence is speculative and even if the economic costs of regulation are high,” leading to paralysis in decision-making. [157] However, this risk may be mitigated by conducting a risk-risk tradeoff and factoring in the forgone benefits of the abandoned action as possible harms of precautionary regulation. [158]
Much of the opposition to the application of the precautionary principle originates from the perception that it may reduce innovation incentives and stifle growth. [159] Following an examination of the opportunity and legal certainty costs arising out of the application of the precautionary principle, Portuese advocated in favor of the simultaneous use of an “innovation principle” to “balance out” the application of the precautionary principle. [160] This approach would see authorities aiming to integrate innovation at the levels of regulatory preparation and implementation, adopting agile regulatory tools such as regulatory sandboxes and innovation deals. There is nothing controversial in adopting a “weak” innovation principle, as put forward by the author. [161] However, the devil is in the details. This principle is presented as antagonistic to or in tension with the precautionary principle, [162] but this blurs the debate and does not address the elephant in the room—that one may take a precautionary approach to the protection of innovation by maintaining the value of future innovation trajectories or by opening up technological opportunities.
Indeed, innovation has multiple dimensions, some of which may significantly increase the wellbeing of society either now or in the future, while others may also lead to losses for certain societal groups without providing any compensating benefits. An innovation principle approach fails to consider the inherent uncertainty of the process of innovation, as only a very small minority of innovations involve situations of (Knightian) risk, the vast majority being characterized by (Knightian) uncertainty as to the probability of their success. The positive societal impact of a novel technology [163] is at best a guess and in most cases a known unknown or even an unknown unknown. [164]
In the context of scientific uncertainty about innovation and its outcomes to society, it may be advisable to adopt a precautionary principle operating for the preservation of (the chance of) innovation and the option value of future innovation. Exploring the interaction between the precautionary principle and innovation, we consider different scenarios: [165]
If there is no evidence ex ante about the possibility of harmful outcomes and it is impossible or significantly costly to contain such outcomes (the threat of harm is high and the uncertainty is obvious), the remedial precautionary action may be quite strong and involve even the prohibition of the activity or innovation in question.
If there is some knowledge about the possible outcomes and ex ante uncertainty about their likelihood, but it is possible to distinguish classes of cases based on their predisposition to generate serious harm or the societal aversion to harm in the relevant industry, then the remedial response should accommodate for these different situations through a categorical approach. This will reverse the burden of proof, in essence leading to a more iterative stepwise model of precautionary remedies that enables regulatory learning.
If there is scientific evidence ex ante about the possible outcomes and their likelihood, then it would be possible to proceed with a case-by-case ORM approach, requiring the careful modelling of the threat of harm and the circumstances of its occurring, eventually combining this with a precautionary approach by raising the standards of evidence.
However, any discussion on innovation should not only focus on the level of innovation, as is often the case, but also on its direction. [166] There are clear societal commitments in the EU (and other major jurisdictions) towards sustainable development goals (SDGs) [167] and this needs to be factored into any discussion concerning innovation. [168] There is an ongoing dialogue in science (propelled by Polanyi’s seminal work ‘The Republic of Science’) [169] about the need for scientists to take responsibility for the possible hazards their research may unleash, which has attracted attention to the demand for democratic governance over the innovation and technology process. [170]
Work in economic sociology and sociology of science also highlights the risk of prioritizing “framing” and “overflowing” [171] in “hot situations” where there is no stabilized knowledge base, and instead proposes that “hybrid forums” composed of experts and laypeople that would take into account the debates and sociotechnical controversies surrounding specific technologies would provide a more democratic context for innovation. [172] Anticipating future states of the world and future threats (“future-gazing”) through hybrid forums, which combine the predictive power of scientific experts with the inclusion of all possibly affected stakeholders, enables greater reflexivity on the part of actors and institutions. [173] Regulatory sandboxes may also provide similar mechanisms for anticipating negative impacts before these are generalized. These tools promote an understanding of the dynamics and shape of different technological futures and form part of the new paradigm of “responsible innovation.” [174]
The concept of “responsible innovation” has broadly been described as “taking care of the future through collective stewardship of science and innovation in the present” by limiting the asymmetric power of some actors and providing “room for public and stakeholder voices to question the framing assumptions not just of particular policy issues but also of participation processes themselves.” [175] It includes four dimensions: anticipation (strategic foresight), reflexivity (embedding social scientists and the legal profession in the innovation process), inclusion (democratic innovation governance), and responsiveness (a greater role for regulation and standards). [176]
As such, the simplistic juxtaposition between the precautionary principle and innovation does not account for the richer and more synergetic interaction between the need for a precautionary approach and the protection of responsible innovation that inspires most modern legal technologies of containment.
B Legal Technologies of Containment: Precautionary Principle-Inspired Competition Law Interventions
In light of scientific uncertainty as to the competition implications of the new technologies of the “incoming wave,” such as generative AI, synthetic biology and quantum computing, we can dissect different doctrines and approaches, related both to the substance of competition law and to its enforcement tools, that have integrated a precautionary approach and that may be used in this context.
1 Prohibitions and New Legislation Dealing with Novel Threats of Harm
The adoption of legislation prohibiting the use of technologies that impose unacceptable risks on society may be an option in the regulatory toolkit. This approach will usually concern novel technologies that appear ex ante to generate, according to the available scientific evidence, plausible threats of harm that are not addressed by the existing legislative framework (type (a) of Randall’s classification discussed in Part II).
By imposing different obligations on providers (and users) of AI technology depending on the level of risk, the European Union AI Act provides an example of such regulation. [177] AI systems that pose unacceptable risks are banned, while AI systems that pose high risk are subject to prior assessment before being commercialized and to obligations throughout their lifecycle. While generative AI, such as ChatGPT, is not classified as high risk, it is subject to transparency requirements due to the recognition that some high-impact general-purpose AI models may create systemic risk. However, the AI Act does not address any competition risks that may result from the use of advanced AI, [178] and the only indirect reference to competition is the requirement for the European Artificial Intelligence Board and the market surveillance authorities to cooperate with the EU and national competition authorities when, as part of their reporting obligations, they come across information that may be of potential interest for the application of EU competition law. [179]
Unsurprisingly, the issue of algorithmic coordination has been at the center of the regulatory debate about a possible ban of AI or at least some form of ex ante auditing before introduction by businesses. Suggestions have been made for the introduction of a per se prohibition on certain pricing algorithms that encourage supra-competitive prices, as well as antitrust liability determined by some form of algorithmic auditing and dynamic testing. [180] Some have even directly referred to Asimov’s three laws of robotics to adopt legal provisions and constraints, with a particular focus on smart algorithms that could learn to communicate by sending messages encoded in the prices charged and on the potential for sophisticated algorithms to overcome any provisions implemented. [181] Others have argued for not subjecting algorithms that facilitate collusion to per se prohibitions or bans, but for assessing them according to a structured rule of reason, balancing their negative effects on facilitating coordination with their procompetitive effects [182] and relying on rebuttable presumptions [183] in specific scenarios that raise significant threats of harm. [184] There have also been suggestions for adjusting the standards of ex post regulation, to allow the legal standard of proof to be more assertive as regards the possibility of ‘tacit collusion’ in this context. [185] Finally, others have objected to any regulation by arguing in favor of a ‘business-as-usual’ approach where algorithmic pricing is regarded as not posing any new problem that cannot be dealt with by current antitrust legislation.
As evidence has arisen regarding the potential anticompetitive threats posed by algorithms and new models have been published exposing the possibility for LLMs to enhance the collusive potential of algorithms, proposals have been made for stronger precautionary action. The recent Preventing the Algorithmic Facilitation of Rental Housing Cartels Act of 2024 Bill proposes to “[m]ake it unlawful for rental property owners to contract for the services of a company that coordinates rental housing prices and supply information,” designating such arrangements as a per se violation of the Sherman Act. [186] The bill came following the public outcry against the property-management software provider RealPage and its program ‘YieldStar,’ which aggregated private rental data to “help landlords push the highest possible rents on tenants.” [187] A number of tenants filed class action suits alleging illegal price fixing, [188] and the U.S. Department of Justice, joined by eight State Attorneys General, filed a civil antitrust lawsuit against the company in August 2024. [189] In the meantime, the Preventing Algorithmic Collusion Act of 2024 Bill, which aims to expand the scope of the prohibition of the use of pricing algorithms to include those that can facilitate collusion through the use of nonpublic competitor data and to put in place an antitrust law enforcement audit tool, was introduced in the U.S. Congress. [190]
As the use of generative AI intensifies in different sectors of economic activity and economic models about collusion evolve to account for the capability increases in LLMs, it is expected that some jurisdictions will slowly move to “hard” precautionary approaches that integrate bans for certain types of algorithms or require preauthorization and extensive auditing prior to commercialization. Similarly, recent advancements in AI and bio-synthesizers as well as quantum computing may increase the pressure to move to a “more licensed environment” that would address these novel threats of harm. [191]
2 Reimagining Competition Standards for Interventions in Markets
Recent reports commissioned by competition authorities regarding the digital economy have consistently noted that the existing competition law standards may be too static, focused only on the market situation at the time of examination and not dealing with more dynamic threats of harm that may materialize in the future and have an impact on the level and direction of innovation. These critiques have been followed by suggestions as to the development of different theories of harm, the adjustment of standards of proof regarding the nature and the amount of evidence required to prove allegations, and eventually the use of presumptions. Focusing on future harm and conducting prospective analysis before taking remedial action is an essential feature of merger control and other ex ante tools. However, in the presence of scientific uncertainty and novel threats of harm, this futurization of competition law expands to all areas of enforcement. Although the concerns prompting such approaches are new, this is not the first time that the precautionary principle has inspired the competition law playbook.
i The Threat of Economic Concentration and the Incipiency Doctrine
U.S. law pioneered the introduction of precautionary approaches, and the ‘incipiency doctrine’ that developed following the adoption of the Clayton Act in 1914 [192] reflects a high watermark of such integration. The Act complements the Sherman Act, [193] adopted over two decades earlier, by prohibiting exclusive dealing and tying (Section 3) as well as mergers and acquisitions the effect of which “may be substantially to lessen competition, or to tend to create monopoly” (Section 7). The goal pursued by the Act, as explained in the House of Representatives Report accompanying the Bill, was to “arrest the creation of trusts, conspiracies, and monopolies in their incipiency and before consummation.” [194]
The development of this doctrine took place in the context of aggressive merger enforcement against economic concentration and the abuse of economic power, in the era preceding the Chicago School’s “consumer welfare”-driven antitrust. [195] The doctrine highlighted the importance of protecting “redundant” competitors that were considered crucial for the preservation of the competitive process. [196] In the 1960s, federal authorities, supported by U.S. Supreme Court precedent, employed the doctrine to block a series of mergers that would have increased (even moderately) economic concentration. [197] The Supreme Court held in Brown Shoe that, in adopting the Clayton Act, Congress was concerned “with probabilities, not certainties,” [198] while in Philadelphia National Bank it acknowledged that the incipiency doctrine “requires not merely an appraisal of the immediate impact of the merger upon competition, but a prediction of its impact upon competitive conditions in the future.” [199] Regarding the formulation of the incipiency doctrine in the context of the prohibition of exclusive dealing and tying, the courts recognized the shortcomings of relying on quantitative tests [200] and instead embraced a qualitative substantiality approach, focusing on the “probable effect” on competition and allowing for the consideration of factors beyond the coverage or percentage of foreclosure. [201]
In their seminal study on the incipiency doctrine, Carstensen and Lande list “at least” five formulations of the incipiency doctrine, which account for (a) the amount of harm required to prove a competition law violation, (b) the cumulative effect of harm because of a broader “industry trend or wave,” (c) the “lower degree of probability of proof of harm” that suffices for a finding of a violation of the law, (d) the timing of harm and the need to “look further into the future for possible harm,” and (e) the acceptance that competition enforcement “should err on the side of overenforcement,” thus signifying a different calculus as to the error costs usually considered in the framework for antitrust. [202]
As these different dimensions of the doctrine highlight, its core concern is the perceived threat of economic concentration as an archetypical harm to competition. This was challenged by the Chicago School’s more consequentialist emphasis on market outcomes, as measured by effects on price and output, and by the reduced emphasis on containing economic concentration as a goal of competition law in both the U.S. and Europe. [203] This led to the relative demise of the incipiency doctrine in the enforcement policy of the U.S. Department of Justice and the Federal Trade Commission from the late 1970s until interest resurfaced, in all but name, in the mid-2010s, with competition policy aiming to contain the rise of economic concentration [204] in light of accompanying societal harm in the digital economy [205] and beyond. [206] The recent FTC and DOJ Merger Guidelines partly embrace this perspective by taking a precautionary approach to mergers in which a dominant firm acquires a nascent competitive threat before it can grow into a significant rival that would erode the dominant firm’s power, although the focus is not only on the rise of concentration but also on risks to potential competition and innovation. [207]
ii The Rise of the Potential Competition Doctrine and Potential Effects
It can be argued that the modern expression of the ‘incipiency doctrine’ takes the form of protecting potential competition: an indirect reference to the importance of the competitive process, but without linking it to the more “static” focus of the previous era on preserving market structure from economic concentration. The idea of ‘potential competition’ integrates a dynamic, behavioral element (with a focus on incentives) and is closely related to the consideration of the likelihood of new entry as a constraint on the pricing decisions of an incumbent. This is not to say that the traditional structural concerns highlighted in standard economic theory are absent. Indeed, entry and expansion barriers are possibly the most important element in the definition of the relevant market and in the assessment of market power. [208] However, just as entry barriers can be a contextual element in an investigation, they can also themselves be the focus of the investigation, in the case of so-called ‘strategic’ entry barriers, as opposed to ‘natural,’ ‘structural,’ or ‘intrinsic’ barriers for which the incumbent should not be held liable. [209]
As remarked by Bush and Massa in their analysis of U.S. antitrust law, the potential competition doctrine operates both as a shield and as a sword. [210] The doctrine, developed in U.S. courts during the 1960s and 1970s, recognized that even businesses not yet active in a market could influence competitive behavior. Simply put, when existing firms believed that a powerful newcomer might enter their market, they often behaved more competitively, keeping prices lower and service quality higher, in order to discourage that entry. The mere “perceived potential competition” and threat of potential rivals therefore helped keep incumbent market power in check. [211] As it is “exceptionally difficult to prove perception,” the courts in subsequent case law moved away from this subjective perception standard and instead considered whether firms would prospectively compete if they entered the market. [212] This led courts to examine the type of evidence necessary to show that a potential competitor is having some impact on the market. Focus was placed on the attributes of a potential competitor, on market conditions and trends that determine the financial incentives to enter a particular market, and on the actions that the alleged potential entrant had taken to enter. [213] Such tests focus on building a credible potential competition narrative, either as a shield against a finding of market power or as a sword where a strategy is pursued to block potential entry. This allows some flexibility to engage with the temporal dimension of the competition harm and the uncertainty of the impact that such a new entrant will have on competition in the market in question. [214]
In contrast to the approaches focusing on innovation (examined in the following subsection), the potential competition doctrine does not depart from the traditional focus of competition analysis on the strategy of constraining prices to reduce the risk of future entry. [215] Applying potential competition analysis would, however, require that one of the firms already be an established supplier of the relevant good or service, which is not always the case; and some effects, such as possible delays due to regulatory requirements, cannot be captured by the potential competition tool but may be captured if one assesses the competitive effect on innovation.
Potential competition can be conceived of in distinct ways in EU competition law. First, excluding a potential competitor may raise concerns, particularly in contexts where the incumbent benefits from entry barriers, such as IP rights, which provide the possibility of supra-competitive pricing. [216] Second, protecting potential competition (or a potential competitor) is a default fallback option in the presence of uncertainty as to the actual effects of a specific conduct on competition. [217] Third, potential competition may be considered a synonym for potential effects on competition. For instance, in order to establish under EU competition law that a practice is abusive, its detrimental effect on competition must exist; it need not be quantified, however, and it suffices to prove the potential existence of an effect capable of eliminating competitors who are at least as efficient as the dominant undertaking. [218] The CJEU case law rejects “purely hypothetical” anticompetitive effects, although it seems content with anything more than that. [219] Potential effects can be demonstrated through an economic theory of harm, based on reliable scientific evidence (e.g., an economic model), which predicts that the adoption of the particular practice will bring about negative economic effects on effective competition. However, a potential threat (risk) of harm to competition may also be assessed according to the principle of prevention and precaution, which requires action even in the presence of uncertainty as to the existence of concrete harm.
The concept of potential effects on competition is therefore broad and extends to more abstract effects, such as jeopardizing the competitive structure and functioning of the market and, more generally, the public good of the competitive process. [220] The potential effects doctrine thus appears to have become an ordinary risk assessment technique that also integrates some flavor of the precautionary principle.
iii Theories of Harm Addressing Innovation Effects: The Emergence of Precautionary Innovation Antitrust
Although structural elements (avoiding market concentration and the exclusion of potential competitors) remain evident in both the incipiency and potential competition doctrines, as competition authorities shift toward the more prospective exercise of assessing the innovation effects of specific business conduct or market configurations, the question arises whether structural elements provide a sufficient proxy for negative innovation effects or whether an alternative standard would be more suitable.
To gauge innovation effects in the context of merger control, recent economic literature has advocated accounting for the merged entity’s internalization of the business stealing effect, which influences the incentives of economic actors to innovate. [221] Assuming that unilateral innovation effects are closely analogous to unilateral price effects, the approach advances the logic that the higher the innovation ratio (the business stealing effect of an innovation), the more likely it is that the merged entity will scale back or cease to innovate, thus increasing the “probabilistic loss of competition.” [222]
This approach is inspired by arguments in favor of a more dynamic, Schumpeterian perspective on competition, coupled with empirical evidence of the inverted-U relation between competition and innovation. [223] Federico et al. distinguish between different types of mergers involving innovation effects and examine their impact on competition according to the business stealing criterion. [224] Adjusting for uncertainty about the average probability that a pipeline product will be successfully introduced leads to the finding that a merger may be anticompetitive even if there is only a “low probability” that the rival will introduce the business stealing pipeline product. [225] This conclusion seems motivated by a precautionary approach in favor of innovation variety, to the extent that a decision is reached even where there is uncertainty as to whether the overlapping pipeline product would have been developed by the rival. [226]
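The intuition can be made concrete with a stylized numerical sketch. The following Python snippet is an illustration only, not the Federico et al. model itself, and all parameter values (success probability, innovation value, R&D cost, diversion share) are hypothetical: it compares the expected value of keeping a pipeline project alive for an independent firm and for a merged entity that internalizes the profit the innovation would divert from its partner’s existing product.

```python
# Stylized illustration (hypothetical figures, not the Federico et al. model):
# a merged entity that internalizes the business stealing effect may lose the
# incentive to pursue a pipeline innovation even when a standalone firm would not.

def expected_value_standalone(p_success, innovation_profit, rnd_cost):
    """Expected payoff of continuing the pipeline project for an independent firm."""
    return p_success * innovation_profit - rnd_cost

def expected_value_merged(p_success, innovation_profit, rnd_cost,
                          diversion_ratio, partner_profit):
    """Expected payoff for the merged entity, which also internalizes the profit
    that a successful innovation would divert from the partner's existing product."""
    cannibalization = diversion_ratio * partner_profit
    return p_success * (innovation_profit - cannibalization) - rnd_cost

p, value, cost, partner = 0.2, 100.0, 10.0, 80.0   # hypothetical parameters
for diversion in (0.1, 0.4, 0.7):                  # share of partner profit diverted
    standalone = expected_value_standalone(p, value, cost)
    merged = expected_value_merged(p, value, cost, diversion, partner)
    print(f"diversion={diversion:.1f}: standalone EV={standalone:+.1f}, merged EV={merged:+.1f}")

# Even with a low success probability (p = 0.2) the standalone firm keeps the
# project alive (EV = +10.0), whereas at a sufficiently high diversion ratio the
# merged entity's expected value turns negative and the project is shelved,
# which is the intuition behind the "probabilistic loss of competition".
```

The higher the share of the partner’s profit that the innovation would divert, the weaker the merged entity’s incentive to continue the project, which is why unilateral innovation effects are treated as analogous to unilateral price effects in this literature.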
iv Error Cost Analyses in Competition Law and the Precautionary Approach
Following the identification of a market failure resulting from the exercise of market power, competition authorities traditionally employ an error cost framework when assessing whether to intervene in a specific market. There are two forms of social costs: ‘substantive costs,’ or error costs, and ‘procedural costs,’ or decision costs. [227] False positives (type I errors) occur when the decisionmaker finds a violation although the conduct did not harm competition, while false negatives (type II errors) occur when the decisionmaker finds no violation although the conduct did harm competition. [228] Decisionmakers employ a sequential information gathering process to limit decision costs, while aiming to minimize the occurrence of substantive costs (false positives and false negatives). [229] The decision to acquire more information is therefore a tradeoff between these two types of costs. [230]
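A minimal sketch of this tradeoff, using entirely hypothetical cost figures, helps illustrate how the framework operates: the expected cost of intervening or abstaining depends on the decisionmaker’s belief that the conduct is harmful, and acquiring further evidence is worthwhile only if the expected reduction in error costs exceeds the added decision costs.

```python
# Minimal sketch of the error cost framework, with hypothetical cost figures.
# A false positive (condemning benign conduct) and a false negative (clearing
# harmful conduct) each carry a social cost; the authority intervenes when the
# expected cost of intervening is lower than the expected cost of abstaining.

COST_FALSE_POSITIVE = 60.0   # hypothetical cost of wrongly intervening (type I error)
COST_FALSE_NEGATIVE = 100.0  # hypothetical cost of wrongly clearing (type II error)

def expected_costs(p_harmful: float) -> tuple[float, float]:
    """Expected error cost of intervening vs. abstaining, given the belief p_harmful."""
    cost_intervene = (1.0 - p_harmful) * COST_FALSE_POSITIVE
    cost_abstain = p_harmful * COST_FALSE_NEGATIVE
    return cost_intervene, cost_abstain

for p in (0.2, 0.4, 0.6):
    intervene, abstain = expected_costs(p)
    choice = "intervene" if intervene < abstain else "abstain"
    print(f"p_harmful={p:.1f}: intervene={intervene:.0f}, abstain={abstain:.0f} -> {choice}")

# With these figures, intervention becomes optimal once p_harmful exceeds
# 60 / (60 + 100) = 0.375. Gathering additional evidence (at a decision cost)
# pays off only if it is expected to reduce error costs by more than it costs,
# which is the sequential information-gathering tradeoff described above.
# Raising the weight attached to false negatives lowers the intervention
# threshold and tilts enforcement toward earlier, more precautionary action.
```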
Gal and Padilla argue that the development of AI may change the way different types of conduct affect market dynamics, impacting “the relative likelihood and cost of the false positive and false negative errors” and ultimately challenging “the optimal balance between false positive and false negative errors and information costs on which some current legal rules are based.” [231] AI “can strengthen the consequences of exclusionary or exploitative conduct” and accordingly places more weight on avoiding the likelihood and resultant costs of false negative errors, especially as the use of AI by competition law enforcers reduces decision costs. [232] These suggestions are compatible with an application of the precautionary principle, which traditionally applies in situations where avoiding false negatives is considered more socially costly than avoiding false positives. [233] However, one may challenge reliance on the error cost framework altogether, given the rapid development of technology and the limited knowledge of competition policymakers and authorities regarding the real future impact of their decisions.
In this case, we can refer to a distinct descriptive model that relies on Bayesian statistics, in which probabilities are beliefs, rather than on classical statistics, in which probabilities are objective. In Bayesian analysis, the starting point is a ‘prior belief’ about the state of the world, and evidence then updates that belief, so that the endpoint is a ‘posterior belief.’ [234] However, this is not ideal either, as prior beliefs may affect the resulting posterior belief, whereas in an ideal world the evidence alone should drive the conclusion. In these more uncertain contexts, relying on a precautionary principle and iteratively adjusting its use, by considering the existing knowledge about threats of harm when devising proportional action, may offer a decision procedure superior to the error cost framework. As threats of harm may range from “deterministic certainty” to “gross uncertainty” and include “Knightian risk,” “Knightian uncertainty,” and “commonsense uncertainty,” a specific precautionary methodology may be devised for each type of uncertainty. [235]
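The sensitivity to priors noted above can be shown with a simple application of Bayes’ rule. The following sketch uses purely hypothetical likelihoods for a piece of evidence about algorithmic coordination; it is offered only to illustrate how the same evidence yields different posterior beliefs under different priors.

```python
# Illustrative Bayesian updating (hypothetical numbers): the posterior belief
# that conduct is harmful depends on the prior as well as on the evidence.

def posterior(prior: float, p_evidence_if_harmful: float, p_evidence_if_benign: float) -> float:
    """Bayes' rule: P(harmful | evidence)."""
    numerator = p_evidence_if_harmful * prior
    denominator = numerator + p_evidence_if_benign * (1.0 - prior)
    return numerator / denominator

# Suppose the observed pricing pattern is three times as likely under harmful
# coordination (likelihood 0.6) as under benign parallel conduct (likelihood 0.2).
for prior in (0.1, 0.3, 0.5):
    print(f"prior={prior:.1f} -> posterior={posterior(prior, 0.6, 0.2):.2f}")

# prior=0.1 -> posterior=0.25; prior=0.3 -> 0.56; prior=0.5 -> 0.75.
# Identical evidence yields markedly different posteriors, which is why, under
# deeper uncertainty, an iteratively adjusted precautionary approach may be
# preferable to relying on any single set of priors.
```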
v Experimenting with Future-Gazing and “Early-Warning” Tools in Competition Law
The development of new approaches for future planning in a highly uncertain world characterized by rapid technological change has become a prominent feature of modern strategic foresight techniques used in government. [236] These techniques include ‘horizon scanning,’ which helps assess future threats and serves as an input for scenario development in public policy processes; ‘super forecasting’ [237] and other forecasting tools, including scenario and ‘Delphi’ methods; and ‘road mapping,’ which brings together communities of experts and quantitative foresight tools such as agent-based modelling and dynamic simulation models. [238] Such approaches have already been applied to assessing the threats of harm to competition from AI and the digital economy, with horizon-scanning reports (and accompanying “strategic” reports [239]) completed by the Data, Technology and Analytics (DaTA) unit and the Digital Markets Unit (DMU) at the UK Competition and Markets Authority. [240] The aforementioned reports on the possible threats of AI to competition provide a further illustration. Additionally, “early warning” and “red teaming” mechanisms, which test technology systems for threats to competition, may be voluntarily adopted by undertakings as part of their compliance efforts, with the potential eventually to standardize their use and to involve independent experts in government-led audits. [241] The use of regulatory sandboxes may provide fertile ground to experiment with new business models, while engaging constructively with competition authorities to mitigate, and thus contain, any threats that those models might engender. [242]
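By way of illustration only, the following toy simulation sketches the kind of quantitative foresight exercise mentioned above. The pricing rule, parameter values, and benchmark are entirely hypothetical and do not reproduce any authority’s actual horizon-scanning models; the point is simply that even a very simple adaptive rule can be screened for whether it drifts above a competitive benchmark, flagging a scenario for closer scrutiny.

```python
# Toy agent-based simulation (hypothetical rule and numbers) of the kind a
# horizon-scanning exercise might use to probe whether simple adaptive pricing
# rules drift above a competitive benchmark. Not any authority's actual model.

COMPETITIVE_PRICE = 10.0
MONOPOLY_PRICE = 20.0
STEP = 0.5          # how far an agent edges above the rival's last observed price
PERIODS = 50

def next_price(rival_last_price: float) -> float:
    """Adaptive rule: never price below the competitive level, otherwise edge
    slightly above the rival's last observed price, capped at the monopoly level."""
    return min(MONOPOLY_PRICE, max(COMPETITIVE_PRICE, rival_last_price + STEP))

prices_a, prices_b = [COMPETITIVE_PRICE], [COMPETITIVE_PRICE]
for _ in range(PERIODS):
    prices_a.append(next_price(prices_b[-1]))
    prices_b.append(next_price(prices_a[-1]))

print(f"final prices: A={prices_a[-1]:.1f}, B={prices_b[-1]:.1f} "
      f"(competitive benchmark {COMPETITIVE_PRICE:.1f})")
# Under this deliberately collusion-prone rule both prices climb from the
# competitive level to the monopoly cap, flagging the scenario as one that an
# early-warning exercise would single out for further investigation.
```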
There is great potential for using strategic foresight methods more extensively in all areas of competition law, [243] especially in addressing new threats of harm to competition arising from evolving technologies and complex systems. It is also important to integrate these methods into a broader framework that considers the concerns of responsible and sustainable innovation. Any framework should also adopt a participatory public policy approach, in order to address threats of harm in line with the strategic interests of all stakeholders.
Conclusion
This Article has explored the challenges posed to competition law enforcement by significant technological advancements in AI. The key features of the latest technology “wave” [244] are (a) the “asymmetries” to which it gives rise, (b) “hyper-evolution” and the concentration of global economic control, (c) the “omni-use” of general purpose technologies (GPTs), and (d) “autonomy” that obviates the need for humans to be “in the loop.” These new “synthetic worlds” challenge the usual contours of our thought and epistemic toolkit and raise new threats of harm.
The underlying foundation of this important technological and social transformation is the emergence of complex systems, [245] characterized by continuous interaction of multiple (autonomous) agents active in various economic and social spheres. This makes predicting the pattern of evolution of these adaptive systems particularly challenging and calls for more agile regulatory decision-making processes and methodologies. [246] Higher levels of uncertainty require a policy design that is aware of the gaps in our knowledge base and remains open to the existence of multiple potential innovation trajectories and different “synthetic futures.”
This Article has explored the hypothesis that the legal concept developed to deal with the unpredictable, the precautionary principle, may guide the action of competition authorities, focusing on AI and the other technologies of the “coming wave.” It has discussed how competition agencies and courts currently deal with threats to competition from corporate strategies reliant on new AI capabilities and the increased use of algorithms, as well as the limitations of the ordinary risk management approach. In light of the shortcomings of alternative approaches, it has explored how the precautionary principle may be a more accurate and normatively appropriate option for regulating threats in complex (and “synthetic”) systems. Concerns about the precautionary principle have also been addressed, and a more inclusive interaction between the precautionary principle and innovation, within the framework of responsible and sustainable innovation, has been proposed. Finally, this Article has examined how the precautionary principle can be integrated into competition law doctrines and institutions, allowing authorities to more comprehensively contain the new threats of harm caused by this technological wave.
© 2025 by Theoretical Inquiries in Law
This work is licensed under the Creative Commons Attribution 4.0 International License.