Antitrust & AI supply chains
Maurice E. Stucke
Abstract
Will AI technology disrupt the current Big Tech Barons, foster competition, and ensure future disruptive innovation that improves our wellbeing? Or might the technology help a few ecosystems become even more powerful? To explore this issue, our Article outlines the current digital market dynamics that lead to winner-take-most-or-all ecosystems. After examining the emerging AI foundation model supply chain, we consider several potential antitrust risks that may arise should specific layers become concentrated and firms extend their power across layers. After raising several countervailing factors that might lessen or prevent these antitrust risks, we conclude with suggestions for the policy agenda to promote both healthy competition and innovation in the AI supply chain.
Introduction
The buzz these days is AI, specifically generative artificial intelligence and foundation models, which are “trained on broad data (generally using self-supervision at scale) that can be adapted to a wide range of downstream tasks.” [1] In February 2024, AI chipmaker Nvidia joined the multi-trillion-dollar market valuation club, alongside existing members Microsoft, which has partnered with OpenAI and is incorporating OpenAI’s foundation model across its ecosystem; Google, which is developing its own foundation model; and Apple, which in 2024 shifted resources from driverless cars to AI. As Nvidia’s CFO noted, “Building and deploying AI solutions has reached virtually every industry.” [2] New technological developments promise to further catapult generative AI technology, with significant implications for citizen autonomy, social stability, and democracy.
When surveyed on AI’s likely future impact on society, AI experts have expressed concerns about personal privacy, human rights, democracy, and civility. [3] Tech titans Elon Musk and Larry Page have famously debated whether artificial intelligence would ultimately elevate or destroy humanity. The risks that generative AI may present to humanity have also been linked to the departure of Ilya Sutskever, chief scientist and cofounder of OpenAI, from the company. [4] His newly founded company, “Safe Superintelligence,” has attracted much interest, with a valuation of over $5 billion in its first months of operation. [5] The lightspeed evolution of the technology and the continuing investment in AI-related initiatives capture an increasing portion of our social and economic activity [6] and fuel the debate about the technology’s impact.
Alongside these significant developments, a separate discussion has emerged, looking at how AI markets will evolve and who will control the technology. In this context, policymakers and competition officials are exploring foundation AI models and this technology’s likely impact on innovation and competition. The critical question is whether these foundation models will herald disruption that challenges existing gatekeepers and tech barons and brings new dynamism to digital markets, or whether AI will further entrench market power. This question lies at the heart of this Article.
Assessing the impact of foundation models on competition is nuanced. There might be more innovation, but the type of innovation (sustaining versus disruptive) and its value (positive, extractive, or negative) might vary. In some ecosystems, we might see many more sustaining innovations that primarily extract value from consumers or other producers. We may see more toxic innovations that reduce our and our children’s wellbeing.
The importance of identifying the trajectory of generative AI markets and ensuring their contestability cannot be overstated. As many competition officials have candidly admitted to us, their agencies missed or underappreciated the digital market dynamics that led to “winner-take-most-or-all” outcomes and produced the present Big Tech Barons, namely Google, Apple, Meta, Amazon, and Microsoft. Currently, enforcers around the globe are scrutinizing these dominant ecosystems, which are heavily investing in generative AI. But there is no consensus over how to prevent, among other things, the distortion of competition and innovation. The concern is that feeble or delayed action will miss the potential tipping of these emerging AI markets. As the experience with the Big Tech Barons reflects, antitrust enforcement, if too little (e.g., primarily monetary fines) or too late (cases that take years to develop and litigate), will not restore competition and innovation levels. It is far easier to keep markets contestable than to try to restore contestability.
Because the AI landscape is nuanced, any intervention needs to appreciate the complexity of the technology stack, the competition dynamics at stake, and the existing dominance of a few ecosystems. Ultimately, two fundamental inquiries emerge:
First, the digital economy has several factors and characteristics that can lead to concentrated markets. Are there similar factors in the emerging AI foundation model supply chain that will lead to a “winner-take-most-or-all” situation?
Second, could AI herald new business models and innovations that disrupt the dominant ecosystems? Or will the current ecosystems dominate these AI-driven foundation models and applications?
Within this context, this Article considers whether this generative AI technology forms a disruptive force that could destabilize the current powerhouses, foster competition, and ensure future disruptive innovation that improves our wellbeing, or whether the technology heralds the next step in the ongoing entrenchment of critical ecosystems. This inquiry requires careful consideration of the emerging foundation model supply chain and the extent to which current strategies, market characteristics, and technologies could entrench a few powerful firms.
To explore this issue, Part I briefly elaborates on foundation models and their significance. Part II outlines the current dynamics in many digital markets that lead to a winner-take-most-or-all situation. Part III next examines the emerging AI foundation model supply chain and considers several potential antitrust risks that emerge should specific layers become concentrated and firms extend their power across layers. Part IV raises several countervailing factors that might lessen or prevent these antitrust risks. This Article concludes with suggestions for the policy agenda to promote both healthy competition and innovation in the AI supply chain.
I What Are Foundation Models?
Narrow non-foundation AI models are “trained on specific data for a specific task and context” and are “not designed for reuse in new contexts.” [7] For example, a bank may train a narrow AI model to predict the risk of a customer defaulting on a loan. However, that model could not do many other things, like chat with the customer to assist them with a transaction. [8] In contrast, foundation models are “AI models designed to produce a wide and general variety of outputs. They are capable of a range of possible tasks and applications, such as text, image or audio generation. They can be standalone systems or can be used as a ‘base’ for many other applications.” [9] Their flexibility and capacity enable these models to serve as the foundation for many other applications—from composing a poem to acing a biology test to producing a podcast about this Article. [10]
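To make the contrast concrete, below is a minimal sketch, in Python with hypothetical features and toy data, of the kind of narrow model the bank example describes:

```python
# A narrow (non-foundation) model: trained on one task -- predicting
# loan defaults -- and unusable for anything else without retraining.
# Feature names and data are hypothetical illustrations.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [annual income, debt ratio, years employed, prior defaults]
X = np.array([
    [85_000, 0.20, 10, 0],
    [32_000, 0.65,  1, 2],
    [54_000, 0.40,  4, 0],
    [21_000, 0.80,  0, 1],
])
y = np.array([0, 1, 0, 1])  # 1 = defaulted, 0 = repaid

model = LogisticRegression(max_iter=1_000).fit(X, y)

# Outputs a default probability for a new applicant...
print(model.predict_proba([[47_000, 0.55, 2, 0]])[0][1])
# ...but it cannot chat with that applicant, summarize a contract, or be
# reused in a new context -- the reuse that defines a foundation model.
```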
Let us begin with the data to see how this works and how foundation AI models differ from non-foundation models. A foundation model is trained on vast amounts of data. By collecting large volumes of data from different sources, developers can train the foundation model to “infer patterns, recognize relationships, discern context and generalize their knowledge.” [11]
Next is the modality. The types of foundation models vary, including “generative adversarial networks (GANs), variational auto-encoders (VAEs), transformer-based large language models (LLMs), and multi-modal models.” [12] As the latter reflects, foundation models can be uni- or multi-modal, depending on the type of data used to train the model. Uni-modal models are trained to handle one type of data (such as generating text outputs from text inputs). [13] Google’s Gemini model, in contrast, is multi-modal in that the foundation model is trained using different types of data, including words, images, videos, and audio. [14] Prominent among the foundation models are “large language models” and “image generation models” that enable the generation of creative text, video, and audio and, as such, differ from narrower AI technologies used to analyze existing data.
So, one could ask a multi-modal foundation model to draw an image or create a movie from a written text prompt. Below is one prompt that is part of a U.S. copyright infringement action against the AI model’s developer. Here, the multi-modal model takes a text prompt (show me a painting of the Brooklyn Bridge in winter in the style of this particular artist) and produces the following image: [15]
Text prompt: “Painting of the Brooklyn Bridge in winter in the style of Roy Lichtenstein.”

[Image omitted] Source: Center for Art Law, DreamUp-generated output image
A third differentiator is the model architecture. Many foundation models rely on a deep learning architecture, which “uses multilayered neural networks to mimic the human brain’s decision-making process.” [16] Typically, this deep learning architecture helps the foundation model to undertake a “wide range of tasks and operations.” [17] After training and evaluating the foundation model using certain benchmarks, one can fine-tune the pre-trained foundation model for further particular tasks. [18]
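For illustration only, here is a schematic sketch of that fine-tuning step using the open-source Hugging Face transformers library; the base model (bert-base-uncased) and downstream dataset (IMDB sentiment) are placeholders, not any particular foundation model discussed in this Article:

```python
# Schematic fine-tuning sketch: a pre-trained model's weights are
# adjusted on a small task-specific dataset. Model and data are
# illustrative placeholders.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification,
                          AutoTokenizer, Trainer, TrainingArguments)

base = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)

dataset = load_dataset("imdb")  # placeholder downstream task: sentiment

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

tokenized = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1),
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(1_000)),
)
trainer.train()  # general pre-trained knowledge, specialized for one task
```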
Foundation models have already been integrated into many facets of our lives and businesses, facilitating search and improving software, communications, text completion, text-to-speech conversion, content creation, entertainment, financial services, legal services, healthcare, education, pharma, fashion, marketing, insurance, and more. [19] These technologies promise to shake up the status quo in many industries as they drive new business ventures, creativity, efficiency, and innovation. Not surprisingly, the market has reacted with excitement, resulting in high valuations of the leading players and expectations for exponential growth in the coming years. [20]
Earlier versions of this generative AI have given way to far more advanced performance. Consider Bill Gates. In June 2023, Microsoft’s founder recounted being unimpressed with ChatGPT’s performance: “I was like, ‘Yeah, it’s kind of an idiot savant. I don’t think it’s practical.’” [21] So, he told the OpenAI CEO to get back to him when the foundation model could achieve the highest score on the Advanced Placement (AP) Biology exam. Gates thought, “OK, that’ll give me three years to work on HIV and malaria.” Two months later, OpenAI’s developers returned, and Gates watched ChatGPT achieve a 5 on the AP Biology test. “I’m still, personally, in a state of shock at ‘Wow, it is so good,’” Gates said. (Only 14.3% of high school students who took the exam in 2023 obtained that top score. [22])
OpenAI’s GPT-4 model has done even better, according to the company, exhibiting “human-level performance on various professional and academic benchmarks.” [23] These benchmarks include GPT-4 racking up the highest scores on many other AP subjects (including history, statistics, psychology, and micro- and macroeconomics), scoring in the 90th percentile on the Uniform Bar Exam for lawyers (the GPT-3.5 model scored only in the 10th percentile), in the 99th percentile on the graduate school GRE Verbal Assessment, and even 77% on the Advanced Sommelier examination. [24]
The performance of these AI models is constantly improving, and key players are now racing toward the ultimate goal: developing Artificial General Intelligence (AGI)—an AI system with a human level of cognition and the self-teaching capacity to undertake various tasks autonomously. [25] In 2024, OpenAI released its o1 model with “reasoning” abilities. [26] The new generative AI models increasingly excel at complex problem-solving, math, and coding, bringing us ever closer to the ideal of AGI.
Add to the mix the increasing investment in AI. Nearly one-third of investments by venture capital firms in 2024 went to AI companies. [27] As the Wall Street Journal remarked in late 2024, “Generative artificial intelligence has sparked one of the biggest spending booms in modern American history, as companies and investors bet hundreds of billions of dollars that the technology will revolutionize the global economy and one day lead to massive profits.” [28] The belief is that this AI will transform many different markets, which vary in contestability and other competitive dynamics.
Not surprisingly, besides new value opportunities and efficiencies, the latest AI foundation models bring notable risks of misuse and concerns over human autonomy and social cohesion. [29] President Joe Biden’s executive order on AI, for example, warned that “irresponsible use could exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security.” [30] These risks include non-experts, such as terrorists, using these foundation models to “design, synthesize, acquire, or use chemical, biological, radiological, or nuclear (CBRN) weapons” or enable “powerful” cyber-attacks. [31] Thus, the risks from these models extend far beyond antitrust.
But before delving into whether this AI will disrupt the existing Big Tech Barons, let us first review the competitive dynamics of the digital economy, which led the Big Tech Barons to grow their powerful ecosystems.
II The Digital Market Dynamics that Led to the Big Tech Barons
Most U.S. companies live short lives. One study found that the average half-life of U.S. publicly traded companies is close to 10.5 years, meaning that half of all companies that began trading in any given year had disappeared 10.5 years later. [32] Nor are platforms guaranteed long lives. One study calculated that 209 platforms had failed over the preceding 20 years. Most of them (85 percent) were transaction platforms, which had shorter lives (on average 4.6 years) than the innovation platforms (5 years) or hybrid platforms (7.4 years) in the survey. [33] If many companies and platforms die within ten years of their birth, why have the Big Tech Barons successfully dominated multiple markets for years, and why do they seem poised to continue their domination over the next decade?
In the Western world, despite the billions invested by Microsoft in its search engine Bing, Google, as of 2024, still dominates general search services and search text advertising. [34] Despite the rise of TikTok, Meta still dominates personal social networks [35] and social media ads. [36] Meta, Google, and Amazon dominate digital advertising generally. [37] Amazon remains the dominant online shopping platform. [38] Google and Apple remain the leading operating systems for smartphones. [39] Amazon, Microsoft, and Google control the leading cloud computing platforms. [40]
As this Part explores, the digital economy generally exhibits several characteristics that support a winner-take-all, or winner-take-most, outcome, with only a few companies dominating markets. Noteworthy elements include scale economies, big data, network effects, big analytics, the ability to channel digital innovation, and an array of anticompetitive measures, including acquire-copy-or-kill. [41] As we, among many others, have explored these themes elsewhere, [42] let us just summarize here a few:
Big data, which drives much of the innovation and offering in the digital economy, has risen to become a key asset in many digital markets. Data may be structured or unstructured, [43] gathered voluntarily, through observations, or by inferring new information using existing data. [44] When considering personal data’s role in the digital platform economy, it is helpful to take note of the “four Vs of Big Data” – Volume, Variety, Velocity, and Value. [45] Alongside the many improvements Big Data offers, it is noteworthy that limited access to relevant and timely personal data may inhibit entry, expansion, and innovation. [46] Data has been referred to (somewhat controversially) as the “new oil” which powers our modern society. [47] While personal data is often “non-rivalrous” in the sense that it may be duplicated and used by multiple parties without depleting its value (and as such differs from oil or traditional assets), [48] its control, especially where the four Vs are present, offers a distinct advantage in the digital economy. A data advantage over rivals can enable leading players to achieve critical economies of scale, which could tilt the data—and competitive balance—in their favor. A positive feedback loop, in which control over data reinforces market power, which in turn increases the four Vs, helps the strong become stronger as the weak get weaker. [49]
Network effects occur when a good or service becomes more valuable as others use it. [50] Each of the Tech Barons’ ecosystems exhibits multiple network effects. They include direct, indirect, and data-driven effects that create a feedback loop that attracts users, sellers, service providers, developers, and advertisers to a network. [51] Alongside the unparalleled efficiencies and economies of scale offered by network effects, they may support the rise of significant barriers to entry and limit the competitive pressure on the incumbent. [52] Indeed, the combination of network effects, access to data, and analytics may tip the market in favor of a leading provider, which may become inefficiently entrenched. [53] Data-driven indirect network effects may, in particular, reinforce the feedback loop and lead markets to tip in favor of the leading platform.
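One stylized way to formalize direct network effects is Metcalfe’s law (our illustration, not a claim drawn from the cited sources), under which a network’s value grows with the number of possible pairwise connections among its users:

```latex
% Metcalfe's law: with n users, the number of possible pairwise
% connections, and hence (roughly) the network's value V, grows
% quadratically while the user base grows only linearly.
V(n) \propto \binom{n}{2} = \frac{n(n-1)}{2} \approx \frac{n^{2}}{2}
```

Doubling the user base thus roughly quadruples the network’s value, which offers one stylized account of why markets with strong network effects tend to tip toward a single winner.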
Big Analytics offer the power to optimize the use of data, identify patterns, improve the understanding of market dynamics, and open the door to accelerated innovation. Advanced analytics have also been central in identifying consumers’ needs and wants. Recent advancements in analytics and AI may enable further optimization even with relatively limited or unstructured data. [54] Here, alongside the benefits, one can identify worrying trends. Data mining, data trade, online marketing, pattern recognition, demand estimation, and price optimization have been used to approximate reservation prices, identify biases, and power exploitative practices. Big Data and Big Analytics often reinforce each other. [55]
Ability to channel digital innovation. On the one hand, the leading digital players (Google, Apple, Meta, Amazon, and Microsoft) invest heavily in research and development. [56] Their investments are often associated with new technologies and disruptions of third parties’ value chains. [57] However, their control over their respective ecosystems also enables them to affect the nature and composition of innovation that reaches the market. The key here is how powerful ecosystems can distort the supply and demand for innovation, suppress disruption that threatens their value chains, and promote sustaining innovation that supports their business models. [58]
What is noteworthy is that despite the ongoing antitrust litigation, [59] heightened scrutiny in Europe under its new regulatory schemes, such as the Digital Services Act (DSA) and Digital Markets Act (DMA), [60] and the threat of further regulations in the EU and elsewhere, these digital ecosystems still dominate and the Tech Barons’ power remains entrenched.
Consider Meta. It faces greater scrutiny in Europe under the DMA [61] and DSA. [62] The German Federal Cartel Office has also challenged Meta’s terms and policies on data sharing across its apps and collection from third-party websites via its business tools as violating German competition law. [63] In the U.S., the FTC is challenging Meta’s acquisitions of Instagram and WhatsApp. [64] A bipartisan coalition of states is challenging Meta’s exploitation of teenagers. [65] New Mexico has sued Meta for putting “profits ahead of children’s safety” in having its platforms becoming “prime locations for predators to trade child pornography and solicit minors for sex.” [66] The U.S. Senate Judiciary Committee excoriated Meta’s CEO (as well as TikTok’s), promising in January 2024 further regulations. [67] But the day after the Senate hearings, Meta’s stock jumped over 20%. [68] While such a price increase might be seen with a penny stock, one had never seen a nearly $200 billion increase in market capitalization in one day: Meta set a new stock market record. [69]
Wall Street clearly does not see Meta’s dominance as being under threat. Why? Meta controls an ecosystem, not merely markets, which provides greater power than a dominant platform. [70] Google, Apple, Meta, Amazon, and Microsoft each dominate several interlocking, multi-sided markets, which connect individuals, service and product providers, and advertisers. With positive feedback loops and ongoing expansion, their digital ecosystems benefit from unparalleled access to data and other critical inputs of the digital economy that can be used to improve products, services, technology, and algorithms and give them a significant advantage over others.
As our most recent book explores, ecosystems can be viewed as autonomies that control the competition dynamics, entry conditions, fees, and data flow within their respective frameworks. [71] Positive feedback loops increase the power asymmetry, resulting in a few key players influencing competition parameters, barriers to entry and expansion, the nature and flow of innovation, and user behavior and choice. [72]
With that background, let us consider whether AI will disrupt or protect the Big Tech Barons’ ecosystems. The answer lies in the emerging foundation model supply chain, which the next Part examines.
III The Foundation Model Supply Chain and Potential Antitrust Risks
In assessing whether AI will disrupt or protect the Big Tech Barons’ ecosystems, one must explore the supply chains at the heart of generative AI and consider their likely impact on competition and innovation. The Ada Lovelace Institute, an independent UK research institute with a mission to ensure that data and AI work for people and society, illustrates the emerging AI foundation model supply chain: [73]

[Figure omitted: the emerging AI foundation model supply chain] Source: Ada Lovelace Institute
At the upstream level, as of 2024, processing power is predominantly provided by Nvidia, which develops graphics processing units (GPUs) and AI accelerators. [74] Central, too, are the cloud service providers, such as Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure, that offer computing resources, as well as, at this same infrastructure level, the data-gathering companies that harvest public data and the companies that own proprietary data. [75]
Below this infrastructure sits the development layer, where leading firms (such as Google, OpenAI, Meta, Anthropic, and Mistral AI) develop proprietary AI models and “open access models,” which we elaborate on later.
These pretrained models are deployed downstream via download or API access through applications or dedicated programs.
In a competitive market, each layer in the foundation model supply chain would have multiple rivals, and no rival would operate on multiple layers. For example, many semiconductor firms would provide the accelerator computer chips with the computing power needed to train the foundation model. Another set of companies would provide the data required to train the foundation model. A third set of companies would offer computational power and data centers. A fourth set would develop an array of AI foundation models. Next are the hosts who make these models available to developers who use the foundation model for their apps, products, or services. Businesses and customers could switch among the foundation models and apps, which would be interoperable with one another.
However, as we explore below, once we relax these assumptions, antitrust risks emerge when these stacks become concentrated and powerful firms vertically integrate along the supply chain. Due to economies of scale and scope, feedback loops, high entry barriers, and the need for significant investments, critical segments of the supply chain can become highly concentrated. While the foundation model supply chain has multiple levels, its evolution has already led to significant vertical integration, which raises numerous concerns, including self-preferencing. [76]
As the G7 has noted, “[a]n inability to access these key inputs may inhibit competition to develop AI and AI applications, reducing innovation and harming consumers.” [77] As the FTC has similarly observed,
If a single company or a handful of firms control one or several of these essential inputs, they may be able to leverage their control to dampen or distort competition in generative AI markets. And if generative AI itself becomes an increasingly critical tool, then those who control its essential inputs could wield outsized influence over a significant swath of economic activity. [78]
Let us consider these antitrust risks by exploring several critical layers in the stack.
A Concentration of Computational Power
Let us begin by assessing computational power. The development and deployment of foundation models “generally requires dedicated hardware, such as computers with specialized chips like graphical processing units (GPUs) that can be expensive to operate and maintain.” [79]
At the onset of the AI revolution, the dominant player, as of 2024, is the firm supplying the picks and shovels, namely Nvidia. The chipmaker’s rise is to be expected at this AI infrastructure stage, when multiple downstream entities compete to acquire specialized accelerator chips to develop foundation models.
So, one problem in 2024 is acquiring these specialized accelerator chips, which Nvidia primarily produces (with an estimated 80 percent market share). [80] Demand for these chips in 2024 exceeds the available supply, [81] including for Nvidia’s next-generation chips, code-named Blackwell. [82] Thus, not every firm today has an equal shot at acquiring these chips. [83] As Elon Musk quipped, “GPUs at this point are considerably harder to get than drugs.” [84] These accelerator chips have become prized, so much so that they have been transported in armored cars on occasion. [85]
Another problem, as the Wall Street Journal has observed, is that “[h]ow Nvidia allocates its limited supplies could influence who wins or loses in the AI race.” [86] Amazon, Google, Meta, OpenAI, and Microsoft have all been huge buyers of Nvidia’s AI-training chips. [87] OpenAI, for example, trained its foundation model ChatGPT with tens of thousands of Nvidia’s GPU chips. [88] Thus, because Nvidia chips are necessary to train foundation models, antitrust enforcers are assessing whether the chipmaker is abusing its dominance by, among other things, punishing customers who also purchase products and services from rivals. [89]
Alternatively, entrants seeking to develop a foundation model can use cloud computing services to provide these high-performance computational resources. [90] But three firms dominate the Infrastructure as a Service (IaaS) cloud computing service market: Amazon with an estimated 39% share of the worldwide segment (in 2021), Microsoft with a 21% share, and Google with a 7% share. (Two Chinese firms, Alibaba and Huawei, had a 10 and 5 percent share, respectively. [91]) The U.S. Treasury reported that while there is competition among the three IaaS cloud service providers, “even the largest financial institutions reported difficulties in drafting contracts.” [92] Moreover, given the demand for the use of the cloud service providers’ server chips to train these AI models, AI developers have had to wait for months for time on these servers. [93]
This bottleneck has shifted more power to the already powerful. Rather than wait in line, some foundation model developers are partnering with the Big Tech Barons: the Barons invest in the model developer, which in turn purchases access to the Tech Barons’ cloud servers. [94] Some have described these partnerships “as necessary or ‘critically important’ for the development” of these foundation models. [95] So, one sees Google’s investment in Anthropic circulating back to the Tech Baron, which now gets greater control over the development of the foundation model. [96] Consider Microsoft’s partnership with OpenAI. Microsoft has the upper hand in this relationship. As Microsoft’s CEO observed, its partnership is not hands-off:
We [Microsoft] are in there. We are below them, above them, around them. We do the kernel optimizations, we build tools, we build the infrastructure. So that’s why I think a lot of the industrial analysts are saying, “Oh wow, it’s really a joint project between Microsoft and OpenAI.” The reality is we are, as I said, very self-sufficient in all of this. [97]
Thus, should OpenAI cease to exist, Microsoft’s CEO was confident in his company’s ability to continue to provide AI services:
But the point is, we were very confident in our own ability. We have all the IP rights and all the capability. If OpenAI disappeared tomorrow, I don’t want any customer of ours to be worried about it quite honestly, because we have all of the rights to continue the innovation. Not just to serve the product, but we can go and just do what we were doing in partnership ourselves. We have the people, we have the compute, we have the data, we have everything. [98]
Moreover, the concern is that if Microsoft “withheld its cloud computing system on which OpenAI was reliant, [OpenAI] would be incapacitated.” [99] Nvidia, however, is supplying some of its chips to CoreWeave, a cloud-computing firm in which it has invested, [100] and is offering its own full stack of services. And other computational alternatives like Dell are emerging. Nonetheless, an entrant will likely incur significant costs—either in developing this computational power internally for its foundation model, or in outsourcing the compute function to a handful of cloud computing service providers, who may be a potential competitor in that AI foundation model market and who can raise their rivals’ costs. [101]
While Nvidia is the clear current winner at the chips level, the concern is that the Big Tech Barons—through their access to data, preferential access to the Nvidia accelerator chips, development of their own accelerator chips, and their ownership interests in other foundation model developers—will eventually dominate this layer as well, as they integrate through the various levels of the technology stack.
B Concentration of Foundation Model Developer Level
Training AI foundation models, as of 2024, generally requires the following inputs:
Significant volume and variety of data to train the generative AI foundation model, fine-tune it, and provide up-to-date responses; [102]
Significant large-scale computational resources, including cloud computing resources, with specialized chips; [103]
Significant amounts of energy; [104] and
Human capital, including “a skilled engineering and research workforce with expertise in areas such as machine learning and natural language processing.” [105]
Thus, the early advantage in developing foundation models will go to firms that already have a significant volume and variety of data (e.g., hundreds or thousands of gigabytes of data [106] across different modes) to train these models, the large-scale computational resources (either internally or committed cloud computing resources), and the human capital, including the human feedback needed to fine-tune the model’s output (such as preventing biased, false or harmful outputs). [107]
So, who has these advantages? The Big Tech Barons, who in the first half of 2024 spent collectively over $100 billion on capital improvements related mainly to AI. [108] Overall, the Tech Barons controlled multiple foundation models: by 2023, Google had 38 models (28 of which were closed); OpenAI had 18 models; Microsoft had an additional ten models (six of which were closed); and Meta had ten models (two of which were closed). [109] As of 2023, the higher-performing models were closed-source. [110] The significant investment needed to develop the technology supports ongoing concentration, [111] underscoring the risk of further entrenchment of the Big Tech Barons. We are already seeing OpenAI (through its partnership with Microsoft), Google, and Meta develop their own foundation models, upon which other firms may build and fine-tune their AI applications, products, and services. As the company reports, Google’s Gemini foundation model was the first “to outperform human experts on MMLU (Massive Multitask Language Understanding), one of the most popular methods to test the knowledge and problem-solving abilities of AI models.” [112] Gemini also powers Google Bard. [113]
The Big Tech Barons are investing not only in their own foundation models and AI infrastructure but also in other foundation models. Microsoft has partnered with OpenAI on its ChatGPT [114] and DALL-E [115] models and is coordinating with Meta on its Llama-2 foundation model. [116] Microsoft reportedly invested over $13 billion in OpenAI by 2023. [117] Google and Amazon have invested over $2 billion and $4 billion, respectively, in Anthropic. [118] Microsoft, Nvidia, and others have invested over $1.3 billion in Inflection AI, [119] and Elon Musk planned to raise $1 billion in investment for the startup “xAI.” [120]
The UK’s competition authority has illustrated the Big Tech Barons’ myriad investments and strategic partnerships in the other foundation models: [121]

[Figure omitted: the Big Tech Barons’ investments and strategic partnerships in foundation model developers] Source: CMA AI Foundation Models Update Paper 2024
Consequently, the Big Tech Barons have strategically collaborated with the leading independent foundation model developers.
Also significant is how the Tech Barons are vertically integrating across the supply chain, leveraging their advantages in access to data, cloud servers, and foundation models to control the downstream consumer and business interfaces.
This privileged position is unique to the Big Tech Barons. It distinguishes them from other companies, even successful ones, that, while competing with their models, also heavily rely on the Tech Barons for financial support, computing power, or data. Here a frenemy relationship may evolve, [122] where the Tech Barons both cooperate with, and compete against, smaller operators that develop foundation models and apps and services that rely on these models. These dynamics may entrench the Tech Barons’ power, widening the moat that insulates them from future disruption.
Consequently, the Big Tech Barons, as the following illustration from the CMA reflects, are vertically integrating on two fronts: first, they are integrating their foundation models throughout their ecosystems of existing digital products and services (such as their foundation models integrated into their search engines, personal computer operating systems, and social media). [123] Second, the Tech Barons are integrating more broadly across the foundation model supply chain—developing their own accelerator and inference chips, [124] expanding their data centers, [125] and entering into strategic partnerships with other foundation models: [126]

[Figure omitted: the Big Tech Barons’ vertical integration across the foundation model supply chain] Source: CMA AI Foundation Models Update Paper 2024
As the Tech Barons are integrating across the foundation model supply chain and integrating their own foundation models into their downstream products and services, let us next assess the potential impact this vertical integration can have on other downstream competitors and market participants.
C Increasing Downstream Dependency
The foundation model supply chain can have mixed competitive effects downstream. On the one hand, by lowering the cost of accessing a foundation model, the current supply chain can increase dynamism in secondary application markets. Firms can use the publicly available foundation models to disrupt traditional markets, lower costs, increase efficiencies, and spur innovation. There can be significant advances in many industries, from healthcare to education and commerce.
On the other hand, the developers in the downstream application layer can become increasingly dependent on the Tech Barons’ foundation models. How the technology is used and licensed turns this into a structural dependency that cuts across the application layer. The more developers who rely on the foundation model, the better the model can become, and the harder it might be for the developers to switch to other foundation models or develop their own. They become dependent on the foundation model, and all their innovations will complement (rather than potentially disrupt) the Big Tech Baron’s services, AI technology, and ecosystem. The asymmetry in power between the upstream foundation model and downstream operators increases.
But suppose the Big Tech Barons’ foundation models are open source. As we elaborate below, “Open AI,” while captivating as a catchphrase, does not reflect the genuine openness one would attribute to it. In fact, it is often closed and used strategically to enhance the power of those in control of the technology.
1 How Open Is Open?
Currently, the Tech Barons tout how “open” their technology is. They emphasize how they are operating to democratize the AI environment and empower others to disrupt and evolve.
Consider OpenAI. As its name implies and its Founding Agreement reflects, the nonprofit was created to “compete with, and serve as a vital counterbalance to, Google/DeepMind in the race for AGI, but would do so to benefit humanity, not the shareholders of a private, for-profit company (much less one of the largest technology companies in the world).” [127] OpenAI has also partnered with Microsoft to help the world’s largest company (in terms of market capitalization) develop its own apps, products, and services, including Microsoft’s search engine Bing. OpenAI’s foundation model powers Bing’s Copilot function. So, we asked Bing’s Copilot about Microsoft’s investment in OpenAI. The foundation model generated the following response:
Microsoft has a significant interest in OpenAI, and their partnership has evolved over time. Here are the key points:
Investment and Collaboration:
In 2019, Microsoft invested $1 billion in OpenAI to support the development of artificial general intelligence (AGI) with widespread economic benefits.
This investment aimed to create a hardware and software platform within Microsoft Azure that would scale to AGI.
The collaboration focused on advancing cutting-edge AI research and democratizing AI as a technology platform.
Ongoing Partnership:
In 2021, Microsoft and OpenAI extended their partnership through a multiyear, multibillion-dollar investment.
This third phase of collaboration aims to accelerate AI breakthroughs and ensure these benefits are broadly shared with the world.
It includes commitments from OpenAI to make Microsoft its “preferred partner for commercializing new AI technologies.”
Areas of Focus:
Supercomputing: Microsoft is investing in specialized supercomputing systems to accelerate OpenAI’s groundbreaking independent AI research.
AI-powered Experiences: OpenAI’s models are being deployed across Microsoft’s consumer and enterprise products, introducing new digital experiences.
Azure Cloud: Azure is the exclusive cloud provider for OpenAI, powering all their workloads across research, products, and API services.
Shared Vision:
Both companies share a commitment to advancing AI responsibly and creating transformative impact.
Their collaboration has led to innovations like GitHub Copilot, DALL·E 2, and ChatGPT.
In summary, Microsoft’s interest in OpenAI lies in advancing AI research, democratizing AI technologies, and building powerful AI infrastructure on Azure.
Consider the above claim that Microsoft’s interest in OpenAI was partly for “democratizing AI technologies.” In 2024, Elon Musk, a cofounder of OpenAI, sued the company for, among other things, breaching its promise to be a nonprofit “developing AGI for the benefit of humanity, not a for-profit company seeking to maximize shareholder profits,” and to be “open-source, balancing only countervailing safety considerations,” and not keeping “its technology closed and secret for proprietary commercial reasons.” [128] According to Musk’s complaint, Microsoft’s investment in OpenAI has had the opposite effect: OpenAI has become a “closed, for profit AI.” [129] OpenAI’s more advanced model GPT-4 “is now a de facto Microsoft proprietary algorithm, which is integrated into its Office software suite.” [130] OpenAI has also been accused of offering limited information on the data used to train its GPT-4 foundation model or the methods used to create it. [131] Rather than use its GPT-4 model to benefit humanity, OpenAI, as Musk alleged in his complaint, has used its most advanced foundation model to maximize the profits of the world’s largest company. Musk subsequently withdrew his lawsuit. But critics remain, [132] and while OpenAI has claimed that its lack of openness is to promote safety, that too is under attack. [133]
Other Tech Barons have also touted the openness of their AI systems and how their technology benefits others. However, several researchers have questioned the true openness of their foundation models and labeled these claims as “open-washing” of closed systems. According to these claims, behind the misleading façade of openness, aimed at influencing public perception as to the availability of the technology, many of the technologies are, in fact, closed systems that support concentrated power. [134] For example, in a critique of Meta’s release of Llama 2, several computer scientists pointed out that despite the foundation model being described as open source, the technology is “considerably closed off,” since Meta did not share the model’s training data or the code used to train it. [135] Other commercial systems likewise offer limited transparency and openness, with restricted insight into the code and data used, the way models are tuned, and the nature of human feedback (known as Reinforcement Learning from Human Feedback (RLHF)), as well as limited peer review. [136] Doubts have also been raised about related claims from the industry, according to which the gap between closed and open AI models is “closing astonishingly quickly.” [137]
The discussion of openness requires nuance, as democratizing data and technology entails tradeoffs. Openness could increase susceptibility to distortions, manipulation, or illicit use. [138] Fears over safety and legal liability, as well as concerns that competitors will use the data and information for their own operations, may justify an approach of limited transparency for numerous building blocks of the technology, including code, model architecture, data, fine-tuning processes, weights, and biases. [139] Indeed, OpenAI’s chief scientist and cofounder, Ilya Sutskever, noted how the company changed its approach to openness and now considers it overly risky: “These models are very potent and they’re becoming more and more potent. At some point it will be quite easy, if one wanted, to cause a great deal of harm with those models. And as the capabilities get higher it makes sense that you don’t want to disclose them.” [140]
While these concerns may justify closing certain aspects of the systems, the key concern regarding limited democratization remains. As the UK’s competition authority noted, “many firms have kept their highest-performing pre-trained [foundation models] closed-source, keeping the model weights (the internal ‘knowledge’ of the model) a trade secret, and providing access via an API or through user-facing applications.” [141] Moreover, the alternative open-source foundation models were, as of 2023, “generally smaller and perform less well than the highest-performing closed-source models.” [142] Thus, the closed or semi-closed systems support the Barons’ amalgamation of power and enhance their control over the foundation models and any third-party apps, products, or services that rely on these models.
2 The Venus Flytrap Strategy
The Tech Barons often deploy a Venus Flytrap strategy to colonize their ecosystems. The Venus Flytrap (Dionaea muscipula) is a well-known carnivorous plant native to the Carolinas. To attract insects, it secretes a sweet sap. Once its leaves snap shut, the entrapped insects have little chance to escape. So too with the data-opolies: they opened their newly colonized platforms with inducements to attract advertisers, app developers, users, and smart device manufacturers. After dominating that ecosystem, the data-opoly snapped the once open-source environment shut and then imposed anticompetitive terms and fees upon the entrapped sellers, developers, advertisers, and users.
As the United States alleged in its successful monopolization action, [143] Google deployed the Venus Flytrap strategy to dominate its Android mobile phone ecosystem. Once Android became dominant, running on approximately 75% of the world’s mobile devices, Google closed its ecosystem and collected monopoly rents and personal data from the entrapped developers and smartphone manufacturers. If smartphone manufacturers wanted interoperability with Google’s apps and wanted Google’s app store loaded on their phones (a must for any smartphone to be commercially viable), they had to use Google’s version of Android (not a competing version). They also had to preload and feature Google’s search engine, browser, and other apps (and not competitors’). Google also imposed an app tax on developers (ranging between 15 and 30%) and was alleged to require apps to hand over their users’ personal data.
So, where does this leave downstream players who rely on commercial foundation models? A startup, for example, can use OpenAI’s application programming interfaces (“APIs”) to integrate GPT’s services into the startup’s applications, products, or services. [144] Under the API access business model, the developer sends its or its users’ prompts to the foundation model. However, the developer lacks access to the foundation model’s code, training data, or model weights and biases. [145] Instead, the developer shares data with the foundation model and receives the model’s output. [146]
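A minimal sketch of that API-access interaction, using OpenAI’s published Python client (the model name is illustrative), shows where the boundary lies:

```python
# API-access business model: prompts (and user data) flow upstream to
# the provider; only outputs flow back. The model's code, training
# data, and weights never leave the provider's servers.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "user",
         "content": "Summarize this customer complaint: ..."},
    ],
)
print(response.choices[0].message.content)
# The developer can call the model but cannot inspect, replicate, or
# retrain it -- the asymmetry described in the text.
```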
Here, a positive feedback loop can emerge under the API access model: as more developers rely on a few foundation models, the more data and the greater the variety of the data (audio, visual, text, depth, movement, thermal, etc.) that flows from the developers and their customers to these multi-modal foundation models. The increasing inflow of data can help train and further improve these AI foundation models, which, as a result, can attract even more developers. [147] Accordingly, it might be more challenging for another company to develop a foundation model that competes against these dominant AI foundation models.
Moreover, the app developers and their end-users can become locked in to one foundation model, making it harder to switch to another model. For example, the foundation model can learn how to customize responses to particular app developers or their end-users (such as the person’s writing style, personal preferences, etc.). [148] OpenAI heard from end-users “about the friction of starting each ChatGPT conversation afresh.” [149] So OpenAI allows end-users to set custom instructions so that its foundation model provides unique responses for each user:
Custom instructions allow you to add preferences or requirements that you’d like ChatGPT to consider when generating its responses. . . .
ChatGPT will consider your custom instructions for every conversation going forward. The model will consider the instructions every time it responds, so you won’t have to repeat your preferences or information in every conversation.
For example, a teacher crafting a lesson plan no longer has to repeat that they’re teaching 3rd grade science. A developer preferring efficient code in a language that’s not Python – they can say it once, and it’s understood. Grocery shopping for a big family becomes easier, with the model accounting for 6 servings in the grocery list. Set your preferences, and ChatGPT will keep them in mind for all future conversations. [150]
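The quoted feature is consumer-facing. A developer building on the API would typically approximate the same persistence by storing each user’s preferences and prepending them as a “system” message to every request; the following sketch (with an illustrative model name) shows how such per-user customization, and hence per-user lock-in, accrues to a single model:

```python
# Sketch: persisting per-user preferences across conversations by
# replaying a stored "system" message on every call.
from openai import OpenAI

client = OpenAI()
saved_preferences = "I teach 3rd grade science; keep answers age-appropriate."

def ask(user_prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": saved_preferences},  # reused each call
            {"role": "user", "content": user_prompt},
        ],
    )
    return response.choices[0].message.content

print(ask("Draft a lesson plan on the water cycle."))
# The accumulated preferences live with this provider; switching models
# means rebuilding that customization elsewhere.
```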
We already see such network effects for search engines, especially for unusual (long tail) inquiries. The more one googles, the more opportunities for the search engine to learn about relevant responses, and the better its performance relative to other search engines. [151] Now, we can have network effects on multiple levels, from improving the model’s capabilities across many services and products to improving customized responses for each end-user.
Thus, even when the current foundation model is relatively open, it can gradually close later. [152] It is therefore important to distinguish between downstream operators’ ability to use functions of the AI foundation models and integrate them into their product and service offerings, and their inability to control, replicate, develop, or influence the model’s input and output.
There are other risks as well that emerge from the developers’ increased dependency on a foundation model. As more developers rely on a particular foundation model, any risks, biases, or flaws in that model will cascade to millions, if not billions, of users. [153] The company that controls the foundation model will largely, if not entirely, control the safety of the AI technology, the risks of hallucinations, biased, hateful, discriminatory, or misleading outputs, and may externalize many of the risks to downstream corporations. [154]
D Narrowing Innovation Paths
The concentration of the foundation model layer and the increased dependency of the application layer are only some of the potential anticompetitive effects. As the foundation model’s technology is woven into more apps, services, and products, the control of critical inputs offers the dominant players a unique bargaining position and the ability to influence the evolution of technologies and innovation in secondary markets.
For example, a startup can use OpenAI’s APIs to integrate ChatGPT services into the startup’s applications, products, or services. [155] But let us consider Copilot’s statement (excerpted above) that OpenAI is committed “to make Microsoft its ‘preferred partner for commercializing new AI technologies.’” [156] So, if a developer and Microsoft are both seeking to commercialize a specific AI technology, OpenAI would advantage Microsoft. Thus, it is unlikely that OpenAI would support a developer seeking to use its foundation model to compete against Microsoft or disrupt Microsoft’s expanding ecosystem, especially when OpenAI’s foundation models rely on Microsoft’s cloud servers to compute.
Nor can the startup develop AI systems that disrupt OpenAI. Consider OpenAI’s terms of service for its ChatGPT, which expressly limit competing AI development: [157]
We own all right, title, and interest in and to the Services. You only receive rights to use the Services as explicitly granted in this Agreement. You will not, and will not permit End Users to:
. . . (e) use Output (as defined below) to develop any artificial intelligence models that compete with our products and services. However, you can use Output to (i) develop artificial intelligence models primarily intended to categorize, classify, or organize data (e.g., embeddings or classifiers), as long as such models are not distributed or made commercially available to third parties and (ii) fine tune models provided as part of our Services. [158]
Thus, the developer cannot disrupt OpenAI, nor can it allow its users to disrupt OpenAI’s products and services. As IBM warns,
A company specializing in voice recognition technology plans to enhance their existing system by integrating ChatGPT’s natural language processing capabilities. However, the terms of service for ChatGPT explicitly state that it cannot be used in the development of other AI systems. [159]
Now let us consider some of the other popular foundation models. Google imposes a similar restriction on licensees of its AI foundation models: “You may not use the Services to develop machine learning models or related technology.” [160]
Other foundation model providers, like Meta, offer model access (not just API access). Under this business model, the developer can download the foundation model and incorporate it into the developer’s apps, products, and services. Unlike the API access model, the developer does not send data to the foundation model’s owner through an API. [161] But Meta’s original model could only be used for research purposes and excluded commercial applications. [162] Its 2024 license with commercial and individual users restricted them from “using any part of the Llama 2 models, including the response outputs to train another AI model (LLM or otherwise).” [163] Moreover, users can only “use the outputs to further train the Llama 2 family of models,” [164] which can increase the developers’ dependency on Meta’s foundation model. (They cannot use the outputs to help train a competing foundation model).
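To see the difference in the data flow, here is a sketch of the model-access approach using the open-source Hugging Face transformers library, assuming the developer has accepted Meta’s license for the gated Llama 2 weights:

```python
# Model access: the developer downloads the weights and runs them
# locally, so no prompts flow back to the provider (unlike API access).
# Assumes access to the license-gated Llama 2 checkpoint.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-2-7b-chat-hf",  # license-gated checkpoint
)
print(generator("Explain our return policy in one sentence:",
                max_new_tokens=60)[0]["generated_text"])
# The license travels with the weights: per the terms quoted above,
# outputs may not be used to train a competing (non-Llama) model.
```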
Consequently, companies can use the Big Tech Barons’ foundation models to develop apps, products, and services that disrupt traditional industries, but they cannot disrupt the Big Tech Barons’ foundation models. Nor can they develop models that compete with the Big Tech Barons’ products and services. So, as more people use the startups’ AI apps, which rely on the Big Tech Barons’ foundation model, it may become harder to develop a foundation model that disrupts the Big Tech Barons’ models and AI-related products and services.
As a result, we may see something like an AI app store. As with Google’s and Apple’s app stores, there will be many AI apps from which to choose. But one cannot develop an application that disrupts the app store itself or the Big Tech Barons’ AI foundation models supporting these apps. The AI developers’ innovations can complement, but not disrupt, the Big Tech Barons’ AI foundation models (whether their own or their strategic partners), further entrenching the Big Tech Barons’ power. As our recent book explores, this shift is critical: first, it reduces the number and viability of innovators who can disrupt this hegemony. Second, in controlling the underlying foundation models, the Big Tech Barons can shape the innovation paths—from disruptive to complementary. [165] This asymmetry in the market will intensify, as those in control of the infrastructure and inputs can influence competition and innovation in the downstream markets. [166]
Furthermore, the Big Tech Barons’ foundation models exist within their sprawling ecosystems. In controlling the ecosystem, the Barons have many weapons to affect the supply and demand of AI innovations, including multiple weapons to marginalize or exclude potential disruptive threats: this includes downgrading or denying services to potential competitors, bundling products (for example, cloud services and generative AI), excluding the developers from their app stores and ecosystems, making it harder for end users to find the innovation, and self-preferencing. [167]
As noted above, concerns about misuse of AI may justify closing certain aspects of the systems. But the restrictions here go beyond safety concerns. They aim to ensure that any downstream disruption will not destabilize the Tech Barons’ control over the upstream levels of the AI stack. Meanwhile, to ensure that their apps get to market quicker than rivals’ apps, developers will likely rely on one of the leading foundation models and, in doing so, trade the risk of increased dependency for speed to market. Moreover, developers recognize that if they try to disrupt the Big Tech Baron’s products and services, they would likely be cut off from the foundation model and potentially excluded from its ecosystem. Nor would a developer likely be able to use the Big Tech Baron’s cloud computing service to develop its own foundation model.
Thus, the emerging foundation model supply chain can advantage those who control these models and the upstream inputs and hinder the innovations of downstream developers. Under the API interface, the developers have limited insights into the foundation models’ operations, safety limitations, and risks. Moreover, they are like sharecroppers. The data they generate flows to improve the foundation model, which they do not own or control. This asymmetry increases their dependency on the model and reduces their power relative to the vertically integrated provider of the data, computing power, and foundation model.
The antitrust risks are clear: a few firms would control all aspects of a technology that can cause “a great deal of harm.” [168] Thus, the foundation model supply chain can lead to “winner-take-most-or-all” outcomes, where a few powerful firms can distort the innovation paths, chill innovations that disrupt their ecosystems, and promote innovations that complement their technologies and reinforce their power.
E Tech Barons’ Expansion
As we have seen with the current dominant ecosystems, the Tech Barons do not sit still. Rather, they are expanding their ecosystems into other products and services, and when they do, they acquire, copy, or kill off rivals in that space. Consider the monopolization complaint that the United States and several state attorneys general filed against Apple, based partly on Apple’s rejecting or removing apps from its app store that “threaten to disrupt, disintermediate, compete with, or erode Apple’s monopoly power.” [169]
Foundation models can improve to the point where they handle more tasks without significant customization. [170] The foundation layer then merges with that particular application layer. Thus, the Tech Barons will likely expand their presence in the application layer, adding other services and products to their ecosystems. In doing so, the Barons’ incentives change: instead of supplying AI technology to an app (say, Duolingo), they might compete against it with their own language app.
Suppose Apple and Google develop their own AI app stores, where the app developers rely on Apple’s or Google’s foundation model. The Tech Baron here would control not only the AI App Store but also the underlying foundation model, data, and computing power. Thus, the Baron can deliver many more powerful punches:
• it can kick the third-party app out of its app store (or make it harder for consumers to find it);
• it can cut the offending app off from its model (or degrade its performance by not allowing the API to access the proprietary, more advanced foundation models);
• it can deprive the developer of access to critical inputs, such as its own cloud computing service or datasets (to prevent the developer from building its own foundation model); or
• it can otherwise degrade the performance of the third-party app (such as greater latency in responding to end users’ requests).
Suppose Google kicked an AI app out of its ecosystem. The app developer could turn to a rival ecosystem, such as Apple’s or perhaps Microsoft’s. But the AI app developer could not disrupt Apple’s or Microsoft’s ecosystems or their value chains. Moreover, the rival ecosystems may want to expand into that product or service (say, develop their own language app) and similarly refuse the developer access to their foundation model and app store.
Thus, as Elon Musk’s complaint against OpenAI recounts, the nonprofit was created as an alternative to Google in the race for AGI:
In 2014, it was already difficult enough to compete with Google in its core businesses. Google had collected a uniquely large set of data from our searches, our emails, and nearly every book in our libraries. Nevertheless, up to this point, everyone had the potential to compete with Google through superior human intelligence and hard work. AGI would make competition nearly impossible. [171]
Consequently, as is the case today, there will likely be little, if any, space for AI developers seeking to disrupt the Tech Barons’ hegemony with innovations that threaten the Barons’ power or their ecosystems’ underlying value chain. Instead, developers must avoid the elephant’s path and seek to disrupt products and services far afield from the Tech Barons.
The anticompetitive effects addressed in this Part are not preordained. As Part IV next discusses, the foundation model supply chain, while emerging, may evolve into something else.
IV Countervailing Factors
Supply chains can change (consider the supply chain shifts after Covid). So too, the AI foundation model supply chain can change, depending on several factors. One factor will be the continued importance of the inputs for the foundation model. Research by OpenAI, among others, has found power-law scaling, whereby “increasing model parameters, dataset size and compute with transformers and cross-entropy loss increases the model performance.” [172] But to reach optimal scaling in the future, one might not require, for example, the same breadth and volume of data. [173] Moreover, the availability of computational resources may increase while computing costs decline. With some predictions that open-source models will continue to improve, [174] the cost of computing and the cost of doing business will likely decrease. This openness can lower costs and barriers to expansion and entry and, in doing so, level the playing field. [175]
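To give a rough sense of what these scaling laws assert, consider a stylized sketch. In one well-known formulation (Kaplan et al., Scaling Laws for Neural Language Models (2020), which may be the OpenAI research referenced above; the exponents are theirs and are illustrative rather than definitive), a model’s test loss $L$ falls as a power law in the number of parameters $N$, the dataset size $D$, and the training compute $C$ (optimally allocated, with the other inputs not binding): $L(N) \approx (N_c/N)^{\alpha_N}$, $L(D) \approx (D_c/D)^{\alpha_D}$, and $L(C) \approx (C_c/C)^{\alpha_C}$, with fitted constants $N_c$, $D_c$, $C_c$ and exponents $\alpha_N \approx 0.076$, $\alpha_D \approx 0.095$, $\alpha_C \approx 0.050$. The small exponents imply steeply diminishing returns: a tenfold increase in parameters, for instance, reduces loss by only about 16 percent ($10^{-0.076} \approx 0.84$). Hence, whether future gains will continue to demand ever-greater data and compute, and thus continue to favor the firms that control those inputs, remains an open empirical question.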
Another factor will be the proliferation and scale of other foundation models advanced worldwide. Notable in the West [176] will be other key AI platforms, such as Stability AI’s Stable Diffusion; the company describes itself as the “world’s leading open source generative AI company” that delivers “breakthrough, open-access AI models with minimal resource requirements in imaging, language, code and audio.” [177] Europe’s Mistral AI (which includes Microsoft, Amazon, and Google as investors) is seeking to develop a quasi-open-source foundation model. [178] According to the company, its foundation model cost about $22 million to train and performed similarly to OpenAI’s and Google’s models, which cost far more to train (between $50 and $100 million for OpenAI). [179] The extent to which developers can easily switch among foundation model platforms can dampen these potential anticompetitive effects.
Third is the openness (even if limited at times) and accessibility of the foundation models and the ability to use these models to disrupt both within and outside the foundation model’s ecosystem. A notable aspect of several commercial providers of generative AI has been their effort to make the technology accessible to others. AI systems, cloud computing, and open-source repositories have offered opportunities for growth for those who would otherwise have no, or limited, access to the technology.
Fourth is the potential commoditization of foundation models. As of 2024, the emphasis is on building and training the foundation models. Thereafter, developers can use their own data to fine-tune a foundation model for their particular applications; a further stage is inference, when the model is asked to process new information and respond. [180] From 2023 to early 2024, 40 percent of Nvidia’s data center business was for deploying AI systems, not training them. [181] Going forward, however, developers at the fine-tuning stage may not require Nvidia’s accelerator chips and might instead rely on less powerful, cheaper chips. [182] So, one issue will be the relative importance of the foundation model versus the fine-tuning and inference stages.
Fifth, and related to the above, is the democratization effect of the technology. Foundation models and generative AI can lower barriers to entry and enable new entities to enter markets and provide services where they could not have done so before. The technology promises to reduce the costs associated with generating and editing content. It can enable reliance on less data to execute tasks, lower data collection costs, lower investment costs, reduce the need for human input, and support further automation. [183] It can also assist in faster and more efficient software development by improving code quality and generating new code, thereby opening the software development market to new players. [184]
Citing these competition and innovation dynamics, some argue against antitrust intervention and call for limited regulations on AI systems. [185] After all, in a fast-moving innovation market, intervention could easily miss the mark and chill, rather than promote, competition. Competition dynamics, it is argued, could offer a superior outcome and ensure the effective evolution of the technology and markets.
V Refining the Antitrust Playbook
Over the past two decades, we have seen many digital markets tip to one or two powerful firms. We have also seen how these powerful firms expanded their ecosystems. Positive feedback loops, control over a broad user base, extraction of personal data, and anticompetitive practices have enabled a few corporations to entrench their power and affect the dynamics of competition and innovation. Digital markets, while dynamic in many ways, have proven not to self-correct easily once they tip in favor of these Tech Barons. Overall, experience shows that the digital economy gravitates toward concentrated economic power, which often translates into political power.
The emerging AI foundation model supply chain can disrupt these dominant ecosystems. But, as with the digital economy generally, a few firms could use the positive feedback loops to ensure that more developers rely on their foundation models, thereby strengthening their ecosystems. Indeed, in controlling the critical inputs and leading foundation models, the next wave of Tech Barons may have even more power: not only will they control the leading AI app stores, but they will also control the underlying AI technology, data inputs, data storage, and hardware design. The risks are immensely greater: biases, flaws, and vulnerabilities in these foundation models can ripple across society.
So, while the Biden administration stepped up antitrust enforcement considerably against the Big Tech Barons, that enforcement has not appreciably increased contestability in many digital markets. Antitrust enforcement in the U.S. remains unwieldy, requiring costly, time-consuming litigation under the Supreme Court’s rule-of-reason legal standard. To put it bluntly, antitrust has failed to curb the excesses of the Big Tech Barons, and we are paying for that failure.
We are at another critical juncture: the AI foundation model supply chain can potentially tip to where a few powerful firms can again chill competition and innovation. But that is not preordained. Again, the issue is whether antitrust enforcement can prevent this result.
In recent years, several jurisdictions have upgraded their antitrust and regulatory tools for the digital economy. Notable is the European Union with its Digital Markets Act, Digital Services Act, Data Act, and forthcoming AI Act.
So, what needs to be done? Along with many others, we are exploring avenues to enhance the effectiveness of competition law and related regulations to align the market participants’ incentives with broader societal goals. Broadly speaking, multiple tools are necessary to ensure a healthy market trajectory (including competition, privacy, consumer protection, and IP laws).
At the competition law level, for antitrust to be more effective, the agencies and courts must better understand the power of ecosystems (as opposed to the power of particular platforms), including their power to distort innovation paths. Ecosystem power differs from traditional notions of market power. [186] Consequently, agencies and courts must focus on the possible abuse of ecosystem power aimed at entrenchment through leveraging, tying, and exclusionary practices (even when the market is not yet concentrated). Structural separation may also need to be explored more often, given the current state of AI supply chains.
Consideration of the scope of competition law, and of the use of external regulatory benchmarks to inform competition decision-making, would be of value. [187] This is particularly so since intervention will inevitably involve tradeoffs between efficiency and economies of scale and scope, on the one hand, and the need to safeguard the evolution of AI markets and ensure their contestability, on the other.
At the regulatory level, the current design and scope of the EU’s Digital Markets Act and Digital Services Act offer some, yet incomplete, relief regarding the concerns raised by foundation models. The forthcoming AI Act could supplement the regulatory structure when dealing with the risks associated with foundation models, but it does not directly address the problem of entrenchment and increased concentration. Changes may be required to make the regulatory tools more relevant to the AI supply chain. In that respect, ex-ante regulation could be important in aligning private corporate behavior with public interests, even though entrenched interests will likely challenge any intervention. When considering the dense EU regulatory landscape, it is also important to note that the wider EU regulatory framework concerning data, privacy, and political stability may generate inconsistent effects across the AI supply chain. [188]
Possible changes to regulatory tools would need to consider the overall burden on undertakings, the possible externalities between jurisdictions, the interconnections between trade policies and competition law, and the potential impact on foreign direct investments.
Enforcement capacity at both the regulatory and competition levels must increase to match the level of business activity. Global cooperation and alignment are integral and necessary to ensure effective outcomes: no single jurisdiction can shoulder the regulatory burden alone.
Support policies should be used to spur growth and innovation in AI. The EU has a role in creating the conditions and infrastructure for EU innovation and disruption. Financial grants, tax breaks, loans, guarantees, and capital investments can play an important role in increasing the competitiveness of the AI supply chain. Policies should facilitate technology transfers, help commercialize innovations, ease access to inputs, and strengthen available human capital. These efforts should be guided by the “Value, Incentives, and Diversity” principles we have explored elsewhere. [189]
Finally, unregulated competition is not an elixir. We have already seen companies racing to release foundation models, even though the models suffer from biases, hallucinations, and other quality problems. Competition can pressure firms to release foundation models faster, and to have them undertake even more services, without mitigating the models’ potential risks. Deep-pocketed companies are racing to release new technologies, disregarding many emerging risks and concerns in their attempts to dominate the new space. Calls for regulation, or for slowing some developments, are met with warnings about the competitive disadvantages to Western countries that regulate and the risk that other, less regulated jurisdictions may take the technological lead. Encouraging this toxic competition can cause even greater harm to society. Consequently, the antitrust enforcement challenges raised by generative AI are further amplified by competition of the wrong sort: a race to the bottom among companies. Enforcement, through competition law or regulation, should be mindful of the nature and quality of rivalry and of the innovation it fosters; we want to promote a race to the top, not a race to a potentially horrific bottom.
Conclusion
The AI landscape has changed even in the short period during which we drafted this Article, and it will continue to change over the next decade. Ultimately, two potential antitrust risks emerge. First, as we have seen, several factors and market characteristics have enabled a few Big Tech Barons to expand their ecosystems, maintain their dominance, and alter innovation paths to marginalize potential disruptive threats; similar factors in the emerging AI foundation model supply chain can lead to a “winner-take-most-or-all” situation. Second, while AI may herald significant disruption, the risk is that it will not disrupt the dominant ecosystems. Instead, several of the current dominant ecosystems, in seeking to capture essential parts of the AI foundation model supply chain, can minimize the risk of such disruption. As consumers, workers, and market participants, we will pay the price.
Antitrust enforcers are rightfully alert. Failure to move swiftly at the substantive and enforcement levels may leave us with a repeat of past policy shortcomings. AI will only magnify the current legislative and policy deficits. And the results, as the AI experts and the public predict, can harm our wellbeing, autonomy, and democracy.
© 2025 by Theoretical Inquiries in Law
This work is licensed under the Creative Commons Attribution 4.0 International License.