

The rapid rise of artificial intelligence has sparked intense competition to lead on both AI innovation and user adoption of AI tools. Analysis of recent trends in this transformative technology space reveals a “wildly competitive ecosystem up and down the AI stack.” AI is not a single monolithic market; it is a family of technologies operating at different but related layers, from chips and data infrastructure to algorithms and end-user applications.
At each layer of this “AI stack,” multiple firms – incumbents and startups alike – are vying for advantage. Effective competition analysis should therefore account for the multi-layered nature of AI, recognizing how rivalry at each layer disciplines market power and benefits consumers. By examining hardware, foundation models, applications, and deployment services in turn, we find a dynamic landscape where innovation is thriving and no single firm can lock in dominance without facing competitive pressure from other layers. A holistic, evidence-based view of AI markets shows robust competition – and suggests regulators should be cautious about intervention in a sector where competition is delivering lower prices, greater choice, and rapid technological progress.
The AI Stack: A Competitive Framework
AI systems are built on a layered stack of inputs and services. At a high level, we can distinguish AI hardware and infrastructure (semiconductors, computing power, data, and talent), AI development of foundation models (large-scale machine learning models that underpin AI capabilities), and AI deployment (the integration of AI models into applications and services used by businesses and consumers). In simpler terms, the bottom layer provides the compute and data resources, the middle layer creates the core AI models such as large language models (LLMs), and the top layer delivers AI-powered products to end users.
Crucially, each layer features different players and competitive dynamics. Some firms are vertically integrated across multiple layers, while others specialize in one segment – leading to a rich mix of competitors with varying business models. For example, an established company might design its own AI chips and develop proprietary models, whereas a startup might focus only on a niche application using open-source models. This diversity means that competition at one layer can come from unexpected places; a breakthrough by a chip startup or an open-source model can rapidly challenge an incumbent’s advantage in another layer.
Competition analysis of AI must therefore consider the entire stack: even a seemingly dominant position at one level may be constrained by rivalry from adjacent layers or new entrants leveraging different approaches. In short, the AI stack provides a framework to understand how competition unfolds at each stage – from the silicon that powers AI to the software services that deliver it – and evidence shows that all these stages are intensely competitive.
Competition in AI Hardware
At the base of the AI stack is the hardware and compute infrastructure – notably the chips needed to train and run AI models. This sector is highly competitive and dynamic despite the prominence of today’s market leaders. NVIDIA and AMD currently supply the majority of AI graphics processing units (GPUs) used for model training, but they face a wave of new challengers.
Major cloud companies like Google, Amazon, and Microsoft are investing heavily in in-house AI chip development, from Google’s TPU accelerators to Amazon’s Inferentia and Trainium chips. These efforts ensure that cloud providers are not beholden to a single GPU supplier and create direct competition for traditional semiconductor firms. At the same time, venture capital is pouring into silicon startups looking to disrupt the chip incumbents – roughly $8 billion of funding went to AI chip startups in both 2021 and 2022. This influx of new entrants has already yielded innovation: for instance, startup Groq has introduced a novel chip architecture delivering real-time AI inference speed that rivals or exceeds incumbents, setting “a new benchmark for the industry.”
Importantly, these competitive investments are driving rapid improvements in price and performance. Advancements in chip design are expected to sharply reduce the cost of computing power for AI. As one analysis noted, AI compute costs have dropped by several orders of magnitude over the past decade and continue to fall as new competitors enter the field. The result is a virtuous cycle of competition: rival chip suppliers and cloud providers race to offer more powerful AI hardware at lower cost, which in turn lowers barriers for new AI developers. Even firms that do not build their own chips benefit from this rivalry – they can access high-end GPUs through cloud services or choose from an increasing array of alternative AI accelerators.
In sum, AI hardware is far from a monopoly; it is an innovative market with incumbents, tech leaders, and startups all investing aggressively to outdo one another. This competition at the hardware layer ensures that no single firm can choke off AI innovation by controlling compute resources, and it continually increases the supply of affordable computing for the next generation of AI ventures.
The recent announcement by Chinese firm DeepSeek that it had trained advanced AI models such as DeepSeek-R1 on relatively few high-end chips – and possibly none of the very top-of-the-line chips – suggests that competition in AI hardware is likely to grow more intense. This breakthrough in training efficiency upends long-standing assumptions that building top-tier AI requires large numbers of the most expensive, cutting-edge chips. In other words, rather than relying exclusively on massive compute clusters built from the very best semiconductors, DeepSeek demonstrated that with smart architecture and optimized training techniques (such as reinforcement learning, model distillation, and mixture-of-experts strategies), advanced AI models can be produced with significantly fewer resources.
This shift has several key implications for competition in the AI stack:
- Lower Barriers to Entry: With training costs dramatically reduced, smaller companies and startups can compete with established leaders, spurring an even more dynamic, diverse market.
- Innovation Focus: Efficiency breakthroughs emphasize algorithmic and architectural innovations over sheer hardware expenditure, which could force incumbent firms to rethink their investment strategies.
- Disruption Across the Supply Chain: Reduced dependence on top-of-the-line chips may alter the competitive landscape for semiconductor manufacturers, cloud providers, and AI model developers alike.
- Democratization of AI: More players can now access and build sophisticated AI systems, potentially accelerating the pace of innovation and driving down costs industry-wide.
DeepSeek-R1 suggests that the future of AI may not be dictated solely by expensive, high-powered hardware but by how cleverly that hardware is used—creating a more competitive market.
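To see why an efficiency technique like mixture-of-experts (MoE) reduces the hardware needed per token, consider this minimal toy sketch. It is purely illustrative – the function names, expert count, and scores are invented for exposition, not drawn from any real model: a router selects only the top-k of many experts for each input, so compute per token scales with k rather than with the total number of experts.

```python
# Toy mixture-of-experts routing (illustrative only).
# 8 experts exist, but only 2 run per token, so per-token compute
# is a fraction of what a dense model of the same size would need.

def top_k_experts(scores, k=2):
    """Return indices of the k highest-scoring experts for one token."""
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]

def moe_layer(token, expert_fns, router_scores, k=2):
    """Run only the selected experts and combine their outputs,
    weighted by (renormalized) router scores."""
    chosen = top_k_experts(router_scores, k)
    total = sum(router_scores[i] for i in chosen)
    return sum(router_scores[i] / total * expert_fns[i](token) for i in chosen)

# 8 stand-in "experts" (each just scales its input); hypothetical router scores.
experts = [lambda x, w=w: w * x for w in range(1, 9)]
scores = [0.05, 0.10, 0.02, 0.40, 0.08, 0.25, 0.06, 0.04]
out = moe_layer(10.0, experts, scores, k=2)  # only experts 3 and 5 execute
```

The design point is the one the paragraph makes: capability comes from how cleverly compute is allocated, not only from how much raw compute is purchased.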
Competition in Foundation Models
Moving up the stack, the development of foundation models – large AI models such as language models and image generators – is likewise characterized by vigorous competition among many players. Not long ago, this field was in its infancy, but the launch of OpenAI’s ChatGPT in late 2022 sparked a technological race that includes both well-resourced incumbents and agile startups. In the past year alone, a range of new frontier models have been announced in quick succession: from Anthropic’s Claude and Google DeepMind’s Gemini, to Meta’s LLaMA and Stability AI’s Stable Diffusion, to a host of others like Cohere, Aleph Alpha, Mistral AI, BloombergGPT, Amazon’s Titan, and more. No single company has a lock on innovation – established tech firms and new startups are leapfrogging each other, each releasing models with unique strengths. For example, OpenAI’s and Google’s latest models soon met competition from Meta’s open-source LLaMA model and multiple startups building specialized models (for coding, for specific industries, etc.). This flurry of activity has kept the frontier moving at a breakneck pace, to the benefit of downstream developers and users.
By any metric, the foundation model landscape is diverse and expanding. Since 2018, at least 94 different companies have developed over 250 foundation models, and notably, more than half of these models have been released under an open license. This means dozens of organizations – spanning corporate labs, academia, and open-source communities – are contributing advanced models, many of which are openly available for others to use or build upon. As CCIA highlighted to the U.S. DOJ, the “diversity of business models and strategies is striking, with a wide range of open‑source and proprietary models being developed by large and small companies alike.” This dynamic competition demonstrates ample space and growth potential for new entrants at the model-development layer.
Indeed, barriers to entry in developing AI models are not insurmountable: new research techniques and the availability of cloud computing have enabled startups to train large models from scratch or fine-tune open models with modest resources. There is even a trend toward smaller, specialized models (sometimes called “small language models”) that compete with giant models by focusing on specific tasks or efficiency. Early evidence suggests that the “best” solution need not always be the biggest LLM – leaner models that require less data and compute can outperform bloated ones for certain uses.
Such innovations blunt any one firm’s scale advantage and further lower the resource hurdle for competitors. In fact, recent market outcomes show that scarcity of inputs like data and hardware has not created significant barriers to entry for new model developers. Over 50% of foundation models are available openly, meaning an AI startup can download a state-of-the-art model (e.g. BLOOM or Meta’s LLaMA) and build on it rather than needing to invest millions in training from scratch.
Because no single model or company can meet every customer’s needs, rivals continually have openings to offer something different – whether it’s greater accuracy in a niche domain, better speed, lower cost, or an open platform that users can customize. The race among foundation model providers – with the leaders comprising a mix of larger and smaller firms – is therefore intense and shows no signs of abating. Each breakthrough spurs others to respond, and the presence of open-source alternatives ensures that would-be monopolists can be challenged by the community if they falter on price or quality.
Competition in AI Applications
At the top of the stack, competition is booming in AI applications and services. This is the layer where AI technology directly meets consumers and business users – through chatbots, productivity tools, creative content generators, decision support systems, and countless other applications. Here, the competitive dynamism is especially visible: while a few early entrants like OpenAI’s ChatGPT or Anthropic’s Claude grabbed headlines, they have been joined by hundreds of competitors building on AI in every imaginable field. Established software firms and nimble startups are racing to incorporate generative AI into products for education, law, healthcare, finance, marketing, entertainment, and beyond. The result is an explosion of choice. As one venture analysis noted, opportunities to innovate with AI are everywhere, and “what’s most notable is that every niche is intensely competitive.”
Take the example of AI-powered search and chat – an area initially popularized by ChatGPT. Already, OpenAI’s model faces differentiated challengers: startups like Perplexity have launched AI chatbots that can browse the internet in real time to provide up-to-date answers (something ChatGPT couldn’t do at first). Another startup, You.com, augmented its search engine with an AI assistant that handles multi-step queries, offering a different user experience. Even web browsers are getting AI features – the new Arc browser can generate web pages on the fly for current events.
In other consumer domains, image generation apps (Midjourney, DALL-E, Stable Diffusion, Adobe Firefly, etc.) fiercely compete in creativity and quality; in coding assistants, GitHub’s Copilot is challenged by alternatives like Replit’s Ghostwriter and open-source coding models; in writing and productivity, dozens of AI copywriting and analytic tools compete on price and features. In the healthcare sector, the number of AI application providers is already staggering – multiple startups are vying in areas from drug discovery to diagnostic imaging to patient record management.
This pattern repeats across industries: for virtually any AI use-case, users have many options from different providers. New entrants continue to flood in. Y Combinator, for instance, has dramatically increased the share of AI startups it funds, and its calls for new startups highlight areas like AI for robotics, AI-driven scientific simulation, explainable AI, and back-office automation – all hotbeds of current startup activity. There is no indication that this boom is slowing. As a result of this competitive ferment, consumers and enterprise customers today enjoy a “cornucopia of AI tools” to choose from. If one AI service raises prices or lags in features, others are ready to attract its users – which keeps pressure on incumbents to keep improving. Notably, the incumbent tech platforms are also integrating AI and thereby intensifying competition among themselves: Microsoft’s addition of an AI copilot in Office spurred Google to roll out generative AI in Gmail/Docs; Adobe’s launch of Firefly for image generation came as independent rivals like Midjourney surged; and in search, Google’s longtime dominance is now challenged by a combination of Bing (integrated with OpenAI’s GPT-4), new search startups, and even content providers like Forbes launching their own AI search engines.
In short, AI is proving to be a disruptive force within tech ecosystems, breaking open markets and spawning new entrants rather than entrenching incumbents. The evidence is clear that competition in AI applications is vibrant and delivering rapid innovation and expanding consumer choice.
AI Deployment and Cloud Services
Underpinning the competitive explosion in models and applications is robust competition in AI deployment and cloud services. Even the best AI model has no impact if it cannot be deployed at scale – which is why cloud platforms and other deployment options are crucial. Here, too, competition is intense on multiple fronts. Large cloud providers including Amazon Web Services, Microsoft Azure, Google Cloud, Oracle, and others are actively competing to attract AI developers and startups to their infrastructure. This competition has expanded options and leveled the playing field for new AI ventures: a small startup no longer needs to build its own data centers to train or serve an AI model – it can rent computing power on a cloud platform at scale (and the continued use of on-premises infrastructure for AI model development shows that this, too, remains a viable option).
Fundamentally, expanding cloud access and competition has enabled increasing AI competition. For example, top-tier AI startups like Anthropic, Cohere, and Character.ai have struck deals with different cloud providers (some use AWS, others Google Cloud, etc.), ensuring that no single cloud company “locks up” all promising AI firms. In fact, cloud vendors often offer special incentives, optimized hardware, and dedicated AI services to attract AI developers onto their platforms, which indicates how fiercely they compete to be the go-to deployment venue. This dynamic benefits startups, who can choose among cloud providers for the best pricing and performance, and switch if needed – a form of competition that keeps cloud costs in check. It also means that even smaller cloud operators or new entrants can carve out a niche by specializing in AI workloads or embracing open-source models to attract users who prefer more flexibility.
Moreover, open-source AI frameworks and model availability have expanded deployment options dramatically. There is a thriving ecosystem of open-source models and tools (often hosted on platforms like Hugging Face) that developers can obtain and run on their own hardware or on any cloud. As noted earlier, more than half of recent foundation models are available under open licenses. Many AI developers release not only research papers but also model weights, code, and data – or offer easy API access – which allows others (including direct competitors) to deploy those models in new products. For instance, Meta’s LLaMA model was released to researchers and soon adopted by a wide community, and the BLOOM language model was made fully open-source by a consortium of researchers. Google has also open-sourced a family of AI models (Gemma, built from the same research and technology behind Gemini) that can be run on a laptop and even on rival clouds, explicitly aiming to “lower barriers to entry and spur innovation.”
This openness ensures that no single company’s cloud or API becomes a gatekeeper for AI deployment: if a provider tries to impose restrictive terms, customers can opt to use open models on alternative infrastructure. In effect, deployment itself has become competitive – whether through cloud services or on-premise solutions – because of the interoperability and portability of AI models. All major cloud providers also support popular open-source AI libraries and allow customers to bring their own models, further preventing lock-in. Additionally, competition among clouds is driving new offerings that integrate AI: for example, cloud vendors are adding proprietary AI services like Azure’s OpenAI Service or AWS’s Bedrock and SageMaker to attract users, while startups offer innovative deployment platforms such as decentralized cloud or edge AI services to undercut the big players on cost or privacy. The U.S. and global cloud markets remain a many-sided fight, and the rise of AI workloads has only intensified this rivalry.
From a competition economics perspective, this means the infrastructure layer supporting AI is competitive and resilient. The cloud’s role in AI has been to democratize access to computing and ensure that even the most compute-intensive AI projects have multiple avenues to reach users.
Policy Implications and Antitrust Considerations
The evidence of fierce competition across the AI stack carries several implications for policymakers and antitrust enforcers. First, it counsels humility and caution before intervening. Competition in AI is dynamic and working well to deliver value, innovation, and choice to consumers at this stage. The marketplace is characterized by rapid entry, falling costs, and expanding output – classic signs of healthy competition. Enforcers should recognize that AI markets are still nascent and evolving rapidly; as CCIA advised, at this early stage broad intervention “would be premature and could potentially stifle innovation in the AI sector and limit consumer choice.”
Instead of rushing to regulate, authorities should monitor the space and focus on preserving the conditions that have enabled this competitive boom rather than assume the inevitability of dominance.
Second, antitrust analysis should adopt a multi-layered approach when evaluating AI-related competition issues. Defining a single “AI market” or myopically focusing on a single layer is likely too simplistic. A company that appears dominant in one layer (e.g., a leading model provider) may face effective competition from firms in adjacent layers: for example, an open-source model community or a powerful distribution channel owned by another firm. Any assessment of market power in AI should ask: At which layer? And which competitors or potential entrants at other layers could discipline that power? For example, if concerns arise about a leading cloud AI service, one must consider the ability of rivals to switch to alternative clouds or deploy models independently. Likewise, if a certain foundation model gains a performance edge, one must consider the openness of model weights and the possibility of others fine-tuning competing models. Traditional antitrust concepts – barriers to entry, switching costs, foreclosure – may need to be evaluated in the context of this layered ecosystem. Encouragingly, early analyses by competition authorities reflect this understanding. The UK’s Competition and Markets Authority (CMA) and others have noted that GenAI markets are vibrant and that potential barriers to entry such as GPUs are not insurmountable in practice.
Potential concerns like exclusive partnerships or input hoarding should be addressed on a case-by-case basis if they arise, rather than assuming worst-case scenarios across the board. In essence, regulators should maintain a posture of vigilance but also appreciation for the competitive forces at work in AI. Overzealous moves – such as blanket prohibitions on vertical integration or heavy-handed restrictions on AI collaborations – could freeze the pro-competitive cross-pollination that is driving AI progress.
Finally, policymakers should consider the global context. AI innovation is an international race, with regions like Europe and especially China investing heavily to lead in AI deployment and even setting ambitious targets for open-source AI leadership by 2030. The United States’ competitive edge in AI has so far been fueled by a dynamic private sector and open markets. Maintaining this edge will require policies that continue to foster competition and innovation – for example, supporting R&D, ensuring talent flows, and avoiding protectionist measures that inadvertently fragment the ecosystem.
Antitrust should remain focused on real harms (like cartels or mergers that substantially lessen competition) and not become a tool to micromanage industry structure in a fast-evolving technology. Existing competition law, properly applied, is flexible enough to address genuine anticompetitive conduct in AI if it emerges. In the case of AI, the best policy for now is to let this competitive boom continue unfettered, while guarding against specific abuses if and when they arise. That means avoiding unnecessary regulatory burdens that could raise entry costs or favor one business model over another. The current evidence suggests that market forces are delivering competitive outcomes in AI – with startups challenging incumbents, open-source challenging proprietary approaches, and multiple tech leaders challenging each other – so the antitrust focus should be on preserving these forces. The European Union offers a cautionary example: having eschewed this approach, it now lags far behind in the global AI race despite impressive human capital. The EU has enacted a large body of digital-focused legislation, including AI-specific rules, that collectively creates an extensive and complex web of regulation – one that holds European digital technology and AI companies back and discourages European activity and investment in the sector.
Conclusion
Far from being a winner-take-all space, AI today is a story of competition at every layer. From semiconductor fabrication plants racing to produce smarter chips, to research labs big and small vying to train the most capable models, to an app ecosystem teeming with new AI-driven services, the entire AI stack is marked by rivalry, innovation, and choice. This competitive dynamism is not just theory – it is evident in tangible outcomes: the cost of AI computing has plummeted, hundreds of new AI companies are entering the market each year, and consumers have more AI-enabled products at their fingertips than ever before. Innovation that one firm introduces today becomes a baseline for others tomorrow, in a virtuous cycle that is propelling AI capabilities forward while driving prices down.
The benefits to consumer welfare and economic growth are substantial – one analysis estimates that AI advancements will contribute $15.7 trillion to the global economy by 2030 through new products and productivity gains. These gains will materialize only if competition remains the engine of progress. Fortunately, all signs indicate that competition in AI is robust: as the Springboard industry analysis put it, “competition up and down the stack is strong and only getting fiercer.” Policymakers should take confidence from this fact. The role of policy is to ensure this intensity of competition endures – by supporting open innovation and guarding against true anticompetitive conduct – rather than to impose premature constraints.
The health of the AI sector today can be credited to vigorous competition at every level of the stack. So long as this continues, we can expect AI to keep delivering breakthroughs, empowering new entrants, and enhancing consumer welfare. In the world of AI, competition is not a problem to be solved – it is the solution driving us into the future.