Data centers will be >50% of semis; CSPs/OpenAI, Nvidia, TSMC, Broadcom are best positioned
At the start of 2025, I believed AI was overhyped, ASICs were a niche, and a market pullback was inevitable. My long-term view has changed dramatically. AI technology and adoption are accelerating at an astonishing pace. One of the GenAI/LLM leaders, or Nvidia, will be the first $10 trillion market cap company by 2030.
Large language models (LLMs) are rapidly improving in both capability and cost efficiency. LLM services now have more than 500 million users per week, led by ChatGPT, and that number is growing fast. This exponential growth is fueling massive increases in data center usage and capital expenditures, primarily driven by the leading CSPs — Amazon, Microsoft, Google, Meta, and now OpenAI. Four of these are trillion-dollar market cap companies. They will pick the semiconductor winners.
At GTC 2025, Nvidia CEO Jensen Huang projected $1 trillion in global data center CapEx by 2028. At this pace, data center CapEx could reach ~$1.4 trillion by 2030. I’m looking for the big picture in this analysis; the numbers five years out are ballpark estimates meant to illustrate the trends.
In the most recent quarter, data center-related shipments by Nvidia, Broadcom, AMD, Intel, Marvell, SK hynix, Micron, and Samsung exceeded a $220 billion annual run rate, excluding power chips. (Thanks to Objective Analysis for the data on memory shipments into data centers.)
Wall Street expects Nvidia revenues to double from 2024 to 2027, to about $275 billion per year. About 90% of this is data center related, and much of it is networking, software, and systems, not just GPUs. Nvidia doesn’t sell bare semiconductors; it sells modules, boards, racks, and systems. If Nvidia grows by another 50% by 2030, its annual revenue will surpass $400 billion, primarily driven by data center demand. This is partly because GPUs have become like smartphones: they improve so fast in performance per watt that it is economical to replace them after just 3 to 5 years. Power is the limiter for data centers, so performance/watt is more important than performance/$.
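As a sanity check on these trajectories, here is a minimal sketch of the compounding involved. The growth rates are my own assumptions, chosen to match the ballpark figures above, not numbers from Nvidia or the CSPs:

```python
# Rough compounding sanity check (all figures are ballpark assumptions).

def project(value, annual_growth, years):
    """Compound `value` forward by `annual_growth` for `years` years."""
    return value * (1 + annual_growth) ** years

# $1T data center CapEx in 2028 growing ~18%/yr reaches ~$1.4T by 2030.
capex_2030 = project(1.0e12, 0.18, 2)   # ~1.39e12

# ~$275B Nvidia revenue in 2027 growing ~14%/yr is ~50% higher by 2030.
nvda_2030 = project(275e9, 0.14, 3)     # ~407e9

print(f"Data center CapEx 2030: ${capex_2030 / 1e12:.2f}T")
print(f"Nvidia revenue 2030:    ${nvda_2030 / 1e9:.0f}B")
```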
With LLMs scaling rapidly, semiconductor spend in data centers is expected to exceed $500 billion by 2030, representing more than 50% of the entire semiconductor industry.
Rough 2030 data center semiconductor spend:
GPU/AI accelerators: 60%
Networking: 15%
CPUs (x86 & ARM): 10%
Non-HBM DRAM/flash: 10%
Power, BMC, etc.: 5%
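Applying those rough shares to the low end of the projected 2030 spend gives ballpark dollar figures per category; a minimal sketch, assuming a $500B base:

```python
# Apply the rough 2030 share estimates to an assumed $500B spend base.
DC_SEMI_SPEND_2030 = 500e9  # assumption: low end of the projection above

shares = {
    "GPU/AI Accelerator": 0.60,
    "Networking":         0.15,
    "CPU (x86 & ARM)":    0.10,
    "Non-HBM DRAM/Flash": 0.10,
    "Power, BMC, etc.":   0.05,
}

for category, share in shares.items():
    print(f"{category:<20} ${share * DC_SEMI_SPEND_2030 / 1e9:>3.0f}B")

# GPU/AI accelerators alone come to ~$300B, consistent with the
# $300B-to-$400B accelerator market projection in the next section.
```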
The AI tsunami will bypass most semiconductor companies. Nearly all data center semiconductor revenue is concentrated in nine companies: Nvidia, TSMC, Broadcom, Samsung, AMD, Intel, Micron, SK hynix, and Marvell. A few smaller companies with market caps of $10B or less have significant data center exposure: Astera, Credo, MACOM, ASPEED, Alchip, GUC, and SiTime. The trillion-dollar giants (Amazon, Microsoft, Google, Meta, Nvidia, and Broadcom) could acquire smaller firms like AMD, Intel, Marvell, MediaTek, Astera, and others to secure their AI leadership.
1) GPU/AI Accelerator
Winner: Nvidia
Contenders: Broadcom, and potentially AMD
Recent quarterly AI accelerator revenues annualize to more than $150B, and this market is expected to at least double, to $300B to $400B, by 2030.
The top four CSPs now account for about half of Nvidia’s sales. As only the largest players can build high-value frontier LLMs, this sales concentration will likely continue.
Nvidia’s top customers already spend about $20B each annually, and they’re growing fast. While developing a custom accelerator may cost ~$500M/year, this is cost-effective if the result is cheaper accelerators optimized for internal workloads. Nvidia’s gross margins are ~75%; Broadcom’s are about 65%. If both built identical AI accelerators at a $25 cost, Nvidia’s would sell for $100 and Broadcom’s for roughly $71 to $75, so building with Broadcom saves 25% to 30%. GPUs are also very programmable, but as Sam Altman has noted, it is possible to sacrifice some of that flexibility (say, by focusing just on transformers) and still run the needed workloads. Alchip and Broadcom estimate custom AI accelerators cost about 40% less than GPUs.
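To make the margin arithmetic concrete, here is a small sketch. The $25 unit cost is illustrative, and the margins are the approximate figures above; note that at exactly 65% gross margin, Broadcom’s implied price is closer to $71 than $75:

```python
# Price implied by a gross margin: margin = (price - cost) / price,
# so price = cost / (1 - margin). All numbers are illustrative.

def price_from_margin(cost, gross_margin):
    return cost / (1 - gross_margin)

cost = 25.0
nvidia_price = price_from_margin(cost, 0.75)    # $100
broadcom_price = price_from_margin(cost, 0.65)  # ~$71

savings = 1 - broadcom_price / nvidia_price
print(f"Nvidia price:   ${nvidia_price:.0f}")
print(f"Broadcom price: ${broadcom_price:.0f}")
print(f"Savings:        {savings:.0%}")         # ~29%
```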
Despite this, the big CSPs will keep buying GPUs for flexibility and risk management. Most of their cloud customers’ AI models are optimized for Nvidia GPUs using PyTorch.
Nvidia has announced a very aggressive GPU roadmap for the next several years, aiming to outpace competition from custom AI accelerators and AMD.
AMD, meanwhile, has a huge opportunity if it can become the alternative GPU supplier to Nvidia. This is AMD’s ticket to the trillion-dollar club. CEO Lisa Su has shown the ability to come from behind and win. The LLM developers want AMD to succeed and will give it business wherever it can be competitive. But AMD faces real challenges.
Assuming CSPs and OpenAI purchase two-thirds of all AI accelerators in 2030 and split their needs roughly evenly between custom ASICs and GPUs, GPU spend could reach $200B to $270B, mostly from Nvidia, while custom AI accelerators account for $100B to $130B (Broadcom CEO Hock Tan has projected a $60B to $90B TAM by 2027). And there could be upside to these numbers.
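A quick sketch of that split under the stated assumptions (two-thirds of the accelerator market bought by CSPs/OpenAI, divided roughly evenly between custom ASICs and GPUs):

```python
# 2030 AI accelerator TAM split under the stated assumptions.
for total in (300e9, 400e9):                 # projected 2030 accelerator market
    csp_share = total * 2 / 3                # bought by CSPs/OpenAI
    custom_asic = csp_share / 2              # CSP spend split evenly
    gpu = csp_share / 2 + total / 3          # CSP GPUs + everyone else's GPUs
    print(f"TAM ${total / 1e9:.0f}B -> ASICs ${custom_asic / 1e9:.0f}B, "
          f"GPUs ${gpu / 1e9:.0f}B")

# TAM $300B -> ASICs $100B, GPUs $200B
# TAM $400B -> ASICs $133B, GPUs $267B
```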
Competing in AI accelerators requires a deep design team, capital, and the ability to pair accelerators with first-class networking. Broadcom has ~80% of the custom ASIC market today and will likely keep more than 50% because of its team, skills, capital, and its ability to package its ASICs with the best switch chips outside of Nvidia. The much smaller Marvell is #2 today; MediaTek, Alchip, and GUC are contenders.
It is the CSPs/OpenAI that are driving the LLMs and the ASIC architectures. They could decide to acquire an ASIC player to further improve their cost structure.
2) AI Scale-up Networking
Winner: Nvidia
Contenders: Broadcom, Astera
Broadcom CEO Hock Tan estimates networking is 5% to 10% of data center spend today, growing to 15% to 20% as the number of interconnected GPUs increases and interconnect spend outpaces compute.
Nvidia’s scale-up networking (e.g., NVLink72, moving to NVLink576) offers non-blocking, all-to-all connections within and across racks — a deep competitive moat.
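One way to see why this is a moat: the number of GPU pairs that must communicate at full bandwidth in an all-to-all domain grows quadratically with domain size. A back-of-envelope illustration (the 72 and 576 domain sizes come from the NVLink generations above; real systems achieve this through switching, not point-to-point links):

```python
# GPU pairs in an all-to-all scale-up domain: n * (n - 1) / 2.
from math import comb

for n in (8, 72, 576):
    print(f"{n:>4} GPUs -> {comb(n, 2):>7,} full-bandwidth pairs")

#    8 GPUs ->      28 full-bandwidth pairs
#   72 GPUs ->   2,556 full-bandwidth pairs
#  576 GPUs -> 165,600 full-bandwidth pairs
```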
Broadcom, a strong switch chip supplier, offers alternatives for non-Nvidia systems. Marvell (via the Innovium acquisition) and Astera Labs (now public, ~$10B market cap) are emerging players, and startups like Auradine are developing AI-optimized switches.
Photonics and lasers will become increasingly important as copper interconnects reach their limits. Key players in this space include Coherent and Lumentum.
There are many more players for scale-out networking.
3) CPU (x86 & ARM)
Winner: Nvidia (using ARM)
Contender: AMD
Nvidia’s ARM-based CPUs (e.g., Grace) ship alongside its GPUs. Though recent quarterly CPU shipment figures aren’t broken out, Nvidia is likely to surpass x86 shipments by 2030 because GPU volumes will dominate. AMD remains the strongest x86 supplier but needs to scale its GPU offerings to stay competitive.
4) Memory
Winner: HBM
HBM revenue (~$25B) comes entirely from data centers. It offers better performance (chiefly bandwidth) and tighter integration than DDR, especially for AI workloads. HBM is sold as stacked chiplets and is crucial for accelerator efficiency, and HBM margins are much higher than those of DDR and flash memory.
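A rough per-interface comparison shows why. The figures below are approximate public specs for HBM3E and DDR5-6400, used here as assumptions:

```python
# Approximate peak bandwidth: one HBM3E stack vs. one DDR5 channel.
hbm3e_pins, hbm3e_gbps_per_pin = 1024, 9.6   # ~9.6 Gb/s per pin, 1024-bit bus
ddr5_bits, ddr5_mts = 64, 6400               # 64-bit DDR5-6400 channel

hbm_stack_gbs = hbm3e_pins * hbm3e_gbps_per_pin / 8   # ~1,229 GB/s
ddr5_channel_gbs = ddr5_bits * ddr5_mts / 8 / 1000    # ~51 GB/s

print(f"HBM3E stack:  ~{hbm_stack_gbs:,.0f} GB/s")
print(f"DDR5 channel: ~{ddr5_channel_gbs:.0f} GB/s")
print(f"Ratio:        ~{hbm_stack_gbs / ddr5_channel_gbs:.0f}x per interface")
```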
Micron and SK hynix lead in HBM development; Samsung trails behind.
5) Other Chips
Winner: ASPEED
Power delivery is increasingly important as GPUs surpass 1 kW each. AI racks require 48V power distribution and complex PMICs. Margins are moderate, but the market is large.
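The move to 48V falls directly out of I = P/V: at a fixed power draw, quadrupling the bus voltage cuts the current by 4x and resistive distribution losses (which scale with current squared) by roughly 16x. A minimal sketch:

```python
# Current needed to deliver a fixed power at different bus voltages (I = P / V).
power_w = 1000.0  # one ~1 kW GPU

for volts in (12.0, 48.0):
    amps = power_w / volts
    print(f"{volts:>4.0f} V bus -> {amps:>5.1f} A per GPU")

#  12 V bus ->  83.3 A per GPU
#  48 V bus ->  20.8 A per GPU
# I^2 * R distribution losses drop ~16x moving from 12 V to 48 V.
```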
ASPEED (Taiwan, ~$4B market cap) dominates the BMC market, with chips on 80% to 90% of AI boards. It has deep connections with OEMs like Foxconn and a strong competitive moat.
6) AI Foundry
Winner: TSMC
TSMC manufactures virtually all high-value non-memory chips for data centers, thanks to its advanced nodes and 2.5D/3D packaging expertise. More than half of its revenue already comes from AI/HPC.
Samsung and Intel have the necessary technologies, but both face execution challenges. Still, AI demand may outstrip even TSMC’s capacity, forcing the use of Samsung and/or Intel.
Summary
LLMs are fueling explosive growth in AI data center infrastructure, set to become more than half of the global semiconductor market by 2030. This surge will benefit a narrow set of companies — those with the scale, talent, and capital to serve hyperscale CSPs. Nvidia, TSMC, Broadcom, and ASPEED are best positioned due to their strong value propositions and sustainable advantages.
AMD and Intel are “on the bubble”: they could join the trillion-dollar club if they can become viable competitors to Nvidia, or, in Intel’s case, to TSMC in AI foundry.
Semiconductor firms that fail to scale quickly may merge to create scale or be acquired by the trillion-dollar giants. Otherwise, they risk being left behind as AI chips are increasingly developed in-house.