From GPU to ASIC, Broadcom and Marvell emerge as winners | AI Dehydration
In the ASIC market, Broadcom is expected to generate AI revenue of over $11 billion this year, mainly from collaborations with Google and Meta; Marvell is expected to achieve AI revenue of $7 billion to $8 billion in 2028, mainly from collaborations with Amazon and Google
Author: Zhang Yifan
Editor: Shen Siqi
Source: Hard AI
As chip design and system complexity increase, tech giants will collaborate more closely with ASIC manufacturers.
Morgan Stanley predicts that the market for high-end custom ASIC chips will be between $20 billion and $30 billion, growing at a compound annual growth rate (CAGR) of 20%.
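To put that growth rate in perspective, here is a minimal sketch of the compounding arithmetic; the use of the low end of the range and the four-year horizon are illustrative choices, not figures from the report:

```python
# Illustrative compounding at the predicted 20% CAGR.
# The $20 billion starting point is the low end of Morgan Stanley's range;
# the four-year horizon is an arbitrary choice for this sketch.
start_size_bn = 20.0
cagr = 0.20

for year in range(1, 5):
    projected_bn = start_size_bn * (1 + cagr) ** year
    print(f"Year {year}: ~${projected_bn:.1f}B")
# Year 4: 20 * 1.2**4 ≈ $41.5B, i.e. roughly double the starting size.
```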
Broadcom and Marvell are leading players in the ASIC market and may emerge as winners.
Currently, the two companies together hold over 60% of the market.
Broadcom leads with a share of 55-60%, followed by Marvell with 13-15%.
Furthermore, with cloud providers and large OEM manufacturers entering the ASIC market, the supply chain may shift from NVIDIA's dominance towards diversification.
I. ASIC vs. Compute Cards
The competition between ASICs and general-purpose compute cards has been going on for some time, and it is intensifying as cloud providers and large OEM manufacturers enter the market.
Currently, NVIDIA is the main supplier of general-purpose compute cards, with nearly 70% of the AI compute market; Broadcom and Marvell are the main ASIC suppliers, together holding over 60% of the ASIC market.
In specific task scenarios, ASICs offer high performance, low power consumption, cost-effectiveness, confidentiality and security, and a smaller circuit-board footprint.
These advantages mainly come down to the difference between the two chip types:
• ASICs: integrated circuits designed for specific applications and optimized for particular tasks, often outperforming GPUs on those tasks in both performance and power efficiency, but lacking versatility.
• General-purpose compute cards: provide standardized, high computing performance without being tuned to any particular task; they suit a wide range of applications and are highly versatile.
In other words, ASICs trade versatility for high performance in specific scenarios, while general-purpose compute cards are versatile but may not match ASICs in those scenarios.
In practice, different compute-card customers have different demands.
Cloud providers may value elastic computing more, while enterprises may focus more on cluster computing. For such specific needs, ASICs have an advantage over standard compute cards because they can be tailored to the customer's own usage scenarios.
Currently, Google, Meta, Microsoft, Amazon, and other cloud and hyperscale companies are leading the shift toward ASICs.
Examples include Google's TPU, Meta's MTIA, Microsoft's Maia, and Amazon's Trainium2.
It is worth noting that ASICs can still cost more to own than general-purpose compute cards: according to Morgan Stanley's calculations, the TCO (total cost of ownership) of the GB200 is 44% lower than that of TPUv5 and 30% lower than that of Trainium 2.
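As a minimal sketch of what those percentages imply in relative terms (normalizing GB200's TCO to 1.0 is a convention chosen here for illustration, not Morgan Stanley's methodology):

```python
# Illustrative only: converting the quoted TCO gaps into relative cost multiples.
# The 44% and 30% figures are from the Morgan Stanley estimate cited above;
# normalizing GB200 to 1.0 is an assumption made for this sketch.
gb200_tco = 1.0                          # baseline (normalized)
tpuv5_tco = gb200_tco / (1 - 0.44)       # GB200's TCO is 44% lower than TPUv5's
trainium2_tco = gb200_tco / (1 - 0.30)   # GB200's TCO is 30% lower than Trainium 2's

print(f"TPUv5 TCO is about {tpuv5_tco:.2f}x that of GB200")        # ~1.79x
print(f"Trainium 2 TCO is about {trainium2_tco:.2f}x that of GB200")  # ~1.43x
```

On Morgan Stanley's numbers, then, the general-purpose system still delivers the lower total cost of ownership despite the ASICs' per-task efficiency.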
II. Broadcom and Marvell
As chip design and system complexity increase, large cloud computing and device OEM manufacturers will collaborate more closely with ASIC design partners.
Broadcom and Marvell, as leaders in the ASIC market, may emerge as winners.
1) Collaboration and Development of Broadcom
Broadcom has been the main design and supply partner for Google's self-developed TPU AI chips, a partnership that has been running for about 10 years.
To date, the two parties have collaborated on the design of six generations of TPU, and are currently advancing the mass production of the sixth generation TPU (3nm process).
There have been market rumors that Google would drop its collaboration with Broadcom and develop the chips in-house to cut costs.
However, at a recent analyst conference, Broadcom revealed that it has secured a contract to provide Google with multiple further generations of TPU. Morgan Stanley believes this contract includes the upcoming seventh-generation TPU (v7), which it expects to launch in 2026/2027.
Google's annual TPU payments to Broadcom were previously estimated at around $2 billion; they reached $3.5 billion in 2023 and are expected to reach $7 billion in 2024, driven mainly by the rapid expansion of AI demand.
Furthermore, Broadcom's collaboration with Meta on its AI infrastructure is also expected to generate significant revenue; Morgan Stanley predicts it could be worth billions of dollars over the next two years.
Broadcom's customer base is not limited to Google and Meta; it also spans many industries, including Apple, Cisco, Fujitsu, Ericsson, Nokia, HPE, NEC, ZTE, Ciena, Volkswagen, and Western Digital.
2) Prospects of Marvell
Marvell has years of ASIC collaboration experience with Amazon, Google, and Microsoft.
Currently, Marvell is ramping production of its first two AI ASIC projects, reportedly Amazon's 5nm Trainium chip and Google's 5nm Axion Arm CPU chip.
In addition, there are several larger projects in progress: 1) Amazon Inferentia ASIC, expected to launch in 2025; 2) Microsoft Maia, expected to launch in 2026.
Morgan Stanley predicts that Marvell will see strong growth in 2026.
It also forecasts that:
• Marvell's AI revenue will be $1.6 billion to $1.8 billion in 2024, rising to $2.8 billion to $3 billion in 2025;
• By 2028, Marvell will reach $7 billion to $8 billion in accelerated computing/AI ASIC revenue.
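Taking the midpoints of those ranges (an illustrative simplification, not Morgan Stanley's stated method), the 2025-to-2028 forecast implies roughly 37% annual growth:

```python
# Back-of-the-envelope implied CAGR from the 2025 and 2028 forecasts above,
# using range midpoints as an illustrative simplification.
revenue_2025_bn = (2.8 + 3.0) / 2    # midpoint of $2.8B-$3.0B
revenue_2028_bn = (7.0 + 8.0) / 2    # midpoint of $7B-$8B
years = 2028 - 2025

implied_cagr = (revenue_2028_bn / revenue_2025_bn) ** (1 / years) - 1
print(f"Implied 2025-2028 CAGR: about {implied_cagr:.0%}")   # ~37%
```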
In addition, Morgan Stanley notes that the surge in custom chips (ASICs) is good news for companies providing EDA software (the tools needed for chip design) and IP (pre-designed blocks that can be integrated into chips), such as Synopsys (SNPS), Cadence (CDNS), and Arm.
III. Diversification of the Supply Chain
With cloud providers and large OEM manufacturers entering the ASIC market, the supply chain may shift from NVIDIA's dominance to diversification.
Currently, nearly 70% of the market's AI computing runs on NVIDIA hardware, and attention on the AI supply chain has so far centered on NVIDIA's supply chain.
However, as cloud providers gradually adopt ASIC chips, the supply chain may show a trend towards diversification.
In the ASIC supply chain, the choice of suppliers depends mainly on the chips' developers (cloud providers and OEM manufacturers), rather than on NVIDIA.
Cloud providers have the capability to develop ASIC chips independently; OEM manufacturers with more limited R&D capabilities, by contrast, may use NVIDIA products or licensed IP and build their own designs on top of NVIDIA's compute cards.
Either way, the effect is to diversify the supply chain.
From another perspective, for customers that lack the ability to develop their own chips, such as sovereign states and small and medium-sized enterprises, NVIDIA still holds the advantage.