NVIDIA's data center GPU shipments for last year revealed: a 98% market share

Wallstreetcn
2024.06.12 03:26

NVIDIA's data center GPU shipments surged to 3.76 million units in 2023, a 98% market share, with revenue reaching $36.2 billion. The shortage and cost of Nvidia GPUs have benefited AMD and Intel, whose AI chips showed signs of recovery in 2023, with AMD's MI300 series performing particularly well. Cloud service providers such as Google and Amazon are also developing their own chips.

According to a study by semiconductor analysis firm TechInsights, Nvidia's data center GPU shipments saw explosive growth in 2023, totaling approximately 3.76 million units, an increase of more than 1 million over the 2.64 million units it shipped in 2022.

In 2023, Nvidia held a 98% share of data center GPU shipments, roughly the same as in 2022.

TechInsights data indicates that, including AMD and Intel, total data center GPU shipments reached 3.85 million units in 2023, up from approximately 2.67 million units in 2022.

Nvidia also captured 98% of data center GPU revenue, which reached $36.2 billion, more than three times the $10.9 billion of 2022.

TechInsights analyst James Sanders stated that Nvidia GPU alternatives for AI are emerging in the form of Google TPU, AMD GPUs, Intel AI chips, and even CPUs.

Sanders mentioned that AI hardware is still not keeping pace with the rapid advancements in AI software.

Sanders said, "I suspect that due to the development of AI, diversification from Nvidia is inevitable."

The shortage and cost of Nvidia GPUs have helped AMD and Intel, with both companies showing signs of recovery in 2023 with their AI chips.

TechInsights data shows that AMD shipped around 500,000 data center GPUs in 2023, while Intel made up the rest with roughly 400,000 units.

AMD's data center GPU shipments are expected to rise this year.

The MI300 series GPUs from AMD have performed well and have secured procurement orders from Microsoft, Meta, and Oracle. During the April earnings call, AMD CEO Lisa Su stated that MI300 sales reached $1 billion in less than two quarters.

According to the earnings call transcript on The Motley Fool, Su said, "We now expect data center GPU revenue to exceed $4 billion in 2024, higher than our January estimate of $3.5 billion."

At this month's Taipei International Computer Show (Computex), AMD also announced plans to release new GPUs annually: the MI325X this year, the MI350 in 2025, and the MI400 in 2026. AMD is following Nvidia's playbook of releasing a new GPU every year; Nvidia has announced the Blackwell GPU for this year, incremental upgrades for 2025, and a new Rubin series of GPUs for 2026 and 2027.

The future of Intel's GPUs remains uncertain. The company recently halted production of its Ponte Vecchio GPU and is redesigning its Falcon Shores GPU, now slated for release in 2025. Intel also offers the Flex series for inference and media services in data centers.

Intel is currently focusing on its Gaudi AI chip, which is less flexible than a GPU: generative AI models need special programming to run on Gaudi, which takes significant effort, whereas Nvidia's GPUs are better suited to running a wide variety of models.

According to records from The Motley Fool, Intel CEO Pat Gelsinger stated during the April earnings call, "Falcon Shores will combine the outstanding scaling performance of Gaudi 3 with a fully programmable architecture... Intel will actively launch Falcon Shores products thereafter."

Gaudi 3 has helped Intel establish a foothold in the AI chip market, with Intel expecting "accelerator revenue exceeding $500 million in the second half of 2024," as stated by Gelsinger.

Sanders of TechInsights noted, "Considering the capacity and pricing issues of Nvidia GPUs, there is plenty of activity outside of GPUs, especially Google's TPU."

Sanders added, "Google's custom silicon work generates more revenue than the custom silicon efforts of AWS or of merchant silicon suppliers such as AMD and Ampere."

Google has equipped its Google Cloud data centers with internally developed chips, including the recently released Axion CPU and its sixth-generation TPU, an AI chip named Trillium. The TechInsights study does not cover these newer chips.

Sanders said, "Due to some strange market forces at play, Google has ultimately become the third-largest data center silicon provider in terms of revenue."

Google introduced the TPU in 2015 and has gradually gained share since. TPUs are used primarily by Google's internal applications and by Google Cloud customers.

"Argos is a video encoder they developed for YouTube, given how much video YouTube processes every hour. Each Argos video-encoder ASIC they deploy replaces 10 Xeon CPUs. From a power-consumption perspective, that is a significant change," Sanders explained.

Amazon has its own Graviton CPUs and AI chips named Trainium and Inferentia, and it tries to keep the price of AI chips as low as possible for customers.

According to TechInsights' research report, in 2023 AWS rented out the equivalent of 2.3 million processors to customers, with its self-developed Graviton chips accounting for 17% of them, surpassing the usage of AMD chips on the platform.

"Even with high sales volume, their total revenue won't be very high. They aim to maintain a relatively stable 10% to 20% discount compared to instances powered by Intel or AMD," Sanders said.

All major cloud providers and hyperscalers are developing their own chips to replace those produced by Intel and AMD.

Nvidia's absolute dominance forces cloud providers to set aside dedicated, Nvidia-controlled space in their data centers, where Nvidia deploys its DGX servers and CUDA software stack.

Sanders stated, "Cloud platforms will not completely eliminate Intel, AMD, or Nvidia, as there will always be demand from customers for chips from these companies in these cloud platforms."

Microsoft has also introduced its own chips, the Cobalt CPU and the Maia AI accelerator, while Google began developing chips to accelerate internal workloads back in 2013, more than a decade ago.

Whether cloud computing companies can ramp their in-house chips to volume depends on software infrastructure. Google's large language models are built to run on its TPUs, which lets it scale chip deployment quickly.

Microsoft's AI infrastructure relies on Nvidia GPUs, and the company is now adapting its software stack to its self-developed chips. AWS mainly rents its chips to companies that deploy their own software stacks.

Source: semiconductor industry news. Original title: "NVIDIA's GPU shipment volume last year exposed, market share 98%."