According to reports, NVIDIA's GPU production will triple by 2024: H100 shipments are expected to reach 2 million units.

Wallstreetcn
2023.08.25 00:59

Triple the production, triple the revenue?

Facing a huge supply-demand gap in AI chips, NVIDIA has drawn up an ambitious plan.

According to the Financial Times, three sources close to NVIDIA revealed that the company plans to at least triple production of its flagship H100 compute GPU. In 2024, H100 shipments are expected to reach 1.5 to 2 million units, up sharply from this year's forecast of around 500,000 units. Demand for the H100 is so strong that it is already sold out until 2024.

In response to this, tech outlet Tom's Hardware commented that if NVIDIA achieves this goal and demand from AI and high-performance computing (HPC) applications remains strong for its A100, H100, and other compute GPUs, the company could capture an incredible amount of revenue.

Because NVIDIA's CUDA framework is tailored for AI and HPC workloads, hundreds of applications run only on NVIDIA GPUs. Amazon Web Services (AWS) and Google both have custom AI processors for training and inference workloads, but they still need to purchase large numbers of NVIDIA GPUs because their customers want to run their applications on them.

However, ramping up production of high-performance chips like the H100 is not easy. The H100 is based on the company's GH100 processor, and manufacturing the GH100 is quite challenging.

First, the GH100 is a huge piece of silicon with a die size of 814 mm², which makes high-volume production difficult. Although yields on this product are reportedly quite good, NVIDIA still needs to secure a large supply of wafers from TSMC's 4N process (a 5nm-class node) to triple output of GH100-based products. As a rough estimate, TSMC and NVIDIA can get at most about 65 GH100 dies out of each 300 mm wafer.
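That figure can be sanity-checked with the standard first-order die-per-wafer approximation. The sketch below is only a back-of-the-envelope estimate: the 814 mm² die size comes from the article, while the formula, its edge-loss term, and the implicit assumption of perfect yield are not NVIDIA or TSMC figures.

```python
import math

# Back-of-the-envelope die-per-wafer estimate (not an official NVIDIA/TSMC figure).
wafer_diameter_mm = 300.0   # standard 300 mm wafer
die_area_mm2 = 814.0        # GH100 die size cited in the article

wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
# First term: wafer area divided by die area.
# Second term: standard correction for partial dies lost at the wafer edge.
dies_per_wafer = (wafer_area / die_area_mm2
                  - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

print(round(dies_per_wafer))  # ~63 gross die sites, consistent with "at most 65"
```

This ignores scribe lines and defect-related yield loss, so the real number of sellable dies per wafer is somewhat lower.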

Manufacturing 2 million such GPUs would therefore require a bit over 30,000 wafers, which is entirely feasible. TSMC currently produces around 150,000 5nm-class wafers per month, with AMD, Apple, and NVIDIA among its main customers.
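Putting the two figures together shows why wafer supply itself is not the bottleneck. The arithmetic below simply restates the article's numbers; the 65-dies-per-wafer and 150,000-wafers-per-month values are estimates from the article, not official data.

```python
# Wafer math using the article's own estimates.
target_units = 2_000_000             # planned 2024 H100 shipments
dies_per_wafer = 65                  # rough upper bound per 300 mm wafer
tsmc_5nm_wafers_per_month = 150_000  # estimated 5nm-class capacity

wafers_needed = target_units / dies_per_wafer
share_of_annual_output = wafers_needed / (tsmc_5nm_wafers_per_month * 12)

print(f"wafers needed: {wafers_needed:,.0f}")                          # ~30,800
print(f"share of annual 5nm-class output: {share_of_annual_output:.1%}")  # ~1.7%
```

Even at the 2-million-unit target, the GH100 would consume only a small single-digit share of TSMC's annual 5nm-class output, which is why the article treats wafers as the easiest constraint to satisfy.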

Second, the GH100 relies on HBM2E or HBM3 memory and on TSMC's CoWoS advanced packaging, so NVIDIA also has to secure supply on that front. TSMC is currently working hard to meet demand for CoWoS advanced packaging.

Third, since H100-based devices use HBM2E, HBM3, or HBM3E memory, NVIDIA also needs to obtain sufficient HBM memory packages from companies such as Micron, Samsung, and SK Hynix.

Finally, H100 boards have to be installed into servers, so NVIDIA also needs to make sure its partners can at least triple their output of AI servers, which brings its own challenges.