NVIDIA earnings call: CFO calculates "return on investment" on the spot, saying every dollar invested in GPUs can earn $5 over the next four years!

Wallstreetcn
2024.05.23 00:22

For every $1 spent on GPUs, cloud providers have the opportunity to generate $5 in hosting revenue over 4 years. For every $1 spent on an HGX H200 server, API providers hosting the Llama 3 service can generate $7 in revenue over 4 years.

Overnight, "the most important stock on Earth" and the third-largest stock by weight in the S&P 500, NVIDIA, released its results for the first quarter of fiscal year 2025. The report shows that total revenue and data center revenue both hit record highs for yet another quarter, growing 262% and 427% year on year respectively, far exceeding Wall Street's expectations.

In the subsequent conference call, Chief Financial Officer Colette Kress walked analysts through some quick math, emphasizing that in today's red-hot market, NVIDIA's chips pay for themselves quickly.

Kress attributed the strong growth of the data center business to surging demand from enterprises and internet companies. She stressed the importance of the cloud computing rental market, arguing that the rental business helps cloud providers recoup the cost of the NVIDIA chips they purchase. She expects that for every $1 spent on NVIDIA AI infrastructure, cloud providers have the opportunity to earn $5 in revenue over the next four years by offering GPU-as-a-Service (GaaS).
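
For intuition, here is a minimal back-of-the-envelope sketch of the kind of math that can produce a roughly 5-to-1 ratio. The GPU purchase price, hourly rental rate, and utilization below are illustrative assumptions, not figures from the call:

```python
# GPU-as-a-Service payback sketch. All inputs are assumed for illustration;
# none of these specific numbers were disclosed on the earnings call.
HOURS_PER_YEAR = 24 * 365

gpu_purchase_price = 25_000.0   # assumed cost per GPU to the cloud provider, USD
rental_rate_per_hour = 4.0      # assumed GPU rental price, USD per hour
utilization = 0.9               # assumed average utilization over the period
years = 4

rental_revenue = rental_rate_per_hour * HOURS_PER_YEAR * utilization * years
revenue_per_dollar = rental_revenue / gpu_purchase_price

print(f"Rental revenue over {years} years: ${rental_revenue:,.0f}")
print(f"Hosting revenue per $1 of GPU spend: ${revenue_per_dollar:.1f}")
# 4 USD/hr * 8,760 hr/yr * 0.9 * 4 yr ≈ $126,000, i.e. roughly 5x an assumed
# $25,000 GPU, which is the shape of the ratio Kress described.
```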

Kress reiterated that NVIDIA provides "the fastest model training speed, the lowest training cost, and the lowest large language model inference cost" for cloud customers. She revealed that the company's current customers include well-known artificial intelligence companies such as OpenAI, Anthropic, DeepMind, Elon Musk's xAI, Cohere, Meta, and Mistral.

Notably, Kress also highlighted NVIDIA's close collaboration with Tesla, which has built up its AI training cluster to 35,000 H100 GPUs. Kress expects the automotive sector to become the largest enterprise vertical within NVIDIA's data center business this year, bringing a revenue opportunity worth billions of dollars.

She also mentioned that Meta's Llama 3 large language model was trained on 24,000 NVIDIA H100 GPUs, and that the GPT-4o model OpenAI showcased last week runs on NVIDIA H200 chips, whose inference performance is nearly double that of the H100.

Using Llama 3 as an example, Kress explained how AI companies can profit from providing API services. She stated:

For the Llama 3 model with 70 billion parameters, a single NVIDIA HGX H200 server can output 24,000 tokens per second, serving more than 2,400 users at the same time. This means that for every $1 spent on an NVIDIA HGX H200 server, API providers hosting Llama 3 can earn $7 in revenue from Llama 3 token billing over the next four years.
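
To see how the 7-to-1 figure can be reconstructed, here is a rough sketch of the token-serving economics. Only the 24,000 tokens-per-second throughput comes from Kress's remarks; the server price, per-token billing rate, and utilization are illustrative assumptions:

```python
# Token-serving revenue sketch for a single HGX H200 server running Llama 3 70B.
# Only tokens_per_second is from the call; the other inputs are assumptions.
SECONDS_PER_YEAR = 365 * 24 * 3600

tokens_per_second = 24_000        # throughput cited by Kress
server_price = 300_000.0          # assumed HGX H200 server price, USD
price_per_million_tokens = 0.70   # assumed API billing rate, USD per 1M tokens
utilization = 1.0                 # assumed fully loaded serving
years = 4

total_tokens = tokens_per_second * SECONDS_PER_YEAR * utilization * years
revenue = total_tokens / 1_000_000 * price_per_million_tokens
revenue_per_dollar = revenue / server_price

print(f"Tokens served over {years} years: {total_tokens:.2e}")
print(f"Token billing revenue: ${revenue:,.0f}")
print(f"Revenue per $1 of server spend: ${revenue_per_dollar:.1f}")
# 24,000 tok/s * ~3.15e7 s/yr * 4 yr ≈ 3.0e12 tokens; at $0.70 per million
# tokens that is ~$2.1M, about 7x an assumed $300,000 server.
```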

After the report was released, NVIDIA's stock surged 6% in after-hours trading, breaking through the $1,000 mark with a market value of $2.3 trillion, firmly in third place among S&P 500 components.