1. FY24Q1 Financial Report Summary
1. Revenue: NVIDIA (NVDA.US) revenue was $7.192 billion, down 13% YoY, beating market expectations by 10.39%.
2. Profit: Non-GAAP net profit was $2.713 billion, down 21% YoY, beating market expectations by 19.53%; the non-GAAP net profit margin was 37.72%, down 3.82pct YoY, 3.64pct above market expectations.
3. FY2Q24 Guidance:
(1) Revenue: approximately $11 billion, plus or minus 2%.
(2) GAAP and non-GAAP gross margins: 68.6% and 70%, respectively, plus or minus 50 basis points.
(3) GAAP and non-GAAP operating expenses: $2.71 billion and $1.9 billion, respectively.
(4) GAAP and non-GAAP other income and expenses: approximately $90 million, excluding gains and losses from non-affiliated investments.
(5) GAAP and non-GAAP tax rate: 14%, plus or minus approximately 1%.
(6) Capital expenditure: $300-350 million.
2. Performance Summary
1. Incremental Information:
(1) Data Center Business: Launched four inference platforms (L4, L40, H100, Grace Hopper); Google Cloud became the first AIGC cloud service provider to offer L4 Tensor Core GPUs; AWS, Google Cloud, Azure, and Oracle Cloud Infrastructure began offering cloud products (including DGX Cloud) featuring H100 Tensor Core GPUs.
(2) Gaming Business: Launched the RTX 4060 and 4070 GPUs; 33 more games added DLSS support, bringing the total to 300.
(3) Professional Visualization Business: Announced the launch of Omniverse Cloud; Omniverse will be integrated with Microsoft 365; launched six new RTX GPUs.
(4) Automotive and Autonomous Driving Business: The automotive design-win pipeline is expected to grow to $14 billion over the next six years, up from $11 billion a year ago; BYD's new models will use Orin.
3. Earnings Call Transcript
(I) Management Remarks
[1] Performance
1. Data Center
Q1 revenue was $7.19 billion, up 19% QoQ and down 13% YoY. Strong QoQ growth came from record data center revenue and from the gaming and professional visualization platforms emerging from channel inventory corrections. Data center revenue of $4.28 billion set a record, up 18% QoQ and 14% YoY, benefiting from strong growth in accelerated computing platforms worldwide.
Generative AI is driving exponential growth in computing demand and a rapid transition to NVIDIA accelerated computing, the most versatile, most energy-efficient, and lowest-TCO approach to training and deploying AI. Generative AI is driving significant demand growth for the company's products, with customers spanning three major groups: cloud service providers (CSPs), consumer internet companies, and enterprises. First, the world's leading CSPs are deploying flagship Hopper and Ampere architecture GPUs to meet the surge in demand for training and inference of enterprise and consumer AI applications.
Multiple CSPs have announced H100-based cloud offerings on their platforms, including AWS, Google Cloud, Azure, and Oracle Cloud Infrastructure. Beyond enterprise adoption of AI, these CSPs are also meeting strong demand from generative AI pioneers for H100. Second, consumer internet companies are at the forefront of adopting generative AI and deep-learning-based recommendation systems, driving strong growth; for example, Meta's AI production and research teams have deployed the Grand Teton AI supercomputer powered by H100.
Third, enterprises have strong demand for AI and accelerated computing, with strong momentum in verticals such as automotive, financial services, healthcare, and telecommunications. AI and accelerated computing are rapidly becoming integral to customers' innovation roadmaps and competitive positioning. For example, Bloomberg announced BloombergGPT, a 50-billion-parameter model built to help with financial natural language processing tasks, including sentiment analysis, named entity recognition, news classification, and question answering. Auto insurance company CCC Intelligent Solutions is using AI for repair estimates. AT&T is working with the company on AI to improve fleet dispatch so that field technicians can better serve customers. Other enterprise customers using NVIDIA AI include Deloitte, for logistics and customer service, and Amgen, for drug discovery and protein engineering.
This quarter, the company began shipping DGX H100, its Hopper-generation AI system, which customers can deploy on premises; it also launched DGX Cloud in collaboration with Microsoft Azure, Google Cloud, and Oracle Cloud Infrastructure. Whether customers deploy DGX on premises or through DGX Cloud, they gain access to NVIDIA AI software, including NVIDIA Base Command and AI frameworks, as well as pre-trained models. This gives them the blueprint for building and operating AI, drawing on the company's expertise in systems, algorithms, data processing, and training methods. NVIDIA AI Foundations, a model foundry service, has been launched on DGX Cloud, allowing enterprises to build, refine, and operate custom large language models and generative AI models trained on their own proprietary data for unique domain-specific tasks. It includes NVIDIA NeMo for large language models, NVIDIA Picasso for images, video, and 3D, and NVIDIA BioNeMo for life sciences. Each comes with pre-trained models, frameworks for data processing and curating proprietary knowledge bases, systems for fine-tuning, alignment, and guardrailing, optimized inference engines, and support from NVIDIA experts who help enterprises fine-tune models for their use cases. ServiceNow, a leading enterprise services platform, is an early adopter of DGX Cloud and NeMo; it is developing custom large language models trained on data specific to the ServiceNow platform. The company will work with ServiceNow to create new enterprise-grade AI offerings for the enterprises worldwide running on the ServiceNow platform, spanning IT, customer service, and developer workflows.
Generative AI is also driving exponential growth in inference workloads. The latest MLPerf industry benchmarks released in April showed that NVIDIA's inference platform delivers performance several orders of magnitude ahead of the industry, with unmatched versatility across different workloads.
To help customers deploy large-scale generative AI applications, four major inference platforms were launched at GTC: L4, L40, H100, and Grace Hopper.
Google Cloud is the first CSP to adopt the L4 inference platform and has launched G2 virtual machines for generative AI inference and for 3D and augmented-reality experiences. In addition, Google is integrating the Triton Inference Server with Google Kubernetes Engine and its cloud-based Vertex AI platform.
CSP and enterprise customers have strong demand for generative AI and accelerated computing, which require high-performance networking such as the Mellanox network platforms; demand tied to general-purpose CPU infrastructure remains weak.
As the scale and complexity of generative AI applications increase, high-performance networking becomes crucial to delivering accelerated computing at data center scale to meet the enormous demands of training and inference. NVIDIA Quantum-2 InfiniBand is the gold standard for AI-dedicated infrastructure, available on major cloud and consumer internet platforms such as Microsoft Azure and seeing broad adoption across industries. Combining in-network computing technology with the industry's only end-to-end, data-center-scale optimized software stack typically increases the throughput of large infrastructure investments by 20%. For multi-tenant clouds transitioning to support generative AI, the high-speed Ethernet platform with BlueField DPUs and Spectrum Ethernet switching delivers the highest available Ethernet network performance.
BlueField-3 is in production and has been adopted by multiple hyperscale and CSP customers, including Microsoft Azure, Oracle Cloud, Baidu, and others.
The Grace data center CPU is sampling with customers; at the International Supercomputing Conference held in Germany this week, the University of Bristol announced a new supercomputer based on the NVIDIA Grace CPU Superchip that is six times more energy-efficient than its predecessor. This adds to Grace's momentum in both CPU-only and CPU-GPU configurations across AI, cloud, and supercomputing applications. The coming wave of BlueField-3, Grace, and Grace Hopper superchips will make a new generation of super-energy-efficient accelerated computing a reality.
2. Gaming
Revenue was $2.24 billion, up 22% QoQ but down 38% YoY. Sales of the 40-series GeForce RTX GPUs for laptops and desktops drove strong QoQ growth. Overall demand was solid and in line with seasonality, demonstrating resilience in a challenging environment.
GeForce RTX 40-series GPU laptops deliver major gains in industrial design, performance, and battery life for gamers and creators.
40-series laptops support NVIDIA Studio platform software technologies, including accelerated creative, data science, and AI workflows, as well as Omniverse, which gives content creators unmatched tools and capabilities. In desktops, the RTX 4070 was launched, joining the previously released RTX 4090, 4080, and 4070 Ti. The RTX 4070 is nearly three times faster than the RTX 2070.
Last week, the 60-series RTX 4060 and 4060 Ti were launched, bringing the latest architecture to core gamers worldwide starting at just $299, the first time the latest generation of gaming performance has been offered at mainstream price points. The 4060 Ti is available starting today, and the 4060 will follow in July.
At the Microsoft Build developer conference earlier this week, Microsoft showed how PCs and workstations equipped with NVIDIA RTX GPUs will support AI at their core. NVIDIA is collaborating with Microsoft on end-to-end software engineering, from the Windows operating system to NVIDIA graphics drivers and the NeMo framework, to help make Windows on NVIDIA RTX GPUs a supercharged platform for generative AI. Last quarter, a partnership with Microsoft was announced to bring Xbox PC games to GeForce NOW. The first game, Gears 5, is now available, and more games will be released in the coming months. There are now over 1,600 games on GeForce NOW.
3. Professional Visualization
Revenue was $295 million, up 31% QoQ but down 53% YoY. The QoQ increase was driven by strong workstation demand in key verticals such as the public sector, healthcare, and automotive, while channel inventory adjustments are now behind us.
Six new RTX GPUs for laptops and desktop workstations were launched, with plans to introduce new products in the coming quarters.
Generative AI is the primary new workload for the company's workstations. The partnership with Microsoft makes Windows an ideal platform for creators to use generative AI to enhance their creativity and productivity. At GTC, NVIDIA Omniverse Cloud was launched: a fully managed NVIDIA service running on Microsoft Azure that includes the full suite of Omniverse applications and NVIDIA OVX infrastructure. With this full-stack cloud environment, customers can design, develop, deploy, and manage industrial metaverse applications. Omniverse Cloud will be available starting in the second half of this year.
Microsoft 365 applications will also be connected to Omniverse. The automotive industry has been an early adopter of Omniverse, including BMW and Jaguar Land Rover.
4. Automotive and Autonomous Driving
Revenue was $296 million, up 1% QoQ and 114% YoY. Strong YoY growth was driven by the ramp of NVIDIA DRIVE Orin across many new energy vehicles (NEVs).
The automotive design-win pipeline is expected to grow to $14 billion over the next six years, up from $11 billion a year ago. However, growth has slowed because some NEV customers in China are adjusting production plans amid lower-than-expected demand, a trend expected to continue for the rest of this year.
This quarter, the partnership with BYD, a leading global NEV manufacturer, was expanded: a new design win extends BYD's use of DRIVE Orin to its next-generation, high-volume Dynasty and Ocean series vehicles, which will enter production in 2024.
[2] FY2Q24 Guidance
(1) Revenue: approximately $11 billion, plus or minus 2%.
(2) GAAP and non-GAAP gross margins: 68.6% and 70%, respectively, plus or minus 50 basis points.
(3) GAAP and non-GAAP operating expenses: $2.71 billion and $1.9 billion, respectively.
(4) GAAP and non-GAAP other income and expenses: approximately $90 million, excluding gains and losses from non-affiliated investments.
(5) GAAP and non-GAAP tax rate: 14%, plus or minus approximately 1%.
(6) Capital expenditures: $300-350 million.
(II) Q&A
Q: What is driving further data center growth from April to July, and what constrains visibility into the second half of the year?
A: [1] Drivers of further data center growth
Data center growth is expected to continue from Q1 into Q2. Generative AI and large language models are advancing rapidly, and demand for related products is very strong. The company has close partnerships with consumer internet companies, and CSPs and AI startups are showing strong interest in both the new and the previous architecture, with Hopper and Ampere both very popular. This is not surprising, as the company usually sells two architectures at the same time; deep-learning-based recommendation systems are also a key growth driver. Both the computing and networking businesses are expected to grow.
[2] Data center supply chain
The Q2 guidance fully accounts for supply, and supply-chain needs for this quarter are in hand. Further supply-chain measures are planned for the second half, and a large volume of materials and components has already been purchased for the second half to better secure supply against strong customer demand. Customers are very diverse: some are building platforms for large models, and some are CSPs and consumer internet companies.
The strong momentum in data centers is expected to last for at least several quarters, and the outlook is very good. Procurement is being accelerated to secure the much larger supply needed in the second half of the year.
Q: What is the progress in accelerating servers, and how is the company working with TSMC and other partners on lead times and the supply-demand outlook?
A: The company is in full production with Hopper- and Ampere-based products, which are well suited to AI workloads. A large body of technology and know-how is coming together to support the development of AI, which is why the company sees this as the next "iPhone moment."
The data center market is moving toward accelerated computing, a shift that has been underway for some time. The company's 15 years of technology accumulation can accelerate the major applications of an entire data center while reducing energy consumption and cost, and the emergence of AIGC has accelerated this process.
In the past, the global data center market of nearly $1 trillion was dominated by CPUs, but in the future AI will be the main workload of most data centers. Budgets will therefore tilt toward accelerated computing, shifting from traditional computing to GPUs and smart NICs.
In terms of orders, data center orders have grown continuously over the past 10 years, and that growth trend will continue. The company aims to seize the current AI inflection point and achieve further growth in the data center business.
Q: What changes have occurred in the company's relationships with suppliers, including TSMC, in the face of the new demand environment, and how can supply and demand be better matched?
A: Supercomputers are giant systems, and the company is producing them at full capacity. Beyond GPUs, a complete supercomputer system includes some 35,000 components, along with networking, fiber optics, NICs, and smart NICs. The company is increasing its procurement volumes.
Q: Can data centers maintain growth in Q3 and Q4?
A: The company will not give specific guidance for the second half, but second-half demand is visible, and the company is actively increasing procurement to meet that growth. Supply in H2 will be greater than in H1.
Q: Will the competitive landscape change? Will custom ASICs bring more challenges? Will there be more competitors in the next 2-3 years?
A: (1) There are countless well-funded startup competitors with innovative technologies; competition has always existed.
(2) The company's core value proposition is providing customers with the lowest-cost solution. This is a full-stack problem: not just chip architecture but optimization of the entire data center architecture. The company currently maintains 400 acceleration libraries, an extraordinary scale. AI further amplifies the importance of the full stack, spanning the network, distributed computing engines, and compute architecture; in effect, the data center is the computer. Obtaining the best performance requires a full-stack solution, and that is the value of accelerated computing.
(3) Utilization reflects how many applications can be accelerated. High utilization requires architectural versatility, which is why the company pursues a universal-GPU strategy.
(4) Experience, that is, deep data center expertise. NVIDIA has built five of its own data centers and has helped customers around the world build theirs; the company's architecture is integrated into every major cloud. In the past, the time from product delivery to actual deployment of a data center was measured in months, and for supercomputers even in years. The company is working to shorten that delivery cycle to a few weeks, a demonstration of its expertise.
In summary, high throughput, low cost, and expertise are the company's value proposition and competitive edge. Of course, challenges from the market are always significant.
Q: What is the positioning of, and driving force behind, the AI Enterprise suite and other software products?
A: Software is essential to accelerated computing platforms. These services are gradually being deployed on DGX Cloud and are essentially the operating system of AI. As AI and hardware architectures develop, software's usability and monetization potential will keep growing; inference is now the main driver of accelerated computing.
On the enterprise side, large language models and other vertical scenarios have created demand for software stacks. NVIDIA AI Foundations and AI Enterprise meet these new enterprise needs. AI Enterprise is the world's only GPU-accelerated enterprise AI stack, with more than 4,000 software packages covering the AI runtime engine end to end: data processing, model training, model optimization, and inference, delivered as services with security assured.
In the industrial sector, Omniverse is an important engine for software-defined and robotics applications and will also become a cloud service platform.
Therefore, AI Foundations, AI Enterprise, and Omniverse will be available on all clouds that partner on DGX Cloud.
Q: Opinion on the technical competition between InfiniBand and Ethernet
A: InfiniBand set a record this quarter and is expected to do so for the whole year.
InfiniBand and Ethernet each have their own demand scenarios. The former is designed for AI data centers: on a $500 million infrastructure investment, the throughput difference between the two can be 15%-20%, worth roughly $100 million, which effectively makes the InfiniBand network close to free.
Therefore, if the data center is a multi-tenant cloud data center, Ethernet is more suitable; for AI clouds, InfiniBand is more suitable given AIGC workloads, and this is a segmented, incremental market.
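A back-of-the-envelope sketch of the arithmetic implied above: the 15%-20% uplift and the $500 million build-out are from the call, while the fabric-cost figure is a purely illustrative assumption.

```python
# Back-of-the-envelope sketch of the InfiniBand-vs-Ethernet arithmetic above.
# The 20% uplift and $500M build-out come from the call; the fabric cost is
# a purely illustrative assumption.

infrastructure_cost = 500e6      # $500M AI data center build-out
throughput_uplift = 0.20         # InfiniBand quoted at ~15-20% higher throughput

# A cluster doing 20% more work delivers the output of ~20% more spend
# on the slower fabric, so the uplift is worth that much in dollars.
equivalent_value = infrastructure_cost * throughput_uplift
print(f"Extra throughput is worth ~${equivalent_value / 1e6:.0f}M")   # ~$100M

# If the InfiniBand fabric itself costs less than that delta,
# the network is effectively "close to free".
assumed_fabric_cost = 80e6       # hypothetical network cost
net_cost = assumed_fabric_cost - equivalent_value
print(f"Net cost of choosing InfiniBand: ~${net_cost / 1e6:.0f}M")    # negative => free
```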
Q: Plans for further share repurchases
A: The company currently has $7 billion of repurchase authorization remaining; it made no repurchases last quarter and will adjust according to circumstances.
Q: Is the market for inference several times the size of the training market?
A: First of all, training never ends: as long as there is new data, there will be new training. Training needs for building recommendation systems, large language models, and vector databases will continue.
Inference, meanwhile, is delivered as APIs, which may be built in-house or come from partners such as Adobe. APIs for AI inference have shown exponential growth in the past week.
Therefore, the world's roughly $1 trillion data center market, currently dominated by CPUs, will shift to AI and accelerated computing in the future, and that is where future capital expenditure will focus.
Q: What solutions will the company propose to meet customer demand for reducing the cost of large-model queries?
A: Generally, customers first build a large language model and then gradually split it into models of different sizes. These smaller models retain sufficient generalization while preserving personalization, and optimizing the large model also provides guidance for the smaller ones, as sketched below. Similarly, the company's lineup includes products of different sizes, such as L4, L40, and H100, to match the corresponding needs.
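Knowledge distillation is one common way to derive such smaller models from a large one; the call does not name a specific method, so the following is only a minimal PyTorch-style sketch in which the large "teacher" model's soft outputs guide a smaller "student" (all names and hyperparameters are illustrative).

```python
# Minimal knowledge-distillation sketch: a smaller "student" model learns from
# both ground-truth labels and the soft outputs of a large "teacher" model.
# Hypothetical example; hyperparameters are illustrative.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend hard-label cross-entropy with KL divergence toward the teacher."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)                          # rescale gradients for temperature T
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy usage with random tensors standing in for real model outputs:
student_logits = torch.randn(8, 32)      # batch of 8, "vocabulary" of 32
teacher_logits = torch.randn(8, 32)
labels = torch.randint(0, 32, (8,))
print(distillation_loss(student_logits, teacher_logits, labels))
```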
In addition, a model's inputs and outputs require substantial pre- and post-processing; the model itself accounts for only about 25% of the inference pipeline. Therefore, for multimodal inference workloads, the company has rolled out AI Enterprise products with partners across all clouds to provide that end-to-end functionality.
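That 25% figure explains why accelerating only the model yields limited end-to-end gains. A short sketch of the arithmetic, an application of Amdahl's law with the 25% share taken from the answer above:

```python
# Why pre/post-processing dominates end-to-end inference cost when the model
# itself is only ~25% of pipeline time (figure taken from the answer above).

MODEL_SHARE = 0.25                  # model forward pass: 25% of pipeline time
OTHER_SHARE = 1.0 - MODEL_SHARE     # tokenization, decoding, I/O, post-processing

def pipeline_speedup(model_speedup: float) -> float:
    """Amdahl's law: end-to-end speedup when only the model is accelerated."""
    return 1.0 / (OTHER_SHARE + MODEL_SHARE / model_speedup)

for s in (2, 4, 100):
    print(f"Model {s:>3}x faster -> pipeline only {pipeline_speedup(s):.2f}x faster")

# Even an infinitely fast model caps out at 1 / 0.75 ≈ 1.33x end to end,
# which is why accelerating the full multimodal pipeline matters.
```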
Q: What changes will be made to further unlock network bandwidth in the face of computational complexity, low-latency, and big-data requirements?
A: This is a critical link. Attention tends to focus on accelerator chips while ignoring software and networking, which misses the company's full-stack approach. For networking, the company has launched the DOCA network stack and the Magnum IO acceleration libraries. Although these two products have not drawn much investor attention, they are the tools that make it possible to connect tens of thousands of GPUs. The acquisition of Mellanox further strengthened the company's position in high-performance networking.
In addition, NVLink is an important product that greatly helps low-latency computing. The company scales out from NVLink, deploying InfiniBand, NICs, smart NICs, BlueField, and other products, and will provide further detail at the upcoming Computex conference.
In summary, the entire computing system is complex; one cannot focus only on accelerator chips. A full-stack software and networking solution is needed.
Q: How is DGX Cloud developing, and has more potential been tapped in actual operation?
A: The ideal mix is roughly 10% DGX Cloud and 90% CSP clouds.
On the one hand, building DGX Cloud lets the company work closely with CSP partners to deliver higher-performance services, develop new applications (such as working with Microsoft to bring in Omniverse Cloud), and create a larger market, to mutual benefit.
On the other hand, this model gives customers a standard stack that works across clouds, making their software easier to manage; that means greater flexibility and a win for all parties.