Wall Street's enthusiasm for NVIDIA: dominance on display, strong growth through 2025, bullish to $1,000!
Wall Street generally believes that NVIDIA's Blackwell-based GPU products "advance training performance and deliver a leap in inference performance", driving data-center revenue growth. At the same time, NVIDIA's full-stack layout across hardware, software, and systems further demonstrates its "dominant position" in the industry.
On March 19, at the GTC developer conference, NVIDIA founder and CEO Jensen Huang officially unveiled the "AI nuclear bomb": the Blackwell architecture, the B200, and the more powerful GB200 series, which integrates two B200 GPUs. It was a show of strength telling challengers that "no one can compete".
Wall Street News reviewed Wall Street investment banks' assessments from the first day of the GTC conference. The reports were overwhelmingly positive, with a consensus that Blackwell-based GPU products can drive NVIDIA's revenue growth through 2025, and that its comprehensive ecosystem and partnerships across many fields will form the moat of its business empire. Despite short-term stock-price fluctuations, the long-term outlook remains positive.
Morgan Stanley analyst Joseph Moore commented that NVIDIA's new lineup of hardware, software, and systems stunned the market, once again proving its leading position in AI computing chips and likely putting greater pressure on competitors such as Intel.
Bank of America analyst Vivek Arya stated that the Blackwell-based GPU products "advance training performance and deliver a leap in inference performance". NVIDIA can not only consolidate its lead in large-language-model training but also take an important step forward in AI inference. He maintained a price target of $1,100 and a "buy" rating.
Goldman Sachs analyst Toshiya Hari stated bluntly after the first day of the GTC conference that NVIDIA's strong innovation capability and extensive customer relationships will drive continued growth, sustain its competitive advantage, and keep pressure on rivals; he raised his price target from $875 to $1,000.
Blackwell will drive rapid revenue growth for NVIDIA through 2025
NVIDIA has launched GPU chips based on the Blackwell architecture, delivering 4x the training performance, 30x the inference performance, and roughly 25x the energy efficiency of the previous-generation Hopper GPUs.
Rosenblatt Securities analyst Hans Mosesmann believes Blackwell may be the "most ambitious project in Silicon Valley history", and that Blackwell-based products are likely to be in short supply well into 2025. CJ Muse pointed out that the newly released Blackwell chip architecture will become an important growth engine for the company. Truist Securities analyst William Stein likewise believes Blackwell's strong performance will stimulate demand, supporting the company's rapid growth at least through 2025.
JPMorgan analyst Harlan Sur stated bluntly that the Blackwell architecture not only solidifies NVIDIA's leading position in AI but also tells competitors that NVIDIA is "still one or two steps ahead of them":
Training a GPT-class model with 1.8 trillion parameters on Hopper-architecture chips would require roughly 8,000 GPUs, consume 15 megawatts of power, and take about 90 days. With Blackwell, only 2,000 GPUs and 4 megawatts are needed, dramatically reducing energy consumption.
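Taking the keynote figures at face value, the claimed savings can be sanity-checked with simple arithmetic. This is a rough sketch only: it assumes the quoted power draw is constant over the full run and ignores cooling, networking, and utilization.

```python
# Keynote figures, taken at face value: a ~90-day training run for a
# GPT-class model, comparing Hopper-based and Blackwell-based clusters.
hopper = {"gpus": 8000, "power_mw": 15}
blackwell = {"gpus": 2000, "power_mw": 4}
days = 90

for name, cfg in (("Hopper", hopper), ("Blackwell", blackwell)):
    energy_mwh = cfg["power_mw"] * days * 24  # megawatts * hours of runtime
    print(f"{name}: {cfg['gpus']} GPUs, {energy_mwh:,} MWh")

print(f"GPU count reduced {hopper['gpus'] / blackwell['gpus']:.0f}x, "
      f"energy reduced {hopper['power_mw'] / blackwell['power_mw']:.2f}x")
```

On these numbers the run shrinks from about 32,400 MWh to 8,640 MWh: a 4x cut in GPU count and a 3.75x cut in energy for the same job.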
NVIDIA has launched a series of system-level solutions based on the Blackwell architecture, covering various levels from chips to complete machines to clusters, providing customers with a complete AI infrastructure.
Low-power AI chips drive data center growth
The Hopper-based H100 chip has been a key driver of NVIDIA's data-center revenue. But Jensen Huang believes accelerated computing is the future, and the Blackwell-based B200 and GB200 series chips open that era.
Wall Street investment banks generally believe that the B200 and GB200 series' significant gains in performance and energy efficiency mean data centers and AI applications will be able to run more complex and demanding workloads at lower cost, accelerating the development and adoption of AI technology.
A Bank of America analyst pointed out that, compared with training, inference workloads have stricter real-time and latency requirements and demand higher energy efficiency (performance per watt). The B200 and GB200 series chips will therefore be a major breakthrough in inference for NVIDIA, winning it more market share:
Nearly half of NVIDIA's GPUs in the cloud are already used for inference. Thanks to its high-speed interconnect, the DGX GB200 NVL72 can be treated as one super GPU, with training throughput of up to 720 petaFLOPS in FP8 and inference throughput of 1.44 exaFLOPS in FP4, expanding the reach of NVIDIA GPUs in the cloud inference market.
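Dividing those system-level figures by the 72 GPUs in the rack gives the implied per-GPU share; this back-of-envelope check assumes throughput scales linearly across the rack, which the quoted numbers imply but the article does not state.

```python
# System figures quoted for the GB200 NVL72 rack (72 GPUs on one interconnect).
gpus = 72
fp8_training_pflops = 720    # FP8 training throughput, whole system
fp4_inference_eflops = 1.44  # FP4 inference throughput, whole system

# Implied per-GPU share of the rack's throughput
per_gpu_fp8 = fp8_training_pflops / gpus          # petaFLOPS per GPU
per_gpu_fp4 = fp4_inference_eflops * 1000 / gpus  # exaFLOPS -> petaFLOPS per GPU
print(per_gpu_fp8, per_gpu_fp4)  # 10.0 20.0
```

That works out to 10 petaFLOPS of FP8 and 20 petaFLOPS of FP4 per GPU, so the rack-level figures are internally consistent rather than relying on any super-linear effect.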
Huaxi pointed out in its report that companies expected to adopt the B200 and GB200 series chips include Amazon, Google, Meta, Microsoft, OpenAI, Tesla, and xAI. Blackwell-based products will begin shipping to partners later this year, further driving data-center revenue.
Compared with the H100, the B200 delivers 5x the AI inference performance and 4x the AI training performance, while cutting TCO (Total Cost of Ownership) by as much as 25x. The B200 also supports larger GPU clusters, helping improve resource utilization in data centers.
NVIDIA's Moat in the AI Software Field
If chip technology is NVIDIA's core technical strength, then its integrated software and services form the moat of its business empire.
According to the presentation, NVIDIA NIM (NVIDIA Inference Microservices) provides a direct path from application software at the top all the way down to CUDA, the hardware programming layer at the bottom. Through these services, NVIDIA hopes to get customers who buy NVIDIA servers to subscribe to NVIDIA AI Enterprise 5.0, at $4,500 per GPU per year.
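Because the subscription is priced per GPU, the software revenue from a deployment scales directly with its GPU count. A trivial illustration, using the stated $4,500 figure and hypothetical deployment sizes of my own choosing:

```python
# Stated subscription price for NVIDIA AI Enterprise: $4,500 per GPU per year.
PRICE_PER_GPU_PER_YEAR = 4_500  # USD

# Hypothetical deployment sizes, for illustration only
for gpu_count in (8, 1_000, 100_000):
    annual_cost = gpu_count * PRICE_PER_GPU_PER_YEAR
    print(f"{gpu_count:>7,} GPUs -> ${annual_cost:,} per year")
```

A single 8-GPU server yields $36,000 a year, while a 100,000-GPU hyperscale fleet would yield $450 million, which is the arithmetic behind analysts' expectation of a multi-billion-dollar software business.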
JP Morgan believes that NVIDIA's software and ecosystem also help accelerate the application of AI, further building a "moat". The software business is expected to bring in billions of dollars in revenue in the future, becoming a high-profit business:
In our view, with leading GPU/DPU/CPU, hardware/software platforms, and a strong ecosystem, NVIDIA is expected to continue benefiting from major long-term trends in artificial intelligence, high-performance computing, gaming, and autonomous driving.
A strong software ecosystem not only drives the application and popularization of artificial intelligence but also builds formidable competitive barriers for NVIDIA. On one hand, NVIDIA is establishing strong partnerships with EDA (Electronic Design Automation) and system analysis vendors to promote its products in various end markets.
On the other hand, NVIDIA is taking a multi-pronged approach to drive AI applications, including supporting AI capabilities in enterprise software and data platforms; providing pre-trained models optimized for NVIDIA hardware; and promoting the Omniverse Cloud platform.
Goldman Sachs believes the software lineup reflects NVIDIA's full-stack AI strategy: it enriches the application ecosystem around its hardware, gives customers a more complete and user-friendly AI development platform, accelerates AI application innovation across industries, and further solidifies NVIDIA's industry position.
Citigroup believes that NVIDIA is accelerating the practical application of artificial intelligence in key areas such as healthcare, industry, and chip design, driving the intelligent transformation of traditional industries and opening up broad market space:
- Enterprise software and data platforms - The company is accelerating and bringing generative AI capabilities to major enterprise software and data platforms.
- Healthcare - Over 20 NVIDIA medical microservices for drug discovery, medical technology, and digital health will be integrated by Amazon Web Services (AWS) and Microsoft Azure.
- Industrial Digitalization - NVIDIA is working with BYD, using Omniverse to build digital twins of cars and factories. Going forward, Omniverse will serve as the birthplace of robotic systems and a virtual training ground for AI.
- Electronic Design Automation (EDA) and Computer-Aided Engineering (CAE) - NVIDIA collaborates with industry leaders such as Ansys, Cadence, and Synopsys to provide GPU-acceleration solutions, achieving performance improvements of over 10x. NVIDIA is also bringing large language models and generative AI into this field with innovative products such as ChipNeMo.