
Jensen Huang boldly claims: By 2027, Blackwell and Rubin chips will generate at least $1 trillion in revenue

After Jensen Huang announced this forecast, NVIDIA's stock price hit a new daily high, rising nearly 5%. At the end of October last year, Huang had said that the data center business related to Blackwell and Rubin would generate a total of $500 billion in revenue over 2025 and 2026.
As the global AI computing power competition continues to heat up, NVIDIA CEO Jensen Huang has brought a more aggressive growth forecast to the market.
On March 16, Eastern Time, Huang stated at NVIDIA's annual developer conference GTC that NVIDIA expects its Blackwell AI accelerator architecture and the next-generation Rubin products to generate at least $1 trillion in revenue by the end of 2027. This figure far exceeds the $500 billion sales forecast Huang provided last October, further highlighting that the wave of investment in AI infrastructure is still rapidly expanding.
After Huang announced the $1 trillion forecast, NVIDIA's stock price quickly surged during Monday's midday trading session, reaching a new daily high with an intraday gain of about 4.8%, but it soon retraced more than half of those gains and closed up less than 2%. Prior to this, as of last Friday's close, NVIDIA's stock price had fallen more than 3% cumulatively since the start of 2026, as market concerns about the sustainability of the AI investment cycle weighed on the shares.

AI Computing Power Demand Continues to Explode
Over the past two years, with the rapid proliferation of generative AI, NVIDIA has become the most core hardware supplier in the wave of AI infrastructure.
From large tech companies to startups, various enterprises are aggressively purchasing AI servers and acceleration cards for training and running large models. Tech giants such as Microsoft, Amazon, Alphabet (Google's parent company), and Meta have all become major customers of NVIDIA's data center GPUs.
The market generally believes that in the coming years, global tech companies' capital expenditures on AI infrastructure could reach hundreds of billions of dollars.
NVIDIA is attempting to maintain its lead in AI accelerated computing by rapidly iterating its chip architecture.
At the GTC on October 28, 2025, Huang revealed that NVIDIA has "visibility" on achieving cumulative data center business revenue of $500 billion during the 2025 to 2026 period, which includes the Blackwell and next-generation Rubin architecture products.
On January 6 of this year, NVIDIA Chief Financial Officer Colette Kress stated at an event hosted by JP Morgan that due to strong demand, NVIDIA is more optimistic about its data center business, and by the end of 2026, the expected revenue from NVIDIA's data center chips will "definitely" exceed the $500 billion forecast given last October.
Blackwell Becomes the Current Core Growth Engine, Rubin Aims for the Next Round of Computing Power Upgrade
In the product roadmap, Blackwell is NVIDIA's most important AI computing platform at present.
This architecture replaces the previously widely deployed NVIDIA Hopper GPU architecture, specifically optimized for large model training and inference, supporting higher computing power, larger memory bandwidth, and greater energy efficiency.
NVIDIA has previously stated that the scale of large AI cluster deployments is rapidly expanding, from the early thousands of GPUs toward data centers with tens of thousands or even millions of GPUs. Industry insiders expect that Blackwell will become the core product in the AI server procurement cycle in the coming years.
After Blackwell, NVIDIA has already planned its next AI chip architecture, Rubin. This architecture is expected to launch around 2026, aiming to deliver an order-of-magnitude improvement in computing power, energy efficiency, and system-level AI performance.
By maintaining a "one generation of architecture per year" product rhythm, NVIDIA hopes to maintain its technological advantage over competitors. Currently, AMD, Intel, and several cloud vendors are accelerating the development of their own AI chips, attempting to challenge NVIDIA's dominance in the AI accelerator market.
Trillion-Dollar Target Highlights AI Infrastructure Cycle
Analysts believe that NVIDIA's trillion-dollar revenue target is not only a judgment of its own product demand but also reflects the rapid expansion of the entire AI infrastructure market.
If this target is achieved, it means that in the coming years, global technology companies' investments in AI servers, GPUs, and related systems will continue to maintain high-intensity growth.
As the scale of AI models continues to expand, the demand for inference surges, and enterprise-level AI applications accelerate their implementation, AI computing power is gradually becoming a long-term capital expenditure direction similar to cloud computing infrastructure.
For the capital market, this expectation reinforces a core judgment: the AI computing power cycle is still in its early stages, rather than nearing its end.
