
Riding the "tailwind" of Gemini 3, the "Google chain" challenges the "NVIDIA chain," disrupting the AI trade landscape

Google has begun marketing TPU deployment packages to large clients such as Meta, offering to install the chips in the clients' own data centers and attempting to expand its AI chips beyond Google Cloud's rental business into a broader market. Although TPUs are not as flexible as NVIDIA GPUs, they are cheaper to develop and consume less power at full load. NVIDIA has mounted a defensive counterattack: after Google agreed to supply Anthropic with up to 1 million TPUs, Jensen Huang promptly announced a multi-billion dollar investment in the company.
Google is leveraging its latest breakthroughs in AI models to launch a comprehensive challenge to NVIDIA's chip dominance. The search giant has begun pitching a plan to major clients like Meta to deploy TPU chips in their own data centers, attempting to expand this alternative AI chip from Google Cloud's rental business to a broader market.
According to recent media reports, Meta is negotiating with Google to deploy billions of dollars' worth of TPU chips in its data centers by 2027, while also planning to rent chips from Google Cloud next year. The potential deal could let Google capture up to 10% of NVIDIA's annual revenue, bringing in billions of dollars in new income, a goal some Google Cloud leaders have voiced internally.
The Gemini 3 large language model Google released this month has triggered a strong market reaction: it was trained primarily on TPU chips, and its performance approaches, or even surpasses, that of OpenAI's ChatGPT. The breakthrough has led investors to reassess the AI chip landscape. Google's stock surged 6.3% in a single day to a record high of $318.58, up 68% this year, while NVIDIA's stock has fallen nearly 10% this month, narrowing the market capitalization gap between the two to $526 billion, the smallest since April of last year.

NVIDIA CEO Jensen Huang has moved quickly to address the threat. According to insiders, after Google reached an agreement with Anthropic to provide up to 1 million TPUs, Huang immediately announced a multi-billion dollar investment in the company and secured its commitment to keep using NVIDIA GPUs. Similarly, after news broke that OpenAI planned to rent TPUs from Google Cloud, Huang reached a preliminary investment agreement with the company worth up to $100 billion.
Google's TPU Strategy Upgrade: From Cloud Rental to On-Premises Deployment
For years, Google has rented its self-developed Tensor Processing Unit (TPU) chips to cloud customers, who use the chips in Google Cloud data centers. But now, Google has begun marketing a new plan to clients, including Meta and large financial institutions, allowing them to use TPUs in their own data centers. According to insiders, Meta is currently negotiating with Google to rent Google Cloud chips next year and deploy TPUs in Meta's data centers by 2027. Meta currently relies primarily on NVIDIA's graphics processing units.
According to a person directly familiar with the matter, Google tells enterprises that customers want to deploy chips in their own data centers to meet stricter security and compliance standards, especially when handling sensitive data. Google also notes that TPUs may be particularly useful for high-frequency trading firms running AI models in their own facilities.
To support the new initiative, called "TPU@Premises," Google has developed software named "TPU command center" to make the chips easier to use. The software is intended to counter one of NVIDIA's biggest advantages: its CUDA software. CUDA has become the de facto standard for AI developers, who know it well as the way to run models on NVIDIA chips. Although Google's JAX framework is relatively unfamiliar to developers, Google has told customers they can operate TPUs through PyTorch-compatible software without needing to become JAX experts.
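For readers unfamiliar with what "operating TPUs through PyTorch-compatible software" means in practice: PyTorch's optional `torch_xla` package exposes TPUs as XLA devices, so ordinary PyTorch model code can target them with only a device change. The sketch below is a minimal, hedged illustration of that device-agnostic pattern; it is not Google's "TPU command center" software, and it falls back to CPU when no TPU bridge is installed.

```python
import torch

# Device-agnostic PyTorch: the same model code can run on a TPU when the
# optional torch_xla package (PyTorch's XLA bridge) is installed, and falls
# back to CPU otherwise. Illustrative sketch only, not Google's tooling.
try:
    import torch_xla.core.xla_model as xm
    device = xm.xla_device()      # a TPU core, when running on a TPU host
except ImportError:
    device = torch.device("cpu")  # fallback on machines without TPUs

model = torch.nn.Linear(8, 2).to(device)   # tiny model, moved to the device
x = torch.randn(4, 8, device=device)       # batch of 4 inputs on the device
y = model(x)                               # forward pass on whichever device
print(tuple(y.shape))
```

The point of the pattern is that the framework, not the user, decides how work is lowered to the accelerator, which is why developers need not become JAX experts to use TPUs.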
Google began expanding the TPU market more aggressively last summer, reaching out to small cloud service providers that primarily rent out NVIDIA chips and proposing that they host TPUs in their data centers. Google has reached an agreement with at least one of them, London-based Fluidstack, to host TPUs in its New York data center. As part of the deal, Google offered an unusually generous term: if Fluidstack cannot pay the upcoming rent on the New York data center, Google will backstop up to $3.2 billion of the obligation.
Gemini 3 Breakthrough Reshapes Market Confidence
Google's current momentum in AI may give its TPU push a boost. Gemini 3, the latest large language model released earlier this month, has drawn enthusiastic reviews from prominent tech figures, who believe Google has closed the gap with OpenAI. Reports describe Gemini 3 as faster and sharper, with deeper reasoning, than OpenAI's ChatGPT, Elon Musk's Grok, and the Jeff Bezos-backed Perplexity, at pricing comparable to or lower than that of competitors' AI models.
More importantly, Gemini 3 is primarily trained on Google's TPUs, rather than relying on NVIDIA chips like its competitors. Although TPUs are not as flexible as NVIDIA GPUs, they have lower development costs and consume less power when running at full capacity. Ben Reitzes, a technology strategist at Melius Research, stated: "Some investors are very concerned that, due to the significant improvements in the Gemini model and the ongoing advantages of custom TPU chips, Alphabet will win the AI war."
Some developers believe that with TPUs, Google has narrowed NVIDIA's lead in powering the dense server clusters needed to train new large AI models. According to insiders, Meta is discussing with Google the use of TPUs to train new AI models, not merely to run inference for existing Meta models. That is noteworthy because most analysts had assumed the biggest opening to challenge NVIDIA was in inference chips, while its grip on the training chip market would be hard to shake.
D.A. Davidson analyst Gil Luria estimates that if Google's DeepMind AI research lab and its TPU sales business were treated as a separate entity, it would be worth close to $1 trillion, making it "arguably one of Alphabet's most valuable businesses." The surge in Google's stock price reflects that expectation: its 68% gain this year far exceeds the "Magnificent Seven" index's 22% and the Nasdaq Composite's 18%. Shares of its TPU manufacturing partner Broadcom have also risen more than 63% this year.
NVIDIA Launches Defensive Counterattack
Regardless of whether Google's TPU push succeeds, the specter of a credible NVIDIA alternative may have already benefited major clients such as Anthropic and OpenAI that do not want to depend on a single AI chip supplier. Last month, after Google agreed to supply Anthropic with up to 1 million TPUs, Jensen Huang announced a deal to invest billions of dollars in Anthropic and secured the AI startup's commitment to keep using NVIDIA GPUs.
Similarly, after news broke that OpenAI planned to rent TPUs from Google Cloud, Huang reached a preliminary agreement with the ChatGPT maker to invest up to $100 billion to help it build its own data centers, and discussed leasing NVIDIA GPUs to the company. An NVIDIA spokesperson said the company's investments in AI startups do not require those companies to buy its GPUs. Huang could also preempt a Meta deal with Google over TPUs by striking his own agreement with Meta, already one of NVIDIA's largest customers.
Jensen Huang has acknowledged Google's progress in AI chips. In a podcast last fall, he told investor Brad Gerstner that given Google has produced seven generations of TPUs, "we must show respect where respect is due." NVIDIA has also made similar financial commitments to AI cloud partners like CoreWeave.
A spokesperson for Meta declined to comment on TPU negotiations. A Google spokesperson did not comment on TPU efforts but stated that the company "is experiencing accelerated demand for our custom TPUs and NVIDIA GPUs; we are committed to supporting both, as we have been for years."
Market Landscape Facing Reconstruction
According to Wind Trading Desk, Rich Privorotsky, a trader in Goldman Sachs' Global Banking & Markets division, wrote in a recent note that the breakthrough of Google's Gemini 3 is a "disruptive model" reshaping the entire AI investment ecosystem, delaying other companies' product cycles, raising capital expenditures, and making returns on investment more uncertain. He stressed that despite strong financial results, NVIDIA is no longer the core focus of AI investment.
Melius Research's Reitzes warned: "It is still too early to declare Alphabet's recent progress as making it a long-term AI winner. That said, semiconductor and hyperscale cloud companies (especially Oracle) need to recognize that the 'Alphabet issue' is a concerning factor. Oracle has purchased billions of dollars' worth of NVIDIA chips to rent out through its cloud, and if other companies build rival AI clouds, lower-cost TPUs could weaken its pricing power."

Even a moderate narrowing of NVIDIA's competitive advantage in AI could trigger a chain reaction of market volatility in the coming months. If lower-cost chips turn out to perform just as well, companies that have invested heavily in NVIDIA semiconductors may feel buyer's remorse. Valuations are already very high, from publicly traded hyperscale cloud companies to OpenAI, and questions remain about how the new technology will benefit the real economy. In fact, according to an internal memo reported by The Information last week, OpenAI CEO Sam Altman acknowledged that Google's AI advances could create "some temporary economic headwinds" for the company, saying, "I expect the atmosphere outside will be tough for a while."
Bernstein senior analyst Stacy Rasgon said in an interview with CNBC, "We are not at the point where we need to worry about who wins and who loses. More importantly, is the opportunity in front of AI sustainable? If it is sustainable, they will all be fine; if it is not sustainable, they will all run into trouble."
