AI Focus: Why does Google keep partnering with NVIDIA despite its strong TPU push, and what does OpenAI's $1 billion in revenue mean for AI startups?
Google may be leveraging NVIDIA's hardware to attract customers while promoting its own TPU chips. OpenAI is closing in on profitability, and other AI startups may not be far behind.
Google and OpenAI - the pioneers of the new wave of AI investment - have once again sparked heated discussions this week.
Google has launched a more powerful fifth-generation TPU chip, so why is it still offering rival NVIDIA's GPUs on Google Cloud?
OpenAI is expected to generate over $1 billion in revenue over the next year, moving closer to turning losses into profits. What does this mean for the many AI startups still in the red?
Google's partnership with NVIDIA: What's behind it?
At its annual Google Cloud Next event, Google unveiled its new AI chip - the fifth-generation custom Tensor Processing Unit (TPU) v5e, designed for training and inference of large models.
At the same time, Google made a high-profile announcement of its collaboration with NVIDIA: Google Cloud will offer NVIDIA's state-of-the-art AI chips alongside its own hardware. NVIDIA CEO Jensen Huang even attended the event in person, wearing his iconic leather jacket, joining Google Cloud CEO Thomas Kurian to field questions about how NVIDIA's chips benefit Google's customers.
NVIDIA is a hardware supplier to Google, but it is also a direct competitor in the AI chip space. That Google would introduce a powerful fifth-generation TPU and still offer NVIDIA's chips on its cloud platform is therefore somewhat surprising.
According to an article by The Information on Wednesday, this reflects a harsh reality that Google has to face - many AI engineers prefer using NVIDIA GPUs, and currently, NVIDIA's hardware is in high demand in the market.
Therefore, Google has to strike a balance.
The report states that Google needs NVIDIA's hardware to attract customers to its platform, but it also wants to promote the TPU chips it has invested heavily in over the past few years. Hence, Google may plan to allow developers to use both NVIDIA and TPU chips through its collaboration with NVIDIA.
On Tuesday, Google announced that its software for building large machine learning models will now support both NVIDIA's chips and its own.
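The report does not name the software, but Google's JAX framework is one concrete illustration of what such dual support looks like: the same model code compiles via XLA to whichever backend is present. A minimal sketch (the linear "model" here is purely illustrative, not from the article):

```python
# Illustrative only: JAX code is written against an abstract device model,
# so the identical program can target CPUs, NVIDIA GPUs, or Google TPUs.
import jax
import jax.numpy as jnp

@jax.jit  # compiled by XLA for whatever backend is available
def predict(w, x):
    # A tiny linear "model": one matrix-vector product.
    return jnp.dot(x, w)

x = jnp.ones((4, 3))      # a batch of 4 dummy inputs
w = jnp.arange(3.0)       # dummy weights [0., 1., 2.]
y = predict(w, x)         # each output is 0 + 1 + 2 = 3

# Reports "cpu", "gpu", or "tpu" depending on the hardware JAX finds.
print(jax.default_backend())
print(y)
```

The point for developers is that moving between NVIDIA GPUs and TPUs becomes a deployment decision rather than a rewrite, which is exactly the flexibility the report says Google is trying to offer.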
However, this alone is not enough to drive substantial adoption of TPUs. An AI developer told The Information that many companies want to avoid becoming too dependent on Google's TPUs, since doing so would make it harder for them to switch back to NVIDIA's chips.
The source also said that some companies do not yet know how to get performance from TPUs equivalent to NVIDIA's H100. This partly explains why Google has deployed these chips successfully in-house but has seen relatively low adoption among external customers. Price and availability are other key issues for Google in promoting TPUs: if Google can offer comparable service at a lower price than its competitors, it will have an advantage in attracting startup customers.
There are signs that Google may be making some progress.
According to The Information, Ori Goshen, co-founder and co-CEO of AI startup AI21 Labs, said that early TPUs were better at model training than running models, but the recent improved versions have performed well in both areas.
Currently, some companies like Anthropic have started using both TPUs and GPUs.
OpenAI is getting closer to profitability: can LLM makers actually turn a profit?
In this wave of artificial intelligence, NVIDIA, as the chip supplier, has captured most of the industry's profits. But how much money can the large language models (LLMs) running on NVIDIA's chips actually make?
OpenAI, the company behind ChatGPT, may be an indicator.
According to Wall Street News, based on the current revenue growth rate, OpenAI is expected to generate over $1 billion in revenue in the next year, surpassing the revenue forecast previously reported to shareholders.
$1 billion in annual revenue also suggests that turning losses into profits is only a matter of time. Last year, OpenAI's revenue was just $28 million, and with the costs of developing GPT-4 and ChatGPT, the company lost $540 million for the year.
Starting in February of this year, OpenAI began charging for ChatGPT, and it has gradually opened GPT-4 API access to enterprises.
Earlier this week, OpenAI also launched ChatGPT Enterprise for large companies, offering enterprise-grade security, privacy, and advanced data-analysis capabilities; a ChatGPT Business tier for smaller companies is planned. OpenAI has officially sounded the charge into the enterprise market.