Alibaba Cloud sounds the "AI Infrastructure" charge

Wallstreetcn
2024.09.19 09:29

While large models are being rolled out at full speed, AI's penetration into the physical world is still in its infancy, and the entire computing architecture is undergoing a fundamental shift. As the leading AI cloud provider, Alibaba Cloud is going all in to become the infrastructure winner of this AI feast.

At the Yunqi Conference on September 19, Alibaba Cloud CTO Zhou Jingren said in his opening remarks that Alibaba Cloud is setting new standards for AI infrastructure in the AI era. He then played three "trump cards": upgraded large models, an upgraded hardware platform, and further price cuts.

Over the past two years, model sizes have grown by a factor of thousands, while the compute cost of running them keeps falling, making it ever cheaper for enterprises to use them. Zhou Jingren emphasized, "This is the technology dividend brought by comprehensive innovation in AI infrastructure. We will continue to invest in building advanced AI infrastructure to accelerate the adoption of large models across industries."

The accelerating penetration of AI opens up enormous room for imagination. Alibaba Cloud Chairman Eddie Wu believes generative AI will raise productivity across the entire world: "The value it creates may be tens or even hundreds of times the value created by mobile internet connectivity."

However, he pointed out that the next generation of models needs to be larger in scale, more general-purpose, and equipped with a broader knowledge system. The investment threshold for competing with the world's most advanced models will reach tens of billions or even hundreds of billions of US dollars, a level of spending that seems to make generative AI the exclusive game of large companies.

To accelerate the penetration and popularization of AGI, Zhou Jingren announced at the conference an upgrade to the flagship model Qwen-Max, whose performance now approaches GPT-4o, and released the strongest open-source series yet, Qwen2.5, making Qwen a world-class model family second only to Llama.

Alibaba Cloud said it open-sourced a total of 100 models spanning language, audio, vision, and other modalities this time, the largest open-source release in the history of large models, enabling enterprises and developers to use large models at low cost.
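For developers, the practical upshot is that these open-sourced checkpoints can be downloaded and run locally. Below is a minimal sketch using Hugging Face transformers; the checkpoint id "Qwen/Qwen2.5-7B-Instruct" and the generation settings are assumptions for illustration, since the article does not list individual model ids.

```python
# Minimal sketch: run an open-sourced Qwen2.5 instruct checkpoint locally with
# Hugging Face transformers. The model id below is an assumed example.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-7B-Instruct"  # assumed checkpoint name, not taken from the article
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize what AI infrastructure means in one sentence."},
]
# Build the chat-formatted prompt that the instruct model expects.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate a short reply; 128 new tokens is an illustrative limit.
output_ids = model.generate(**inputs, max_new_tokens=128)
reply = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(reply)
```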

While stepping up open-source efforts to attract developers, Alibaba Cloud also announced steep price cuts for its three flagship Qwen models, with reductions of up to 85% and prices falling to as low as 0.3 yuan per million tokens. Over the past six months, Alibaba Cloud's BaiLian platform has kept lowering the barrier to calling large models, driving their adoption.
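As a rough illustration of what calling a hosted Qwen model looks like, here is a hedged sketch using the OpenAI Python SDK against Alibaba Cloud's OpenAI-compatible mode; the base URL, the "qwen-max" model id, and the DASHSCOPE_API_KEY variable are assumptions not stated in the article and should be checked against the BaiLian console.

```python
# Hedged sketch: call a hosted Qwen model through an OpenAI-compatible endpoint.
# The base_url, model id, and environment-variable name are assumptions for
# illustration; consult the BaiLian console for the exact values.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DASHSCOPE_API_KEY"],
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
)

resp = client.chat.completions.create(
    model="qwen-max",  # assumed id for the upgraded flagship model
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Why do lower token prices matter for enterprises adopting large models?"},
    ],
)
print(resp.choices[0].message.content)
```

Billing is per token, so at the article's lowest quoted rate of 0.3 yuan per million tokens, a 1,000-token call works out to roughly 0.0003 yuan.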

Over the past year, large-model technology has achieved a series of milestone breakthroughs, from large language models to video generation to multimodal models. Model capabilities are still expanding, with continuous gains in math, coding, and reasoning.

It is reported that more than 300,000 enterprise customers, including China FAW, Lenovo, and Weibo, have already integrated Qwen models; going forward, industries such as biomedicine, industrial simulation, weather forecasting, and gaming are accelerating their adoption of large models, driving a new round of growth in AI computing power.

From the model upgrades to the continued price cuts, Alibaba Cloud is clearly determined to go all in on the new AI era. Beneath the surface, a technology-architecture transformation centered on AI is rapidly taking shape.

Zhou Jingren pointed out that, unlike the traditional IT era, the AI era demands far higher performance and efficiency from infrastructure, and the CPU-dominated computing system is rapidly shifting to a GPU-dominated AI computing system. Accordingly, Alibaba Cloud has comprehensively upgraded its technology architecture around AI, from servers to computing, storage, networking, data processing, and model training and inference platforms.

Zhou Jingren showcased the upgraded product family on site, including the Panjiu AI server, GPU-based container compute, the HPN7.0 high-performance network architecture designed for AI, CPFS file storage supporting high data throughput, and the PAI artificial intelligence platform, which can schedule training and inference across tens of thousands of GPUs.

Cloud providers hold full-stack technical reserves, and with this comprehensive infrastructure upgrade the entire lifecycle of AI training, inference, deployment, and application becomes more efficient. Zhou Jingren said, "To meet the exponentially growing demand for GPU computing power, especially in the coming inference market, Alibaba Cloud is ready."

A year after the large-model battle began, applications have become the main theme of the industry. As the earliest proponent of the Model-as-a-Service (MaaS) concept, Alibaba Cloud has always made a thriving large-model ecosystem its top priority.

"We hope that enterprises and developers can do AI and use AI at the lowest cost, allowing everyone to access the most advanced large models," Zhou Jingren said