Wallstreetcn
2023.07.18 09:49

Amid Disruption: Four Conjectures for the AI Era

Guosheng Securities believes that AI model capabilities will become infrastructure, and that there may be no absolute moat at the model and MaaS layers. The computing power layer and the application layer at the two ends of the industry chain are expected to see explosive growth and transformation, and the value chain may show a "U-shaped distribution."

The AI revolution is sweeping across industries, reshaping the industry landscape and the society of the future. What will that future look like?

In a report dated July 16, Guosheng Securities analyzed this trend and pointed out that major global technology giants are actively deploying large AI models, with rapid model iteration and growing investment in algorithm development and applications.

Looking ahead, Guosheng Securities puts forward four conjectures covering the computing power layer, the model layer, and the application layer, arguing that AI model capabilities will become infrastructure and that there may be no absolute moat at the model and MaaS layers.

The computing power layer and the application layer at the two ends of the industry chain are expected to see explosive growth and transformation, with the value chain showing a "U-shaped distribution."

Conjecture 1: Computing Power Competition Intensifies, the Landscape Is Not Yet Settled

From 2022 into 2023, global AI deployment accelerated, with numerous AI models emerging. The launch and iteration of these models are significantly driving demand for computing power.

Rising computing power demand implies growth across chips, servers, cloud providers, and telecom operators. Guosheng Securities points out that the AI chip sector faces tremendous growth opportunities.

According to the China Academy of Information and Communications Technology's "China Computing Power Development Index White Paper," global computing power exceeded 615 EFLOPS (1 EFLOPS = 10^18 FLOPS) in 2021, a year-on-year increase of about 44%. The coming years are set to be an era of exploding computing power.

This has a significant impact on AI chip demand. On the one hand, in terms of total server volume, growing computing power demand will drive a rapid increase in GPU server shipments, and with it the demand for training chips. On the other hand, in terms of server configuration, each AI server shipped carries at least twice as many GPUs as a regular GPU server.

On the supply side, competition in computing power is intense. Guosheng Securities points out that as AI technology iterates and computing power demand grows, chip companies have entered fierce competition.

(1) NVIDIA: Taking the Lead with a Combination of Hardware and Software to Build a Moat

In the GPU market, NVIDIA has taken the lead with high-performance GPUs such as H100. Guosheng Securities highlights three advantages of NVIDIA:

Hardware Performance: Currently, NVIDIA's H100 and A100 products lead the world in performance.

Software Ecosystem: Beyond hardware performance, NVIDIA has built a moat with the CUDA software ecosystem. Most mainstream deep learning frameworks are built on CUDA (see the sketch after this list), giving NVIDIA a strong competitive advantage.

Investment and Cooperation: NVIDIA is rapidly investing in AI model companies to further extend its AI footprint and lock in potential downstream demand.
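As a concrete illustration of that software-ecosystem moat, the minimal sketch below (using PyTorch as an example framework; the layer sizes are arbitrary) shows how GPU acceleration in a mainstream deep learning framework is exposed through a CUDA device, which is why models written this way run most naturally on NVIDIA hardware.

```python
# Minimal sketch: mainstream frameworks expose GPU acceleration via CUDA.
import torch

# Use the CUDA device if an NVIDIA GPU and CUDA drivers are available,
# otherwise fall back to the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

model = torch.nn.Linear(1024, 1024).to(device)  # weights placed in GPU memory
x = torch.randn(32, 1024, device=device)        # input created on the same device
y = model(x)                                    # forward pass runs on CUDA kernels
print(y.shape, y.device)
```

Because most existing training and inference code is written against this CUDA-backed path, moving to non-NVIDIA accelerators typically requires extra porting work, which is the ecosystem lock-in Guosheng Securities describes.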

(2) AMD: Accelerating GPUs to Catch Up with NVIDIA, Rapidly Expanding into FPGA

As the second-largest GPU manufacturer, AMD is also accelerating the improvement of GPU performance to narrow the gap with the industry leader.

Guosheng Securities pointed out:

On June 13, AMD launched the MI300X, aimed squarely at generative AI as a response to NVIDIA's H100. The MI300X offers up to 2.4 times the high-bandwidth memory (HBM) density and up to 1.6 times the HBM bandwidth of NVIDIA's H100.

On hardware, AMD's MI300X is already comparable to NVIDIA's products on certain performance indicators, but a gap remains on the software side. In addition, since acquiring Xilinx, AMD has continued to refine its FPGA products.

(3) Alphabet-C: TPU Continuously Iterating, TPUv4 Performs Excellently

Guosheng Securities pointed out that the TPU has iterated continuously and its performance has improved greatly. Since TPUv1 in 2015, Alphabet-C has kept upgrading the TPU, adding training support with TPUv2. TPUv4, released in Q2 2021, achieves reconfigurability and high scalability through optical interconnects; built on a 7nm process with peak computing power of 275 TFLOPS, it delivers a significant performance improvement.

(4) Intel: Increasing FPGA Release Frequency, Gaudi2 Performs Strongly

Guosheng Securities pointed out that, with strong downstream demand since the beginning of this year, Intel has accelerated its product releases and plans to launch 15 new FPGAs in 2023; the FPGA arms race between AMD and Intel is expected to escalate again. On the ASIC side, Gaudi2's performance also surpasses NVIDIA's A100 in certain respects.

In summary, Guosheng Securities pointed out:

Competition in the AI chip market has hit the "acceleration button," with every player racing for market share. From a sector perspective, growing computing power demand benefits not only upstream chip makers but also midstream server vendors, downstream cloud computing providers, and telecom operators.

Conjecture 2: If Models Are Homogeneous, MaaS Is Not Scarce

Furthermore, the demand for AI model outputs will drive the continuous growth of MaaS. Whether it is a general-purpose model or an industry-specific model, the output capabilities of these models will increase the demand for MaaS services.

From foundation models to industry-specific models, the "battle of a hundred models" has begun, with pioneers such as OpenAI and open-source releases such as LLaMA. Both at home and abroad, large-model products are now springing up like mushrooms after rain.

Guosheng Securities believes:

Facing the "battle of a hundred models," although model companies may differ in computing and data resources, most models themselves may have no absolute differentiation. As Huang Tiejun, director of the Beijing Academy of Artificial Intelligence (BAAI), put it: "There are no absolute barriers or moats in large-model development. Just as humans learned to generate electricity long ago, there will be various 'ways of generating electricity' that keep evolving and iterating. What is being competed over is cost and efficiency, applications and ecosystems."

If models are homogeneous, where is the barrier for MaaS? Judging from recent moves by domestic giants, building an open model store with multiple model sources is becoming a common pattern for MaaS services. Guosheng Securities points out:

In the future competitive structure, upstream model companies will be motivated to connect to multiple cloud service platforms to expand their downstream customer base, while end-application companies will be motivated to use different models to optimize different business and industry scenarios.

Based on this, we believe "multi-model" will become an important cooperation pattern for MaaS platforms (sketched below), the similarity between MaaS platforms will keep increasing, and MaaS services will become infrastructure. Over the long term, beyond computing and other resources, refined operations and price may become the core competitiveness of MaaS platforms.
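A hypothetical sketch of that "multi-model" pattern follows. The router class, model names, and prices are illustrative assumptions, not any real MaaS platform's API; the point is simply that one platform can expose several upstream models behind a single interface and route each business scenario to a different model.

```python
# Hypothetical sketch of a "multi-model" MaaS platform: several upstream
# models registered behind one interface, routed per business scenario.
# Names, prices, and the stub callables are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Callable, Dict


@dataclass
class ModelEndpoint:
    name: str
    price_per_1k_tokens: float     # platforms may compete on price and operations
    call: Callable[[str], str]     # wraps the upstream model's own API


@dataclass
class MaaSRouter:
    models: Dict[str, ModelEndpoint] = field(default_factory=dict)
    routes: Dict[str, str] = field(default_factory=dict)  # scenario -> model name

    def register(self, endpoint: ModelEndpoint) -> None:
        self.models[endpoint.name] = endpoint

    def route(self, scenario: str, model_name: str) -> None:
        self.routes[scenario] = model_name

    def complete(self, scenario: str, prompt: str) -> str:
        return self.models[self.routes[scenario]].call(prompt)


# Usage: the same platform serves a general-purpose model and an
# industry-specific one, chosen per scenario.
router = MaaSRouter()
router.register(ModelEndpoint("general-llm", 0.002, lambda p: f"[general] {p}"))
router.register(ModelEndpoint("finance-llm", 0.004, lambda p: f"[finance] {p}"))
router.route("customer-service", "general-llm")
router.route("research-report", "finance-llm")
print(router.complete("research-report", "Summarize Q2 computing power demand."))
```

Under this pattern, the platform's differentiation lies less in which models it hosts and more in how cheaply and smoothly it operates them, consistent with the report's view that refined operations and price become the core competitiveness.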

Conjecture 3: B2B SaaS Services Embrace the Magic of AI

Guosheng Securities points out that AI is expected to provide various SaaS companies with new capabilities and help the SaaS industry reach a "singularity moment."

  1. In the office field, Microsoft, for example, has launched Microsoft 365 Copilot, which embeds itself into Office and other business communication workflows, enhancing office software features, improving productivity, and improving the user experience.

  2. In the CRM field, Salesforce, for example, expects AI to empower sales service, data analytics, marketing, internal communication, and code development, improving efficiency and experience and driving both customer-base expansion and ARPU growth.

AI empowerment is expected to drive faster growth for B2B SaaS companies. Gartner forecasts that global public cloud SaaS spending will grow 17.9% year on year to $197 billion in 2023, and 17.7% year on year to $232.3 billion in 2024.

Guosheng Securities pointed out that, looking ahead, AI is expected to help B2B SaaS companies deliver more innovative features and grow faster, with major SaaS players such as Microsoft, Salesforce, and Oracle poised to benefit from this transformation.

Conjecture 4: On the Consumer (C-End) Side, Search Engines May Decline and Traffic Entry Points May Return to Scenarios

We believe the continued iteration of generative AI on the application side will disruptively reshape the traffic entry points of both online and offline formats.

Among them, the centralized entry point represented by search engines may be weakened. Guosheng Securities pointed out:

This shift was already visible in the mobile internet era, before the AI wave arrived. From the PC internet to the mobile internet, the way netizens obtain information changed subtly: search, once dominant, has increasingly been replaced by recommendation. As a result, in the online advertising market, the share of search ads has gradually been eroded by various forms of recommendation ads.

Continued iteration of AI technology is expected to give terminal, scenario-based platforms such as travel and dining content-operation capabilities, so that these platforms can provide users with full-chain services in the form of "content + services."