
Domestic computing power enters a performance release phase, and AI terminals begin an innovation year... Soochow Securities' top ten predictions for the electronics industry in 2026

Soochow Securities released its top ten predictions for the electronics industry in 2026, pointing out that the domestic computing power industry chain will see a significant increase in performance, with domestic GPU and AI ASIC service providers benefiting from market competition and capacity release. Edge computing power will drive AI applications through an edge-cloud hybrid architecture, and 2026 is expected to become the year NPUs reach deployment. Demand for 3D DRAM storage will grow rapidly, becoming an important pillar of AI hardware. Overall, 2026 will be a key year for AI innovation and the release of related products.
Investment Highlights
Cloud Computing Power: A performance surge driven by resonance across the domestic computing power industry chain. By 2026, leading domestic computing power chip companies are expected to enter a period of performance realization, and we are optimistic that domestic GPUs will benefit from capacity released by advanced-process expansion. Given that participants in the domestic computing power chip market are competing for market share and capacity resources, we are also optimistic about the key role of AI ASIC service providers in the supply chain. At the same time, as domestic computing power enters the super-node era, success will test both the single-card strength of GPU makers and the localization level of switch chips, an area where domestic and foreign governments may impose controls.
Edge Computing Power: The edge-cloud hybrid model empowers AI scenarios, and edge-side SoCs continue to benefit from the AI innovation wave. Edge-side AI is taking over from cloud AI, and the edge-cloud hybrid architecture solidifies the scenario foundation. Overseas large models are expected to drive AIoT adoption first, benefiting three leading scenarios: glasses, automobiles, and robots. From an investment perspective, upstream consumer-electronics companies may be broadly suppressed in H1 2026 by rising storage prices, but looking to H2 2026 we are optimistic about supply chain opportunities from new wearable AIoT product launches, considering factors such as the consumer cycle and AI innovation. In addition, 2026 is expected to be the year NPUs reach deployment.
3D DRAM: 2026 will be the volume year for edge-side AI storage, with the industry trend gradually taking shape. AI hardware shipping in 2026 will rapidly increase storage demand, and high-bandwidth, low-cost 3D DRAM is expected to ramp in multiple fields. We believe the deployment of local large models in areas such as robotics, AIoT, and automobiles relies on 3D DRAM storage, a key hardware innovation that moves applications from "usable" to "user-friendly." The tape-out of multiple NPUs also provides rich adaptation scenarios for 3D DRAM. Furthermore, scenarios such as mobile and cloud inference will be introduced gradually, becoming key scenarios in H2 2026 and 2027, with application fields continuing to expand.
Edge-side AI Models: Architectural optimization breaks physical bottlenecks, and revenue sharing determines the ecosystem landscape. Looking ahead to 2026, cloud models will keep enhancing complex planning capabilities through data quality and post-training optimization; the edge side will inherit cloud model capabilities through distillation and, with structural optimizations such as attention dimensionality reduction and MTP combined with skill modularization and on-chip memory systems, improve execution success rates and latency; the Agent route advances along both API and GUI paths. As for the ecosystem landscape: terminal manufacturers control the OS and take over system-level entry points; super apps build closed loops for in-app Agents and selectively open interfaces; third-party model makers rely on revenue-sharing mechanisms to promote cooperation.
AI Terminals: 2026 marks the start of AI terminal innovation. Meta, Apple, Google, and OpenAI all have new terminal products launching. Glasses are the representative AI terminal form, alongside new forms such as AI pins and camera headphones. As model iteration accelerates and application scenarios for new terminals develop, the next generation of blockbuster terminals may emerge during the major manufacturers' innovation cycle. New terminals depend on upgrades to key components, and we recommend attention to directions such as SoCs, batteries, heat dissipation, communications, and optics.
Changxin Chain: Changxin Storage is expected to drive ample capacity expansion, and the Changxin supply chain will benefit directly. The CBA technology Changxin is pursuing, which moves DRAM toward 3D, is expected to sustain its expansion momentum. This alternative approach will narrow the generational gap with Samsung and SK Hynix and secure the scale of capacity expansion, from which its industry chain companies will fully benefit. In the equipment segment, beyond benefiting from Changxin's ample expansion, some high-quality companies will also see penetration rates rise rapidly, ushering in a Davis double-click; some foundry and testing companies will take on Changxin's outsourcing demand.
Wafer Foundry: Advanced logic capacity expansion is expected to double, sustaining the wafer foundry boom. Domestic advanced processes are currently in severe short supply, especially at 7nm and below. With potential supply-cut pressure from the US and Japan and foreseeably strong demand for domestically produced advanced logic chips, the advanced capacity expansion beginning in 2026 for supply assurance will be substantial. SMIC and HuaLi Group are expected to keep expanding advanced processes, and more players, including Yongxin and ICRD, will move to expand 14nm capacity.
PCB: Rubin opens a new era of high-end AI PCB materials, as M9 and quartz cloth spark a wave of PCB value leaps. Requirements for high-speed signal integrity and low dielectric loss in AI servers keep rising, pushing PCB materials into a comprehensive upgrade cycle. M9 CCL, with its ultra-low Df/Dk and excellent reliability, is becoming a key substrate for AI servers and high-speed communication systems; it is expected to rapidly lift the value of PCBs and upstream high-end materials, becoming the core driver of a new high-prosperity cycle for the PCB industry chain.
Optical-Copper Interconnection: Scale-up and Scale-out iteration continues, with optical and copper lines resonating in parallel. The commercial GPU market is expected to keep growing in 2026, and CSP ASICs enter a critical year of large-scale deployment. Data center Scale-up is giving rise to an explosion of super nodes, with copper cables the optimal solution for short-distance, low-power interconnects; Scale-out drives continuous cluster expansion, with the ratio of optical modules to GPUs climbing and the 1.6T ramp highlighting the optical chip gap. The dual resonance of copper and optical interconnects is lifting both demand and prices, with the optical chip and high-end copper cable sectors as the core focus.
Server Power Supply: Data center power density surges, and the HVDC power architecture becomes the core mainline. Surging AI data center power density is driving HVDC to become the core power supply architecture, with the primary stage laying the foundation for 800V high-voltage direct current distribution, the secondary stage handling key voltage conversions, and the tertiary stage precisely matching chip power delivery needs; the full-link upgrade opens up incremental market space. In addition, server power supply upgrades bring simultaneous volume and price increases for PCBs: as AI server power density keeps rising, power PCBs are upgrading toward high-end technologies such as thick copper, embedded modules, and advanced cooling, significantly raising per-board value.
The following is the main text of the report
1. Cloud Computing Power: Performance Growth Driven by Resonance Across the Upstream and Downstream of the Domestic Computing Power Industry Chain
The self-sufficiency process of the computing power industry chain is accelerating, and we are optimistic about the performance release of domestic computing power. SMIC has steadily advanced its advanced-process expansion over recent years, with good progress in market development. Overall, domestic advanced-process expansion is progressing steadily, and coupled with an accelerating self-controllable supply chain, this will significantly enhance the supply assurance capability of the domestic computing power industry. Against a backdrop of continuously rising AI inference and training demand, domestic computing power manufacturers are expected to benefit fully, with performance releases anticipated.
At the same time, the computing power industry chain is seeing new trends, with "transport capacity" becoming the key link for domestic computing power to catch up with overseas leaders. The industry is shifting its interconnect approach from Scale-Out networks between cabinets to Scale-Up networks within cabinets (such as NVLink, UALink, and PCIe), using shorter transmission distances to achieve higher bandwidth and lower latency and thereby raise overall throughput. The second half of 2025 is the phase in which domestic super-node solutions gradually enter the public eye, with internet companies, switch manufacturers, and GPU makers with in-house designs all releasing notable products. We are optimistic about a coming flourishing of domestic solutions: on one hand, the full-stack self-research camps represented by Huawei and Sugon have already released significant solutions; on the other, we are optimistic about third-party switch chip makers partnering with major internet companies to develop end solutions. As domestic computing power gradually enters volume production, the domestic super-node industry chain is expected to see higher-certainty growth opportunities.

2. Edge Computing Power: Edge-Cloud Hybrid Empowers AI Scenarios, Edge SoCs Continuously Benefit from the AI Innovation Wave
By 2026, edge AI will formally take over from cloud AI, making the edge-cloud hybrid architecture the core paradigm of technological infrastructure; its arrival is the triple necessity of technological evolution, hardware support, and scenario demand. The bandwidth costs, latency bottlenecks, and privacy concerns of pure cloud architectures are increasingly prominent, while the global decision-making capability of cloud large models naturally complements the real-time processing of lightweight edge models. A collaborative model of "cloud training, edge usage, and edge supplementation" can resolve the tensions between computing power allocation and privacy security. Edge chips let terminals run lightweight large models smoothly through technologies such as NPU acceleration. By 2026, smart cars, AI glasses, and robots will be the first core carriers to take off, laying a solid foundation for the architecture's implementation: autonomous driving requires real-time perception coordinated with long-horizon decision-making in the cloud, AI glasses enable full-scenario human-machine interaction, and robots rely on fast edge responses plus global scheduling from the cloud. Policy support and a maturing industrial ecosystem further accelerate the process; the edge-cloud hybrid architecture is not only an inevitable result of technology upgrades but also key support for AI's transition from the laboratory to large-scale commercial use.
Overseas edge AI has entered a substantive deployment phase, and we recommend attention to SoC makers in the overseas supply chain. Amlogic has deepened its ties within Google's smart home ecosystem, collaborating with Google to launch several new products compatible with its edge large model Gemini, including smart speakers, video doorbells, and indoor and outdoor cameras, driving a comprehensive upgrade of Google's smart home lineup to the next generation of products with embedded edge large model capabilities. Overall, the certainty of overseas edge AI demand is increasing, which is expected to help SoC makers achieve structural breakthroughs in the next generation of lightweight large-model terminals.
The upgrade of edge models is driving hardware architecture toward dedicated co-processors; we recommend attention to Rockchip, the leader in standalone NPU architecture. From an industry trend perspective, model inference is partially migrating from cloud to edge, generating local compute, bandwidth, and storage demands; meanwhile, edge devices are more sensitive to power, cost, and space, making it hard for general-purpose CPUs/GPUs to balance energy efficiency, parallelism, and real-time performance. Manufacturers represented by Rockchip are therefore first to launch innovative co-processor solutions for edge AI. These edge co-processor chips pair high-performance NPUs with high-bandwidth embedded DRAM, better meeting the dynamic balance of compute, storage, and data movement required for edge model deployment. Rockchip's RK182X series dedicated co-processors can support inference for 3B–7B LLMs and have been deployed in scenarios such as in-vehicle cockpits, smart homes, conference terminals, educational devices, robots, set-top boxes, and edge gateways. We are optimistic about Rockchip's standalone NPU products, which have built a clear competitive advantage in performance, ecosystem, and product portfolio.

3. 3D DRAM: 2026 Is the Volume Year for Edge AI Storage, with Industry Trends Gradually Taking Shape
In 2025, 3D DRAM-related products were released, the R&D path was established, and commercialization began. At its developer conference in July 2025, Rockchip announced two NPU products for edge AI: (1) the RK1820, with 2.5GB DDR, supporting 3B models; and (2) the RK1828, with 5GB DDR, supporting 7B models; both adopt a 3D DRAM architecture. Leveraging the high bandwidth, low cost, and scalability of 3D DRAM, the main control chip can support applications such as voice recognition, video analysis, and long-context conversation, suiting edge scenarios in security, robotics, automotive, consumer electronics, office, education, home, and industrial sectors.

AI hardware shipping in 2026 will rapidly increase storage demand, and high-bandwidth, low-cost 3D DRAM is expected to ramp in multiple fields. Going forward, as improvements in high-aspect-ratio etching, lithography, and capacitor structures gradually slow the upgrade path of 2D DRAM, the architectural innovation of 3D DRAM becomes increasingly important. The deployment of local large models in fields such as robotics, AIoT, and automotive relies on 3D DRAM storage, a key hardware innovation for moving edge applications from "usable" to "user-friendly." Qualcomm, Xiaomi, Honor, OPPO, major PC makers, and major automakers are expected to enter the market in succession. Domestically, we anticipate multiple NPU products being launched and taped out in 2026; domestic mobile terminal manufacturers, Guangyu Xincheng, and leading international SoC makers will treat the "NPU + customized storage" solution as the best answer for edge AI, and companies like Zhaoyi Innovation are engaging widely with SoC partners, with projects following one after another. Scenarios such as mobile and cloud inference may become key in H2 2026 and 2027, with application fields continuing to expand.
4. Edge AI Models: Architectural Optimization Breaks Physical Bottlenecks, Benefit Distribution Determines Ecological Landscape
The core bottlenecks exposed after the launch of the Doubao mobile assistant are that system-level calls touch the boundaries of app owners' control and raise AI behavior risk-control issues, plus experience problems such as slow execution. Terminal manufacturers, model makers, and internet platforms have since accelerated iteration of related products. Looking ahead to 2026, from a technical architecture perspective:
➢ Cloud models raise the ceiling of complex planning: driven by continuous improvement in data quality and maturing post-training systems, further raising the ceiling for complex planning and multi-step reasoning;
➢ Edge models realize intelligent compression: by distilling cloud model capabilities and applying structural optimizations such as quantization, attention-complexity dimensionality reduction, and MTP, combined with the engineering of skill modularization and on-chip memory systems, continuously improving core experience metrics such as edge execution success rate and interaction latency;
➢ Agent technology route: API first, GUI as fallback. The API path has clear advantages in compliance, stability, and execution success rate, with its penetration pace depending on the feasibility and maturity of revenue-sharing mechanisms; the GUI path, as a general-purpose technique, will over the long term handle adaptation for long-tail applications and complex heterogeneous environments.
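As a minimal sketch of the "intelligent compression" idea in the bullets above, the snippet below illustrates symmetric int8 quantization, one of the structural optimizations listed. The weight values and the pure-Python implementation are purely illustrative, not any vendor's toolchain:

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: map floats onto integers in [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 codes."""
    return [v * scale for v in q]

# Illustrative weight values only.
weights = [0.031, -0.044, 0.002, 0.018, -0.027, 0.040]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Each weight now fits in 1 byte instead of 4, and the round-trip error
# is bounded by half a quantization step (scale / 2).
max_err = max(abs(a - b) for a, b in zip(weights, restored))
assert max_err <= scale / 2 + 1e-12
```

The same bound explains why distilled edge models tolerate int8 (and often int4) storage: the error per weight is a fixed fraction of the largest weight, while memory footprint and bandwidth drop by 4x or more.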

From an ecosystem structure perspective, industrial competition and cooperation coexist and co-evolve. Terminal manufacturers, leveraging control of the OS, gradually take over the system-level Agent entry point and key scheduling authority; super apps strengthen user stickiness by building closed loops of in-app Agents while selectively opening API interfaces; third-party model makers mainly advance cross-ecosystem cooperation and commercialization by establishing standardized access systems that are auditable, permission-limited, and revenue-sharing.
5. AI Terminals: 2026 Marks the Start of the AI Terminal Innovation Year; Meta, Apple, Google, and OpenAI All Launch New Terminal Products
2026 marks the start of the AI terminal innovation year. Between 2026 and 2028, major overseas companies such as Meta, Apple, Google, and OpenAI will launch new terminal products. Glasses are the representative AI terminal form: as the wearable closest to human senses, smart glasses are considered an ideal hardware form, and each major company has glasses products under development. Beyond glasses, new hardware forms such as AI pins, camera headphones, and desktop robots are being laid out. As model iteration accelerates and application scenarios for new terminals develop, the next generation of blockbuster terminals may emerge during the major manufacturers' innovation cycle. New terminals depend on upgrades to key components, and we recommend attention to directions such as SoCs, batteries, heat dissipation, communications, and optics.

6. Changxin Chain: Changxin's expansion certainty has been significantly enhanced, and the DRAM industry chain has entered a new round of prosperity.
Changxin's listing significantly enhances the certainty of expansion, and storage enters a new medium- to long-term expansion cycle. As Changxin Storage advances toward the capital market, the company is expected to significantly strengthen its capital position through IPO fundraising and subsequent refinancing tools, providing solid financial support for long-term technology iteration and capacity expansion in DRAM. Against the macro backdrop of accelerating domestic storage substitution and a strengthening supply chain self-control strategy, leading manufacturers with stable financing and sustained capital expenditure capabilities will become the core carriers of industrial upgrading. We judge that once capital constraints ease materially, Changxin's capital expenditure intensity is likely to stay high for years, with strong certainty in the pace of production line construction and the scale of expansion, and an expansion cycle of long duration and high visibility.
Transitioning from catch-up expansion to large-scale mass production, the supply chain's prosperity is systematically elevated. From the perspective of the industry cycle, DRAM is a capital-intensive and scale-effect-driven industry, where successful mass production of a single process node often requires several years of continuous equipment investment and process optimization. Changxin is currently at a critical juncture of transitioning from catch-up expansion to large-scale mass production. With the opening of financing channels, the company has the ability to accelerate progress on both technological upgrades and capacity construction, thereby gradually narrowing the gap with international leading manufacturers in process maturity and product structure. We believe that under the joint action of policy support, market demand, and financing environment, Changxin will enter a new round of medium to long-term expansion channel, and its core suppliers will directly benefit from the continuous investment in production lines and the process upgrade cycle, leading to a systematic elevation of the industry chain's prosperity.
The CBA architecture opens a new cycle in DRAM's evolution toward 3D, comprehensively lifting equipment value. On the technology path, the CBA architecture Changxin is focusing on is regarded as one of the key routes for DRAM's evolution to 3D. Compared with the traditional 2D DRAM architecture, CBA fundamentally reconstructs the transistor structure and memory cell layout, which is expected to significantly improve storage density per wafer while improving power consumption and performance, and lays the process foundation for subsequent 3D DRAM with higher stack counts. As process complexity keeps rising, the CBA architecture imposes higher technical requirements on lithography, etching, thin-film deposition, and metrology and inspection, significantly raising the bar for process windows and equipment precision.
7. Wafer Foundry: Advanced Logic Capacity Expansion Expected to Double, Wafer Foundry Prosperity Maintained
Supply-demand constraints combined with supply assurance demands are about to initiate a new medium to long-term expansion cycle for domestic advanced logic. Against the backdrop of global semiconductor supply chain restructuring and accelerated domestic substitution processes, the supply of domestic advanced logic processes has long been in a tight balance, especially at the 7nm and below advanced process nodes, where the domestic capacity gap is particularly prominent. Due to increasing uncertainties in overseas supply chains and potential disruptions from export restrictions on advanced manufacturing equipment and technology by countries such as the United States and Japan, leading domestic system manufacturers and chip design companies have increasingly relied on local advanced foundry capacity, with a significant rise in the emphasis on supply security. We anticipate that starting in 2026, based on the strategic consideration of "supply assurance first," domestic advanced logic capacity will enter a considerable and sustained expansion cycle, with the overall prosperity of the wafer foundry industry expected to remain at a high level.
Leading companies are the first to release advanced capacity, with SMIC Southern and HuaLi as the main expansion forces. In terms of specific expansion pace and scale, core capacity will still be released first by leading manufacturers. SMIC Southern, as one of the most important advanced process manufacturing bases in China, has strong certainty in expanding capacity around the N+2 process node (corresponding to the 7nm level). At the same time, HuaLi Microelectronics continues to increase investment in the 14nm and below advanced process fields, further alleviating the tight supply situation of domestic advanced logic.
Expansion is spreading to multiple entities, accelerating the formation of domestic advanced logic manufacturing capabilities. In addition to the aforementioned leading capacities, advanced process expansion is gradually spreading to the second tier and new entrants. Manufacturers such as Yongxin and ICRD have planned or advanced the construction and expansion of production lines related to the 14nm process node. With the continuous growth of domestic advanced logic demand and supportive policy environment, more manufacturing entities are expected to join the advanced process supply system construction, collectively forming a "blossoming" pattern of domestic advanced logic manufacturing capabilities. This round of expansion will not only enhance the overall advanced process supply capacity in China but also promote the collaborative maturity of domestic equipment, materials, EDA, and process systems, further solidifying the foundation of supply chain security.
8. PCB: Rubin Opens a New Era of High-end AI PCB Materials, M9 and Quartz Cloth Spark a Wave of PCB Value Leaps
NVIDIA's cabinet architecture upgrades significantly drive simultaneous PCB volume and price increases, with ASICs following suit, pushing the PCB market toward 60 billion yuan by 2026. In the current GB200/GB300 NVL72 cabinets, the compute board (the Bianca board) uses 22-layer HDI and the switch board is a 26-layer through-hole board, with PCB materials at the high-performance M8 grade. In the next-generation Rubin series cabinets, PCB design and specifications leap forward: first, the Rubin NVL144 cabinet adds a midplane and CPX-CX9 network card modules, while the compute and switch boards are also majorly upgraded, greatly raising per-board PCB value. In addition, the Rubin Ultra NVL576 (Kyber) cabinet introduces a revolutionary orthogonal backplane solution that replaces copper cable connections and significantly increases chip density. Because data rates exceed 224 Gbps, PCB materials must be upgraded to higher-grade ultra-low-loss materials such as M9 or PTFE, with very demanding layer counts and processes. The four orthogonal backplanes required per Kyber cabinet, combined with higher-spec compute boards, will significantly increase the total PCB value per cabinet. On the ASIC side, Google's TPU, Amazon's Trainium, and Meta's MTIA series keep iterating, with new products such as the V7 Ironwood, V8 Zebrafish/Sunfish, and Trainium3 driving PCBs toward higher-performance materials and higher-tier structures, further expanding the AI PCB market.
Continued upgrades in AI computing demand are driving PCB materials toward high frequency and high speed, accelerating adoption of M9-grade CCL, and the upstream core raw material sector is expected to see both volume and price rise. With mass production of the new generation of AI chip architectures, requirements for signal transmission efficiency and stability in PCBs have risen significantly. M9 materials, with the core advantages of low dielectric constant and low loss, have become the key materials to meet these demands, directly expanding demand for upstream materials such as quartz cloth, HVLP4 copper foil, and hydrocarbon resin.
Looking at the segmented materials: quartz cloth, as the core reinforcing substrate of M9-grade CCL, offers dielectric performance and thermal stability far superior to traditional fiberglass cloth, and global capacity is concentrated among a few manufacturers with limited supply elasticity. HVLP4 copper foil, a core material for high-frequency, high-speed signal transmission, faces the core technical challenge of balancing low surface roughness with high peel strength; it has long been dominated by overseas manufacturers, with only a few domestic companies such as Tongguan Copper Foil and Defu Technology achieving mass production. Hydrocarbon resin, a key component of the M9-grade CCL insulation layer, directly determines the high-frequency transmission performance of copper-clad laminates, requiring a dielectric constant around 2.5 and dielectric loss ≤0.001 at 15 GHz, a very high technical threshold. Coupled with AI server PCBs generally rising from about 20 layers to over 40 and growing thicker, per-device consumption of these three raw materials has increased significantly versus traditional servers, reinforcing the industry's tight supply-demand balance and providing solid support for raw material prices.
The 2026 mass production of NVIDIA's Rubin architecture and the iteration of ASIC chips form a rigid driver; combined with 224 Gbps high-speed transmission requirements, M9-grade materials have become a necessity for AI PCBs. Core materials such as quartz cloth and HVLP4 copper foil carry high technical barriers, global capacity is concentrated, and domestic substitution rates are low, pointing to a clear supply-demand tight balance. Upstream material stocking starting in 2026 will trigger simultaneous volume and price increases, with strong certainty of high industry chain prosperity.
9. Optical-Copper Interconnection: Scale-up & Scale-out Iteration Continues, Optical and Copper Lines Resonate in Parallel
**The Scale-up & Scale-out upgrade of AI computing clusters continues to advance: short-distance, high-density interconnects rely on copper cables to solidify the foundation, while large-scale, high-density AI clusters drive optical module demand. We are optimistic about the parallel resonance of optical and copper.** On one hand, continued build-out of AI computing clusters drives optical module demand as the ratio of optical modules per chip rises: taking NVIDIA Rubin NVL144 fully equipped with CPX as an example, we estimate that in a three-layer network the chip-to-optical-module ratio can reach 1:12; under the Meta DSF architecture, a cluster of 18,432 MTIA chips requires 184,320 800G optical modules. On the other hand, copper cables remain the core choice for short-distance, low-latency interconnects in AI computing centers: Google TPU V7 uses a 3D Torus topology, with 64 chips in a single cabinet interconnected through 80 copper cables and copper ports accounting for over 60%; Amazon Trainium3 completes backplane and cross-rack connections with AEC copper cables, requiring 216 64-port PCIe AEC copper cables to meet the Scale-up bandwidth needs of 144 chips.
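The module-count figures above are simple ratio arithmetic, which can be checked back-of-envelope. The helper function below is ours, not from the report; the 1:10 and 1:12 ratios are the estimates quoted in the text:

```python
def optical_modules_needed(num_chips, modules_per_chip):
    """Cluster-level optical module count implied by a 1:modules_per_chip ratio."""
    return num_chips * modules_per_chip

# The Meta DSF figure (18,432 MTIA chips, 184,320 modules) implies a 1:10 ratio.
assert 184_320 // 18_432 == 10
assert optical_modules_needed(18_432, 10) == 184_320

# At the estimated Rubin NVL144 (with CPX, three-layer network) ratio of 1:12,
# the same chip count would need 221,184 modules.
print(optical_modules_needed(18_432, 12))  # 221184
```

The point of the sketch: module demand scales linearly in both cluster size and the per-chip ratio, so a shift from 1:10 toward 1:12 alone adds 20% to module demand before any cluster growth.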

In 2026, the AI computing power industry will enter a critical expansion phase: demand for commercial GPUs continues to rise, and CSP ASIC chips such as Google TPU and Amazon Trainium3 officially enter large-scale deployment. Driven by the dual engines of Scale-up and Scale-out in computing clusters, the interconnection segment will see high-certainty opportunities. On the Scale-up side, the rise of super nodes is the core trend: chip integration density within a single cabinet increases sharply, and demand for short-distance, low-latency in-cabinet interconnection rises with it. With low transmission loss over short distances, controllable cost, and strong reliability, copper cables become the optimal Scale-up solution for cabinet-level systems such as Google TPU's 3D Torus, Amazon Trainium3, and Meta Minerva, and high-speed copper cable demand is expected to keep growing. On the Scale-out side, clusters are expanding to ever larger scales, the ratio of optical modules to GPUs keeps rising, and 1.6T optical modules are entering large-scale deployment. As the core component of optical modules, high-end optical chips face an increasingly prominent supply gap, a key bottleneck constraining capacity release.
10. Server Power Supply: Data Center Power Density Soars, HVDC Power Supply Architecture Becomes the Core Mainline
As AI large-model training and inference continue to expand, data centers are rapidly evolving from "compute-intensive" to "power-intensive" infrastructure, with cabinet power density becoming a core variable limiting computing power expansion. AI cabinet power has risen rapidly from the 10–20kW of the traditional general-purpose server era to over 100kW, and the roadmaps of leading manufacturers such as NVIDIA point further to the 300–600kW range. At this power level, the traditional power supply system built around "AC distribution + UPS + multi-stage AC/DC and DC/DC conversion" is approaching its engineering limits in energy efficiency loss, copper cable volume, system complexity, and long-term reliability, gradually becoming a key bottleneck for the large-scale deployment and cost optimization of AI data centers.

In this context, NVIDIA officially released the white paper "800V HVDC Architecture for Next-Generation AI Infrastructure" at the OCP Global Summit in October 2025, systematically proposing a reconstruction path for next-generation AI factory power architecture centered on 800V high-voltage direct current (HVDC). The core idea is to complete centralized AC/DC conversion at the campus or data center level and distribute high-voltage DC directly to cabinets or even the board level, sharply reducing the multi-stage conversion links of traditional power paths, thereby lowering system losses, cutting copper cable usage, raising power density, and better accommodating the rising power consumption of GPUs and accelerator cards.
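The energy argument for fewer conversion stages can be sketched numerically: overall path efficiency is the product of per-stage efficiencies, so every stage removed compounds. The per-stage figures below are illustrative assumptions (neither the report nor the white paper is quoted here); only the multiplication logic is the point:

```python
def chain_efficiency(stage_effs: list) -> float:
    """Overall efficiency of a power path: product of per-stage efficiencies."""
    eta = 1.0
    for e in stage_effs:
        eta *= e
    return eta

# Illustrative per-stage efficiencies (assumptions, not NVIDIA figures):
# AC distribution, UPS double-conversion, PSU AC/DC, intermediate DC/DC, board VRM
legacy_chain = [0.97, 0.94, 0.96, 0.95, 0.96]
# Centralized AC/DC, rack-level DC/DC, board VRM
hvdc_chain = [0.975, 0.98, 0.96]

rack_kw = 600  # upper end of the 300-600kW roadmap range cited above
for name, chain in [("legacy", legacy_chain), ("800V HVDC", hvdc_chain)]:
    eta = chain_efficiency(chain)
    print(f"{name:9s}: eta={eta:.1%}, loss at {rack_kw}kW ~ {rack_kw * (1 - eta):.0f}kW")
```

Under these assumed numbers the shorter HVDC chain saves on the order of tens of kilowatts per 600kW cabinet, before counting the copper-cable and UPS savings.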
The white paper makes clear that HVDC is not an aggressive replacement but follows a progressive implementation path: in the short term it coexists with existing UPS architectures through solutions such as the Sidecar Power Rack; in the medium term it gradually weakens or even eliminates traditional UPS and works with energy storage systems to smooth the sharp fluctuations of AI loads; in the long term it moves toward a new power supply form characterized by high-voltage DC, modularity, and high system integration. Overall, HVDC is not a single-point technology upgrade but a reconstruction of the power system centered on the leap in AI computing density, and its certainty and industry consensus are forming rapidly.

At the same time, the rapid rise in AI server power density places higher demands on upstream key materials and components, particularly power supply PCBs. As the core carrier of electronic components, PCBs are widely used in power switching, filtering, voltage regulation, and heat dissipation modules. Compared with general-purpose servers, AI server power PCBs have changed significantly in materials, processes, and structural design. First, in high-current scenarios, PCBs must raise current-carrying capacity by thickening the copper foil, and copper foil and copper-clad laminate already account for a high share of PCB cost. Thicker copper not only raises raw material cost directly but also imposes stricter requirements on lamination, drilling, and electroplating, significantly increasing single-board value. Second, to push power density further, embedded power module technology in PCBs is beginning to see accelerated adoption; at the same output power it can reduce semiconductor usage and lower switching losses, opening more room for high-power, high-frequency power supply design. Finally, under high-power-density constraints, heat dissipation becomes a key variable in PCB design: raising residual copper ratios, adding thermal vias and in-via copper, introducing high-thermal-conductivity materials such as copper blocks and ceramics, and adopting more rational routing and thermal design continuously improve heat spreading and hotspot control. Overall, power supply PCBs are shifting from "cost-type components" to a key link balancing "technology and value."
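The sub-linear payoff of thicker copper can be seen in the widely used IPC-2221 current-capacity rule of thumb for traces, I ≈ k · ΔT^0.44 · A^0.725. The trace geometry below is an illustrative example, not a figure from the report:

```python
def ipc2221_max_current(cross_section_mil2: float, temp_rise_c: float,
                        k: float = 0.048) -> float:
    """IPC-2221 rule of thumb: I[A] = k * dT[C]^0.44 * A[mil^2]^0.725.
    k ~0.048 for external traces, ~0.024 for internal ones."""
    return k * temp_rise_c ** 0.44 * cross_section_mil2 ** 0.725

# Illustrative: 100-mil-wide external trace, 20C allowed rise.
# 1oz copper is ~1.37 mil thick; 3oz is ~4.11 mil.
i_1oz = ipc2221_max_current(100 * 1.37, 20)
i_3oz = ipc2221_max_current(100 * 4.11, 20)
print(f"1oz: {i_1oz:.1f}A  3oz: {i_3oz:.1f}A  gain: {i_3oz / i_1oz:.2f}x")
```

Tripling copper weight raises current capacity only by a factor of 3^0.725 ≈ 2.2, which is why heavy-copper power boards must lean on process quality and thermal design as well, consistent with the cost and process points above.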
Taking these factors together, we judge that the surge in AI data center power density will make the HVDC power supply architecture a medium- to long-term certainty, continuously releasing structural increments across power systems, power electronics, and high-end PCBs. Against this backdrop, the logic of server power technology upgrades driving a simultaneous rise in PCB volume and price is clear.

Risk Warning and Disclaimer
The market carries risks, and investment should be cautious. This article does not constitute personal investment advice and does not take into account the specific investment objectives, financial conditions, or needs of individual users. Users should consider whether any opinions, views, or conclusions in this article align with their specific circumstances. Investment made on this basis is at one's own risk.
