
KNOWLEDGE ATLAS up 40%, MINIMAX-WP up 25%, chip sector soars: Hong Kong AI concept stocks explode

On February 12, the Hong Kong stock market's AI sector exploded, with KNOWLEDGE ATLAS rising over 40%. Among the "Four Little Dragons of Domestic GPUs," ILUVATAR COREX saw its afternoon gains widen to 25%, while BIREN TECH rose nearly 10%. KNOWLEDGE ATLAS's GLM-5 doubled its parameter count even as the company raised prices against the trend, while MiniMax M2.5 focuses on native Agent capabilities. Domestic models are shifting from a "price war" to a technology race centered on programming and agent capabilities.
On February 12, Hong Kong AI concept stocks collectively surged, driven by the intensive release of domestic large models during the Spring Festival window, which lifted related sectors across the board.
KNOWLEDGE ATLAS rose over 40%, as the company announced an increase in the price of its AI programming subscription package and officially launched its new flagship model GLM-5 the previous day. The large model stock MINIMAX-WP soared over 21%. On the same day, MiniMax launched its latest programming model M2.5, claiming it to be the world's first production-level model natively designed for Agent scenarios. Both companies have positioned programming and agent capabilities as core upgrade directions.

The chip sector rose in tandem. Among the so-called "Four Little Dragons of Domestic GPUs," ILUVATAR COREX saw its afternoon gains widen to 25%, while BIREN TECH rose nearly 10% and Zhaoyi Innovation gained over 8%. Sustained growth in demand for AI computing power has boosted expectations for related hardware manufacturers.

Behind this round of market activity is the concentrated launch of new products by domestic large model manufacturers during the Spring Festival window. Following DeepSeek's earlier release of a new model, Alibaba's Qwen 3.5, ByteDance's SeeDance 2.0, and other products have recently made their debut, indicating that industry competition has entered a heated phase.
KNOWLEDGE ATLAS GLM-5 Parameter Scale Doubles
According to a previous article by Wallstreetcn, KNOWLEDGE ATLAS launched GLM-5 on February 11, expanding the parameter scale from the previous generation's 355B to 744B, with activated parameters increasing from 32B to 40B and pre-training data growing from 23T to 28.5T. KNOWLEDGE ATLAS confirmed that the mysterious model "Pony Alpha," which previously topped the popularity chart on the global model service platform OpenRouter, is indeed GLM-5.
The model introduces DeepSeek's sparse attention mechanism for the first time, reducing deployment costs and improving token utilization efficiency while maintaining long-text processing performance. Architecturally, GLM-5 has 78 hidden layers and integrates 256 expert modules, activating 8 at a time, with about 44B activated parameters, a sparsity of 5.9%, and a maximum context window of 202K tokens. Internal evaluations show that GLM-5 improved average performance by over 20% compared to the previous generation in programming scenarios such as front-end, back-end, and long-horizon tasks, with real-world coding experience approaching the level of Claude Opus 4.5.

According to Shanghai Securities News, an AI professional in Shanghai observed that domestic large models previously competed on who had the lower price; this time, ZHIPU has significantly raised its prices, indicating that the technical capabilities and market competitiveness of domestic models have clearly improved.
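The stated architecture figures can be sanity-checked with some quick arithmetic. This is a back-of-envelope sketch, assuming "sparsity" here means activated parameters divided by total parameters; all numbers come from the article itself, not an official spec.

```python
# Rough consistency check of GLM-5's stated MoE figures (as reported above).
# Assumption: sparsity = activated parameters / total parameters.

total_params_b = 744       # total parameters, in billions
activated_params_b = 44    # activated per token, in billions (figure given alongside 5.9%)
experts_total = 256        # expert modules in the MoE layer
experts_active = 8         # experts activated per token

sparsity = activated_params_b / total_params_b
expert_fraction = experts_active / experts_total

print(f"parameter sparsity ≈ {sparsity:.1%}")      # ≈ 5.9%, matching the article
print(f"experts active    ≈ {expert_fraction:.1%}")  # 8 of 256 ≈ 3.1%
```

Note that 44B / 744B does reproduce the quoted 5.9% sparsity, while the 40B activated-parameter figure cited earlier would give roughly 5.4%; the two reported numbers come from different parts of the coverage.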
MiniMax Benchmarking Against International Top Levels
MiniMax M2.5 is positioned as the world's first production-grade model designed natively for Agent scenarios, with its programming and agent performance benchmarked directly against Claude Opus 4.6. The model supports full-stack programming development for PC, App, and cross-platform applications, and is claimed to lead the industry in core productivity scenarios such as advanced Excel processing, deep research, and PPT generation.

The M2.5 model has only 10B activated parameters, giving it significant advantages in memory usage and inference energy efficiency; it supports ultra-high throughput of over 100 TPS (tokens per second), with inference speeds far exceeding top international models. This marks a rapid iteration for MiniMax, coming just over a month after the release of the previous version 2.2.
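To put the throughput claim in concrete terms, here is a minimal arithmetic sketch. The 100 TPS figure is the article's claim, and the response length is a hypothetical example, not a benchmark result.

```python
# What ">100 TPS" means in practice for a single generation stream.
# Assumption: TPS counts output tokens generated per second.

tps = 100                # tokens per second (the article's stated floor)
response_tokens = 2000   # hypothetical length of a code-generation response

seconds = response_tokens / tps
print(f"~{seconds:.0f} s to generate a {response_tokens}-token response")  # ~20 s
```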
Crowded "Spring Festival Release Season"
The Spring Festival of 2026 is no longer just a consumption frenzy but has evolved into a speed and passion contest among China's AI giants for the "mobile entry point."
JP Morgan's latest research report released on February 11 pointed out that the Chinese internet and AI industry is experiencing the most intense flagship model release wave in history. This is no longer a solo performance of a single model but a game of musical chairs about who can convert "technological spillover" into "consumer-grade hits" the fastest.
ByteDance took the lead, launching a three-model "package": Seedance 2.0 (video), Seedream 5.0 (image), and Doubao 2.0. Among them, Seedance 2.0 has already shown "hit" potential.
Alibaba is also not to be outdone, reportedly preparing to launch Qwen 3.5 in mid-February, along with a 3 billion yuan incentive plan to drive customer acquisition.
DeepSeek is said to be targeting a V4 version release in mid-February, focusing on improvements in coding and ultra-long prompt processing. Reports on February 11 indicated that DeepSeek has updated its new model to support a maximum context length of 1 million tokens.

