
Morgan Stanley: Tencent Holdings Limited open-sources Hy3 Preview; firm maintains "Overweight" rating
Morgan Stanley has given Tencent Holdings Limited an "Overweight" rating with a target price of HKD 650. Tencent has released and open-sourced its next-generation large language model Hy3 Preview, significantly enhancing code generation and agent capabilities and achieving a SWE-Bench Verified score of 74.4%. The model is integrated into multiple Tencent products and offers competitive API pricing on Tencent Cloud. Tencent plans to collect feedback through open-sourcing to further optimize the model's capabilities.
According to Zhitong Finance APP, Morgan Stanley released a research report stating that Tencent Holdings (00700) launched and open-sourced its next-generation large language model Hy3 Preview yesterday (the 23rd), positioning it as the first step in rebuilding the Hy (Hybrid) foundational model. The model adopts a mixture-of-experts (MoE) architecture that integrates fast and slow thinking, with a total parameter count of 295B (21B active parameters) and a 256K context window, and shows improvements in complex reasoning, instruction following, in-context learning, code generation, agent capabilities, and inference performance. The firm has given Tencent an "Overweight" rating with a target price of HKD 650.
The firm noted that Hy3 Preview achieved significant improvements in code generation and agent capabilities, with a SWE-Bench Verified score of 74.4% (versus 53% for Hy2), roughly on par with GLM 4.7's 73.8%. Tencent's comprehensive optimization delivers a leading cost-performance ratio, with inference efficiency improved by up to 40% and overall costs significantly reduced. The model has been integrated into Tencent products such as Yuanbao, CodeBuddy, WorkBuddy, QQ, and ima, and offers competitive API pricing and customized token plans on Tencent Cloud, with a personal monthly fee as low as RMB 28.
Morgan Stanley pointed out that Tencent plans to collect real-world feedback from developers and users through open-sourcing to refine the official release, and will subsequently expand training and reinforcement learning, deepen product-ecosystem integration, and pursue differentiated model capabilities to enhance real-world applicability.
