Tencent Hunyuan Hy3 Preview Open Source Launch, Defining a New Path for Chinese Open-Source Foundation Models
2026-04-25 17:44

Pre-training, reinforcement learning, and infrastructure were all completely overhauled, and the journey from the official start of training to launch took less than three months. On April 23rd, Tencent officially released and fully open-sourced the Hunyuan Hy3 preview large model. Built on a Mixture of Experts (MoE) architecture with 295B total parameters and only 21B activated parameters, it achieved a leap in coding capability on the SWE-Bench benchmark, jumping from 53% to 74.4%.

As the parameter scales of large models climb toward trillions, AI is shifting from passive invocation to active planning. On April 23rd, Tencent officially released and fully open-sourced the Hunyuan Hy3 preview large model. Instead of chasing extreme parameter counts, it focuses on optimizing "medium-scale models" to raise per-parameter intelligence density.

Core Architecture: A MoE Architecture Merging Fast and Slow Thinking

Hy3 preview has 295 billion total parameters, of which only 21 billion (about 7%) are activated during inference under its MoE architecture. It supports up to 256K context length, achieves an inference speed of 23 tokens/second, and integrates a "Fast and Slow Thinking" mechanism, enabling both rapid responses and deep, complex reasoning.
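To make the activation ratio concrete, the sketch below illustrates the top-k gating idea behind MoE inference: each token is routed to only a handful of experts, so most parameters stay idle. This is purely illustrative plain Python; the expert count, top-k value, and function names are hypothetical and not taken from Hunyuan's implementation.

```python
import math
import random

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def route_top_k(gate_scores, k):
    """Pick the k highest-scoring experts for one token (top-k MoE routing)
    and return (expert_index, mixing_weight) pairs."""
    ranked = sorted(range(len(gate_scores)), key=lambda i: gate_scores[i], reverse=True)
    chosen = ranked[:k]
    weights = softmax([gate_scores[i] for i in chosen])
    return list(zip(chosen, weights))

# Hypothetical configuration: 64 experts, 8 active per token.
random.seed(0)
NUM_EXPERTS, TOP_K = 64, 8
scores = [random.gauss(0, 1) for _ in range(NUM_EXPERTS)]
active = route_top_k(scores, TOP_K)
print(f"active experts per token: {len(active)} of {NUM_EXPERTS}")

# Activation ratio implied by the parameter counts reported above:
print(f"activated share: {21 / 295:.1%}")  # roughly 7%, matching the article
```

The second print shows where the "about 7%" figure comes from: 21B activated out of 295B total parameters.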

R&D Efficiency: Comprehensive Infrastructure Overhaul Completed in Three Months

Training officially started at the end of January 2026, and the full process was completed in three months, internally defined as the beginning of a transition from "reading ten thousand books" to "traveling ten thousand miles." The team completely rebuilt pre-training, reinforcement learning, and infrastructure, with Tencent's Chief AI Scientist Yao Shunyu overseeing the project throughout.

Agent and Coding Capability: SWE-Bench Leap from 53% to 74.4%

Programming ability saw the most significant improvement: on the SWE-Bench benchmark, the score jumped from 53% (Hunyuan 2.0) to 74.4%, a relative gain of over 40%. Competitive results were also achieved on benchmarks such as Terminal-Bench 2.0, BrowseComp, and WideSearch, with agent capabilities excelling in evaluations such as ClawEval.
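Note that the "over 40%" figure is the relative gain, not percentage points; the arithmetic behind it, using only the scores reported above:

```python
# Relative improvement on SWE-Bench from the reported figures.
old_score, new_score = 53.0, 74.4   # percent, from the article
absolute_gain = new_score - old_score            # in percentage points
relative_gain = absolute_gain / old_score * 100  # in percent
print(f"{absolute_gain:.1f} points absolute, {relative_gain:.1f}% relative")
```

This works out to 21.4 percentage points absolute, or roughly a 40.4% relative improvement.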

Cost-Effectiveness and "Practical Utility"

Costs have dropped significantly: API input is as low as 1.2 RMB per million tokens, with a cache hit price of 0.4 RMB per million tokens, and output is as low as 4 RMB per million tokens. The minimum personal subscription fee is 28 RMB per month.
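Using the prices quoted above, a back-of-the-envelope cost estimate for a single API call can be sketched as follows. The helper function and the example token counts are hypothetical; only the per-million-token prices come from the article.

```python
def estimate_cost_rmb(input_tokens, output_tokens, cached_tokens=0):
    """Estimate one request's API cost from the published per-million-token
    prices (RMB): 1.2 for fresh input, 0.4 for cache-hit input, 4.0 for output.
    Illustrative helper only; actual billing rules may differ."""
    PRICE_INPUT, PRICE_CACHED, PRICE_OUTPUT = 1.2, 0.4, 4.0
    fresh = max(input_tokens - cached_tokens, 0)
    cost = (fresh * PRICE_INPUT
            + cached_tokens * PRICE_CACHED
            + output_tokens * PRICE_OUTPUT) / 1_000_000
    return round(cost, 6)

# Example: 100K-token prompt, half served from cache, 8K-token reply.
print(estimate_cost_rmb(100_000, 8_000, cached_tokens=50_000))
```

Under these prices, the example request costs about 0.112 RMB, showing how the cache-hit rate materially affects the bill for long prompts.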

Furthermore, the model proactively avoids public leaderboards that are prone to "benchmark gaming." Instead, it evaluates "real-world combat effectiveness" through self-created questions, the latest examinations, human evaluations, and public product trials, achieving excellent results in advanced science olympiads, the International Mathematical Olympiad (IMO), and the mathematics doctoral qualifying exam at Tsinghua University's Qiuzhen College.

Ecosystem Integration: Full Open Source and Multi-Product Rollout

Model weights and code are open-sourced on GitHub, Hugging Face, ModelScope, and GitCode. The model has been deployed in products including Yuanbao, ima, CodeBuddy, WorkBuddy, QQ, QQ Browser, Tencent Docs, Tencent LeXiang, Tencent Maps, and Tencent e-Sign, with rollouts to WeChat Official Accounts, Tencent News, Peacekeeper Elite, and Tencent Customer Service underway. It supports open-source agent products such as OpenClaw, OpenCode, and KiloCode, and is available on TokenHub, Tencent Cloud's large model service platform.

Defining a New Path for Chinese Open-Source Foundation Models

Moderate parameter size, high per-parameter intelligence density, and affordable usage: competition in the domestic large model industry is shifting from a one-dimensional contest of technical indicators to the comprehensive, synergistic development of models, products, engineering, and the ecosystem.

Building a Positive Innovation Ecosystem

After DeepSeek and Kimi released new versions within the same week in late April, Hunyuan adds to the momentum by open-sourcing its model, fostering a "catch-up" dynamic in which teams build on and optimize one another's underlying technologies, sustaining a healthy cycle of innovation.

Powering Full-Scenario Intelligent Applications

In interactions with general users, the model shows improvements in intent understanding, long-form text processing, response stability, and human-like expression. XiaoQ, the official AI assistant for QQ, can search for information, set reminders, and solve photo-based homework problems. Agent use cases are its core area of differentiation, where it provides systematic capability: even a single agent task requires deep synergy across reasoning, long-text handling, instruction following, dialogue, coding, and tool use.

This Hy3 preview is just an initial milestone. Yao Shunyu stated that open-sourcing the model and gathering real-world feedback are key pathways to improving the practical utility of the final version. The Hunyuan team is currently scaling up pre-training and reinforcement learning to push the intelligence ceiling, while also exploring distinctive model capabilities through deep hardware-software co-design with its products.

This bulletin is compiled and reposted from information across the global Internet and from strategic partners, with the aim of facilitating communication for readers. If there is any infringement or other issue, please inform us promptly and we will make modifications or deletions accordingly. Unauthorized reproduction of this article is strictly prohibited. Email: news@wedoany.com