Qwen App Launches Gray-Scale Test of Alibaba’s Video Model HappyHorse, Supporting 15-Second Multi-View Narrative and 1080P Super-Resolution Output
2026-04-28 09:44

en.Wedoany.com Reported - HappyHorse 1.0, the video generation model from China’s Alibaba Group, officially began gray-scale testing via the Qwen App on April 27. After updating the Qwen App to the latest version, users can open the creation panel by tapping the “HappyHorse” button at the bottom of the homepage and try AI video generation for free.

Developed by Alibaba’s ATH Innovation Business Division, HappyHorse 1.0 is a professional AI video model with 15 billion parameters and a unified 40-layer self-attention Transformer architecture. Built on a natively multi-modal architecture, the model generates audio and video jointly, merging text, image, video, and audio tokens into a single sequence through unified sequence modeling. Cross-modal alignment happens automatically during denoising, eliminating the traditional multi-step workflow of generating video first, dubbing audio separately, and then aligning lip-sync. A single forward inference directly outputs a finished piece with synchronized sound.
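The unified-sequence idea described above can be illustrated with a toy sketch. This is purely conceptual: the special-token IDs and helper function below are hypothetical placeholders, not HappyHorse’s actual implementation. The point is only that each modality is tokenized, tagged, and concatenated into one flat sequence, so a single Transformer can attend across all modalities at once instead of running separate video and audio stages.

```python
# Conceptual sketch of unified multi-modal sequence modeling.
# All special-token IDs here are hypothetical placeholders chosen for
# illustration; they are not HappyHorse's real vocabulary.

# Hypothetical special tokens marking sequence start and modality boundaries.
BOS, TEXT, IMAGE, VIDEO, AUDIO = 0, 1, 2, 3, 4

def build_unified_sequence(text_tokens, image_tokens, video_tokens, audio_tokens):
    """Merge per-modality token lists into one flat sequence.

    A single self-attention Transformer operating on this sequence sees
    every modality jointly, which is how audio-video co-generation avoids
    the separate generate-then-dub-then-align pipeline.
    """
    seq = [BOS]
    for tag, tokens in ((TEXT, text_tokens), (IMAGE, image_tokens),
                        (VIDEO, video_tokens), (AUDIO, audio_tokens)):
        seq.append(tag)     # modality marker
        seq.extend(tokens)  # modality payload
    return seq

# Toy usage: token IDs are arbitrary illustrative integers.
seq = build_unified_sequence([101, 102], [201], [301, 302, 303], [401, 402])
print(seq)  # → [0, 1, 101, 102, 2, 201, 3, 301, 302, 303, 4, 401, 402]
```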

In terms of core capabilities, HappyHorse 1.0 specializes in 15-second multi-view video storytelling. From simple text descriptions, the model autonomously generates multi-camera videos, complete with appropriate camera movements and transitions, and shows distinct strengths in rhythm control, depth-of-field management, and medium and close-up shot expression. The maximum output resolution is 1080P, with optional aspect ratios of 16:9, 9:16, or 1:1, and it performs strongly in visual texture and lighting, smooth camera transitions, and realistic character portrayal. For audio-video synchronization, the model natively supports lip-sync in seven languages: Mandarin Chinese, Cantonese, English, French, Korean, Japanese, and German, accurately aligning dialogue with lip movements, intonation, and tone. Environmental sound effects, such as rain and footsteps, are generated simultaneously.

This gray-scale test is no small-scale trial. Concurrently, the HappyHorse official website and Alibaba Cloud’s Bailian platform opened for registration, enabling professional creators and enterprise clients worldwide to use the model directly via the web. Third-party AI video creation platforms, including Juri Lub and Libtv, announced formal integration on the same day, while enterprise-level Agent platforms such as Alibaba Wukong, MuleRun, and JVS Claw completed initial adaptation. This simultaneous rollout across official channels and third-party ecosystems signifies that the model’s API interfaces are open to external developers from day one, turning model capabilities into quantifiable, tradable cloud products.
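The article says the API is open via Bailian and the official site but does not document its schema, so the sketch below is hypothetical: the placeholder URL, the `happyhorse-1.0` model string, and every field name are assumptions for illustration only. It simply assembles a request payload reflecting the capabilities listed above (1080P maximum, three aspect ratios, 15-second clips) without sending anything.

```python
import json

# Hypothetical text-to-video request payload. The endpoint URL, model
# identifier, and all field names below are assumptions for illustration;
# consult the official Bailian documentation for the real schema.
API_URL = "https://example.invalid/api/v1/video/generation"  # placeholder

def build_request(prompt, resolution="1080p", aspect_ratio="16:9", seconds=15):
    """Assemble a JSON payload matching the capabilities the article lists:
    up to 1080P output, 16:9 / 9:16 / 1:1 aspect ratios, 15-second clips."""
    if aspect_ratio not in ("16:9", "9:16", "1:1"):
        raise ValueError(f"unsupported aspect ratio: {aspect_ratio}")
    return json.dumps({
        "model": "happyhorse-1.0",  # assumed identifier
        "input": {"prompt": prompt},
        "parameters": {
            "resolution": resolution,
            "aspect_ratio": aspect_ratio,
            "duration_seconds": seconds,
        },
    })

payload = build_request("A rainy Hong Kong street in TVB drama style")
```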

Regarding pricing, on the HappyHorse website, generating 720P video costs 0.9 RMB per second and 1080P video costs 1.6 RMB per second. New users receive a free trial quota. For subscribers on professional monthly plans, discounted rates lower the cost to as low as 0.44 RMB per second for 720P and 0.78 RMB per second for 1080P.
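For a sense of scale, the quoted per-second rates imply the clip costs below. The helper is just arithmetic over the published prices (kept in integer fen, 1 RMB = 100 fen, to avoid float rounding), not an official billing tool; a full 15-second 1080P clip at standard pricing works out to 24 RMB.

```python
# Per-second rates from the article, in fen (1 RMB = 100 fen) so the
# arithmetic stays exact. "discount" is the professional monthly plan.
RATES_FEN = {
    ("720p", "standard"): 90,    # 0.9 RMB/s
    ("1080p", "standard"): 160,  # 1.6 RMB/s
    ("720p", "discount"): 44,    # 0.44 RMB/s
    ("1080p", "discount"): 78,   # 0.78 RMB/s
}

def clip_cost_rmb(seconds, resolution="1080p", plan="standard"):
    """Return the cost in RMB of one generated clip."""
    return seconds * RATES_FEN[(resolution, plan)] / 100

print(clip_cost_rmb(15))                   # → 24.0 (full-length 1080P clip)
print(clip_cost_rmb(15, "720p"))           # → 13.5
print(clip_cost_rmb(15, plan="discount"))  # → 11.7
```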

HappyHorse’s debut was notably impactful. On April 7, the model anonymously topped the Video Arena blind-test leaderboard of the AI evaluation platform Artificial Analysis, with Elo scores of 1383 for text-to-video and 1413 for image-to-video, leading the runner-up by a significant margin and sparking speculation across the global AI community. It was not until April 10 that Alibaba’s ATH Business Division formally claimed the entry, revealing HappyHorse’s identity. The official start of gray-scale testing indicates the model has entered the critical phase of large-scale deployment.

The Qwen App also announced that, starting April 28, it would launch the “Wild Imagination Challenge,” inviting global creators to produce content with HappyHorse. An interactive “Test and Video” feature will follow soon: users take a short quiz to discover their “signature character” in a universal short-drama universe, upload a photo, and have HappyHorse generate a personalized 10-second short-drama clip featuring that character. Since testing began, numerous creators have used the model to produce short videos in styles such as TVB Hong Kong drama, CCTV Three Kingdoms, and classic film aesthetics, publishing them in the Qwen App’s AI creation community.

The ATH Business Division, formally established on March 16, 2026, is Alibaba Group’s core AI department directly led by CEO Wu Yongming. It comprises Tongyi Lab, the MaaS business line, and the Qwen Business Unit, forming Alibaba’s AI network. As the first video generation model introduced after the division’s formation, HappyHorse demonstrates a clear open-ecosystem orientation in its commercialization path: covering casual users via the Qwen App, serving professional creators and enterprise clients through the official website and Bailian platform, and integrating with third-party developers and Agent platforms via API interfaces. These three pathways advance concurrently to construct a complete business loop from model capabilities to application deployment.

This article is compiled by Wedoany. All AI citations must credit “Wedoany” as the source. If there is any infringement or other issue, please notify us promptly and we will modify or delete the content accordingly. Email: news@wedoany.com
