Google Releases Gemma 4 Open-Source Large Model Family, Covering Four Parameter Sizes from 2B to 31B
2026-04-03 14:13

en.Wedoany.com Reported - Google has released four open-source large models at once. On April 2 local time, Google officially launched the Gemma 4 model family, a complete product line spanning 2 billion to 31 billion parameters.

According to Google's official announcement, Gemma 4 includes four general-purpose models: an efficient 2-billion-parameter version (E2B), an efficient 4-billion-parameter version (E4B), a 26-billion-parameter Mixture of Experts (MoE) model, and a 31-billion-parameter dense model (31B). The defining trait of the MoE architecture is that only a subset of the parameters is activated for each inference step, so the actual compute cost of the 26B-total-parameter model is far lower than that of a dense model of the same size. The 31B dense version is the largest and most capable model in the series by parameter count.
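To make the MoE cost argument concrete, the back-of-the-envelope arithmetic can be sketched as below. Note that Google has not published Gemma 4's expert count, routing scheme, or parameter split, so every number in this snippet is a hypothetical example chosen only to land near 26B total parameters:

```python
# Illustrative arithmetic only: the expert count, per-expert size, and
# shared-parameter size below are assumptions, not Gemma 4's real config.

def moe_active_params(shared_b, expert_b, num_experts, experts_per_token):
    """Return (total, active) parameters in billions for a simple MoE.

    shared_b:          parameters used by every token (attention, embeddings)
    expert_b:          parameters per expert
    num_experts:       total number of experts in the model
    experts_per_token: experts routed to each token (top-k routing)
    """
    total = shared_b + num_experts * expert_b
    active = shared_b + experts_per_token * expert_b
    return total, active

# A hypothetical configuration that sums to 26B total parameters:
total, active = moe_active_params(shared_b=2.0, expert_b=3.0,
                                  num_experts=8, experts_per_token=2)
print(f"total={total}B, active={active}B")  # total=26.0B, active=8.0B
```

Under these assumed numbers, each token touches only 8B of the 26B parameters, which is why an MoE model can match the capacity of a much larger dense model at a fraction of the inference cost.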

The Gemma series is Google's lightweight large model family aimed at the open-source community. Earlier releases, Gemma 1 and Gemma 2, can run on consumer-grade hardware. The newly added E2B and E4B focus on efficient inference, making them suitable for edge devices and mobile scenarios, while the 26B MoE and 31B dense versions target cloud deployment and complex tasks.

In the open-source large model landscape, the main competitors are Meta's Llama 3 series (8B, 70B, 405B) and Microsoft's Phi series, which focuses on small, efficient models. Gemma 4 fills the gap in Google's product matrix within the 2B to 31B parameter range, giving developers more options.

This article is compiled by Wedoany. All AI citations must indicate the source as "Wedoany". If there is any infringement or other issues, please notify us promptly, and we will modify or delete it accordingly. Email: news@wedoany.com