...but where's the bigger one?

#18
by jerryblunder - opened

too smol google love open ai google give more parameters

Hi @jerryblunder,

We hear you. When model sizes are often measured in the hundreds of billions of parameters, a 31B model can seem relatively small at first glance.

The design philosophy behind the Gemma family is intentional. Rather than focusing purely on scaling parameter count, we prioritise efficiency and capability per parameter. With Gemma 4, this includes both dense (31B) and MoE (26B) architectures, designed to deliver strong performance across reasoning, multimodal tasks, and long-context use cases.

A key goal is accessibility. Gemma models are built so developers can run, fine-tune, and deploy them on their own infrastructure, ranging from workstations to smaller-scale GPU setups, without requiring large data center resources. This is paired with a permissive Apache 2.0 license, enabling broad experimentation and customisation.
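To make that concrete, here is a minimal sketch of what running a Gemma-class checkpoint locally typically looks like with the Hugging Face transformers library. The model id below is a placeholder for illustration, not a confirmed release name:

```python
# Minimal local-inference sketch using Hugging Face transformers.
# "google/gemma-4-31b-it" is a hypothetical model id used for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-4-31b-it"  # hypothetical; substitute the real checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # roughly halves memory vs. float32
    device_map="auto",           # shard layers across available GPUs automatically
)

prompt = "Explain why parameter count alone is a poor proxy for model quality."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same pattern extends to fine-tuning with libraries such as PEFT, which is a large part of why open weights matter for customisation.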

For applications that require significantly larger models, we offer the Gemini family. These models operate at a different scale and are delivered via API on Google infrastructure, rather than as open weights. So while Gemma models may appear smaller on paper, the tradeoff is deliberate: practical deployability, openness, and strong performance for their size, rather than maximum parameter count alone.
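For contrast, the hosted pattern looks something like this with the google-generativeai package; the model name here is illustrative and depends on what your API key has access to:

```python
# Minimal hosted-inference sketch using the google-generativeai package.
# The model name is illustrative; availability varies by account and region.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # key from Google AI Studio

model = genai.GenerativeModel("gemini-1.5-pro")  # illustrative model name
response = model.generate_content("Summarise the tradeoffs of open-weight vs. hosted models.")
print(response.text)
```

Here the weights stay on Google infrastructure, so you trade local control for a scale you could not run yourself.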

Thank you!
