Google’s AI efforts are synonymous with Gemini, which has now become an integral part of its most popular products across its Workspace software and hardware. However, the company has also been releasing open-source models under the Gemma label for more than a year now.
Today, Google revealed its third-generation open-source AI models, with some impressive claims in tow. The Gemma 3 models come in four variants – 1 billion, 4 billion, 12 billion, and 27 billion parameters – and are designed to run on devices ranging from smartphones to powerful workstations.
Ready for mobile devices


Google says Gemma 3 is the world’s best single-accelerator model, which means it can run on a single GPU or TPU instead of requiring a whole cluster. Theoretically, that means a Gemma 3 AI model could run natively on the Pixel smartphone’s Tensor processing unit, just as it runs the Gemini Nano model locally on phones.
The biggest advantage of Gemma 3 over the Gemini family of AI models is that, since it is open-source, developers can package and ship it according to their unique requirements inside mobile apps and desktop software. Another crucial benefit is that Gemma supports more than 140 languages, 35 of which are available as part of a pretrained package.
And like the latest models in the Gemini 2.0 series, Gemma 3 is also capable of understanding text, images, and videos. In a nutshell, it is multimodal. On the performance front, Gemma 3 is claimed to surpass other popular open-source models such as DeepSeek V3, OpenAI’s o3-mini reasoning model, and Meta’s Llama-405B variant.
Versatile and ready to deploy
Coming to input capacity, Gemma 3 offers a context window worth 128,000 tokens. That is enough to cover a full 200-page book fed as input. For comparison, the context window of Google’s Gemini 2.0 Flash Lite model stands at a million tokens. In the context of AI models, an average English-language word is roughly equivalent to 1.3 tokens.
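The 200-page claim can be sanity-checked with the article’s 1.3 tokens-per-word figure. This is a rough back-of-the-envelope sketch; the 400 words-per-page estimate is an assumption on our part, not a number from the article.

```python
# Back-of-the-envelope check of the 200-page claim, using the
# ~1.3 tokens-per-word figure cited in the article.
WORDS_PER_PAGE = 400          # assumed typical for a printed book
TOKENS_PER_WORD = 1.3         # figure cited in the article
CONTEXT_WINDOW = 128_000      # Gemma 3 context window, in tokens

def estimated_tokens(pages: int) -> int:
    """Estimate the token count of a book with the given page count."""
    return round(pages * WORDS_PER_PAGE * TOKENS_PER_WORD)

book_tokens = estimated_tokens(200)
print(book_tokens)                      # roughly 104,000 tokens
print(book_tokens <= CONTEXT_WINDOW)    # comfortably inside the window
```

Under these assumptions a 200-page book lands at about 104,000 tokens, leaving some headroom below the 128,000-token limit.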


Gemma 3 also supports function calling and structured output, which essentially means it can interact with external datasets and perform tasks like an automated agent. The closest analogy would be Gemini and how it can seamlessly get work done across platforms such as Gmail or Docs.
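The flow above can be sketched in a few lines: the model emits a machine-readable description of a function to call, and the host application parses it and dispatches to real code. The JSON shape and the `search_mail` tool name here are illustrative assumptions, not Gemma 3’s actual wire format.

```python
import json

def search_mail(query: str) -> str:
    """Stand-in for a real integration, e.g. a mail-search API."""
    return f"3 messages matching '{query}'"

# Registry mapping tool names the model may emit to host functions.
TOOLS = {"search_mail": search_mail}

# Hypothetical structured-output reply from the model:
model_reply = '{"tool": "search_mail", "arguments": {"query": "invoices"}}'

# The app parses the structured output and dispatches the call.
call = json.loads(model_reply)
result = TOOLS[call["tool"]](**call["arguments"])
print(result)  # -> 3 messages matching 'invoices'
```

The point of structured output is exactly this hand-off: because the model’s reply is constrained to valid JSON, the application can act on it programmatically instead of scraping free-form text.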
Google’s latest open-source models can be deployed either locally or via the company’s cloud-based platforms such as the Vertex AI suite. The Gemma 3 AI models are now available via Google AI Studio, as well as third-party platforms such as Hugging Face, Ollama, and Kaggle.


Gemma 3 is part of an industry trend in which companies develop large language models (Gemini, in Google’s case) while simultaneously pushing small language models (SLMs). Microsoft is also following a similar strategy with its open-source Phi series of small language models.
Small language models such as Gemma and Phi are extremely resource-efficient, which makes them an ideal choice for running on devices like smartphones. Moreover, since they offer lower latency, they are particularly well suited to mobile applications.