Google releases pint-size Gemma open AI model

In recent years, major technology firms have focused on building ever-larger AI models, using vast arrays of costly GPUs to deliver generative AI services through the cloud. But smaller AI models matter too. Google has introduced a miniature version of its Gemma open model, designed to run on local devices. The newly launched Gemma 3 270M is engineered to be easy to fine-tune while still delivering solid performance despite its small size.

Earlier this year, Google released its first Gemma 3 models, which ranged from 1 billion to 27 billion parameters. In generative AI, parameters are the learned variables that determine how a model maps inputs to output tokens, and performance typically improves as the parameter count grows. With only 270 million parameters, Gemma 3 270M can run on devices such as smartphones, or even entirely within a web browser. Running a model locally offers clear advantages, including improved privacy and reduced latency, and Gemma 3 270M was designed with these applications in mind.

In tests on a Pixel 9 Pro, the model handled 25 conversations on the Tensor G4 chip while consuming just 0.75 percent of the device's battery, making it the most power-efficient model in the Gemma lineup. Developers should not expect the performance of multi-billion-parameter models, but Gemma 3 270M holds considerable potential for narrow, well-defined tasks. On the IFEval benchmark, which measures a model's ability to follow instructions, Google showed that its latest model punches above its weight: Gemma 3 270M scored 51.2 percent, outperforming other lightweight models with more parameters.
Although it does not match the capabilities of models with over a billion parameters, such as Llama 3.2, it comes surprisingly close given its limited parameter count.
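The parameter count gives a feel for why a 270-million-parameter model fits comfortably on a phone. As a rough back-of-the-envelope sketch (the actual on-device footprint depends on quantization, runtime, and cache overhead, none of which the article specifies):

```python
# Rough memory-footprint estimate for a 270M-parameter model at common
# weight precisions. Real deployments add runtime and KV-cache overhead.

PARAMS = 270_000_000  # parameter count reported for Gemma 3 270M

BYTES_PER_PARAM = {
    "fp32": 4.0,   # full precision
    "fp16": 2.0,   # half precision, common for on-device inference
    "int8": 1.0,   # 8-bit quantized weights
    "int4": 0.5,   # 4-bit quantized weights
}

def weight_size_mb(params: int, bytes_per_param: float) -> float:
    """Size of the weight tensors alone, in megabytes (1 MB = 1e6 bytes)."""
    return params * bytes_per_param / 1e6

for precision, width in BYTES_PER_PARAM.items():
    print(f"{precision}: ~{weight_size_mb(PARAMS, width):.0f} MB")
```

Even at full fp16 precision the weights come to roughly 540 MB, and 4-bit quantization brings them down to about 135 MB, well within the RAM budget of a modern smartphone or a browser tab.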

Source: Ars Technica

Published on: Aug 14, 2025, 20:05
