Common Sense Media, a nonprofit dedicated to promoting the safety of children in the digital space, has released a critical assessment of Google's Gemini AI products. The evaluation, published on Friday, highlights several areas where the platform could improve its safety protocols for children and teens.

The organization acknowledged that Gemini clearly tells young users it is an AI and not a human friend, an important distinction for preventing potentially harmful delusional thinking. Even so, it raised alarms about the platform's overall design. Common Sense pointed out that the 'Under 13' and 'Teen Experience' tiers of Gemini appear to be repurposed adult versions with only superficial safety features added, and it insists that AI tools aimed at younger audiences should be built with child safety as a primary focus from the start.

The analysis also found that Gemini can expose children to inappropriate material, including content about sex, drugs, and mental health issues that could be unsettling for young users. Such exposure raises significant parental concerns, especially in light of recent incidents linking AI tools to tragic outcomes among teenagers. OpenAI is currently facing legal action related to a teenager's suicide after using ChatGPT, underscoring the urgent need for responsible AI development for youth. Furthermore, with reports suggesting that Apple may integrate Gemini into its upcoming AI-enhanced Siri, the potential risk to teenagers could grow unless appropriate safety measures are put in place.

Because Gemini does not adequately address the distinct needs of younger users, Common Sense classified it as 'High Risk' despite the presence of some safety filters. "Gemini gets some basics right, but it stumbles on the details," said Robbie Torney, Senior Director of AI Programs at Common Sense Media. "An effective AI platform for children should cater to their developmental stages rather than adopting a uniform approach for all ages. For AI to be truly safe and effective for kids, it must be designed with their specific needs in mind."

In response to the assessment, Google defended Gemini, emphasizing ongoing improvements to its safety features. The company said it has protocols in place to protect users under 18 from harmful interactions and that it collaborates with external experts to strengthen these safeguards. Google also acknowledged that some of Gemini's responses had fallen short of expectations and confirmed that it is actively working to fix those issues.

The safety assessment is part of Common Sense Media's broader effort to evaluate AI services; previous reviews covered platforms from OpenAI and others. These evaluations categorize the risks associated with various AI products, and Gemini is now positioned among those deemed 'High Risk' for young users.