
Meta has announced new protective measures for Instagram accounts managed by adults that showcase children, a move aimed at bolstering safety on the platform. As part of this initiative, accounts that primarily feature child content will automatically be placed under the app's most stringent messaging settings, blocking unwanted interactions. Additionally, the 'Hidden Words' feature will be activated to filter out offensive comments. These changes particularly target accounts run by parents or talent managers who frequently post images and videos of children.

While Meta acknowledges that the majority of these accounts are used innocently, there are instances of misuse, such as inappropriate comments or requests for explicit content through direct messages. In a blog post, the company stated, "Today we're announcing steps to help prevent this abuse." To further enhance safety, Meta will work to prevent suspicious adults from discovering accounts that feature children. This includes not recommending potentially harmful users to these accounts and making it harder for them to find such accounts through Instagram's search functionality.

The announcement comes amid heightened scrutiny of social media's mental health implications, raised by the U.S. Surgeon General and various state authorities, some of whom have mandated parental consent for social media access. The new guidelines will have significant implications for family vloggers and creators managing accounts for young influencers, who have faced criticism over the dangers of sharing children's lives online. A recent investigation by The New York Times found that many parents may be aware of, or complicit in, their child's potential exploitation on social media. Meta also indicated that accounts affected by the new message settings will receive notifications prompting them to review their privacy options.
The company has previously removed nearly 135,000 Instagram accounts that were involved in sexualizing children's profiles, along with an additional 500,000 related accounts across Instagram and Facebook.

In conjunction with these changes, Meta is rolling out new safety features for teen accounts as well. Teens will gain access to safety tips encouraging them to verify profiles and be cautious about what they share. An account's join date will now be visible at the top of new chat conversations, and a new feature lets users block and report a suspicious account in a single step. Meta aims to give teens more context about the accounts they interact with, helping them recognize potential scams. The company noted that in June alone, teens blocked accounts a million times and reported another million after receiving safety notifications. An update on the nudity protection filter revealed that 99% of users, including teens, have kept the feature active, and that over 40% of blurred images in DMs went unopened last month.