
Meta is currently piloting a controversial Facebook feature that has sparked new concerns about user privacy. The feature scans users' photo libraries, including images and videos they have never shared, to enhance the company's AI offerings.

First reported by TechCrunch, the feature appears as a prompt when users attempt to upload a Story on Facebook. The pop-up encourages them to activate "cloud processing," a function that grants Meta continuous access to their phone's gallery. In exchange, the company promises personalized content, such as themed photo collages and AI-generated filters for occasions like birthdays and graduations.

While the functionality is pitched as an enhancement to the user experience, agreeing to the terms allows Meta to analyze users' entire photo and video collections. That includes unpublished media, which the company can use to refine its AI capabilities by examining metadata, facial features, and objects present in the images.

Privacy advocates are particularly concerned about the lack of transparency surrounding the feature. Meta has made no formal announcement of its introduction, instead publishing a discreet help page for Android and iOS users. This quiet rollout means many people may consent to extensive data access without fully grasping the consequences. Once the feature is activated, uploads occur silently in the background, turning private, unshared media into potential material for training Meta's AI systems.

Although Meta says the feature is optional and can be deactivated at any time, significant questions linger. The company asserts that these images are not currently used to train its generative AI models, but it has left the door open for future use. There is also little clarity about the rights Meta retains over content uploaded via cloud processing. The company has previously acknowledged using public content from Facebook and Instagram for AI training, yet what counts as "public" content, and the criteria for including individuals in these datasets, remain vague. The uncertainty is compounded by new AI terms of service that took effect on June 23, 2024, which do not specify whether unpublished photos collected through cloud processing are exempt from AI training.

Users can opt out by disabling cloud processing in their settings. If they do, Meta says it will begin deleting any unpublished images from its cloud servers within 30 days. Still, this shift toward automatic media scanning reflects a broader trend of tech companies collecting sensitive user data under the premise of helpful AI tools. In countries like India, where smartphones often hold sensitive personal information, such access may pose serious risks, especially since the feature is not adequately explained in local languages.

As Meta tests the feature in the US and Canada, a potential global launch could reignite debates over digital consent, algorithmic transparency, and the ethical limits of artificial intelligence.