
Meta is experimenting with a contentious new feature on Facebook that scans users' photo libraries, raising significant concerns about user privacy and data management. First reported by TechCrunch, the feature shows certain users a pop-up when they try to upload a Story, encouraging them to activate 'cloud processing,' which grants Meta permission to automatically and regularly upload images from their device to its cloud servers. In exchange, users are promised personalized content, including photo collages and themed recaps, along with AI-generated filters for special occasions such as birthdays and graduations.

While the feature appears to offer creative tools and convenience, users who click 'Allow' are giving Meta permission to scan all photos and videos on their devices, including those that have never been shared publicly. The AI can analyze metadata, such as date and location, as well as facial features and objects in the images, to improve its suggestions and capabilities.

Privacy advocates are alarmed not only by the extent of this access but also by the lack of transparency surrounding the feature. Meta has made no formal announcement about its rollout, aside from a discreet help page for Android and iOS users. The feature's sudden emergence and ambiguous description mean that many users might consent without fully grasping the consequences. Once activated, the background uploads continue seamlessly, turning private, unpublished media into potential training material for Meta's AI systems.

Although the company asserts that the feature is optional and can be disabled at any time, concerns persist. While Meta says these images are not currently used to train its generative AI models, it has not ruled out future use. Nor has it clearly articulated what rights it retains over user content uploaded through cloud processing.
Historically, Meta has acknowledged scraping public content from its platforms to train AI models, but the definitions of 'public' content and who qualifies as an 'adult' in these datasets remain murky. The ambiguity is compounded by updates to Meta's AI terms of service, effective June 23, 2024, which do not clarify whether unpublished photos obtained through cloud processing are exempt from AI training.

Users can opt out by disabling the cloud processing feature in their settings, and Meta says that if they do, any unpublished images will be deleted from its cloud servers within 30 days. Nonetheless, this trend of automatic media scanning raises broader questions about the growing collection of private user data by major tech firms under the pretense of offering helpful AI features. In regions like India, where personal devices often hold sensitive data such as identification documents and family photos, the implications of this access could be serious, especially if the feature's explanation is not clearly communicated in local languages.

Currently being tested in the U.S. and Canada, Meta's feature could spark renewed discussion about digital consent, algorithmic transparency, and the ethical limits of artificial intelligence.