
As the trend of creating 3D figurines powered by Google’s Gemini technology gains traction on social media, cybersecurity experts are sounding alarms about the risks of uploading personal images to AI platforms. While the quirky, retro-inspired figurines have captured the attention of millions, they also raise significant concerns about digital privacy, especially where facial imagery is involved.

A recent incident highlighted these concerns when an Instagram user, known as @jhalakbhawani, shared her unsettling experience with the AI saree portrait trend. She explained, "I noticed something strange; there is a mole on my left hand in the generated image, which I actually have in real life. The original image I uploaded did not have a mole. How did Gemini know? It’s very scary and creepy. Please be careful." The post raised questions about how AI systems process and store user images.

Cybersecurity expert Saikat Datta, CEO of DeepStrat, emphasized the importance of managing identity when uploading facial images. He noted that platforms might retain these images for various purposes, including model improvement and analytics. Even anonymized data poses risks, as breaches could result in personal images being exposed online. "In India, for example, the compliance surrounding KYC and facial recognition could lead to serious crimes if data were misused," he explained.

Echoing these sentiments, Dr. Anil Rachamalla, a cybersecurity advocate and founder of the End Now Foundation, cautioned about the ethical implications of AI trends. He remarked, "Trends like Nano Banana AI image generation are reshaping how we perceive beauty. Once users see themselves through AI’s lens, it can distort their sense of reality, leading to misrepresentation and bias. Privacy remains a critical issue, especially with apps like MyFace that have repurposed images without consent. The risks associated with deepfakes further complicate the landscape, making it essential for users to stay informed."

The trend has evolved beyond 3D figurines to encompass vintage-style portraits depicting individuals in traditional attire against nostalgic backdrops. While visually striking, these images underscore the delicate balance between creative expression and data security. With the ease of generating images through AI, millions of faces are uploaded daily, raising concerns about potential misuse if these platforms are compromised.

Google's AI policies stress user responsibility: the company states that while users can generate original content, the service may produce similar content for others, and users must comply with applicable laws when using the generated material. Experts urge caution, as uploading facial images without a thorough understanding of the associated risks could lead to identity theft, fraud, or misuse of sensitive information. The episode illustrates that even seemingly innocuous experiments with AI can harbor significant dangers.