In an age where artificial intelligence has seamlessly integrated into our daily routines, the importance of safeguarding personal information cannot be overstated. Harsh Varshney, a 31-year-old AI security professional at Google, emphasizes the need for vigilance while using AI tools. With a background in software engineering focused on privacy and now a member of the Chrome AI security team, he understands the risks associated with AI technologies. Varshney points out that the convenience of AI comes with significant privacy concerns, and he shares four key practices aimed at protecting personal data while interacting with AI systems.

One critical habit is to refrain from sharing sensitive information such as credit card numbers, Social Security details, or personal medical history with AI chatbots. This caution stems from the potential for data leaks, where information provided by one user could inadvertently influence responses given to others. He likens sharing information with public AI tools to sending a postcard that anyone can read: if it's not something you'd want the public to see, don't share it.

Users should also be aware of the differences between public AI models and enterprise-grade solutions. While public models may use shared data for future training, enterprise models typically do not, offering a more secure environment for discussing sensitive work-related topics. Varshney advises against discussing company projects with public chatbots, citing instances where employees have accidentally disclosed confidential information. Instead, he opts for enterprise models, even for minor tasks, to ensure his conversations remain private.

In addition to using enterprise-grade tools, Varshney recommends regularly deleting conversation histories from both public and enterprise AI models. This practice mitigates the risk of data breaches and unauthorized access to personal information. He recounts a surprising experience with an enterprise chatbot that retained his address from a previous interaction, underscoring the importance of vigilance in data management.

For casual inquiries, Varshney suggests using temporary chat features, akin to a browser's incognito mode, which prevent conversations from being stored and used for model training.

He also encourages users to choose reputable AI tools with clear privacy guidelines, such as Google's offerings, OpenAI's ChatGPT, and Anthropic's Claude, and to review their privacy policies for insights into data usage and training practices. Ultimately, while AI offers remarkable capabilities, it is imperative to approach its use with caution to protect our identities and personal data.