As artificial intelligence chatbots such as OpenAI's ChatGPT become increasingly popular for personal conversations and support, legal experts are raising alarms about the risks involved. Users often confide their deepest secrets to these AI tools, but unlike conversations with therapists or legal advisors, these exchanges carry no legal privilege, which can expose users to significant risk. Juan Perla, a partner at Curtis, Mallet-Prevost, Colt & Mosle LLP and an authority on AI-related legal matters, cautions that individuals should be mindful of the terms of service and data retention policies of AI platforms. Sensitive information shared in these chats could be subject to legal scrutiny during lawsuits or government inquiries. “People are just pouring their hearts out in these chats, and I think they need to be cautious,” Perla said.

OpenAI CEO Sam Altman highlighted the same concern on a recent podcast, noting that while users feel comfortable discussing personal issues with ChatGPT, the absence of confidentiality laws poses a major risk. “If you go talk to ChatGPT about your most sensitive stuff and then there’s like a lawsuit or whatever, we could be required to produce that,” he warned, underscoring the lack of protective measures currently in place.

Perla added that conversations touching on legal issues, such as workplace disputes, divorces, or custody battles, could be disclosed in litigation. He advised against sharing anything one would typically reserve for a doctor or lawyer; even attempts to anonymize conversations do not eliminate the risk entirely. The safest approach, he said, is to refrain from discussing sensitive subjects with AI chatbots altogether.
“If you're talking about your personal intimate affairs with a chatbot that have nothing to do with the commission of a crime, then the likelihood that these chats are going to be public or turned over to a court is pretty low,” he noted, adding that users should carefully weigh the implications of their digital interactions.

James Gatto, a partner at Sheppard Mullin and co-leader of the firm's AI industry team, echoed these sentiments and stressed the importance of understanding how different AI tools handle user data. Some paid versions offer enhanced privacy features; free versions often do not. “If I was going to use a tool for anything sensitive, I’d want a tool that deleted the information,” Gatto advised, encouraging users to review a tool's terms of service thoroughly before sharing anything sensitive.

Both experts agree that users must weigh the risks of turning to AI chatbots for confidential discussions, and that understanding the potential legal and personal ramifications is crucial. Ultimately, users should ask themselves whether they are comfortable with the possibility of their private conversations being accessed by unintended parties.