
In a significant update to its data policy, Anthropic has informed users of its Claude platform that they must decide by September 28 whether their conversations may be included in future AI training. Previously, the company refrained from using consumer chat data for this purpose. Under the new policy, user interactions, including conversations and coding sessions, will be stored for up to five years unless users actively opt out.

This change affects all consumer tiers of Claude—Free, Pro, and Max—including those using Claude Code. Business accounts such as Claude Gov, Claude for Work, and Claude for Education are not affected. Historically, consumer conversations and prompts were deleted after 30 days unless they were flagged for policy violations, in which case they could be retained for up to two years.

Anthropic has characterized the change as a choice for users, emphasizing that those who consent to data sharing will contribute to improved model safety and effectiveness. The company argues that this participation will help refine capabilities such as coding, analysis, and reasoning. Industry analysts suggest the shift may be motivated by Anthropic's need for high-quality conversational data to remain competitive against major players like OpenAI and Google: access to extensive Claude interactions could provide critical insights for system improvements.

The policy update aligns with broader trends in the AI landscape, where companies face increased scrutiny over data usage and retention. OpenAI, for instance, is currently contesting a court order mandating the indefinite storage of all ChatGPT interactions, even those previously deleted, stemming from a lawsuit brought by The New York Times and other publishers.

Concerns have also been raised about how Anthropic is rolling out the change.
Users are presented with a pop-up labeled "Updates to Consumer Terms and Policies" featuring a prominent "Accept" button, while a smaller toggle granting training permissions defaults to "On." Critics, including privacy experts, warn that this design may lead users to consent to data sharing without fully understanding the implications. The U.S. Federal Trade Commission has previously indicated that companies could face repercussions for using complex language or hidden hyperlinks that obscure the true nature of policy changes. Whether the FTC will take action in Anthropic's case remains uncertain.