How to Stop Anthropic From Training Its AI Models on Your Conversations

You should never assume that what you say to a chatbot is private. When you interact with one of these tools, the company behind it likely stores your session data, often using it to train its underlying AI models. Unless you explicitly opt out of this practice, you have probably helped train quite a few models in your time using AI.
Anthropic, the company behind Claude, has taken a different approach. The company's privacy policy has stated that Anthropic does not collect user inputs or outputs to train Claude, unless you report the material to the company or opt in to training. While that doesn't mean Anthropic refrains from collecting data altogether, you could rest easy knowing your conversations weren't shaping future versions of Claude.
That's changing now. As reported by The Verge, Anthropic will start training its AI models, including Claude, on user data. This means new chats or coding sessions you have with Claude will be sent back to Anthropic to tune and improve the performance of its models.
This won't affect past sessions if you leave them alone. However, if you re-engage with a past chat or coding session after the change, Anthropic will scrape any new data generated from that session for its training.
This won't happen without your permission, at least not right away. Anthropic is giving users until September 28 to make a decision. New users will see the option when they set up their accounts, while existing users will see a pop-up when they log in. That said, it's reasonable to assume some of us will click through these menus and pop-ups too quickly, and accidentally agree to data collection we never meant to allow.
To Anthropic's credit, the company says it tries to filter out users' sensitive data via "a combination of automated tools and processes," and that it does not sell your data to third parties. Still, I certainly don't want my AI conversations shaping future models. If you feel the same way, here's how to opt out.
How to opt out of Anthropic's AI training
If you're an existing Claude user, you'll see a pop-up warning the next time you log in to your account. This pop-up, titled "Consumer Terms and Policies Update," explains the new rules and, by default, opts you in to training. To opt out, make sure the toggle next to "You can help improve Claude" is turned off. (The toggle will be set to the left with an (X), rather than to the right with a check.) Hit "Accept" to lock in your choice.
If you already clicked through this pop-up and aren't sure whether you opted in to this data collection, you can still opt out. To check, open Claude and head to Settings > Privacy > Privacy Settings, then make sure the toggle for "Help improve Claude" is turned off. Note that this setting won't undo any data Anthropic has collected since you opted in.



