Or is it a storm in a teacup? Either way, some people are hysterical for no good reason.

As reported by The Register, Anthropic is changing the default data retention for Claude chats to five years. I received an e-mail to that effect, but I deleted it.

When I opened the Android app, I was presented with this screen:

Of course, the switch was set to ON, but I slid it to OFF, then I pressed “Accept.” And, sure enough, the option can still be reverted:

The panic-driven readers of The Reg were unable to understand a simple screen:

There is no option to not opt in. The two options are “Accept” and “Not now”.

If you click “Not now”, you get automatically opted in on September 28th. You’re only given the option to opt in today, or wait and be auto-opted in later.

Since I’m lucky enough to have access to a few different AI services, dumping Claude was a fairly easy decision.

The facts are as described in Updates to Consumer Terms and Privacy Policy, in the FAQ section:

  • We will train new models using data from Free, Pro, and Max accounts when this setting is on (including when you use Claude Code from these accounts). …
  • We are also expanding our data retention period to five years if you allow us to use your data for model improvement, with this setting only applying to new or resumed chats and coding sessions. If you don’t choose this option, you will continue with our existing 30-day data retention period.

These updates do not apply to services under our Commercial Terms, including:

  • Claude for Work, which includes our Team and Enterprise plans
  • Our API, Amazon Bedrock, or Google Cloud’s Vertex AI
  • Claude Gov and Claude for Education

Therefore:

  1. By clicking “Accept,” you only accept the new policies, which take effect on September 28 at the latest. (If you accept them now, they go into effect immediately.)
  2. If you do not allow your chats and coding sessions to be used to improve Claude (the slider is set to OFF), then the data retention period does not extend to five years! The 30-day retention period still applies.
  3. If, however, you agree to have your data used for training, then it will also be stored for five years.
  4. These policies apply only to the Individual plans (Free, Pro, Max), not to Team, Enterprise, Gov, or Education; API usage is also excluded. The toy sketch below summarizes the logic.
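
For clarity, here is a toy sketch of how I read those rules. This is my own modeling of the FAQ quoted above, not anything from Anthropic; the plan names and the 5 × 365 arithmetic are mine:

```python
# Toy model of the new retention rules, as I read the FAQ quoted above.
# My own sketch, not Anthropic code; plan names are illustrative.

def retention_days(plan: str, training_opt_in: bool) -> int:
    """Retention period for a new or resumed chat/coding session."""
    if plan in {"Team", "Enterprise", "Gov", "Education", "API"}:
        raise ValueError("Commercial Terms: these updates do not apply")
    # Individual plans: Free, Pro, Max
    return 5 * 365 if training_opt_in else 30

assert retention_days("Pro", training_opt_in=False) == 30    # slider OFF
assert retention_days("Free", training_opt_in=True) == 1825  # slider ON
```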

Unfortunately, there are too many functionally illiterate people out there, and they’re using chatbots anyway.

🤖

Now, of course, one has to trust a company (any company, not just AI ones!) when they say they “protect” or “don’t store” your data. GDPR or not (or CPRA in California), this is all hogwash. It’s literally impossible to prove that a company (including your bank or your government) has not sold your data to third parties. When you receive a spam e-mail or a telemarketing call, how could you possibly know how exactly they obtained your data? You just cannot.

It’s also naive to believe that, once your data is shared with someone, it’s 100% secure and confidential. OK, Proton claims this to be the case with Lumo, and they have open-sourced their apps (including the web app), so you can verify that your data is encrypted in such a way that only you can see it decrypted. But the only way to be sure is to run an LLM locally, on your machine, and to block it from accessing the Internet. How many people and companies are doing that?
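
For what it’s worth, the local-only setup is a few lines these days. A minimal sketch using llama-cpp-python (my choice for illustration; Ollama or anything similar would do just as well); the GGUF file name is a placeholder, and once the model is on disk, inference needs no Internet at all:

```python
# Minimal offline LLM sketch with llama-cpp-python (pip install llama-cpp-python).
# Download a GGUF model once, then firewall the process or pull the plug:
# inference runs entirely on your machine.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/some-7b-instruct-q4_k_m.gguf",  # placeholder path
    n_ctx=4096,    # context window
    verbose=False,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the GDPR in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```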

On the other hand, regardless of their official policies, what prevents an AI company from reporting you to the relevant authorities if you ask a chatbot, “How do I build an atomic bomb in my basement?” or “How do I kill someone and dispose of the body without getting caught?” And that could happen even if the chatbot refuses to give you an answer.

Or maybe you’re afraid that the entire planet will find out that you had a lengthy chat about the size of your penis and your erectile dysfunction. I’m pretty sure that’s not particularly marketable information. Oh, and about your heart condition and the life insurance policy you just took out… Well, did you tell the chatbot who your insurer is?

🤖

This being said, I’m using Claude less and less lately. It’s not getting any better, and I even found Sonnet 4 to be slightly worse than Sonnet 3.7, which wasn’t necessarily better than Sonnet 3.5. Oh, I just noticed that I can have three free interactions per day with either Opus 4.1 or Opus 4!

But maybe Claude’s forte is still coding, in the non-free mode (hint: API credits). I should try Claude Code (documentation), preferably with Claude Code Router. But why should I feed the hype and pay for it while I’m at it?
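
If I ever do try it, the plumbing is trivial, and API usage falls under the Commercial Terms anyway, so the new consumer policy doesn’t apply to it. A minimal sketch with the official anthropic Python SDK; the model ID below is just whatever Sonnet build happens to be current, so check Anthropic’s model list:

```python
# Sketch: Claude over the paid API (pip install anthropic).
# API traffic is covered by the Commercial Terms, not the new consumer policy.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

msg = client.messages.create(
    model="claude-sonnet-4-20250514",  # use whichever model ID is current
    max_tokens=512,
    messages=[{"role": "user", "content": "Refactor this loop into a list comprehension: ..."}],
)
print(msg.content[0].text)
```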

Of course, there’s also Qwen Code, using Qwen3-Coder, which has a decent free tier: 1,000 free API calls per day worldwide via OpenRouter. The Chinese know how to lure you into their ecosystem by allowing you to save money!
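
And since OpenRouter exposes an OpenAI-compatible endpoint, trying Qwen3-Coder also costs only a few lines. A sketch with the openai Python SDK; the model slug is an assumption, so check OpenRouter’s catalog for the current free-tier identifier:

```python
# Sketch: Qwen3-Coder via OpenRouter's OpenAI-compatible API (pip install openai).
# The model slug is an assumption; verify it on openrouter.ai.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="sk-or-...",  # placeholder OpenRouter key
)

resp = client.chat.completions.create(
    model="qwen/qwen3-coder:free",  # assumed free-tier slug
    messages=[{"role": "user", "content": "Write a Python function that reverses a linked list."}],
)
print(resp.choices[0].message.content)
```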

Food for thought:

But I’m so sick of this “let AI write your code”!

🤖

Meanwhile, for everyday questions, as described in this comment, I’ve changed my habits:

  • Claude for quick questions, with or without web search.
  • Kimi for quick questions, usually when web search is required, but also for more complex topics (with follow-ups).
  • Copilot, especially in a browser, where not only can I explicitly select the model, but Deep Research was also made available to me:

  • ChatGPT when I expect the talk to be long, although as a free user I can only hope it will stick to GPT-5 for 8–10 questions in a 5-hour timeframe (I know when I reach the limit: “Because an image attachment was used, this chat requires GPT-5, but you’ve hit your GPT-5 usage limit. Upgrade or wait until …”). No, GPT-5 is not that bad.

🤖

In the long run, I wouldn’t bet my money on Gemini, Grok, Mistral, Perplexity, or Lumo, nor would I trust the countless shady “services” that use one or more AI models only to sell you API calls within an IDE. 

Anyway, if you’re a company, just run a specialized model locally or in your own cloud.