The Most Comfortable Confession Booth Ever Built
There’s a feature that all successful AI chatbots share, and it’s not intelligence. It’s the feeling they create: that you’re talking to something that isn’t judging you.
You’d tell your chatbot things you wouldn’t tell your doctor, your spouse, or your best friend. Why? Because it doesn’t react badly. It doesn’t gossip. It doesn’t judge. It’s available at 3 AM. And it doesn’t remember. Or at least, it used to feel like it didn’t.
Now it does. Literally.
As of 2026, memory features are default-on in ChatGPT, Claude, and most major AI assistants. Every conversation you have is being woven into a growing profile of who you are. What you worry about. What you’re working on. What your health situation is. What your finances look like.
And here’s the problem that Stanford privacy researcher Jennifer King identified in plain terms: “The ultimate problem is that you just can’t control where the information goes, and it could leak out in ways that you just don’t anticipate.”
This is the privacy crisis hiding in plain sight: not a breach, not a hack, just you, handing the data over voluntarily, one conversation at a time.
The Numbers Are Stark
- 52% of US adults now use large language models (LLMs) such as ChatGPT (Elon University, 2025 survey)
- 43% of workers have shared sensitive information with AI chatbots — including financial data and client data (ZDNet, 2025)
- Chatbots are being used as substitutes for therapists, financial advisors, and medical consultants — categories with the most sensitive personal information possible
- Most users have never read the privacy policy of the AI tools they use daily
The conversation that starts as “help me write an email” often turns into something far more revealing. You mention your boss’s name, the company conflict, your health issue that’s making you stressed, the mortgage payment you’re worried about. Each detail, taken alone, seems harmless. Together, they build a profile.
The Five Ways Your Data Leaves the Room
1. Training Data
Most AI providers explicitly state in their terms of service that conversations may be used to train future models. When you share something with a chatbot, you may be contributing to the dataset that trains the next version, meaning your words, your context, and your private situation become part of a model that millions of people interact with.
ChatGPT allows you to opt out of training data use. Most people never change the default.
2. Data Breaches
AI chatbot providers are extraordinarily high-value targets for hackers. They hold conversation logs for millions of users discussing topics ranging from business strategy to personal health crises. A breach doesn’t just expose usernames and passwords — it can expose your entire history of AI-assisted thinking.
OpenAI has already experienced security incidents exposing user data. As the industry grows, the attack surface grows with it.
3. Third-Party Sharing
Behind many AI products is a web of API partnerships, data licensing arrangements, and business relationships. When you use a chatbot embedded in a productivity app, a customer service widget, or a healthcare portal — where does that data go? The answer is often: multiple places, with varying privacy policies, few of which you’ve read.
4. Government Subpoenas
Conversation logs stored by a US-based company are subject to law enforcement requests. If you’ve ever used an AI chatbot to work through a legal problem, discuss a business dispute, or process a difficult personal situation, that record exists on a server, potentially producible in court.
Unlike conversations with a lawyer or therapist, chatbot conversations carry no privilege.
5. Memory Features: The Growing Profile Problem
This is the newest and most significant risk. When ChatGPT or Claude remembers you across sessions, a profile is actively being maintained. Not just your stated preferences — your patterns, your concerns, your relationships, your recurring problems.
Over months of daily use, this profile becomes detailed enough to be genuinely intimate. It knows what you’re worried about this week. It knows your kids’ names, your job situation, the health scare from last year, your financial stress, your relationship status.
That profile lives on someone else’s servers, subject to all four risks above.
Why Chatbots Make You Overshare By Design
This isn’t accidental. The engagement mechanics of modern AI assistants are engineered for disclosure.
Non-judgmental responses lower your guard. You say something you might be embarrassed to tell a human — the chatbot responds helpfully, warmly. The shame circuit doesn’t fire. So you keep going.
Continuity and memory create the feeling of a relationship. Once a chatbot “knows” you, you naturally continue sharing more — it feels strange to withhold context from something that already knows so much.
Availability removes the friction of discretion. 3 AM anxiety spirals, health scares, relationship crises — you can process them with the AI immediately, in the moment, without the cooling-off period that might normally limit what you’d put in writing.
Sycophancy, a tendency documented by Stanford researchers, means the AI validates your thinking rather than challenging it. This builds trust, sometimes unwarranted trust. The more you trust something, the more you tell it.
The result: a confidant that is designed to make you feel safe confiding in it, that logs everything, and that you have very little control over once the words leave your fingers.
The Chatbot Diet: What You Should Never Share
Privacy experts aren’t saying you should stop using AI chatbots. They’re saying you should use them like a stranger on a bus, not like a therapist.
Never share — in any chatbot:
- Your full name combined with location and identifying details (any one alone is fine; together they’re a fingerprint)
- Medical diagnoses or symptoms tied to your identity
- Financial account details, balances, account numbers
- Your children’s names, schools, or routines
- Work documents, client data, or confidential company information
- Passwords or credentials, even “to fix” a login problem
- Legal matters you have pending or upcoming
- Your home address
Share with caution:
- Relationship problems (strip out names and identifying details)
- Career situations (genericize: “my manager” not “Sarah at [Company]”)
- Health questions (ask generally — “what causes these symptoms” vs “I have these symptoms and I’m 45 and have diabetes”)
The goal isn’t paranoia — it’s proportionality. Use the chatbot like a useful tool, not like a confessor.
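
One way to make that proportionality automatic is a small pre-send scrubber: a script that swaps names and obvious identifiers for generic placeholders before anything reaches a chatbot. Here is a minimal Python sketch; the name, employer, and number patterns are hypothetical examples, not a complete anonymizer, so always review the output yourself.

```python
# Minimal pre-send scrubber: swap names and obvious identifiers for
# generic placeholders before pasting text into a chatbot.
# The patterns below are illustrative examples, not a complete anonymizer.
import re

REDACTIONS = {
    r"\bSarah\b": "my manager",            # hypothetical name -> generic role
    r"\bAcme Corp\b": "my company",        # hypothetical employer
    r"\b\d{3}-\d{2}-\d{4}\b": "[SSN]",     # US Social Security number pattern
    r"\b\d{13,16}\b": "[card number]",     # long digit runs that look card-like
}

def scrub(text: str) -> str:
    """Apply each redaction pattern in turn and return the cleaned text."""
    for pattern, placeholder in REDACTIONS.items():
        text = re.sub(pattern, placeholder, text)
    return text

print(scrub("Sarah at Acme Corp saw my card 4111111111111111 declined."))
# -> my manager at my company saw my card [card number] declined.
```

A list like this only catches what you tell it to catch; treat it as a habit-former, not a guarantee.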
How to Clean Up What’s Already There
ChatGPT
- Click your profile icon → Settings
- Under Personalization, click Manage Memory — review and delete specific memories
- To delete all memories: Settings → Personalization → Clear Memory
- To opt out of training data: Settings → Data Controls → Improve the model for everyone → Toggle OFF
- To delete conversation history: Settings → Data Controls → Delete all chats
Claude (Anthropic)
- Go to claude.ai → Settings
- Under Privacy, review memory and projects
- Delete individual projects or clear conversation history
- To opt out of training: Claude’s privacy settings → Do not train on my conversations (check current policy at anthropic.com/privacy)
Google Gemini
- In the Gemini app or at gemini.google.com → Settings
- Gemini Apps Activity — review and delete your history
- Linked Google Account → myactivity.google.com → Gemini Apps → Delete
Privacy-First Alternatives
If you need AI capabilities but want to keep your data off cloud servers entirely:
Ollama — Run powerful open-source models (Llama 3, Mistral, Qwen) entirely locally on your own machine. Zero data leaves your device.
LM Studio — Desktop app for running local models with a ChatGPT-like interface.
Jan.ai — Local AI assistant, fully offline.
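
To show how low the barrier has become, here is a minimal sketch that queries a local Ollama model through the HTTP API Ollama serves on localhost by default. It assumes Ollama is installed and that you have pulled a model first (for example, `ollama pull llama3`); the prompt is just an illustration.

```python
# Minimal sketch: ask a locally running Ollama model a question.
# Assumes Ollama is installed and `ollama pull llama3` has been run.
# The request goes to localhost only; nothing leaves your machine.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",  # Ollama's default local endpoint
    json={
        "model": "llama3",                  # any model you've pulled locally
        "prompt": "Help me draft a polite follow-up email.",
        "stream": False,                    # return one JSON reply, not a stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])              # the model's full reply
```

The same pattern works for any model Ollama hosts; swap in whatever you have pulled, and the conversation never touches a cloud server.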
For web-based options with stronger privacy commitments, look for providers that explicitly offer:
- No training data use from paid tiers
- End-to-end encrypted conversation storage
- Verifiable data deletion
The Trust Asymmetry
Here’s the core issue: these systems are designed to create intimacy while the data is handled industrially.
Your chatbot conversation feels personal. But it’s being stored in a database alongside hundreds of millions of other conversations, processed by data pipelines, potentially reviewed by human moderators, subject to legal process, and, in the event of a breach, exposed to people who actively want to exploit what you’ve revealed.
The intimacy is real. The privacy isn’t.
That doesn’t make AI chatbots bad tools. It makes them tools that deserve the same thoughtfulness you’d apply to any powerful technology that touches your private life. You wouldn’t hand your diary to a stranger and trust them to keep it safe. You shouldn’t hand your most vulnerable moments to a cloud service and assume the same.
Use chatbots. They’re genuinely useful. Just use them with eyes open.
Your Action List
- Today: Go into your chatbot settings and review what it remembers about you. Delete anything sensitive.
- Today: Turn off training data sharing if you haven’t already.
- This week: Establish your chatbot diet — decide what categories of information you won’t share.
- If you need privacy: Try Ollama for local AI that never leaves your machine.
- Going forward: Before you share something with an AI, ask: would I be comfortable if this appeared in a news story about a data breach?
If the answer is no, find another way to work through it.
More privacy guides, audits, and tools at MyPrivacy.blog. No trackers. No ad networks. Just privacy content.