You opened an incognito window. You felt safer. You asked Perplexity AI about your health symptoms, your financial situation, maybe some personal matters you wouldn’t want anyone reading over your shoulder.
That feeling of safety was an illusion.
A proposed class-action lawsuit filed in California on April 1, 2026 — first reported by Bloomberg’s Robert Burnson — alleges that Perplexity AI embedded “undetectable” trackers that routed sensitive user data to Meta and Google. The allegations claim this happened even when users were browsing in incognito or private mode.
If the allegations are accurate, every query you typed into Perplexity’s search bar — your medical symptoms, your legal questions, your financial worries — may have been transmitted to the largest advertising surveillance networks on the internet. The same networks many privacy-conscious users switched to Perplexity to escape.
Let’s break down what’s alleged, why this matters technically, and what actually works to protect your privacy when using AI tools.
What the Lawsuit Alleges
The complaint names Perplexity AI as a defendant and alleges violations of the California Consumer Privacy Act (CCPA) and potentially California’s Electronic Communications Privacy Act (CalECPA).
The core allegations:
1. Undetectable tracking technology — Perplexity allegedly embedded tracker code in its platform that was not disclosed to users and was designed to be difficult to detect through normal means.
2. Data routed to Meta and Google — The tracked data — including search queries and associated metadata — was allegedly transmitted to Meta (Facebook’s parent company) and Google’s advertising infrastructure.
3. Bypassed incognito/private browsing — The tracking allegedly continued even when users accessed Perplexity in private browsing modes, including Chrome’s Incognito, Firefox’s Private Window, and Safari’s Private Browsing.
4. Sensitive query data — Unlike a general web search for “best pizza near me,” AI search queries often contain deeply sensitive information. Users ask AI assistants about medical symptoms, financial problems, legal situations, relationship issues, and personal dilemmas. This data carries far greater privacy weight than conventional search queries.
The lawsuit seeks class-action status to represent California users affected by the alleged tracking, along with damages and injunctive relief under the CCPA.
Why Incognito Mode Was Never Designed for This
Here’s the fundamental misunderstanding that makes this lawsuit’s allegations so important: incognito mode was never designed to protect your privacy from the website you’re visiting.
Incognito mode does exactly one thing: it prevents your browser from keeping a local record of your session. Specifically, it:
- Doesn't save your browsing history to your local machine
- Discards cookies, site data, and form inputs when the session ends
- Keeps the session's activity separate from your other browser profiles and accounts
What incognito mode does NOT do:
- Does NOT hide your activity from websites you visit
- Does NOT prevent websites from setting or reading cookies during your session
- Does NOT block server-side tracking or analytics
- Does NOT prevent your ISP from seeing your traffic (without a VPN)
- Does NOT protect against browser fingerprinting
- Does NOT anonymize you in any meaningful way from the site’s perspective
Google literally tells you this in the incognito warning screen: “Your activity might still be visible to websites you visit.”
The mental model most people have — “incognito = private” — is wrong in the way that matters most.
How Server-Side Tracking Bypasses Private Browsing
To understand why incognito mode can’t protect you from what Perplexity allegedly did, you need to understand the difference between client-side and server-side tracking.
Client-Side Tracking
Traditional web tracking happens in your browser:
- Cookies stored on your machine
- JavaScript that runs in your browser and reads local storage
- Pixel tags that load from third-party servers
- Browser fingerprinting scripts
Incognito mode and browser privacy extensions (uBlock Origin, Privacy Badger) can block much of this, because the tracking code runs in your browser where you have some control.
Server-Side Tracking
Server-side tracking happens on Perplexity’s servers — entirely outside your browser’s control:
When you type a query into Perplexity, your query goes to Perplexity's servers. At that point, Perplexity can do whatever it wants with that data: store it, analyze it, or share it via server-to-server API calls to Meta's Conversions API, the Google Analytics Measurement Protocol, or any other data recipient.
Your browser never sees this happening. Your ad blocker can’t block it. Your incognito mode can’t prevent it. The tracking happens entirely server-side, after your request leaves your browser.
This is why the allegations describe the trackers as “undetectable” — they’re undetectable from the user’s end because they don’t manifest as anything your browser can observe or block.
The Conversions API Problem
Meta introduced its Conversions API (formerly Server-Side API) specifically to allow businesses to send user event data directly from their servers to Meta — bypassing browser-based ad blockers entirely. Google has equivalent server-side tracking infrastructure.
The dark irony: these server-side APIs were developed to work around user privacy tools like ad blockers and incognito mode. They’re designed specifically to ensure advertisers can track users even when those users are trying to opt out of tracking.
If Perplexity was using these APIs (as the lawsuit alleges), turning on incognito mode would be irrelevant. Your AI search query would travel from your keyboard → your browser → Perplexity’s servers → Meta’s servers. Your browser was never in the loop for the transmission to Meta.
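To make the mechanism concrete, here's a hedged sketch of what a server-to-server event call can look like. The endpoint shape follows Meta's public Conversions API documentation; the pixel ID, token, and payload are invented for illustration, and nothing here is claimed to be Perplexity's actual code:
# Illustrative only: a server-to-server event posted to Meta's
# Conversions API. This request originates from the company's servers,
# so no browser extension, ad blocker, or incognito mode can see it.
curl -X POST \
  "https://graph.facebook.com/v19.0/PIXEL_ID/events?access_token=TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "data": [{
      "event_name": "Search",
      "event_time": 1743465600,
      "action_source": "website",
      "user_data": { "client_ip_address": "203.0.113.7" },
      "custom_data": { "search_string": "chest pain left side, 47, on metoprolol" }
    }]
  }'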
Why AI Search Queries Are Especially Sensitive
What makes this lawsuit interesting under the CCPA isn't just the cookie tracking: it's the nature of AI search queries specifically.
When you Google something, your query is typically a few keywords. “Symptoms chest pain.” “Bankruptcy lawyer.” “Divorce process.”
When you ask an AI assistant, you provide context. Full sentences. Personal details. You might type: “I’ve been having chest pain on my left side for three days, I’m 47 years old, I have high blood pressure and take metoprolol, what should I do?” Or: “My business partner has been embezzling money for about six months based on the invoices I found, can I fire them without telling the other investors first?”
AI search queries are qualitatively different from keyword searches. They contain personal medical histories, financial situations, legal strategies, and intimate personal details — things the user would never put in a Google search bar but feel comfortable telling an AI that presents itself as a private assistant.
If that data is being routed to Meta and Google, it’s not keyword metadata being fed into an ad profile. It’s personally identifiable sensitive life details being handed to surveillance networks.
The CCPA provides specific protections for sensitive personal information including: “Personal information collected and analyzed concerning a consumer’s health” and financial information. If the alleged tracking captured queries containing this kind of information, the CCPA exposure is significant.
The Perplexity Privacy Betrayal
This lawsuit represents a particular kind of privacy betrayal — one that’s different from getting hacked or having your data stolen.
Perplexity positioned itself as an alternative to Google Search — an AI-powered search tool that gives you direct answers instead of ad-laden results. Many privacy-aware users chose it specifically because they were trying to reduce their exposure to Google’s surveillance infrastructure.
The allegation that Perplexity was routing data to Google and Meta anyway — secretly — is a betrayal of the trust users placed in it as an alternative.
This pattern isn’t unique to Perplexity. The entire “privacy-friendly alternative” market is polluted with products that market themselves as privacy-respecting while quietly monetizing user data in the background. DuckDuckGo faced criticism for Microsoft tracking exceptions. Various “private” VPNs turned out to maintain logs. “No-ads” services have hidden analytics partnerships.
The lesson: trust is not a privacy control. A company’s privacy-friendly branding is marketing, not a technical guarantee. Only technical controls — encryption, local processing, verifiable open-source code — provide real privacy assurance.
What CCPA Says and What You Could Get
The California Consumer Privacy Act gives California residents specific rights regarding their personal data:
Your rights under CCPA:
- Right to Know — You can ask what personal information a business has collected about you, where it came from, what it’s used for, and who it’s been shared with
- Right to Delete — You can request deletion of your personal information (with some exceptions)
- Right to Opt Out — You have the right to opt out of the “sale” or “sharing” of your personal information to third parties
- Right to Non-Discrimination — Businesses can’t penalize you for exercising your CCPA rights
For the lawsuit specifically:
The CCPA’s private right of action is limited but meaningful: California residents can sue for $100-$750 per consumer per incident (or actual damages if greater) for unauthorized disclosure of specific categories of sensitive personal information, if the business failed to implement reasonable security measures.
For a class action covering many affected users over a period of time, this could amount to substantial damages: at $100–$750 per consumer per incident, a class of even one million users implies statutory exposure between $100 million and $750 million. More importantly, it could mean an injunction requiring Perplexity to change its data practices.
Under CalECPA (California’s Electronic Communications Privacy Act), the stakes are potentially higher — it governs the interception and disclosure of electronic communications.
📋 For a deep-dive into the CCPA compliance and business liability implications of this lawsuit, see our sister analysis at ComplianceHub.wiki: CCPA in the Age of AI: What the Perplexity Lawsuit Means for Privacy Compliance
The Broader Pattern: Which AI Tools Are Actually Private?
Perplexity isn’t the only AI search or assistant tool raising privacy questions. Before we get to practical protection steps, it’s worth understanding the privacy landscape of major AI tools.
The uncomfortable reality about cloud AI assistants:
Every major cloud AI assistant — ChatGPT, Claude, Gemini, Copilot, Perplexity — processes your queries on their servers. All of them have privacy policies that describe data use for model training, safety monitoring, and service improvement. Some of them have third-party analytics or advertising relationships. None of them run your queries locally on your device.
Opting out of training data use (which most platforms offer) doesn’t necessarily opt you out of analytics, logging for safety review, or third-party tracking integrations like what Perplexity is alleged to have used.
Red flags to look for in AI tool privacy policies:
- Vague language about “trusted partners” or “third-party services”
- Analytics integrations not disclosed in the privacy policy
- No mention of server-side tracking
- “We may share data with affiliates” without specifying which affiliates
- Training-data opt-outs buried in account settings rather than enabled by default
Practical Steps to Actually Protect Your AI Privacy
Here’s what actually works — ranging from easy-to-implement to more technical approaches.
Level 1: Basic Hygiene (Do This Now)
1. Read the privacy policy before you share sensitive information
Not the whole thing — focus on the “sharing” and “third parties” sections. Look for:
- Server-side analytics tools (Segment, Amplitude, Mixpanel)
- Advertising platforms (Meta Pixel, Google Analytics)
- Data broker relationships
If you can’t find clear, specific answers, that’s a red flag.
2. Use AI query sanitization
Before typing anything into a cloud AI assistant (a minimal scrubbing sketch follows this list):
- Remove your name, location, and identifying details
- Describe your situation in the third person (“someone I know has…”)
- Break sensitive multi-part questions into separate, less-identifying fragments
- Don’t use the same AI account for sensitive and non-sensitive queries
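As a rough illustration of the first point, here's a tiny shell filter that scrubs the most obvious identifiers before a question goes anywhere. The patterns are illustrative, not exhaustive; names and locations still need manual removal:
# Minimal sketch: redact emails and phone numbers from a query
# before it goes to any cloud AI. Patterns are illustrative only.
sanitize() {
  sed -E \
    -e 's/[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+/[EMAIL]/g' \
    -e 's/[0-9][0-9 ().-]{7,}[0-9]/[PHONE]/g'
}
echo "I'm Jane (jane.doe@example.com, 415-555-0142) and I have chest pain" | sanitize
# -> I'm Jane ([EMAIL], [PHONE]) and I have chest pain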
3. Check your AI account data exports
Most platforms let you download your data. Do this periodically and review what they have. It’s often eye-opening.
4. Use separate accounts for sensitive queries
A throwaway email + a different browser profile = your AI activity is less linkable to your identity. Not perfect, but better than a single account.
Level 2: Browser and Network Controls
5. Use a DNS-based tracker blocker
Services like NextDNS, AdGuard DNS, or a self-hosted Pi-hole can block known tracking domains at the DNS level, before a connection is ever made. Configure them to block Meta's and Google's tracking domains specifically.
# NextDNS allows blocking specific categories and domains
# Block: Facebook Trackers, Google Trackers under "Privacy" settings
# This won't block server-side tracking but catches client-side analytics
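For reference, a starter denylist might include the well-known Meta and Google analytics hosts below. Exact entries are a judgment call, and aggressive blocking can break things like Facebook logins:
# Example denylist entries (catches client-side trackers only):
#   connect.facebook.net       # Meta Pixel loader
#   www.google-analytics.com   # Google Analytics
#   www.googletagmanager.com   # Google Tag Manager
#   stats.g.doubleclick.net    # Google ads measurement
# Remember: none of this touches server-to-server tracking, which
# never passes through your DNS resolver.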
6. Use Brave Browser
Brave’s built-in shields block most client-side tracking. It also has a built-in VPN option. While it won’t stop server-side tracking, it significantly reduces your tracking surface.
7. Use a VPN — but choose carefully
A VPN hides your queries from your ISP, prevents IP-based tracking, and makes it harder to build a location-linked profile. But it doesn’t prevent server-side tracking.
Reputable privacy-focused VPNs:
- Mullvad VPN — accepts cash, requires no email or personal details (you get a randomly generated account number), no-logs policy verified by independent audits
- ProtonVPN — open-source, Swiss-based, strong privacy track record
- IVPN — small team, strong privacy stance, accepts Monero
Avoid: free VPNs, VPNs with unclear ownership, VPNs based in jurisdictions with mandatory data retention laws.
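Once connected, verify the tunnel is actually carrying your traffic. A quick check, using Mullvad's own connection-check endpoint as an example (substitute your provider's equivalent):
# Confirm your traffic exits through the VPN, not your home IP
curl https://am.i.mullvad.net/connected
# Or simply compare your apparent public IP before and after connecting
curl https://ipinfo.io/ip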
8. Tor Browser for maximum anonymity
Tor routes your traffic through multiple relays, making IP-based tracking and profiling extremely difficult. It’s slower than a VPN, but for truly sensitive queries, it’s the gold standard for anonymizing your network connection.
The Tor Browser also blocks most JavaScript-based tracking by default.
# Access Perplexity (or any web AI) through Tor:
# Download Tor Browser from torproject.org
# Navigate to perplexity.ai normally
# Your IP appears as a Tor exit node — not your real IP
# No account = no identity linking
However: even through Tor, if you’re logged into your Perplexity account, server-side tracking still knows it’s you. Tor + no account + throwaway credentials is the anonymity stack.
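You can confirm a request is actually leaving through Tor with the Tor Project's own check API. Note the SOCKS port: the standalone tor daemon listens on 9050 by default, while Tor Browser uses 9150:
# Verify traffic is exiting through Tor (daemon: 9050; Tor Browser: 9150)
curl --socks5-hostname 127.0.0.1:9050 https://check.torproject.org/api/ip
# -> {"IsTor":true,"IP":"<exit node IP>"}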
Level 3: Technical Alternatives
9. Use local LLMs on your own hardware
This is the only approach that guarantees your queries never leave your device. The landscape of local LLMs has improved dramatically in 2025-2026:
For most people — Ollama:
# Install Ollama (this script is for Linux; macOS and Windows installers are at ollama.com)
curl -fsSL https://ollama.com/install.sh | sh
# Download and run Llama 3.1 (8B — runs on most laptops)
ollama run llama3.1
# Or Mistral 7B for faster responses
ollama run mistral
# Or Phi-4 Mini for lighter hardware
ollama run phi4-mini
Ollama runs entirely locally. Your queries never leave your machine. No company can track them. No lawsuit can affect your data because there’s no data to collect.
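You can verify the "local" claim yourself: Ollama serves an HTTP API on the loopback interface (127.0.0.1:11434 by default), so your prompts never traverse the network:
# Talk to the local Ollama API directly (note the loopback address)
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1",
  "prompt": "Summarize the privacy tradeoffs of cloud AI assistants.",
  "stream": false
}'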
For power users — LM Studio: LM Studio provides a GUI for running local models with a ChatGPT-like interface. Supports models from Hugging Face. Great for trying different models without command-line friction.
For privacy + internet search — Open WebUI + Ollama + SearXNG:
# Run a fully private AI search stack locally
# Ollama for the LLM
# SearXNG for private web search (self-hosted, no tracking)
# Open WebUI to connect them
docker compose up # with appropriate compose file
This gives you an AI assistant that can search the web, routing searches through SearXNG (which forwards them to the underlying search engines without your identity attached) instead of handing your raw queries to Google, Meta, or Perplexity.
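A minimal compose file might look like the sketch below. The image names are each project's published ones, but treat the wiring as a starting point: SearXNG needs a settings file with a secret key for real use, and Open WebUI's web-search integration with SearXNG is configured in its admin settings.
# Sketch of a docker-compose.yml for the private stack (adjust to taste)
cat > docker-compose.yml <<'EOF'
services:
  ollama:
    image: ollama/ollama
    volumes: ["ollama:/root/.ollama"]
  searxng:
    image: searxng/searxng
    ports: ["8888:8080"]   # private metasearch UI
    # production use needs a settings.yml with a secret key; see SearXNG docs
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports: ["3000:8080"]
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    depends_on: [ollama, searxng]
volumes:
  ollama:
EOF
docker compose up -d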
10. Self-hosted AI alternatives
If you don’t want to run models on your own hardware, you can self-host:
- Ollama on a VPS — Deploy Ollama on a cheap VPS you control. Queries go to your server, not a third party.
- Jan.ai — Another local-first AI assistant with a clean UI
- AnythingLLM — Self-hosted AI with RAG capabilities, supports local and remote models
- PrivateGPT — Designed for private document analysis with local processing
11. Privacy-first cloud AI alternatives
If you need cloud AI, some options have stronger privacy postures than Perplexity:
- DuckDuckGo AI Chat — Routes queries through a proxy, doesn’t associate queries with your identity (use without an account for maximum privacy)
- Venice AI — explicitly privacy-focused; markets itself as not storing your conversations server-side (chat history stays in your browser)
- Mistral Le Chat — European company, GDPR-bound, no advertising business model
The Self-Hosted Stack: Complete Setup Guide
For readers who want maximum control, here’s a complete privacy stack you can run on a spare computer or NAS:
Hardware requirements for local AI:
- Minimum: Any computer with 8GB RAM (smaller quantized models only)
- Good: 16GB RAM + modern CPU (comfortable with most 7B models)
- Ideal: 16-32GB RAM + dedicated GPU with 8GB+ VRAM (fast with 13B+ models)
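Not sure where your machine falls? On Linux, two commands answer it:
# RAM available for model weights
free -h
# GPU VRAM, if you have an NVIDIA card
nvidia-smi --query-gpu=name,memory.total --format=csv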
The stack:
# 1. Install Ollama (Linux install script; macOS/Windows installers at ollama.com)
curl -fsSL https://ollama.com/install.sh | sh
# 2. Pull your preferred model(s)
ollama pull llama3.1:8b # General purpose, fast
ollama pull mistral:7b # Excellent for text tasks
ollama pull deepseek-r1:8b # Strong reasoning
# 3. Install Open WebUI (ChatGPT-like interface)
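#    (Linux note: add --add-host=host.docker.internal:host-gateway so the container can reach Ollama on the host)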
docker run -d -p 3000:8080 \
-e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
-v open-webui:/app/backend/data \
--name open-webui \
ghcr.io/open-webui/open-webui:main
# 4. Access at http://localhost:3000
This gives you a private ChatGPT-like interface running entirely on your machine. Zero tracking. Zero third-party data sharing. No monthly subscription.
What This Lawsuit Means for the AI Industry
The Perplexity lawsuit isn’t just about one company. It’s a test case for how privacy law applies to AI search tools.
If the allegations are proven and Perplexity faces significant liability, it sends a clear message to every AI company that server-side tracking without disclosure is legally risky. The CCPA’s enforcement posture in 2026 is more aggressive than in 2018-2020 — regulators have had time to staff up and the political environment favors enforcement.
Potential precedents if the lawsuit succeeds:
- Server-side tracking disclosure requirements — AI companies may need to explicitly disclose in their UI when queries are transmitted to third parties
- Sensitive AI query protections — Courts may recognize AI search queries as a sensitive data category with heightened protections beyond standard browsing data
- Opt-in requirements for tracking — Rather than the current opt-out model, sensitive AI data use may require explicit opt-in consent
More practically: every AI company is now aware that their tracking practices can become the subject of a class action. Expect privacy policies to become either much clearer (if the company wants to build trust) or much more heavily lawyered (to avoid liability).
Neither outcome guarantees your privacy. Technical controls are still the only reliable answer.
Bottom Line
The Perplexity lawsuit is a wake-up call, not because this kind of tracking is surprising — it’s not — but because users expected better from a platform they chose as an alternative to Big Tech surveillance.
The hard truth: no cloud AI service can guarantee your query privacy. The moment your query leaves your device and hits someone else’s servers, you’ve lost control of it.
The only AI that can guarantee your privacy is one running on your own hardware, processing your queries locally, with no network connection. Everything else involves some degree of trust — and as today’s lawsuit demonstrates, that trust can be misplaced.
Start with Ollama. Run a local model tonight. Ask it your sensitive questions there. The quality of open-source models in 2026 is genuinely impressive — good enough for most of the questions you’d ask a cloud AI.
For everything else, apply the privacy stack: VPN, Tor when stakes are high, throwaway accounts, and query sanitization. None of these are perfect, but layering them dramatically reduces your exposure.
Your private browsing mode was never going to save you from server-side tracking. But the right tools — the ones you control — actually can.
Resources
- Tor Browser: https://www.torproject.org/
- Ollama (local LLMs): https://ollama.com/
- LM Studio: https://lmstudio.ai/
- Open WebUI: https://github.com/open-webui/open-webui
- Mullvad VPN: https://mullvad.net/
- ProtonVPN: https://protonvpn.com/
- NextDNS: https://nextdns.io/
- SearXNG (self-hosted search): https://searxng.org/
- Brave Browser: https://brave.com/
- CCPA Rights Guide (California AG): https://oag.ca.gov/privacy/ccpa
- Bloomberg lawsuit report: https://www.bloomberg.com/news/articles/2026-04-01/perplexity-ai-machine-accused-of-sharing-data-with-meta-google
My Privacy Blog covers practical privacy for real people — no PhD required. We test the tools, read the policies, and tell you what actually works.


