Her name, in court documents, is KGM. In news coverage she goes by “Kaley.” She is 20 years old. She started watching YouTube at age six. She was on Instagram by nine. By the time her lawyers put her story before a Los Angeles jury, the damage — documented, clinical, real — had already been done.
On March 25, 2026, that jury came back with a verdict. Meta and YouTube are liable. Both companies were found negligent in the design of their platforms. Both were found to have caused Kaley’s documented mental health harm. Both failed to adequately warn users. And in a finding that carries enormous financial weight, the jury determined that both companies acted with malice, oppression, or fraud.
That last part unlocks something big: a separate punitive damages phase. The $3 million compensatory award — $2.1 million against Meta, $900,000 against YouTube — may be only the opening number.
What the Jury Actually Decided
This wasn’t a general “social media is bad” verdict. Juries answer specific questions, and in this case they answered seven of them for each defendant. Every single answer was yes.
Yes, Meta was negligent in the design of its platform. Yes, that negligence was a substantial factor in Kaley’s harm. Yes, Meta failed to provide adequate warnings. Yes, Meta acted with malice, oppression, or fraud.
Same answers for YouTube. Seven for seven. Twice.
The precision matters. This jury wasn’t swept up in moral panic about screens and teenagers. They worked through the legal framework, applied it to the evidence, and reached a conclusion on each element. That’s a harder thing to dismiss on appeal, and a harder thing for the next 1,500+ pending cases to ignore.
Because that’s what this verdict is: a bellwether. More than 1,500 similar lawsuits have been sitting in state and federal courts, waiting to see how a jury would actually respond when confronted with the full record. Now they know. Both sides know.
The Legal Theory That Cracked the Shield
For years, big tech companies sheltered under Section 230 of the Communications Decency Act — the provision that says platforms aren’t liable for content their users post. Sue Meta for something a user wrote? Section 230. Sue YouTube for a video someone uploaded? Section 230.
This case didn’t go near that shield. The plaintiffs’ legal theory targeted something different: the design of the platform itself.
Infinite scroll — engineered so there’s no natural stopping point. Autoplay — so the next video begins before you’ve decided whether you want it. Notification cadence — calibrated to create urgency and pull you back. Recommendation engines — built not to serve your interests but to maximize your engagement time, feeding content that provokes the strongest neurological response regardless of whether it’s good for you.
These aren’t user-generated. No third party posted the autoplay feature. No user uploaded infinite scroll. These are deliberate engineering decisions made by the companies themselves. And a product that is negligently designed — one that causes foreseeable harm — is actionable. That’s not a novel legal theory. That’s products liability law that’s been on the books for decades.
What’s new is applying it to software. To algorithms. To the invisible machinery underneath an app that a child opens on her phone.
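To make the design-versus-content distinction concrete, here is a deliberately simplified sketch of what an engagement-optimized ranking step looks like in principle. Everything in it is hypothetical: the names, the weights, and the two prediction signals are illustrative stand-ins, not anyone's actual system. What matters is the shape of the objective, which scores content by how long it is expected to hold a user, never by whether it is good for that user.

```python
# Illustrative toy example only; not any company's actual code.
# Nothing in this objective asks whether the content is good for the viewer,
# only how long it is predicted to keep them watching and how likely they are to return.

from dataclasses import dataclass


@dataclass
class Candidate:
    item_id: str
    predicted_watch_seconds: float  # model's guess at how long this user will watch
    predicted_return_lift: float    # model's guess at how much this raises the odds they come back


def rank_feed(candidates: list[Candidate], top_k: int = 10) -> list[Candidate]:
    """Order candidates purely by expected engagement (hypothetical weighting)."""
    def engagement_score(c: Candidate) -> float:
        # Real systems blend many more signals, but the structure is the same:
        # time on platform plus return visits, nothing about user wellbeing.
        return c.predicted_watch_seconds + 300.0 * c.predicted_return_lift

    return sorted(candidates, key=engagement_score, reverse=True)[:top_k]

# Autoplay then queues the top-ranked item the moment the current one ends,
# which is exactly the "no natural stopping point" design the lawsuit describes.
```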
“If We Wanna Win Big With Teens, We Must Bring Them In as Tweens”
That’s a real sentence from a real internal Meta document introduced at trial. Read it again.
This isn’t an engineer’s offhand comment in a chat log. This is company strategy. And the data that accompanied that strategy showed what Meta already knew: 11-year-olds who used Instagram were four times more likely to return to the app than kids using competing platforms. Four times more likely to come back. At eleven.
Lead attorney Mark Lanier put it plainly during the trial: “How do you make a child never put down the phone? That’s called the engineering of addiction.”
The tobacco parallel has been floating around Big Tech liability discussions for years, and it applies here in a specific, non-metaphorical way. The tobacco companies knew their products were harmful and addictive. Internal documents proved it. They marketed aggressively to young people anyway — because young users are where lifetime habits form, and lifetime habits are where the money is. The question that eventually brought them down wasn’t whether nicotine was harmful. It was whether knowing about the harm, and continuing the behavior, constituted actionable wrongdoing.
That’s precisely the question before the court now. Meta and YouTube knew. The documents say they knew. The question is what we do about knowing.
TikTok and Snap Settled. Notice What That Means.
Two of the defendants in the broader litigation — TikTok and Snap — reached settlements before this case went to trial. On the surface that looks like prudent legal strategy.
Look closer: when you settle, you don’t have to hand over your internal documents.
The companies that stayed and fought — Meta and YouTube — had to produce theirs. Those documents became part of the trial record. The internal strategy memos about capturing tweens. The data on return rates for young users. The engineering decisions that were made with full knowledge of the engagement mechanics.
Settling before trial is a way of keeping your institutional knowledge out of the public record. It’s not an admission of guilt — legally, anyway — but it does raise the question of what, exactly, those internal records would have shown.
The Day Before: $375 Million in New Mexico
This verdict didn’t arrive in a vacuum. The day before — March 24, 2026 — a New Mexico jury found Meta liable in a separate child safety case and awarded $375 million.
Two verdicts. Two states. Two days. Both going the same direction.
The summer of 2026 brings another milestone: a consolidated federal trial in the Northern District of California covering hundreds of school districts nationwide. The districts argue that rampant social media use among students has damaged the schools’ ability to function — driving up costs for mental health services, intervention programs, and staff. If the LA verdict is a bellwether, the California consolidated trial is a siege.
And before any of that: punitive damages. The hearing is scheduled soon, with each side getting 20 minutes to argue. Punitive damages in cases involving malice can multiply the compensatory award by 10 times or more. The $3 million verdict may be a rounding error compared to what comes next.
The Privacy Angle Nobody’s Talking About Enough
There’s a thread in this story that’s especially important if you care about privacy, not just child safety in the abstract.
The same recommendation engine that serves you an ad for running shoes because you searched for a 5K last week — that same engine, operating on the same logic, can serve a predator content featuring children. The same personalization infrastructure that matches advertisers to audiences can match bad actors to vulnerable kids. It’s not a different system. It’s the same system with catastrophically different use cases.
When we talk about platform design as the defect, we’re talking about an optimization function that doesn’t know or care what it’s optimizing toward. It maximizes engagement. Engagement from an advertiser with a product is revenue. Engagement from someone with harmful intent is a different kind of cost — one that gets externalized onto users, families, and society.
Privacy advocates have long argued that the surveillance infrastructure underpinning behavioral advertising is dangerous not just because of what advertisers do with it, but because of what that infrastructure makes possible. This trial is, in part, a case study in that argument.
The data profiles that power Kaley’s Instagram feed are built on years of tracked behavior that began when she was nine years old. Nine. The same profiling machinery that built a picture of her preferences and vulnerabilities for advertisers also powered the recommendation engine that the jury just found caused her documented harm. You cannot fully separate the advertising model from the engagement design. They are the same machine.
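A minimal sketch of that "same machine" point, using entirely hypothetical names and fields: one behavioral profile, built from tracked activity, gets scored two different ways. Nothing below is drawn from the trial record or from any company's code; it only illustrates why the advertising model and the engagement design are so hard to pull apart.

```python
# Illustrative toy example only; all fields and functions are hypothetical.
# The point: ad targeting and content recommendation can read from the same
# tracked-behavior profile, so they are not separate systems in any deep sense.

from dataclasses import dataclass, field


@dataclass
class BehaviorProfile:
    user_id: str
    years_tracked: int                                          # how long behavior has been logged
    interests: dict[str, float] = field(default_factory=dict)   # topic -> affinity learned from activity


def score_for_advertiser(profile: BehaviorProfile, ad_topics: dict[str, float]) -> float:
    """How valuable is this user to this advertiser? (toy affinity dot product)"""
    return sum(profile.interests.get(topic, 0.0) * weight for topic, weight in ad_topics.items())


def score_for_recommendation(profile: BehaviorProfile, item_topics: dict[str, float]) -> float:
    """How likely is this item to keep this user engaged? Same profile, same math."""
    return sum(profile.interests.get(topic, 0.0) * weight for topic, weight in item_topics.items())
```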
What Changes Now
The practical answer is: we don’t know yet. Verdicts don’t rewrite platform code overnight. Appeals are coming. The punitive damages phase hasn’t been resolved. The federal consolidated trial hasn’t started.
But something shifted on March 25, 2026. A jury of regular people — not tech experts, not policy wonks — looked at the evidence and said: this was negligent. This caused real harm. And this was done with malice.
That matters. Not just legally, but culturally. The argument that platforms are neutral conduits, that algorithms are just math, that addiction is a metaphor — those arguments are getting harder to sustain in front of people who’ve watched the documents come out.
For families navigating a world where their six-year-olds can open YouTube and their nine-year-olds can sign up for Instagram: this verdict says you were right to be worried. It says the design that captured your child’s attention was not an accident or an emergent property. It was engineered. Someone chose it. The documents prove they knew what they were doing.
The algorithm is on trial. And this week, it lost.
The punitive damages phase in the KGM v. Meta/YouTube case is scheduled to begin imminently. Follow this space for updates as the case continues — and as the summer federal trial approaches.



