Clarence Thomas paved the way for Meta and YouTube’s big court losses.
You may have heard about last week’s twin verdicts against Meta and YouTube, which held the tech companies liable for harms their products inflicted on young people. But the significance of these cases runs deeper: They threaten the legal architecture that has allowed Big Tech to reap trillions of dollars in profits with little risk of consequence in court. The plaintiffs pressed a shrewd theory to pierce the federal shield that these companies have relied upon for decades to escape liability, putting them on the hook for millions in damages today—and quite possibly billions more to come.
Lawyers for the companies have already vowed to appeal, insisting that they can persuade higher courts to nix these awards as a misapplication of the law. But success is far from a guarantee: The plaintiffs pursued a strategy endorsed by none other than Justice Clarence Thomas, the intellectual leader of the Supreme Court’s 6–3 conservative supermajority. Thomas’ ideas have gained considerable support throughout the lower courts in recent years. And it is entirely possible that his hostility toward Big Tech will carry the day when the plaintiffs’ theory gets tested at SCOTUS.
To see why these verdicts are social media’s “Big Tobacco moment,” as Slate’s What Next: TBD put it, it’s important to understand how Silicon Valley evaded this kind of reckoning for so long. In 1996 Congress enacted Section 230 of the Communications Decency Act, which immunized websites for content that other people post on them. This policy created the internet as we know it by fostering free expression: It allowed platforms to host a vast range of speech without worrying that they would be liable for its content. An individual can still be sued for defamation if they post something libelous on Facebook. But Facebook itself cannot be sued for merely hosting that speech thanks to the immunity provided by Section 230. That protection is the main reason why social media companies can facilitate wide-open discussion online.
But these corporations have sought to expand Section 230 further, often with the support of courts that treat it like a get-out-of-liability-free card. As Thomas first warned in 2020, these decisions read “extra immunity” into the statute based on “policy and purpose arguments” rather than the plain text of the law. Most alarmingly, courts have wielded the statute to crush lawsuits that accuse social media companies of negligently providing a defective product. These suits claim that developers created, or refused to provide, certain features that made their platforms especially susceptible to harm and misuse. YouTube, for instance, allegedly designed an algorithm that recommended violent terrorist content. Grindr reportedly refused to install basic safety features to prevent harassment. Backpage gave users special privileges that evidently facilitated sex trafficking. Snapchat allegedly rolled out a feature that encouraged reckless driving.
In each of these cases, Thomas noted, the lower courts granted the companies immunity under Section 230. Yet in none of them were the plaintiffs actually suing over third-party speech posted on the platforms. Each had sought to hold the companies liable for “product design flaws—that is, the defendant’s own misconduct.” But courts tossed the suits anyway. They reasoned that speech posted by others, like sexual harassment and terrorist recruitment, was still the true cause of the plaintiffs’ injuries. And platforms cannot be penalized for hosting this kind of speech.
But that logic is, as Thomas wrote, more of a “policy argument” than a legal one. Section 230 clearly protects tech companies from being sued as the mere “publisher” of other people’s speech. It does not, on its face, shield these corporations from suit as the creators of a defective product that negligently serves users this speech in harmful ways. Thomas reiterated this argument in 2024, this time joined by Justice Neil Gorsuch, in a disturbing case brought by a victim of child sexual abuse. The plaintiff accused Snapchat of encouraging minors to lie about their age and enabling adults to commit abuse through certain features, including self-deleting messages. Yet the lower court dismissed his lawsuit under Section 230. Thomas and Gorsuch expressed serious doubts that the statute actually exonerates platforms for “deliberately structuring” in a way that could “facilitate” illegal and dangerous activity. Thomas urged the court to consider whether the statute really “immunizes platforms for their own conduct.”
The plaintiffs who won major payouts against Meta and YouTube last week followed Thomas’ road map. In a California case, lawyers argued that Instagram and YouTube designed features meant to get their client, a young woman identified as KGM, addicted to social media, including infinite scroll, autoplay, and incessant notifications. (These charges were bolstered by Meta’s own damning internal documents.) The jury agreed, awarding the plaintiffs $6 million in damages. In a New Mexico case, the state attorney general accused Meta of enabling the sexual exploitation of children, then lying about the dangers of its products. The jury imposed $375 million in damages against the company.
In both cases, the plaintiffs carefully navigated the gap that Thomas identified in Section 230. They acknowledged that some harm was inflicted by third-party speech, like predatory messages from child abusers. But they insisted that the platforms themselves negligently enabled this harm through faulty design of their services. This strategy echoed the Big Tobacco lawsuits of the 1990s, which faulted tobacco companies for making cigarettes more dangerous by adding chemicals that would get users addicted. And like those suits, it worked—at least for now. The theory will be tested more in the coming months and years: More than 40 state attorneys general are suing Big Tech for allegedly hurting minors’ mental health, while thousands more private plaintiffs have filed similar complaints seeking, in sum, hundreds of billions in damages.
This issue is therefore barreling toward the Supreme Court. And lower-court judges across the ideological spectrum are pushing the justices to reject an interpretation of Section 230 that immunizes platforms for their own negligent designs. Thomas and Gorsuch have already taken up this cause, but the rest of the justices haven’t yet weighed in. (That’s partly because in 2023 the court disposed of a case that raised the issue after finding a different legal defect.) In recent years, the justices have been wary of big decisions that could break the internet and protective of social media companies’ First Amendment rights. The biggest exception is Thomas, who seems to have embraced the MAGA view that these platforms deserve punishment for ostensibly censoring conservative speakers. His hostility toward Big Tech marks a notable departure from his corporate-friendly jurisprudence in other cases. Both Gorsuch and Justice Samuel Alito share this evident suspicion of social media companies, while Justice Brett Kavanaugh is the court’s loudest voice in defense of their rights. The other justices on the left and center are tougher to pin down.
Regardless of where SCOTUS eventually lands, there is undeniable trouble ahead for Silicon Valley. When Thomas stakes out a position, the other Republican-appointed justices listen. And he has already validated the theory that drove last week’s landmark verdicts against Big Tech. These cases are not a slam dunk for the plaintiffs or defendants on appeal; with so much money at stake, the upcoming battle will be brutal for both. But it is always better to arrive at this court with Thomas already on your side.