Echo
March 26, 2026 · 15 min read

The Twenty-Six Words That Built the Internet - and the Verdict That Found Their Limit

Section 230 was supposed to protect platforms from what their users said. Nobody asked what happens when the product itself is the problem.

Here is a sentence that shaped the modern world more than most constitutional amendments: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."

Twenty-six words. You could write them on a napkin. For nearly three decades, these words shielded some of the largest corporations in human history from legal accountability. And in March 2026, in a Los Angeles courtroom, a jury of seven women and five men decided that the shield had a hole in it.

The case was K.G.M. v. Meta Platforms, Inc. The plaintiff, a young woman from Chico, California, who began using social media at age six, argued that Meta and YouTube had designed products so addictive they caused her lasting psychological harm. The companies argued, as they had argued countless times before, that Section 230 of the Communications Decency Act made them immune. The jury found Meta and YouTube negligent. The damages were modest: $4.2 million against Meta, $1.8 million against YouTube. The principle was not.

What happened in that courtroom was not a repeal of Section 230. It was something more interesting. It was a rereading.

Twenty-Six Words on a Napkin

The story begins, as legal stories often do, with a problem nobody saw coming.

In the early 1990s, online services were primitive by current standards. CompuServe operated as a kind of digital newsstand, hosting forums and bulletin boards it did not moderate. Prodigy, by contrast, marketed itself as a family-friendly service and actively reviewed user-generated content. Both were sued for defamatory material posted by their users. The outcomes were contradictory. In Cubby v. CompuServe, a federal court in 1991 held that because CompuServe did not edit its forums, it functioned as a distributor and was not liable for their content. In Stratton Oakmont v. Prodigy, a New York state court in 1995 held that because Prodigy did moderate its content, it functioned as a publisher and was liable.

The paradox was clear. A platform that tried to keep things clean faced more legal exposure than one that did nothing at all. This was, by almost any measure, a perverse incentive.

Representatives Chris Cox, a Republican from California, and Ron Wyden, a Democrat from Oregon, proposed a fix. Their provision became Section 230 of the Communications Decency Act, signed into law by President Clinton on February 8, 1996; the twenty-six words sit in subsection (c)(1). The intent was narrow and pragmatic: let platforms moderate content without being punished for the act of moderation. A permission slip for good behavior.

The internet of 1996 was a world of dial-up connections, text-heavy bulletin boards, and websites that loaded in stages. The smartphone did not exist. Facebook would not launch for another eight years. YouTube for nine. Instagram for fourteen. The idea that a single platform might serve three billion users, surface content through machine-learning algorithms, and hold the attention of an entire generation of children was, in 1996, not even science fiction. It was simply outside the frame.

The twenty-six words were adequate for the world they entered. The question is whether they were ever meant for the world they came to govern.

The Shield That Grew

What happened next was not dramatic in any single moment. It was cumulative. Case by case, court by court, Section 230 expanded from a narrow protection for content moderation into something approaching a universal immunity.

The pivotal early case was Zeran v. AOL, decided by the Fourth Circuit Court of Appeals in 1997. Kenneth Zeran sued AOL after anonymous users posted his name and phone number alongside advertisements for merchandise celebrating the Oklahoma City bombing. The court held that Section 230 barred all claims treating a platform as a publisher, not just those arising from moderation decisions. This interpretation went beyond what Cox and Wyden had described as their intent. It became the template.

Over the following decades, courts applied Section 230 to dismiss cases involving search engine results, app store rankings, automated friend suggestions, and algorithmically curated news feeds. Jeff Kosseff, a law professor who traced this history in his 2019 book "The Twenty-Six Words That Created the Internet," documented how each judicial interpretation added another layer of insulation. The statute's language did not change. Its meaning expanded.

By the mid-2010s, Section 230 had become the first line of defense in virtually every lawsuit filed against a technology platform. Not because the text compelled that result, but because enough appellate judges had said it did, creating a body of precedent that lower courts felt bound to follow. The shield that Cox and Wyden designed for Good Samaritans now covered the wealthiest companies on Earth.

There is a particular kind of legal drift that occurs when a statute written for one context is applied to another. Nobody makes a decision to expand it. Each individual court applies it to the case in front of it. The expansion happens in the aggregate, visible only in retrospect. By the time anyone notices, the interpretation has hardened into doctrine.

The Question Nobody Asked

For a quarter century, Section 230 litigation followed a predictable script. A plaintiff alleged harm from content hosted on a platform. The platform moved to dismiss, arguing it was not the publisher or speaker of the content. The court agreed. Case closed.

The script worked because every plaintiff framed the question the same way: is the platform liable for what users said?

The attorneys representing K.G.M. asked a different question. They said: we are not suing Meta and YouTube for anything a user posted. We are suing them for how they built the product. Infinite scroll, algorithmic recommendation, autoplay, beauty filters - these are engineering choices, not editorial judgments. When a pharmaceutical company designs a drug that causes addiction, it is liable for the design. When an automobile manufacturer builds a car with a defective steering mechanism, it is liable for the design. Why should a social media platform be different?

Judge Carolyn B. Kuhl, who presided over the case in Los Angeles County Superior Court, allowed this argument to proceed. Her pre-trial rulings permitted the product-design claims to survive Meta's and YouTube's Section 230 defense. This was not a rejection of Section 230. It was a boundary determination. The statute protects platforms from being treated as publishers of third-party content. It does not, Judge Kuhl's rulings implied, protect them from claims about the design of the product itself.

The distinction, once articulated, has the uncomfortable clarity of something that should have been obvious. Content liability asks: did you publish something harmful? Product design liability asks: did you build something harmful? Section 230 speaks to the first question. About the second, it is silent.

A Product, Not a Publisher

Product liability is one of the oldest and most developed areas of American tort law. Its central premise is straightforward: when a manufacturer places a product into the stream of commerce, it assumes responsibility for defects in that product's design. The Restatement (Third) of Torts, published in 1998 - two years after Section 230 became law - codifies three categories of product defect: manufacturing defects, design defects, and failures to warn.

The K.G.M. case rested on the second category. The claim was not that Meta and YouTube had built their products wrong by accident. It was that the design itself, functioning exactly as intended, caused foreseeable harm. Infinite scroll was designed to keep users scrolling. Algorithmic recommendation was designed to serve content that maximized engagement. Autoplay was designed to eliminate the moment of decision between one video and the next. Beauty filters were designed to alter the user's face. Each of these features worked precisely as engineered. The question was whether the engineering was defective.

Consider the analogy to physical products. Nobody blames a car manufacturer for the roads a driver chooses to travel. But if the car's braking system is designed in a way that makes stopping unreasonably difficult, the manufacturer is liable for that design choice, regardless of where the driver was headed. The K.G.M. attorneys applied this logic: Meta was not responsible for any particular Instagram post or YouTube video K.G.M. encountered. It was responsible for building a machine that delivered those posts and videos through mechanisms designed to override the user's capacity to disengage.

The jury agreed, finding both companies negligent in the design of their platforms. Ten of twelve jurors reached this conclusion after more than a week of deliberation. The financial damages - $4.2 million for Meta, $1.8 million for YouTube - barely register on the balance sheets of companies that generate billions in revenue each quarter. But the principle registers quite differently. A jury has now concluded that design features of a social media platform constitute a defective product.

Both companies have announced plans to appeal.

What the Supreme Court Chose Not to Say

The K.G.M. trial did not occur in a vacuum. It occurred in a vacuum of another kind - an absence of guidance from the court that is supposed to provide it.

In 2023, the Supreme Court heard Gonzalez v. Google, a case that seemed poised to define Section 230's limits for the algorithmic age. The family of Nohemi Gonzalez, killed in the 2015 ISIS attacks in Paris, argued that YouTube's recommendation algorithm had actively promoted terrorist content to users, and that this promotion went beyond the passive hosting that Section 230 was meant to protect. The legal community expected a landmark ruling.

What it got was a dodge. The Supreme Court issued a brief per curiam opinion that resolved the case on other grounds, explicitly declining to address the Section 230 question. On the same day, in Twitter v. Taamneh, the court held that the platforms had not aided and abetted terrorism on the facts alleged, but it again avoided the broader question of algorithmic recommendation.

A year later, in Moody v. NetChoice (2024), the court addressed state laws in Texas and Florida that sought to regulate how platforms moderate content. Justice Elena Kagan wrote that platforms' content-moderation decisions constitute protected editorial judgment under the First Amendment. But the court sent the cases back for further analysis, and once again stopped short of defining what Section 230 does and does not cover when algorithms are involved.

Three opportunities. Three retreats from the central question.

It is worth sitting with what this silence means. When the highest court in a legal system repeatedly avoids a question, the question does not disappear. It migrates to lower courts, where judges with less authority and fewer resources must answer it case by case. The K.G.M. trial happened in a state superior court in Los Angeles precisely because there is no Supreme Court precedent saying it cannot.

This is how law develops in the absence of legislative or appellate clarity. Not through grand pronouncements, but through the accumulation of trial court decisions, jury verdicts, and settlements that gradually establish what the law means in practice. It is messy. It is slow. And it is what happens when the institutions designed to provide clarity choose not to.

If the Algorithm Is a Product, What Else Changes?

Grant the K.G.M. premise for a moment. Accept that algorithmic recommendation is a product feature, not a form of speech or publishing. Accept that Section 230 does not shield platform design from product liability claims. Follow this thread and see where it leads.

It leads, quite quickly, to places far beyond social media.

Amazon's recommendation engine drives, by the company's own past disclosures, a substantial share of its total sales. That engine is an algorithm that selects and surfaces products for individual users based on their browsing and purchasing history. If Instagram's recommendation algorithm is a product feature subject to design-liability claims, what is Amazon's?

Netflix autoplays the next episode before the viewer has decided to watch it. The feature is designed to reduce friction, to eliminate the moment of conscious choice between continuing and stopping. It is, functionally, the same design principle as YouTube's autoplay - the feature the K.G.M. jury found negligent.

Google Search ranks results using an algorithm that determines what information a user sees first. That ranking shapes purchasing decisions, political knowledge, medical choices, and much else. It is, in every meaningful sense, a product design choice.

Ride-hailing apps use algorithmic pricing that surges during periods of high demand, shaping both rider and driver behavior through design. Dating apps use swipe mechanics, gamification, and notification systems designed to maximize engagement. News aggregators curate feeds through algorithms that determine what stories reach which readers.

Each of these is a product feature. Each shapes user behavior through design. And if the K.G.M. product-design theory holds, each could theoretically face the same kind of liability claim.

This is not a prediction. It is an observation about logical implications. The product-design theory is a key that fits many locks. Whether courts and plaintiffs choose to turn it is a separate question entirely.

The Thirty Reform Bills That Failed

While courts have been drawing these lines one verdict at a time, Congress has been doing something else. Since 2020, more than thirty bills proposing to reform or replace Section 230 have been introduced in the House and Senate. The EARN IT Act, first introduced in 2020, would condition Section 230 immunity on compliance with child safety standards established by a federal commission. The SAFE TECH Act, proposed in 2021, would narrow Section 230 immunity for paid content and advertising. The Kids Online Safety Act passed the Senate with broad bipartisan support in July 2024, then died in the House.

The pattern is consistent. Bills are introduced with fanfare. Hearings are held. Witnesses testify. Nothing passes.

The reason is structural. Democrats and Republicans both want to change Section 230, but they want to change it in opposite directions. Democrats, broadly, want platforms to be more accountable for harmful content they allow to remain - hate speech, misinformation, content that harms children. Republicans, broadly, want platforms to be more accountable for content they choose to remove - political speech, conservative viewpoints, content they believe is suppressed by ideological bias. These goals are not merely different. Under the current statutory framework, they are mutually exclusive. Expanding platform liability for hosting harmful content and expanding platform liability for removing protected content cannot coexist in the same statute without contradiction.

The result is a legislative equilibrium in which reform is perpetually desired and perpetually impossible. Each party's preferred reform is the other party's nightmare. Neither side has the votes to pass its version. Neither will compromise to pass the other's.

Into this vacuum, the courts step. The K.G.M. verdict did not wait for Congress. It did not need to. A jury in Los Angeles applied existing tort law to a set of facts and reached a conclusion. Multiple bellwether plaintiff trials remain on the docket in the same court. Federal cases brought by state attorneys general and school districts are set for trial in Oakland later in 2026.

The legislative branch has been unable to update a thirty-year-old statute for the world it now governs. The judicial branch is doing it for them, one trial at a time.

The Uncomfortable Inheritance

The European Union has taken a different path. The Digital Services Act, which took full effect in 2024, requires very large platforms to conduct systemic risk assessments, provide algorithmic transparency, and submit to independent audits. It does not abolish platform immunity. It conditions it. The approach is regulatory rather than litigious, preventive rather than compensatory.

The American approach, as the K.G.M. trial demonstrates, is different. It relies on private plaintiffs, jury trials, and the threat of cumulative verdicts to change corporate behavior. It is slower, less predictable, and more dramatic. It also places an extraordinary burden on individual litigants to do what regulators have not.

K.G.M., the young woman at the center of this verdict, began using social media at age six. She testified about beauty filters and body dysmorphia, about hours spent on Instagram and the anxiety that followed. Her case became a test of whether American law could hold platforms accountable for what their products do to users, rather than merely for what users do on their products. The jury said yes. The appeals courts have not yet spoken.

The legal argument on appeal will almost certainly return to Section 230 and whether the product-design theory survives appellate scrutiny. The financial argument will be about what happens if it does - not $6 million, but the exposure created by thousands of similar cases awaiting trial.

The twenty-six words are still there. They have not been amended. They have not been repealed. They have been reread - by a judge who drew a line between content and design, and by a jury that decided the line mattered.

Whether this rereading holds is not something a verdict in a single trial can determine. The appeals will take years. Other courts may disagree. The Supreme Court may eventually find the question unavoidable. Congress may finally act, though nothing in the past six years suggests this is likely.

What can be said is this: the twenty-six words were written for a world of bulletin boards and chat rooms, for a problem of defamatory posts and content moderation. They entered a world of algorithmic recommendation, infinite scroll, beauty filters, and three billion users. For three decades, they were read as covering everything that happened on a platform. In a courtroom in Los Angeles, someone asked whether they were ever meant to cover what the platform itself was built to do.

The question, once asked, does not go away. Even if the verdict is overturned, the question remains. Even if Congress never acts, the question remains. It sits inside the twenty-six words, where it has always been, waiting for someone to notice.

Sources:

47 U.S.C. Section 230, Communications Decency Act of 1996

Jeff Kosseff, "The Twenty-Six Words That Created the Internet" (Cornell University Press, 2019)

K.G.M. v. Meta Platforms, Inc., Los Angeles County Superior Court, verdict March 2026

Gonzalez v. Google LLC, 598 U.S. 617 (2023)

Twitter, Inc. v. Taamneh, 598 U.S. 471 (2023)

Moody v. NetChoice, LLC, 603 U.S. 707 (2024)

Cubby, Inc. v. CompuServe Inc., 776 F. Supp. 135 (S.D.N.Y. 1991)

Stratton Oakmont, Inc. v. Prodigy Services Co., 1995 WL 323710 (N.Y. Sup. Ct. 1995)

Zeran v. America Online, Inc., 129 F.3d 327 (4th Cir. 1997)

Restatement (Third) of Torts: Products Liability (American Law Institute, 1998)

EU Digital Services Act, Regulation (EU) 2022/2065

Kids Online Safety Act, S. 1409, 118th Congress (passed Senate July 2024)

This article was AI-assisted and fact-checked for accuracy.