Prism
March 26, 2026 · 20 min read

Addiction by Design: The Engineering Playbook That Hooks a Billion Users

How infinite scroll, variable reinforcement, and algorithmic recommendations became Silicon Valley's default engineering discipline - and the product choices a jury found negligent

In 2006, a designer named Aza Raskin sat in the offices of Humanized, a small software company in Chicago, and solved a problem that had annoyed web users for a decade. Pages on the internet ended. You reached the bottom, clicked "Next," waited for a new page to load, and resumed reading. Raskin wrote a few lines of code that eliminated that friction entirely. Content would simply keep appearing as you scrolled down, flowing like water from a tap that never closed. He called it infinite scroll.

Nearly twenty years later, in a Los Angeles courtroom, a jury of seven women and five men examined that design choice and others like it. They concluded that Meta and YouTube had been negligent in building their platforms around features engineered to capture and hold attention. The jury awarded the plaintiff, a young woman identified as K.G.M., $6 million in combined damages. The financial sum barely registered against the companies' quarterly revenues. The precedent registered everywhere.

Raskin, for his part, had long since turned against his own invention. In a 2018 BBC interview, he estimated that infinite scroll wastes the equivalent of 200,000 human lifetimes every day, a figure he later revised sharply upward. "It's as if they took a drug dealer's playbook and applied it to product design," he said of the industry that adopted his creation.

This is the story of how that playbook was written.

The Slot Machine in Your Pocket

Why does pulling down on a social media feed feel satisfying even before you see what appears? The answer lies in a principle that behavioral psychologist B.F. Skinner documented in the 1950s while studying pigeons in laboratory cages. Skinner found that the most persistent behavior emerged not when rewards were delivered on a predictable schedule but when they arrived at unpredictable intervals. A pigeon that received food after every tenth peck eventually lost interest. A pigeon that received food after a random number of pecks kept pecking indefinitely.

Psychologists call this a variable ratio reinforcement schedule. Casino designers call it the engine of the slot machine. Natasha Dow Schüll, a cultural anthropologist at New York University, spent years studying machine gambling in Las Vegas and published her findings in "Addiction by Design" in 2012. She documented how slot machine manufacturers optimize every aspect of the experience for what the industry calls "time on device": the curved screen that creates a cocoon, the speed of the spin that eliminates the pause between bets, the near-miss animation that registers as almost-winning rather than losing. Slot machines generate more revenue in the United States than baseball, movies, and theme parks combined.

Pull-to-refresh, the gesture that Loren Brichter designed for the Twitter client Tweetie in 2009, borrows the same mechanic. You pull down and release. There is a moment of uncertainty. New content appears, or it does not. Sometimes the first item is a message from someone you care about. Sometimes it is an advertisement. The unpredictability is the point. If the feed refreshed automatically on a fixed schedule, the compulsion would be weaker. The manual gesture combined with the variable outcome creates what Schüll would recognize as a "machine zone" - that slightly dissociated state in which minutes vanish without conscious tracking.
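
What that looks like mechanically is easy to sketch. The toy code below is illustrative only, with invented function names and an invented 30 percent hit rate rather than anything drawn from Twitter's or Meta's codebases. The gesture is identical in both versions; only the second leaves the outcome uncertain on every pull, which is the property Skinner's pigeons and Schüll's gamblers respond to.

```python
import random

# A minimal sketch (not any platform's actual code) contrasting a fixed
# reward schedule with a variable one on a pull-to-refresh gesture.

def refresh_fixed(pull_count: int, interval: int = 10) -> bool:
    """Fixed ratio: new content appears on every Nth pull. Predictable,
    and, per Skinner's findings, the weaker driver of repeat behavior."""
    return pull_count % interval == 0

def refresh_variable(hit_probability: float = 0.3) -> bool:
    """Variable ratio: each pull independently may or may not pay off.
    The uncertainty itself is what sustains the pulling."""
    return random.random() < hit_probability

if __name__ == "__main__":
    hits = sum(refresh_variable() for _ in range(100))
    print(f"100 pulls on a variable schedule produced {hits} rewarding refreshes")
```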

Raskin's infinite scroll compounds the effect. Without a natural endpoint, there is no moment when the interface prompts you to ask: do I want to keep doing this? The bottom of a page is a decision point. Infinite scroll removes it. Combined with variable ratio reinforcement, the result is an experience carefully structured to make stopping feel like an interruption rather than a conclusion.
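
The difference can be reduced to a rough sketch using hypothetical generator functions rather than any real client code: a paginated feed eventually runs out and hands control back to the reader, while an infinite one never does.

```python
import itertools

# Illustrative sketch only: removing pagination removes the decision point.

def paginated_feed(items, page_size=10):
    """A paged feed ends. Requesting the next page is an explicit choice."""
    it = iter(items)
    while True:
        page = list(itertools.islice(it, page_size))
        if not page:
            return  # a natural endpoint: the reader is implicitly asked whether to stop
        yield page

def infinite_feed(recommend):
    """An endless feed has no last page; content arrives for as long as you scroll."""
    while True:
        yield recommend()  # there is never a moment when the interface says "done"

pages = paginated_feed(range(25), page_size=10)
print([len(p) for p in pages])  # [10, 10, 5] - and then it stops
```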

Captology: The Science of Persuasion Becomes an Industry

The intellectual foundation for these design patterns was laid not in a corporate lab but in a Stanford classroom. In 1998, B.J. Fogg founded the Persuasive Technology Lab, which he later renamed the Behavior Design Lab. Fogg coined the term "captology," derived from the acronym for Computers As Persuasive Technologies, and spent the following decade developing a formal framework for using software to change human behavior.

Fogg's central contribution, published as a formal model in 2009, is deceptively simple. Behavior occurs when three elements converge at the same moment: motivation (the person wants to do something), ability (the action is easy enough to perform), and a trigger (something prompts the action). He expressed this as B=MAT, later updated to B=MAP when he renamed "trigger" to "prompt." If any element is missing, behavior does not occur. If all three align, it does.

The framework seems almost banal in its simplicity, which is precisely what made it so useful to product teams. Motivation is hard to create from scratch, but ability can be engineered. Make the action require less effort. Reduce the number of taps. Pre-fill the form. Autoplay the next video. And prompts can be timed and personalized: send a push notification at the moment when the user's historical data suggests they are most likely to respond.
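
Expressed as pseudocode, the model is a conjunction, which is what makes it operational for product teams. The sketch below is a loose interpretation with an invented activation threshold and invented numeric scales; Fogg's published model is conceptual, not computational.

```python
from dataclasses import dataclass

# A toy rendering of Fogg's B=MAP idea, not a published reference implementation.
# The threshold and the numeric scales are invented for illustration.

@dataclass
class Moment:
    motivation: float  # 0.0 (none) to 1.0 (high): hard to manufacture
    ability: float     # 0.0 (effortful) to 1.0 (effortless): easy to engineer
    prompt: bool       # did a notification, badge, or cue fire right now?

def behavior_occurs(m: Moment, activation_threshold: float = 0.5) -> bool:
    """Behavior happens only when a prompt lands while motivation and
    ability are jointly above some activation threshold."""
    return m.prompt and (m.motivation * m.ability) > activation_threshold

# Product teams rarely raise motivation; they raise ability (one-tap open,
# autoplay) and time the prompt for when motivation is predicted to be high.
bored_user = Moment(motivation=0.6, ability=0.95, prompt=True)
print(behavior_occurs(bored_user))  # True
```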

Fogg's 2003 book, "Persuasive Technology: Using Computers to Change What We Think and Do," became a foundational text. But it was his Stanford courses that had the most direct industry impact. Fogg ran classes that explicitly challenged students to build technology that changed behavior, and those students dispersed into Silicon Valley's product teams. The lineage is not hidden. Mike Krieger, who co-founded Instagram in 2010, studied at Stanford during the years Fogg's lab was most active. Fogg has periodically pushed back against the characterization of his work as a blueprint for manipulation, arguing that persuasive technology can be used for beneficial ends like health and sustainability. The distinction, however, rests on the designer's choice of optimization target, and the dominant industry target has been engagement.

The Hook Model: A Product Manager's Bible

No one codified the engagement optimization framework more explicitly than Nir Eyal. A Stanford MBA graduate who built on Fogg's work, Eyal published "Hooked: How to Build Habit-Forming Products" in 2014. The book does not disguise its purpose. It is a step-by-step manual for building products that users return to without external prompting, and it became required reading on product teams across Silicon Valley.

The Hook Model has four phases. The first is the trigger, which can be external (a push notification, an email, an app icon badge) or internal (loneliness, boredom, anxiety). The second is the action, which must be as frictionless as possible. Opening an app with a single tap. Scrolling without clicking. The Fogg Behavior Model is explicitly cited: reduce the effort required to near zero. The third phase is the variable reward, the moment of uncertain payoff. A new like on your photo. A reply from someone unexpected. A video that makes you laugh. The variability, Eyal writes, is essential. Predictable rewards lose their pull. The fourth phase is investment, the user's own contribution of effort that creates the material for future hooks. Posting a photo. Following an account. Building a profile. Each investment makes the next trigger more personally relevant, and the cycle tightens.

Walk through the model with Instagram as the example. External trigger: a notification that someone tagged you in a photo. Action: tap to open. Variable reward: the tagged photo might be flattering, embarrassing, or from someone you have not heard from in years. Investment: you comment on the photo, which generates notifications for others, which creates new external triggers for them. The cycle is self-reinforcing and, by design, self-accelerating.
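
In outline, the four phases form a loop in which each pass makes the next pass more likely. The sketch below is a caricature with invented probabilities and reward labels, not a description of Instagram's actual systems, but it captures the self-reinforcing structure Eyal describes: investment feeds back into the trigger rate.

```python
import random

# A schematic loop through Eyal's four phases. The class, the reward pool,
# and the probabilities are all invented for illustration.

class HookCycle:
    def __init__(self):
        self.investments = 0  # posts, follows, profile data

    def trigger(self) -> bool:
        # More prior investment -> more personally relevant prompts -> more opens.
        return random.random() < min(0.2 + 0.05 * self.investments, 0.9)

    def action(self) -> None:
        pass  # a single tap; friction engineered toward zero

    def variable_reward(self) -> str:
        return random.choice(["like", "tag", "old friend's reply", "nothing"])

    def invest(self) -> None:
        self.investments += 1  # the user's own content fuels future triggers

    def run_once(self) -> None:
        if self.trigger():
            self.action()
            if self.variable_reward() != "nothing":
                self.invest()

cycle = HookCycle()
for _ in range(50):
    cycle.run_once()
print(f"Investments after 50 prompts: {cycle.investments}")
```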

Eyal has since published "Indistractable" (2019), a book about resisting the very patterns his first book taught others to build. Some critics read this as a partial recantation. Eyal himself frames it as personal responsibility for users. The tension between the two books maps precisely onto the tension at the heart of the K.G.M. case: is the product designed to be addictive, or is the user failing to exercise self-control?

A/B Testing at Scale: One Billion Unwitting Subjects

No single engineer at Meta or YouTube sits at a desk and decides to make the product more addictive. The mechanism is more diffuse and more effective than any individual decision. It is called A/B testing, and at the scale of social media platforms, it amounts to continuous behavioral experimentation on billions of users.

The principle is straightforward. A product team develops two versions of a feature. Version A is the existing design. Version B introduces a change, perhaps a different notification wording, a resized button, a reordered feed algorithm. Half the users see version A, half see version B. The version that produces better engagement metrics wins. The losing version is discarded, and the winning version becomes the new baseline for the next test.

At its peak, Facebook was running thousands of A/B tests simultaneously. Each test is small. A different shade of blue on a button. A notification sent at 8 a.m. instead of 9 a.m. But over thousands of iterations, the compound effect is a product exquisitely tuned to capture and retain attention. No one decided to make the product addictive. The optimization process, running at scale over years, converged on addiction as the most engagement-efficient state.
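
The loop itself fits in a few lines. What follows is a generic sketch under the usual conventions, hash-based bucketing and a single engagement metric, not a description of Meta's or Google's experimentation platforms; real pipelines add statistical significance testing and guardrail metrics on top, and the session data here is synthetic.

```python
import hashlib

# Minimal sketch of an engagement A/B test. Hash-based bucketing is standard
# practice; the experiment name, metric, and data are invented.

def assign_variant(user_id: str, experiment: str) -> str:
    """Deterministically split users 50/50 so each user always sees the same version."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "B" if int(digest, 16) % 2 else "A"

def evaluate(sessions: list[dict]) -> dict:
    """Compare mean session minutes per variant; the higher one becomes the new baseline."""
    totals: dict[str, list[float]] = {"A": [], "B": []}
    for s in sessions:
        totals[s["variant"]].append(s["minutes"])
    return {v: sum(m) / len(m) if m else 0.0 for v, m in totals.items()}

# Synthetic sessions, purely to exercise the pipeline.
sessions = [
    {"variant": assign_variant(str(uid), "notif_copy_v2"), "minutes": 11.0 + (uid % 3)}
    for uid in range(1000)
]
print(evaluate(sessions))  # whichever mean is higher "wins" and ships to everyone
```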

The practice's ethical boundary came under scrutiny in 2014, when researchers at Facebook published a paper in the Proceedings of the National Academy of Sciences describing what they called an "emotional contagion" experiment. For one week in January 2012, the platform manipulated the news feeds of approximately 689,000 users, showing some more positive content and others more negative content, then measured whether the manipulation affected users' own posting behavior. It did. Users exposed to more negative content posted more negatively. The study, authored by Adam Kramer, Jamie Guillory, and Jeffrey Hancock, provoked immediate backlash not because of its findings but because of its method: nearly 700,000 people had their emotional environment manipulated without their knowledge or meaningful consent.

The episode revealed something the industry preferred to leave unstated. A/B testing is not neutral optimization. When the metric being optimized is time on platform, every test that succeeds is a test that found a way to keep people using the product longer. The cumulative result is a system that treats user attention as a resource to be extracted.

The Algorithmic Recommendation Engine

If A/B testing shapes the interface, the algorithmic recommendation engine shapes the content. And on most social media platforms, the algorithm determines the vast majority of what any individual user sees.

YouTube's recommendation system drives an estimated 70 percent of total watch time on the platform, according to Neal Mohan, who cited the figure while serving as the company's chief product officer and now leads YouTube as its CEO. That number means most YouTube viewing is not chosen by the user in the conventional sense. A person searches for a video, watches it, and then the algorithm selects the next video, and the next, and the next. Each selection is optimized for the probability that the user will keep watching.

Guillaume Chaslot, a software engineer who worked on YouTube's recommendation algorithm until 2013, has described the system's internal logic in public talks and interviews. "The algorithm does not optimize for truth," Chaslot said. "It optimizes for watch time." Content that provokes strong emotional reactions, whether outrage, fascination, or anxiety, generates higher engagement signals than content that informs calmly. The algorithm does not have a preference for misleading content per se. It has a preference for engaging content, and the two categories overlap significantly.
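
Reduced to its objective function, the logic Chaslot describes looks something like the sketch below. The candidate fields and numbers are invented, and production systems use learned models over thousands of features; the point is only that when the sort key is expected watch time, provocative content wins the ranking whenever it holds attention longer.

```python
from dataclasses import dataclass

# Schematic ranking step: score candidates by predicted watch time.
# The fields and the scoring rule are invented for illustration.

@dataclass
class Candidate:
    video_id: str
    p_click: float          # predicted probability the user starts the video
    expected_minutes: float  # predicted minutes watched if started

def rank_for_watch_time(candidates: list[Candidate]) -> list[Candidate]:
    """Order candidates by expected watch time, the optimization target
    Chaslot describes, not by accuracy, calm, or usefulness."""
    return sorted(candidates, key=lambda c: c.p_click * c.expected_minutes, reverse=True)

queue = rank_for_watch_time([
    Candidate("calm_explainer", p_click=0.30, expected_minutes=4.0),
    Candidate("outrage_clip", p_click=0.55, expected_minutes=6.5),
])
print([c.video_id for c in queue])  # the provocative clip ranks first
```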

Instagram made a parallel shift in 2016, abandoning its chronological feed in favor of an algorithmic one. The decision was framed as an improvement: users would see the content most relevant to them, rather than content in the order it was posted. In practice, "most relevant" meant "most likely to generate engagement," which systematically favored provocative, visually striking, and emotionally charged material.

TikTok's For You Page, which launched with the app's global expansion around 2018, pushed the logic further. TikTok's recommendation engine learns user preferences within minutes of first use, requiring no social graph at all. A new user simply watches, and the algorithm calibrates. The speed and accuracy of TikTok's recommendations became its primary competitive advantage, and forced Instagram and YouTube to accelerate their own algorithmic personalization.
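
Why no social graph is needed is easier to see in miniature. The sketch below uses an invented moving-average update over invented topic labels; real systems learn embeddings from far richer signals, but the principle is the same: a handful of watch-completion observations is enough to start ranking.

```python
from collections import defaultdict

# Toy illustration of cold-start preference learning from watch signals alone,
# with no friends and no follows. The topics and update rule are invented.

class InterestProfile:
    def __init__(self, learning_rate: float = 0.3):
        self.affinity: dict[str, float] = defaultdict(float)
        self.lr = learning_rate

    def observe(self, topic: str, watch_fraction: float) -> None:
        """watch_fraction: share of the video watched (0.0 instant skip, 1.0 full view).
        Each observation nudges the topic's affinity toward the observed signal."""
        self.affinity[topic] += self.lr * (watch_fraction - self.affinity[topic])

    def top_topics(self, n: int = 3) -> list[str]:
        return sorted(self.affinity, key=self.affinity.get, reverse=True)[:n]

profile = InterestProfile()
for topic, frac in [("cooking", 0.9), ("politics", 0.1), ("cooking", 1.0), ("pets", 0.8)]:
    profile.observe(topic, frac)
print(profile.top_topics())  # a usable ranking after only four videos
```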

The result, across platforms, is an attention environment in which the user's conscious choices matter less than the algorithm's selections. You open the app to check one thing and surface forty minutes later, having watched content you never sought. The algorithm did not trick you. It predicted, correctly, that each successive piece of content would hold your attention for a few more seconds than the alternative. Multiply that by a billion users and you have an engagement engine of unprecedented efficiency.

Beauty Filters and the Distortion Mirror

Not all design choices operate through algorithmic recommendation. Some work on the body itself. Instagram's augmented reality face filters, launched in 2017 following Snapchat's introduction of Lenses in 2015, allowed users to see a modified version of their own face in real time. Smoothed skin. Enlarged eyes. Reshaped jawlines. The filters were positioned as playful, but their most popular variants performed a specific function: they showed users what they would look like if their faces conformed more closely to a narrow beauty standard.

The psychological effect of seeing your idealized face and then confronting your unfiltered reflection is not theoretical. Meta's own internal researchers documented it. In 2021, Frances Haugen, a former product manager at Facebook, leaked tens of thousands of internal documents to the Wall Street Journal and the Securities and Exchange Commission. Among them was a 2019 internal presentation that stated: "We make body image issues worse for one in three teen girls." The underlying research found that among teen girls who already felt bad about their bodies, 32 percent said Instagram made them feel worse, and that among teens who reported suicidal thoughts, 13 percent of British users and 6 percent of American users traced those thoughts to Instagram.

Meta publicly contested the framing of these findings, arguing that the research was being taken out of context. But the documents entered the K.G.M. trial record. K.G.M., whose first name is Kaley, testified that she posted hundreds of photos using beauty filters to mask her insecurities about her appearance. She began using YouTube at age six and Instagram as a child. By the time she filed her lawsuit in 2023, she had been diagnosed with body dysmorphia and described thoughts of self-harm.

The design choice to make beauty filters prominent, to place them at the top of the camera interface rather than burying them in a submenu, is itself a product decision. The choice to keep them available to users of any age, including children, is another. The K.G.M. jury evaluated these choices not as abstract policies but as engineering specifications with measurable consequences.

Autoplay and the Abolition of Stopping Cues

When you finish a chapter in a book, you close the cover, and there is a natural pause. When a movie ends, the credits roll, the lights come up, and you re-enter the room. Behavioral scientists call these transitions "stopping cues": environmental signals that prompt a person to evaluate whether to continue or do something else.

Autoplay eliminates them.

YouTube introduced autoplay for suggested videos around 2015. As one video ends, the feature starts a short countdown and then begins the next algorithmically selected video, often while the previous video's end screen is still visible. The user does not choose the next video. The system chooses it and begins playing it. Declining to watch requires an active intervention: reaching for the screen to cancel the countdown, press pause, or navigate away. The default is continuation.
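
In code, the inversion is nearly a one-liner: the timer fires unless someone cancels it. The sketch below uses an invented countdown and invented function and object names; it is the shape of the pattern, not YouTube's implementation.

```python
import threading

# Schematic of the autoplay default: continuation happens unless the user acts.
# The countdown length, player object, and function names are invented.

def autoplay_next(play_next, countdown_seconds: float = 5.0) -> threading.Timer:
    """Schedule the next video to start automatically. Doing nothing means it
    plays; stopping requires an explicit cancel before the timer fires."""
    timer = threading.Timer(countdown_seconds, play_next)
    timer.start()
    return timer

def on_video_ended(recommendations, player):
    next_video = recommendations.pop(0)  # chosen by the system, not the user
    pending = autoplay_next(lambda: player.play(next_video))
    return pending  # a hypothetical "cancel" button would call pending.cancel()

if __name__ == "__main__":
    t = autoplay_next(lambda: print("next video starts"), countdown_seconds=1.0)
    # Opting out is the action, not opting in; uncomment to decline:
    # t.cancel()
    t.join()
```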

Tristan Harris, a former design ethicist at Google, has called the systematic removal of stopping cues one of the most effective retention strategies in digital product design. Harris joined Google in 2011 through its acquisition of his startup Apture and circulated an internal presentation in 2013 titled "A Call to Minimize Distraction and Respect Users' Attention," which argued that the company's products were designed to exploit psychological vulnerabilities. The presentation went viral within Google and was read by thousands of employees, but did not result in structural changes to the company's products. Harris left in late 2015 and in 2018 co-founded the Center for Humane Technology alongside Aza Raskin.

Netflix adopted the same design pattern for television, automatically beginning the next episode of a series after a brief countdown. The effect is the same: the natural pause between episodes, which might prompt a viewer to check the time or decide to go to sleep, is replaced by an automated transition that treats continuation as the default and stopping as the exception.

When autoplay operates in conjunction with algorithmic recommendations, the combination creates what researchers and journalists have called "rabbit holes." Each successive video is selected to maximize the probability of continued viewing, and the removal of stopping cues means the user progresses from one selection to the next without conscious decision-making. A person who opens YouTube to watch a single cooking tutorial can surface an hour later having watched a sequence of increasingly tangential content, each video individually chosen by an algorithm and automatically initiated by autoplay.

The Designers Who Walked Away

The most effective witnesses against persuasive design are the people who built it. A loose coalition of former Silicon Valley insiders has emerged over the past decade, and their testimony, both public and in courtrooms, has given legal weight to claims that platforms were designed with knowledge of their behavioral effects.

Tristan Harris is the most prominent. After his internal presentation failed to change Google's direction, he became the public face of tech industry self-criticism. His 2017 appearance on "60 Minutes" introduced millions of viewers to the concept of persuasive design, and his organization, the Center for Humane Technology, has briefed members of Congress, European regulators, and legal teams preparing cases against social media companies.

Frances Haugen's 2021 disclosures were more consequential in legal terms. By providing internal documents to the Wall Street Journal, the SEC, and Congress, she created a paper trail showing that Meta's own researchers had identified harms to teen users and that the company's leadership had chosen not to implement recommended changes. Her testimony before the US Senate Commerce Committee in October 2021 was notable for its specificity: Haugen spoke not in generalities about social media harms but in the language of product teams, describing how specific algorithmic choices amplified harmful content.

Justin Rosenstein, who helped develop Facebook's Like button starting in 2007, later described the feature as creating a "bright ding of pseudo-pleasure." The Like button's design was not accidental. It created a quantified social feedback loop: post something, watch the number rise, feel the corresponding emotional response. Development began during internal hackathons in mid-2007, though the button did not launch publicly until February 2009. Rosenstein's public regret added to a growing catalog of designer recantations that journalists, advocates, and eventually lawyers cited as evidence that the industry understood what it was building.

Sandy Parakilas, who managed platform operations at Facebook until 2012, testified before the UK Parliament about the company's data and design practices. His testimony, like Haugen's, was credible precisely because of his insider status. These were not academic critics speculating about possible harms. They were engineers and managers who had seen the internal data, participated in design decisions, and concluded that the trade-offs were wrong.

Their collective testimony speaks to something the K.G.M. jury ultimately found decisive: the companies were not ignorant of the behavioral effects of their products. They measured those effects, discussed them in internal presentations, and in many cases chose not to mitigate them.

The Courtroom Meets the Codebase

In March 2026, the K.G.M. trial reached its conclusion. Plaintiff's attorney Mark Lanier had spent weeks presenting the jury with internal company documents, testimony from executives, and the conceptual framework that connects persuasive design to measurable harm. Meta CEO Mark Zuckerberg and Instagram head Adam Mosseri both took the stand and rejected the characterization of their platforms as "clinically addictive." YouTube argued it was a streaming platform, not a social media site, and that its features were not designed to be addictive.

The jury disagreed. All but two jurors found both companies liable, determining that Meta and YouTube were negligent in designing their platforms and that their products harmed K.G.M. The financial damages, $4.2 million from Meta and $1.8 million from YouTube, were modest by any corporate standard. Mark Lanier made the scale vivid during closing arguments by holding a jar of M&M's, each piece of candy representing a billion dollars of the companies' value. "You can take out a handful and not make a difference," he said, scooping out several. He bit into a single blue M&M. "This is like $200 million. They do not want to feel the pain for what they did."

The jury then awarded $3 million in punitive damages. TikTok and Snap had already settled with K.G.M. for undisclosed terms before the trial began. Outside the courtroom, jurors who identified themselves as Matthew and Victoria spoke about the deliberations. "We wanted them to feel it," Victoria said. "We wanted them to realize this was unacceptable."

The verdict does not hold that social media is inherently harmful. It holds that specific design choices, made with knowledge of their behavioral consequences, constitute negligence. Infinite scroll, algorithmic recommendations, autoplay, beauty filters. These are not abstract features in the K.G.M. record. They are engineering specifications that a group of ordinary citizens examined, evaluated against the evidence of internal knowledge, and found to be defective.

The K.G.M. case was one of thousands filed against social media companies by individuals, school districts, and state attorneys general. Eight more individual trials are scheduled in the same Los Angeles courthouse. Federal cases brought by states and school districts are set for jury trials in Oakland later in the year. The same week as the K.G.M. verdict, a New Mexico jury found Meta liable for $375 million in a separate case brought by the state's attorney general.

Aza Raskin solved a pagination problem in 2006. The solution removed a friction point, which removed a decision point, which removed the user's natural opportunity to stop. Every design pattern described in this article follows the same logic: identify a moment of friction, eliminate it, and measure the increase in engagement. The question the K.G.M. verdict poses is not whether these features are good or bad. It is whether the people who build them and measure their effects bear responsibility for consequences they can observe and choose not to prevent. If the optimization target changes, the product changes overnight. The engineering is the easy part. The decision to optimize for something other than time on platform is the part that, so far, has required a jury to compel.

Sources:

Nir Eyal, "Hooked: How to Build Habit-Forming Products" (Portfolio/Penguin, 2014)

B.J. Fogg, "Persuasive Technology: Using Computers to Change What We Think and Do" (Morgan Kaufmann, 2003)

B.J. Fogg, "A Behavior Model for Persuasive Design," Proceedings of the 4th International Conference on Persuasive Technology (ACM, 2009)

Natasha Dow Schüll, "Addiction by Design: Machine Gambling in Las Vegas" (Princeton University Press, 2012)

Frances Haugen, testimony before the US Senate Commerce Committee, October 5, 2021

Meta internal research documents, leaked 2021 (Wall Street Journal "Facebook Files")

K.G.M. v. Meta Platforms et al., Los Angeles County Superior Court, March 2026

Tristan Harris, Center for Humane Technology publications and "60 Minutes" appearance, April 2017

Aza Raskin, BBC interview on infinite scroll, 2018

Guillaume Chaslot, public statements on YouTube recommendation algorithm

Adam D.I. Kramer, Jamie E. Guillory, Jeffrey T. Hancock, "Experimental Evidence of Massive-Scale Emotional Contagion Through Social Networks," Proceedings of the National Academy of Sciences, 2014

Neal Mohan, statements on YouTube recommendation algorithm and watch time

This article was AI-assisted and fact-checked for accuracy. Sources listed at the end.