From Canberra to Beijing: The Global Scramble to Regulate Children's Social Media
At least 15 countries are drafting laws to protect minors online. The problem: platforms earn an estimated $11 billion a year from users under 18 in the United States alone, and every proposed solution carries a cost no government wants to pay.
Thirteen Minutes of Revenue
Six million dollars. That is what a California jury ordered Meta and YouTube to pay for harming a young woman named K.G.M. through addictive product design. The verdict, delivered in March 2026, made legal history. It also amounts to roughly thirteen minutes of Meta's quarterly revenue.
Meta reported approximately $60 billion in revenue for the fourth quarter of 2025. At that rate, the company earns about $27 million per hour. The entire K.G.M. penalty, split between Meta's $4.2 million and YouTube's $1.8 million, gets absorbed before a single employee finishes a coffee break. A week later, a New Mexico jury ordered Meta to pay $375 million for failing to protect minors from predators on its platforms. That is larger, certainly. It still represents less than one percent of a single quarter's revenue.
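The arithmetic behind the thirteen-minute figure is easy to reproduce. A minimal sketch, using Meta's reported Q4 2025 revenue and assuming a 91-day quarter:

```python
# Back-of-envelope check of the "thirteen minutes" claim.
# Revenue figure as reported; the 91-day quarter is an assumption.
QUARTERLY_REVENUE = 59.89e9          # Meta, Q4 2025, dollars
HOURS_PER_QUARTER = 91 * 24          # assumed 91-day quarter

revenue_per_hour = QUARTERLY_REVENUE / HOURS_PER_QUARTER
verdict = 4.2e6 + 1.8e6              # Meta's and YouTube's shares of the K.G.M. award

minutes_to_absorb = verdict / revenue_per_hour * 60
print(f"revenue per hour: ${revenue_per_hour / 1e6:.1f}M")        # ~$27.4M
print(f"minutes to absorb the verdict: {minutes_to_absorb:.1f}")  # ~13.1
```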
The numbers expose the core tension driving every regulatory effort on the planet. Social media companies earn billions from users under 18. A Harvard T.H. Chan School of Public Health study estimated that six major platforms collectively derived nearly $11 billion in advertising revenue from US-based users under 18 in 2022. Against that figure, every fine imposed to date, every legislative proposal drafted, and every regulatory body tasked with enforcement looks structurally inadequate. Thousands of lawsuits have been filed against social media companies by individuals, school districts, and state attorneys general across the United States, with the federal multidistrict litigation alone containing more than 2,400 pending actions as of early 2026. The legal pressure is real. But the economic incentive to resist is larger.
This is the arithmetic that governments from Canberra to Beijing, from Brussels to Brasilia, are trying to solve. Their approaches differ radically. Their results, so far, share one thing in common: none has changed the fundamental economics.
Australia's Experiment: Banning Under-16s Entirely
Australia chose the most direct path. In November 2024, parliament passed the Online Safety Amendment (Social Media Minimum Age) Act with overwhelming bipartisan support. The law bans children under 16 from using social media platforms, including Instagram, TikTok, Snapchat, X, and Reddit. It places the enforcement burden entirely on platforms, not on parents or children. No child faces a fine for creating an account. No parent faces a penalty for looking the other way. The platforms themselves must prevent underage access or face penalties of up to AUD 49.5 million per breach.
The law's simplicity is its political strength. It gave frustrated parents a clear signal: the government took a side. Prime Minister Anthony Albanese framed it as a matter of childhood protection, and the Coalition opposition backed it without hesitation. The eSafety Commissioner, Australia's online safety regulator, was tasked with overseeing implementation.
But implementation is where simplicity ends. The government commissioned age verification trials during 2025, testing technologies ranging from facial age estimation to digital identity tokens. None performed well enough across the board. Facial estimation tools like Yoti's showed accuracy gaps along lines of age, gender, and ethnicity. Document-based verification required users to submit government-issued identification, raising immediate data protection concerns. Privacy-preserving token systems, which would allow users to prove they are over 16 without revealing their identity, remain technically immature.
As of early 2026, the enforcement mechanism is still being finalized. No platform has been fined. The law exists, the implementation timeline stretches forward, and Australian teenagers with moderate technical literacy continue to access social media through workarounds that range from using a parent's account to lying about their birthday, the same method that has defeated every age gate since the internet began.
The experiment matters nonetheless. Across Southeast Asia, governments are watching. Indonesia, where social media penetration among young people is among the highest in the world, has cited the Australian model in early-stage policy discussions. Malaysia is considering similar age restrictions. Japan, where Kagawa prefecture imposed gaming time limits for minors through a 2020 ordinance but where social media has so far been treated differently, is studying whether the Australian framework translates to its own legal system.
China's Parallel Universe: Time Limits and Real Names
China did not wait for a verdict to act. The Cyberspace Administration of China issued regulations in 2023 that imposed graduated time limits on minors' internet use: 40 minutes per day for children under 8, extending to two hours for those aged 16 to 18. These rules built on gaming restrictions introduced in 2021, which limited minors to three hours of online gaming per week, a rule enforced through the country's mandatory real-name registration system.
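Expressed as a rule, the graduated structure looks something like the sketch below. Only the two endpoints come from the published figures; the middle tier is an illustrative assumption, not the regulation's actual schedule.

```python
from typing import Optional

def daily_limit_minutes(age: int) -> Optional[int]:
    """Graduated daily cap on minors' internet use, by age.

    The under-8 and 16-to-18 tiers match the figures cited above;
    the intermediate tier is an assumption for illustration only.
    """
    if age < 8:
        return 40    # 40 minutes per day for children under 8
    if age < 16:
        return 60    # assumed intermediate tier (not from the regulation)
    if age < 18:
        return 120   # two hours for ages 16 to 18
    return None      # no cap for adults
```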
Douyin, the Chinese version of TikTok, implemented a "Youth Mode" in 2021 that restricts content and enforces the time caps. The technical enforcement works because China has something no democratic nation possesses: a universal real-name internet registration requirement backed by national identity documents. Every user is, in theory, identifiable. Every account can be matched to a real person and a verified age.
Government-reported data from the China Internet Network Information Center claimed a 70 percent reduction in minors' gaming time after the 2021 restrictions took effect. The figure is difficult to verify independently. Chinese parents, like parents everywhere, have found workarounds. Children use their grandparents' or parents' accounts. Third-party account-sharing services emerged briefly before being shut down. The gap between regulation on paper and enforcement in bedrooms is not unique to democracies.
What China demonstrates is not a model for export but a boundary condition. The most effective age verification requires the most invasive surveillance infrastructure. Beijing can enforce time limits because it already tracks every citizen's online activity. For governments that consider privacy a competing value rather than an obstacle, the Chinese approach defines the ceiling of what enforcement can achieve and the floor of what civil liberties it costs.
Europe's Third Way: Transparency Over Prohibition
The European Union rejected both the Australian ban and the Chinese surveillance model. The Digital Services Act, which entered full enforcement in 2024, takes a design-focused approach. Rather than excluding minors from platforms, it requires platforms to change how they treat young users.
Article 28 of the DSA prohibits targeted advertising based on the profiling of minors. Articles 34 and 35 require very large online platforms to conduct systemic risk assessments that specifically address risks to minors' physical and mental wellbeing, and to implement mitigation measures for those risks. Platforms must demonstrate that their design choices do not exploit the vulnerabilities of young users. The logic is philosophically closer to the K.G.M. verdict's product-design theory than to Australia's age gate: don't ban the user, regulate the product.
Germany arrived at this approach through its own evolution. The Network Enforcement Act, known as NetzDG, became one of the world's first social media content laws when it was passed in 2017. But NetzDG targeted hate speech and illegal content, not addictive design. Germany's Youth Protection Act, the Jugendschutzgesetz, addresses minors' media exposure but, even after a 2021 update, remains rooted in an era of television and physical media. The DSA effectively supersedes these national frameworks with a continent-wide standard.
France has pushed further within the EU framework. French law established a concept of "digital majority" at age 15: social networks must obtain parental consent before registering younger users. The approach echoes COPPA's model in the United States but adds the GDPR's enforcement teeth. Spain has drafted legislation that would restrict minors' social media access, though details remain in flux. Denmark is considering age-based limits that would draw on both the Australian and EU models.
The DSA's penalty structure gives it theoretical force: up to six percent of a platform's global annual turnover for systemic violations. For Meta, that could mean nearly $12 billion. But enforcement is split between the European Commission, which polices the very largest platforms directly, and national Digital Services Coordinators in the 27 member states, each with its own resources, priorities, and political pressures. The first DSA enforcement actions in 2025 focused on transparency requirements and illegal content, not specifically on the protection of minors. The minors provisions remain, for now, more promise than practice.
The United Kingdom: Ofcom's Slow Burn
The United Kingdom took a different institutional path. The Online Safety Act received Royal Assent in October 2023, making it one of the earliest comprehensive online safety laws in the democratic world. It designated Ofcom, the communications regulator, as the enforcement authority and gave it penalty powers that exceed even the EU's: up to 10 percent of global annual revenue or 18 million pounds, whichever is higher.
The penalty ceiling is striking. Ten percent of Meta's annual revenue would approach $20 billion. Ten percent of Alphabet's would exceed $30 billion. These are numbers that would change corporate behavior, numbers large enough to make the K.G.M. verdict's $6 million look like a rounding error.
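Because the ceilings are flat percentages, the comparison is simple to reproduce. A sketch using the revenue figures cited in this piece (roughly $200 billion a year for Meta; the Alphabet number is the lower bound the text implies):

```python
# Penalty ceilings as a share of annual revenue.
META_ANNUAL = 200e9        # per the figures cited elsewhere in this piece
ALPHABET_ANNUAL = 300e9    # lower bound implied by "would exceed $30 billion"

dsa_cap = 0.06 * META_ANNUAL                 # EU DSA, 6%  -> ~$12B
osa_cap_meta = 0.10 * META_ANNUAL            # UK OSA, 10% -> ~$20B
osa_cap_alphabet = 0.10 * ALPHABET_ANNUAL    # UK OSA, 10% -> $30B+

kgm_verdict = 6e6
print(f"K.G.M. verdict vs UK ceiling for Meta: {kgm_verdict / osa_cap_meta:.3%}")
# ~0.030% -- the rounding error the text describes
```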
But Ofcom operates through codes of practice, detailed technical standards that platforms must follow. These codes are being published in phases through 2025 and 2026, with child safety provisions among the final pieces. Until the codes are finalized, platforms face requirements in principle but not in enforceable detail. The Online Safety Act's architecture is sound. Its implementation timeline means that children will have aged through several social media years before the protections designed for them take practical effect.
The UK approach reveals a tension inherent in regulatory design. The more carefully a law is drafted, the more time its implementation requires. Australia moved fast and faces enforcement gaps. The UK moved deliberately and faces a different gap: the one between passing a law and actually applying it.
America's Legislative Graveyard
No major economy has produced more proposals and fewer results than the United States. The list of failed or stalled initiatives reads like an obituary column. The Children's Online Privacy Protection Act of 1998, COPPA, protects children under 13 from unauthorized data collection. It has never been updated despite more than two decades of technological transformation. A proposed COPPA 2.0, which would raise the protected age to 16, has not passed.
The Kids Online Safety Act cleared the Senate in July 2024 by a vote of 91 to 3, one of the most bipartisan results in recent congressional history. It never received a vote in the House. In June 2024, Surgeon General Vivek Murthy called for Congress to require warning labels on social media platforms, similar to those on tobacco products, citing the platforms' association with mental health harms for adolescents. The call generated headlines, editorial support, and no legislation.
Three structural barriers explain the pattern. First, the First Amendment creates constitutional constraints on content-based regulation that do not exist in Australia, the EU, or China. Courts have repeatedly struck down laws that restrict online expression, even when aimed at protecting minors. Second, Section 230 of the Communications Decency Act has, until the K.G.M. verdict opened a crack, shielded platforms from liability for the effects of their products. Third, the technology industry's lobbying infrastructure is formidable. OpenSecrets data shows major tech firms spent more than $60 million on federal lobbying in 2024 alone, a figure that does not include state-level efforts, campaign contributions, or the revolving door between regulatory agencies and industry.
The result is a regulatory vacuum that extends well beyond American borders. Brazil's Marco Civil da Internet, passed in 2014, established an internet rights framework that emphasizes freedom of expression and net neutrality but lacks specific protections against addictive design targeting minors. India, with nearly 500 million social media users and one of the youngest demographics of any major economy, has proposed a Digital India Act that would modernize its Information Technology Act of 2000, but the legislation has been in draft form for years. In both countries, as in the United States, the gap between the scale of the problem and the scale of the response remains vast.
The Age Verification Problem
Every regulatory approach, from Australia's ban to the EU's design requirements to the UK's codes of practice, eventually collides with the same technical obstacle: proving a user's age without building a surveillance system.
Three categories of technology compete for the role. Self-declaration, where users enter their birthdate, is the current default and is essentially worthless. Any child who can read a calendar can claim to be 18. Document-based verification requires users to upload government-issued identification, which works but creates databases of identity documents that become targets for data breaches. The UK Information Commissioner's Office and the French CNIL have both flagged the data protection implications of mandatory ID verification for social media access.
The third category, age estimation using artificial intelligence, attempts to infer age from facial features, typing patterns, or behavioral signals. Companies like Yoti have developed facial analysis tools that estimate a user's age within a margin of error. But the margins vary. The technology performs less accurately for younger children, for users with darker skin tones, and for certain gender presentations. An age gate that works well for 25-year-old white men and poorly for 14-year-old girls of color is not a solution; it is a liability.
Australia's government-funded age verification trials have not yet produced a technology that satisfies all three requirements simultaneously: accuracy, privacy preservation, and scalability. The OECD's 2021 Recommendation on Children in the Digital Environment notably avoided endorsing age gates, instead recommending "age-appropriate design" as a less invasive alternative. The idea is that platforms should be safe for all users, including children, by default, rather than trying to identify and exclude minors.
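The privacy-preserving token idea raised in the Australian trials is easiest to see in code. A minimal sketch, assuming a trusted verifier that checks a user's age once and then issues a signed, identity-free "over 16" attestation; the platform checks the signature against the verifier's published key and learns only that someone passed an age check. Every name here is illustrative, and a real scheme would need expiry, unlinkability, and revocation on top.

```python
# Minimal sketch of a privacy-preserving age token (illustrative only).
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The verifier's long-term keypair; the public half is published.
verifier_key = Ed25519PrivateKey.generate()
verifier_pub = verifier_key.public_key()

def issue_token() -> tuple[bytes, bytes]:
    """Verifier side: runs after a one-time age check. The token carries
    no identity, just the claim plus a random nonce."""
    token = b"age>=16:" + os.urandom(16)
    return token, verifier_key.sign(token)

def platform_accepts(token: bytes, signature: bytes) -> bool:
    """Platform side: verify the claim without learning who the user is."""
    if not token.startswith(b"age>=16:"):
        return False
    try:
        verifier_pub.verify(signature, token)
        return True
    except InvalidSignature:
        return False

token, sig = issue_token()
assert platform_accepts(token, sig)
```

The cryptography is the easy part. The open questions are the ones the trials surfaced: who operates the verifier, how the first age check is performed, and how to keep tokens from being shared the way passwords are.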
In the Gulf states and parts of the MENA region, where national identity infrastructure is well-developed and digital government services are widespread, age verification is technically simpler. The UAE's digital identity system and Saudi Arabia's Absher platform could, in principle, enable age-verified social media access. But these are countries where the line between citizen verification and citizen surveillance is already thin, and where the regulatory impulse around social media has focused on content control rather than child protection from addictive design.
Follow the Money: Why Platforms Fight Every Rule
The resistance is not mysterious. It is arithmetic.
Meta does not disclose how much revenue it earns from users under 18, but the pieces of the calculation are available. The company's per-user monetization in the United States and Canada is among the highest in the world, exceeding $200 per user annually across its platforms. Industry analysts estimate Meta's US user base under 18 at roughly 25 to 30 million across Facebook and Instagram. That places the annual revenue from American minors alone in the billions.
But the revenue number understates the economic value. A 13-year-old who joins Instagram and remains an active user through age 30 represents 17 years of compounding engagement and ad revenue. Multiply that by the estimated minor user base and the lifetime value of the under-18 cohort in the US alone is substantial. Losing access to minors does not merely reduce current revenue. It disrupts the acquisition funnel that supplies the next generation of adult users, the ones who will generate revenue for decades.
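The lifetime-value arithmetic is worth making explicit. A rough sketch using the figures in this section, with per-user revenue held flat across the whole period as a simplifying assumption:

```python
# Rough lifetime value of the US under-18 cohort.
# ARPU and cohort size come from the figures above; holding ARPU
# flat from age 13 to 30 is a deliberate simplification.
ARPU_US = 200     # dollars per user per year (lower bound cited above)
YEARS = 30 - 13   # a 13-year-old retained through age 30
COHORT = 25e6     # low end of the estimated US minor user base

per_user = ARPU_US * YEARS                # $3,400 per retained user
cohort_value = per_user * COHORT
print(f"cohort lifetime value: ${cohort_value / 1e9:.0f}B")  # ~$85B, before growth
```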
Internal Meta documents leaked in 2021 revealed that the company's own research acknowledged the importance of young users to the platform's long-term growth. The language in the leaked studies was clinical, but the strategy was straightforward: acquire users young, retain them long. This is not a conspiracy theory; it is a customer acquisition strategy that every subscription business understands.
The compliance costs of age verification push in the same direction. Implementing robust age verification across platforms with billions of users would cost hundreds of millions annually. Platforms frame this as a burden, and it is, but the cost of compliance is a fraction of the revenue they would lose if compliance actually worked. The economic incentive is to implement age verification that appears serious but functions poorly, to build an age gate that looks like a wall but operates like a screen door.
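The screen-door incentive can be put in numbers, using the Harvard Chan estimate from earlier. A sketch, with the compliance figure an assumed midpoint of the range the text gives:

```python
# Why a leaky age gate is rational: cost of compliance vs revenue at risk.
compliance_cost = 500e6   # "hundreds of millions annually" (assumed midpoint)
revenue_at_risk = 11e9    # Harvard Chan estimate: US under-18 ad revenue, 2022

ratio = revenue_at_risk / compliance_cost
print(f"revenue at risk is {ratio:.0f}x the cost of compliance")  # ~22x
```

On these numbers, an age gate that actually worked would cost roughly twenty times more in foregone revenue than the gate itself costs to run.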
This calculus extends into the financial markets. Gulf sovereign wealth funds, including Abu Dhabi's ADIA and Saudi Arabia's Public Investment Fund, hold positions in Meta and Alphabet stock. For these governments, regulating social media platforms is not just a question of child safety; it is a question of portfolio returns. The tension between investor interest and regulatory interest is rarely stated openly, but it shapes the speed at which regulation moves in capitals from Riyadh to Washington.
The Enforcement Illusion
Laws on paper and laws in practice are different things. The gap between statutory penalty ceilings and actual enforcement tells its own story.
Australia's Social Media Minimum Age Act provides for fines of up to AUD 49.5 million per breach. As of early 2026, zero fines have been issued. The law has not yet been tested against a single platform.
The EU's DSA can levy penalties of up to six percent of global turnover. The first enforcement actions in 2025 addressed transparency violations and illegal content. Specific enforcement on the protection of minors has not yet occurred.
The UK's Online Safety Act authorizes fines of up to 10 percent of global revenue. The child safety codes of practice are still being finalized.
China reports high compliance rates, but independent verification is impossible in a system that does not permit independent auditing.
The United States provides the most revealing data point. COPPA has been in force since 1998. Over 25 years, the Federal Trade Commission has collected a few hundred million dollars in total COPPA fines. That sounds substantial until you consider that Meta alone earned more than $200 billion in revenue during 2025. A quarter century of American child protection enforcement amounts to a fraction of one company's annual revenue.
The FTC, the agency responsible for COPPA enforcement, operates with a staff of roughly 1,100 and a budget of approximately $426 million per year. It oversees not only social media but the entire American economy's consumer protection landscape. The mismatch between the regulator's capacity and the industry's scale is not a failure of political will; it is a structural condition.
What Courts Do When Legislators Won't
The K.G.M. verdict did not emerge from a vacuum. It emerged from a gap, the space between the problem every government acknowledges and the solution none has managed to implement. When legislators cannot pass laws, when regulators lack enforcement capacity, and when economic incentives overwhelm voluntary reform, the courthouse becomes the default venue for policy change.
This is not new. Tobacco litigation followed the same arc. The first major Surgeon General's report linking smoking to disease appeared in 1964. The Master Settlement Agreement, in which tobacco companies agreed to pay $206 billion across 46 states and accept marketing restrictions, came in 1998. Thirty-four years elapsed between the authoritative warning and the enforceable consequence. During those decades, legislative efforts failed repeatedly. Regulatory agencies lacked jurisdiction. Voluntary industry reform was cosmetic. The courts, through thousands of individual and class-action suits, slowly built the legal and evidentiary foundation that made the settlement inevitable.
The social media timeline is shorter but follows the same trajectory. Frances Haugen's disclosure of internal Meta documents came in 2021. The Surgeon General's advisory on social media and youth mental health came in 2023, and his call for warning labels in 2024. The K.G.M. verdict came in 2026. Eight more individual plaintiff trials are scheduled in Los Angeles County. Federal cases brought by states and school districts in Oakland are set for jury trials in summer 2026. More than 40 state attorneys general have filed or joined actions against social media companies.
Litigation is a slow, expensive, and imprecise regulatory instrument. Each case addresses a single plaintiff or a single set of claims. Verdicts vary by jurisdiction. Appeals can take years. A patchwork of court decisions does not produce the coherent, industry-wide standards that legislation can provide. But legislation requires political consensus, and political consensus requires overcoming the structural barriers that have defeated every major social media bill in the world's largest market.
The equation that governs this landscape has not changed. Platforms earn more from minors than any jurisdiction has yet been willing to take from them in penalties. The cost of effective enforcement exceeds what any regulator has been given in resources. The technology to verify age without destroying privacy does not yet exist at scale. And the political cost of doing nothing continues to rise, one verdict, one leaked document, one harmed child at a time.
The K.G.M. jury awarded $6 million. Thirteen minutes of revenue. The next jury may award more. Eventually, the math will change. It always does. The question is only how many thirteen-minute increments it takes.
Sources
Meta Platforms Inc., quarterly earnings report, Q4 2025 ($59.89 billion quarterly revenue)
K.G.M. v. Meta Platforms and YouTube, California Superior Court, Los Angeles County, March 2026
New Mexico v. Meta Platforms, March 2026, $375 million verdict
Harvard T.H. Chan School of Public Health, "Social media platforms generate billions of dollars in revenue from U.S. youth," PLOS ONE, December 2023
Australia Online Safety Amendment (Social Media Minimum Age) Act 2024
EU Digital Services Act (2024), Articles 28, 34-35
Cyberspace Administration of China regulations on minors' internet use, 2023
China Internet Network Information Center (CNNIC) statistical reports
UK Online Safety Act, 2023
Ofcom codes of practice on child safety, 2025-2026 (in progress)
US Surgeon General, Advisory on Social Media and Youth Mental Health, May 2023; call for warning labels, June 2024
Kids Online Safety Act (KOSA), US Senate vote, July 2024 (91-3)
Children's Online Privacy Protection Act (COPPA), 1998
Federal Trade Commission enforcement data, 1998-2025
OpenSecrets.org, technology industry lobbying expenditures, 2024
OECD Recommendation on Children in the Digital Environment, 2021
France, Law No. 2023-566 establishing digital majority at age 15
Germany Network Enforcement Act (NetzDG), 2017; Youth Protection Act (Jugendschutzgesetz)
Brazil Marco Civil da Internet, 2014
Meta internal research documents, leaked 2021 (Frances Haugen disclosure)
Yoti age estimation technology documentation
UK Information Commissioner's Office guidance on age assurance
Australia eSafety Commissioner, age verification trial documentation