Prism
March 26, 2026 · 14 min read

The Billion-Dollar Bet on Humanoid Bodies: Inside Figure AI and the White House Pipeline

Who funds the robot that stood next to the First Lady, what it actually can and cannot do, and what happens to children's data when the company behind it gets acquired or folds

On a Wednesday afternoon in March 2026, a shiny black-and-white humanoid robot walked into a White House gathering of world leaders' spouses. Sara Netanyahu, Olena Zelenska, and Brigitte Macron watched as the machine moved alongside Melania Trump down a red carpet through the East Room. The robot, called Figure 03, delivered salutations in eleven languages and introduced itself as "a humanoid built for the United States of America." Then it walked away.

What followed was a speech that sounded less like a policy proposal and more like a pitch deck. "Imagine a humanoid educator named 'Plato,'" the First Lady said. "Access to the classical studies is now instantaneous: literature, science, art, philosophy, mathematics, and history. Humanity's entire corpus of information is available in the comfort of your home."

Figure 03, built by the startup Figure AI, can handle household tasks like laundry and cleaning. It cannot teach. Not Plato, not arithmetic, not anything. But the question the White House event raised is not whether this robot will someday educate children. The question is who built it, who paid for it, how it ended up in the East Room, and what it would collect if it ever set foot in a classroom.

Follow the Money

Figure AI did not build its way to the White House on engineering alone. The company, founded in 2022 by Brett Adcock, a serial entrepreneur who previously co-founded the air taxi startup Archer Aviation, has assembled an investor list that reads like a directory of the most powerful companies in artificial intelligence.

In February 2024, Figure AI closed a Series B round of $675 million, valuing the company at $2.6 billion. The investors included Nvidia, Microsoft, OpenAI's startup fund, Jeff Bezos through his personal investment vehicle, and Intel. By September 2025, the company had closed a Series C round exceeding $1 billion at a $39 billion valuation. The jump from $2.6 billion to $39 billion in about a year and a half tells you less about the robot's capabilities and more about the AI investment frenzy.

Why does this matter for a robot standing next to the First Lady? Because each of those investors has interests that intersect with federal policy. Nvidia sells the chips that train AI models and recently received permission from the Trump administration to sell those chips to China. Microsoft holds major government contracts for cloud computing and AI services. OpenAI is negotiating the terms under which its technology enters government systems. Jeff Bezos owns Amazon, whose AWS division is one of the largest federal cloud contractors.

When a company backed by this constellation walks into the White House, the visit is not about education. It is about access.

What Figure 03 Can and Cannot Do

To understand the gap between the White House demo and the vision of robot philosophers, you need to look at what humanoid robots actually do today.

Figure 03 is a bipedal robot that can walk on flat surfaces, grasp objects, and deliver scripted speech. These are real engineering achievements. Bipedal locomotion alone took decades of research to reach the current level, and Figure AI's hardware represents genuine progress in making humanoid form factors commercially viable.

But commercially viable for what? The company's own marketing materials describe use cases centered on household tasks and logistics. These are tasks performed in controlled environments. A warehouse where every shelf is mapped. A kitchen where every object is catalogued. A White House event where every movement is choreographed.

Teaching is none of those things. A classroom is a chaotic environment where a six-year-old might cry, another might throw a pencil, a third might ask why the sky is blue, and a fourth might not speak the same language as the other three. Responding to any of these situations requires not just language processing but real-time emotional assessment, pedagogical judgment, and the ability to improvise. No humanoid robot on the planet can do this. No humanoid robot is close.

The comparison to self-driving cars is instructive. For years, demo videos showed autonomous vehicles navigating city streets with apparent ease. The gap between those demos and actual deployment turned out to be measured not in engineering sprints but in fundamental unsolved problems. The controlled demo looked like 90 percent of the way there. It turned out to be 10 percent.

Figure AI has said it plans to produce at least 100,000 humanoid robots by 2029. The question is not whether it can build them. The question is what they will do when they arrive, and whether the vision sold in the White House has any connection to the product that will ship.

Nvidia's Double Game

Among Figure AI's backers, Nvidia occupies a unique position. The company does not just invest in humanoid robots. It builds the computational infrastructure on which virtually all modern AI depends.

Nvidia's GPUs train the large language models behind ChatGPT, Gemini, and Claude. Its chips power the AI systems that Figure AI itself depends on for machine perception and decision-making. When Nvidia invests in Figure AI, it is investing in a customer as much as a company.

This alone would be unremarkable. What makes it structurally interesting is what happened on the geopolitical side. In 2025, the Trump administration reversed years of technology export restrictions and gave Nvidia permission to sell advanced AI chips to China. The previous policy, tightened under both Trump's first term and the Biden administration, had explicitly aimed to prevent China from accessing the computing power needed to build advanced AI systems.

Nvidia lobbied for this reversal. The company spent nearly $5 million on lobbying in 2025 alone, and its executives maintained regular access to the White House and key congressional committees. The same company now backing the robot that stood next to the First Lady at an event about educating American children is simultaneously equipping Chinese AI labs with the hardware to compete in the same race.

This is not a conspiracy. It is the structural reality of a company that sells infrastructure to every participant in the AI economy. Nvidia profits whether the robots end up in American classrooms or Chinese factories. The question is whether policymakers who open the door to Nvidia-backed companies in education understand that the same door opens in other directions.

The White House Pipeline

The Figure 03 appearance did not happen in a vacuum. In April 2025, President Trump signed an executive order establishing a task force to evaluate and fund private programs that could partner with government initiatives to integrate AI into school curricula and teacher training.

Melania Trump is not personally listed as a member of that task force, though the Assistant to the President for the Office of the First Lady holds a seat on it. At the March 2026 event, she explicitly called for bringing "the private and public sector worlds together." Marc Beckman, a senior adviser to Mrs. Trump, helped organize the summit, which also featured a student from Alpha School, a network of AI-powered private schools. Beckman described the First Lady as "ahead of the curve."

The pipeline works like this. An executive order creates a framework. A White House event creates legitimacy. Corporate investors provide the funding. The technology arrives at schools not through the front door of democratic deliberation but through the side door of public-private partnerships, pilot programs, and task force recommendations. None of this is illegal. Most of it is not even unusual. But when the technology in question is a humanoid robot that would physically interact with children, the stakes of the usual pipeline are categorically different.

What a Robot in a Classroom Actually Collects

Set aside the question of whether Figure 03 can teach. Ask instead what it would collect if it were placed in a room with children.

A humanoid robot of Figure 03's class carries cameras for visual and depth perception, microphones for audio capture, proximity sensors for navigation, and touch sensors for physical interaction. Some models include thermal or infrared sensors. All of this is necessary for the robot to function. A robot that cannot see, hear, or sense obstacles cannot walk across a room, let alone hand a child a book.

But the same sensors that enable function also enable surveillance. A camera pointed at a child's face during a math lesson does not just help the robot navigate. It generates a continuous stream of facial expression data that, processed by AI, becomes an emotional response profile tied to specific tasks. The microphone does not just capture the child's answer. It captures tone, hesitation, confidence, frustration. The proximity sensors track how close the child stands, whether they lean in or pull away, whether they approach the robot voluntarily or avoid it.

AI education software that runs on tablets and laptops already collects some of this data in simplified form. Screen-based tools track clicks, response times, error patterns, and session duration. These are behavioral breadcrumbs. A humanoid robot in a classroom collects the whole loaf: physical interaction data, spatial behavior, biometric-adjacent measurements like facial micro-expressions and voice stress patterns, all in real time, all tied to a specific child.
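To make the breadcrumbs-versus-loaf contrast concrete, here is a hypothetical sketch of what a single quiz question might generate on a tablet versus in front of a sensor-laden robot. Every field name here is invented for illustration; nothing in it describes Figure AI's actual data model.

```python
# A tablet app records discrete events: what was tapped, when, and
# whether the answer was right. One question, one record.
app_event = {
    "student_id": "s-1042",
    "timestamp": "2026-03-26T10:14:03Z",
    "screen": "fractions_quiz_3",
    "answer": "3/4",
    "correct": True,
    "response_time_ms": 8200,
}

# An embodied robot could log a continuous stream sampled several times
# per second, tying biometric-adjacent signals to the same task.
# (Hypothetical schema for comparison only.)
robot_frame = {
    "student_id": "s-1042",
    "timestamp": "2026-03-26T10:14:03.250Z",
    "task": "fractions_quiz_3",
    "face_expression": {"frustration": 0.62, "attention": 0.41},
    "voice": {"pitch_hz": 265, "hesitation_ms": 1400},
    "proximity_m": 0.8,       # how close the child is standing
    "gaze_on_robot": False,   # whether the child is looking at it
}

# One question yields one app event but dozens of robot frames:
# an 8.2-second answer sampled at 4 Hz is 32 snapshots of one child.
frames_per_question = int(8.2 / 0.25)
```

The asymmetry, not any single field, is the point: the app's record ends when the child answers, while the robot's stream runs for as long as the robot is in the room.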

The edtech industry has already shown what it does with children's data. In December 2022, the FTC reached a $520 million settlement with Epic Games, in part for violating COPPA by collecting personal information from children playing Fortnite without parental consent. Of that amount, $275 million was a penalty for COPPA violations and $245 million went to consumer refunds for deceptive billing practices. The company had enabled voice and text chat by default for minors, exposing children to harassment while harvesting their data. If a game company cannot be trusted with children's chat logs, the question of what a robot company would do with continuous biometric-adjacent data from a classroom is not hypothetical.

Two federal laws are supposed to protect children's data in educational settings. Neither was designed for what is coming.

COPPA, the Children's Online Privacy Protection Act, became law in 1998. It requires commercial websites and online services to obtain parental consent before collecting personal information from children under thirteen. The law was written for the dial-up internet of the late 1990s, and the FTC has updated it periodically. But COPPA's trigger is "online collection." A robot physically present in a classroom, collecting data through sensors rather than through a website, occupies a legal gray zone that the statute's drafters never considered.

FERPA, the Family Educational Rights and Privacy Act, is even older. Enacted in 1974, it governs educational records maintained by schools that receive federal funding. Parents have the right to inspect their children's educational records and to consent to their disclosure. But FERPA's definition of "educational records" was designed for transcripts and report cards, not for real-time behavioral analytics generated by a third-party robot. Whether a continuous stream of emotional response data constitutes an "educational record" under FERPA is a question that has not been tested in court.

The gap between these two statutes is precisely where a company like Figure AI would operate. The robot is not a website, so COPPA's clearest provisions may not apply. The data it collects in real time may not be "educational records" under FERPA. And because the robot is provided by a private company rather than maintained by the school, questions of who owns the data, who can access it, and who can sell it become contractual rather than statutory.

Some states have passed their own student privacy laws. California's SOPIPA (2014) and Illinois's BIPA (which covers biometric data) offer stronger protections. But the patchwork is uneven, enforcement is slow, and no state law was written with humanoid robots in mind.

When the Company Folds, Where Does the Data Go?

The lifecycle of a technology company rarely includes graceful endings. Companies get acquired, pivot to new markets, or go bankrupt. In each scenario, children's data becomes a business asset subject to transfer, sale, or abandonment.

The cautionary tale is InBloom. Launched in 2013 with $100 million in funding from the Gates Foundation and the Carnegie Corporation, InBloom built a platform to collect and store student data for personalized learning. The system ingested records from multiple school districts, including names, test scores, disciplinary records, and disability status. After a fierce backlash from parents and advocacy groups concerned about data security and commercial use, InBloom shut down in 2014. The disposition of the data it had already collected was never fully clarified.

The Student Privacy Pledge, created by the Future of Privacy Forum and the Software and Information Industry Association, was signed by more than 400 edtech companies before the program was retired in 2025. Signatories committed to not selling student data or using it for targeted advertising. But the pledge was voluntary, carried no enforcement mechanism, and did not address what happens to data when a signatory goes bankrupt.

In a bankruptcy proceeding, data is an asset. It can be sold to satisfy creditors. The FTC can intervene if a company's privacy policy explicitly promised not to sell data, as it did in RadioShack's 2015 bankruptcy, but enforcement is reactive and slow. A company like Figure AI, sitting on detailed behavioral profiles of thousands of children collected through physical robot interaction, would hold an asset that has value to advertisers, insurance companies, law enforcement, and other parties whose access to that data the parents never contemplated.

What if Figure AI gets acquired by a defense contractor interested in behavioral modeling? What if it merges with a health technology company that wants pediatric stress data? What if it simply runs out of money and a liquidation trustee auctions off its server contents? These are not speculative scenarios. They are the standard lifecycle of venture-backed companies, applied to a category of data that did not exist when the relevant laws were written.

The Body Is the Difference

The distance between a tutoring app on a tablet and a humanoid robot in a classroom is not incremental. It is categorical.

Research on embodied AI shows that physical presence changes the dynamics of interaction. People trust robots more when the robot has a physical body than when the same AI communicates through a screen. They comply with requests more readily. They disclose more personal information. Children, who are still developing the cognitive frameworks that allow adults to distinguish between social agents and machines, are particularly susceptible to these effects. Studies have found that children attribute social qualities to robots more readily than adults do, treating them as friends, confidants, and authority figures.

A tutoring app collects clickstream data: which buttons were pressed, which answers were given, how long each screen was viewed. A humanoid robot in the same room collects everything the app collects plus physical interaction data, spatial behavior, voice characteristics, facial expressions, and the subtle behavioral signals that reveal attention, distress, confusion, and delight. This is not the same category of data. It is not the same category of relationship.

No US federal regulation specifically addresses data collection by embodied AI systems in educational settings. The regulatory framework assumes a world of screens and websites. The companies building humanoid robots are not waiting for the framework to catch up.

The White House showed us a robot that can handle household tasks and say hello in eleven languages. It cannot teach. It cannot assess understanding. It cannot comfort a struggling student or recognize when a quiet child needs help. What it can do, from the moment it is turned on in a room full of children, is collect. The question is not whether a humanoid robot will one day stand in a classroom. The question is what it will know about every child in that room, and who will have access to that knowledge, by the time anyone thinks to ask.

Sources:
  • White House press release, "First Lady Melania Trump Convenes Record 45 Nations at the White House and Introduces American-Built Humanoid," March 2026
  • ABC News, CNN, Fortune, coverage of Fostering the Future Together Global Coalition Summit, March 25, 2026
  • Figure AI company statements and press releases, 2024-2026
  • Crunchbase, Figure AI funding rounds and investor data
  • OpenSecrets.org, Nvidia lobbying expenditures, 2024-2025
  • Federal Trade Commission, COPPA enforcement actions and regulatory guidance
  • FTC v. Epic Games, $520 million settlement, December 2022
  • U.S. Department of Education, FERPA regulations (20 U.S.C. § 1232g)
  • Children's Online Privacy Protection Act (15 U.S.C. §§ 6501-6506), enacted 1998
  • InBloom Inc., shutdown reporting and data disposition coverage, 2014
  • Executive Order, "Advancing Artificial Intelligence Education for American Youth," April 2025
  • Future of Privacy Forum, Student Privacy Pledge signatory data
  • California Student Online Personal Information Protection Act (SOPIPA), 2014
  • Illinois Biometric Information Privacy Act (BIPA)
  • FTC intervention in RadioShack bankruptcy, 2015
This article was AI-assisted and fact-checked for accuracy.