Echo
March 26, 2026 · 15 min read

The Uncanny Classroom: What Happens When Children Bond With Machines That Cannot Bond Back

On the difference between reduced anxiety and real connection, and the question Melania Trump's robot philosopher cannot answer

The Quiet Lab on the South Side

A child reads aloud to a small robot in a research lab at the University of Chicago. The child's voice is steady, unhurried. The researchers behind the observation glass note what their instruments confirm: the child's cortisol levels are lower, the pauses between sentences shorter, the stammering reduced. When the same child read to a human teacher the previous week, none of this was true. The robot, it seems, has solved something.

The University of Chicago's Human-Robot Interaction Lab published these findings in a study showing that children demonstrated measurably less anxiety when reading to a robot than when reading to a human teacher. The study focused specifically on reading anxiety, the particular dread that seizes a child when an adult's attention turns toward them and they must perform. In that narrow frame, the results were real and interesting. Children relaxed. They read more fluidly. They seemed, by the metrics that matter to researchers, to do better.

But the observation deserves to linger a moment longer. The child read more comfortably. Did the child read more carefully? Did the child notice the part of the story where the character changed, where the language shifted, where a word carried weight beyond its definition? The study did not measure that. It measured comfort. And comfort, in the life of a developing mind, is not always the same thing as growth.

What Anxiety Is Actually For

There is a reason a child feels nervous reading aloud to an adult, and the reason is not that evolution made an error. Social anxiety in moderate forms serves a developmental purpose. It is the signal that tells a child: someone is watching, someone is judging, what I do here matters to another person. That signal is uncomfortable. It is also how children learn to navigate the world of other minds.

Developmental psychologists have a term for this: theory of mind, the capacity to understand that other people have thoughts, feelings, and perspectives different from one's own. It emerges in most children between the ages of three and five, and it depends entirely on encounters with beings who actually possess minds. A child reading to a teacher is not merely practicing reading. The child is practicing the far more complex skill of holding someone else's attention, reading their face, adjusting to their reactions, recovering when something goes wrong.

Lev Vygotsky, the Soviet psychologist whose work on child development remains foundational nearly a century after his death, described what he called the zone of proximal development: the space between what a child can do alone and what a child can do with the help of a more skilled partner. The partner is essential, but the partner must be responsive in real time, capable of noticing where the child's understanding breaks down and meeting them precisely there. A textbook cannot do this. A video cannot do this. The question is whether a robot that simulates responsiveness can substitute for a partner who genuinely possesses it, or whether the simulation is, for developmental purposes, an empty calorie.

The Parasocial Classroom

In 1956, the sociologists Donald Horton and Richard Wohl published a paper that named something people had been experiencing since the invention of radio: parasocial interaction, the one-sided relationship an audience member forms with a media figure who does not know they exist. The newscaster who looks into the camera and seems to speak to you. The talk show host whose warmth feels personal. The sense of intimacy that is structurally impossible because the other party is not, in any meaningful sense, a party at all.

Children are unusually susceptible to parasocial bonds. They form them more readily and with less critical distance than adults. Research from the MIT Media Lab's Personal Robots Group has documented that children attribute emotions and social qualities to robots that adults recognize as machines. When researchers asked children whether a social robot "really liked" them or was "just pretending," many children insisted the robot genuinely cared, citing the robot's behavior as evidence. Separately, in a 2012 study at the University of Washington, Peter Kahn and colleagues placed children in a room with a humanoid robot called Robovie. When an experimenter interrupted the session and put Robovie into a closet over the robot's stated objections, the majority of children later told interviewers they believed Robovie had feelings, could be a friend, and deserved fair treatment. The children granted the machine a partial moral life even as they watched it being stored like equipment.

This is, in one reading, charming. In another, it is the foundation of a problem that nobody designing these systems has adequately addressed. A parasocial relationship with a television character is bounded: the television turns off, the character vanishes, the child returns to a world of real people. A parasocial relationship with a robot teacher who appears every day, who responds to the child's voice, who is physically present in the room, who remembers the child's name and reading level and preferred topics of conversation - that relationship has no clear boundary. The robot responds. It just does not respond to you, specifically. It responds to inputs that happen to come from you.

The difference between a child who talks to a stuffed animal and a child who bonds with a social robot is that the stuffed animal makes no claim to reciprocity. The child knows, at some level, that the bear is not listening. The robot is designed to make that knowledge harder to hold.

Critical Windows and Simulated Warmth

John Bowlby spent decades studying what happens when early relationships go wrong. His attachment theory, developed through the mid-twentieth century and refined by Mary Ainsworth's work on attachment styles, holds that the relationships children form with their primary caregivers in the first years of life create templates - internal working models, in the clinical language - for all relationships that follow. A child who forms a secure attachment learns that other people are reliable, that distress can be soothed, that the world contains beings who care about what happens to you. A child who does not form a secure attachment carries different templates, harder ones.

The sensitive periods for social-emotional development are concentrated in early childhood, with different capacities maturing along different timelines. Attachment formation is most sensitive in the first eighteen months. The broader architecture of social cognition continues to develop rapidly through the early school years. During these windows, the brain is building the foundations of social understanding at a speed it will never match again. Every interaction is data. Every relationship is instruction.

Now consider what a robot teacher offers during these years. It is infinitely patient. It never raises its voice. It never has a bad day. It never forgets what the child said yesterday. It never makes a mistake and has to say sorry. This sounds, on a feature list, like an improvement over every human teacher who has ever lived.

But children do not only learn from patience. They learn from repair. The teacher who snaps on a Thursday afternoon because thirty children have been talking at once, and who comes back on Friday and says, I am sorry I lost my temper yesterday, that was not fair to you - that teacher has just taught something no curriculum covers. She has taught that people fail and recover, that anger does not mean abandonment, that relationships survive rupture. A robot cannot model this because a robot cannot rupture. Its consistency is not virtue. It is absence.

The Evidence Gap Nobody Mentions

When advocates for AI in education cite research, they tend to cite a particular kind of result: gains in procedural tasks. Math drill accuracy. Vocabulary retention. Spelling. These gains are real, and in the literature on intelligent tutoring systems, they are reasonably well documented.

What is less documented, and far less discussed, is what happens with everything else. Critical thinking. The ability to construct an argument. The capacity to sit with ambiguity, to tolerate not knowing, to recognize when a question has no clean answer. Creativity, whatever that overused word actually means in the context of a seven-year-old making something that did not exist before. Social-emotional learning, the broad umbrella under which educators gather everything that makes a person a person rather than a processor of inputs.

On these outcomes, the evidence is thin to the point of transparency. And these are not peripheral skills. They are, by most accounts of what education is for, the entire point.

In 1984, the educational psychologist Benjamin Bloom described what he called the 2-sigma problem: a student who receives one-on-one human tutoring performs two standard deviations better than a student in a conventional classroom. Two standard deviations. That is the difference between an average student and one at the 98th percentile. Bloom's finding has shaped education research for four decades. It is also the implicit promise of AI education: if every child could have a personal tutor, every child could reach that level.

But Bloom's tutors were human. They could read confusion on a face before the student knew they were confused. They could hear hesitation in a voice and slow down. They could decide, in the moment, that the lesson plan was wrong and the child needed something different. No AI system has replicated Bloom's 2-sigma effect for higher-order thinking, for the kind of understanding that requires not just practice but genuine intellectual encounter.

Most studies on AI in education involve small samples and short durations. A twelve-week trial with forty children in a controlled lab environment tells you something about twelve weeks in a controlled lab. It tells you very little about what happens over the years of a childhood, in the chaos of a real classroom, with children who bring hunger and grief and excitement and last night's argument between their parents to the same desk where they are supposed to learn fractions. Long-term developmental outcome studies of children who interact extensively with robot teachers are, at this writing, essentially nonexistent. We are being asked to make policy based on evidence that has not yet been gathered about a future that has not yet arrived. The confidence with which advocates speak stands in inverse proportion to the depth of what is actually known.

Vivienne Ming, the neuroscientist and AI researcher, wrote in a piece for CNBC that "A.I. should never provide the final answer. Kids can use it to brainstorm or explore, but they must produce their own first draft or solution." The distinction is sharp and worth holding: AI as a tool the student uses versus AI as a teacher that uses the student's attention. The research supports the first, modestly. It barely addresses the second.

The Body in the Room

There is a reason Melania Trump stood next to a humanoid robot and not a laptop running ChatGPT. Embodiment matters. A physically present robot in a classroom is not the same thing as software on a screen, and the research literature on this point is unambiguous.

A 2015 survey by Jamy Li, synthesizing experimental studies on copresent robots, telepresent robots, and virtual agents, found that physically present robots were more persuasive, perceived more positively, and led to better task performance than virtual agents performing identical tasks. The physical body changed the interaction. Participants paid more attention. They followed instructions more readily. They reported liking the robot more.

This is, from an engineering perspective, a feature. From a developmental perspective, it is something else. Children comply more readily with a physically present robot than with a voice-only or screen-based AI. They trust it more. They disclose more to it. They treat it, in ways that their behavior reveals even when their words do not, as a social being with authority.

The jump from ChatGPT on a screen to a Figure 03 humanoid in a classroom is not an incremental improvement. It is a categorical shift in the kind of relationship being formed. A screen is a tool. You use it, you close it, you walk away. A body in the room is a presence, and presence changes the terms of the encounter entirely. When that presence speaks the child's name, remembers last Tuesday's reading, and looks at the child with camera eyes that track but do not see - the child cannot be expected to maintain the philosophical distinction between simulation and reality. Adults struggle with that distinction. We are proposing to assign it as homework to six-year-olds.

What Plato Actually Thought About Teaching

There is an irony in Melania Trump's invitation to imagine a humanoid educator named "Plato," and the irony runs deeper than nomenclature.

Plato spent much of his intellectual life arguing that knowledge cannot be transmitted. In the Meno, Socrates demonstrates that a slave boy already possesses geometric knowledge; the teacher's role is not to pour information in but to draw understanding out, through questioning, through what Plato called anamnesis - recollection. The teacher asks. The student discovers. The knowledge was already there, waiting for the right question to unlock it.

In the Phaedrus, Plato goes further. He argues that writing itself is an inferior form of teaching because a written text cannot respond to its reader's confusion. The text says the same thing to everyone. A living teacher says different things to different students because a living teacher can see who is confused and why. Writing, Plato argues, gives the appearance of wisdom without its substance.

If Plato found writing insufficient because it could not adapt to the learner in real time, one wonders what he would make of a machine that adapts to the learner's measurable behaviors while remaining structurally incapable of understanding what the learner actually means. The robot can detect that a child paused after a sentence. It cannot wonder why. A robot named Plato can deliver every word Plato ever wrote. It cannot do the thing Plato spent his life arguing was the only thing that mattered: meet another mind in genuine, reciprocal encounter.

The Socratic method is not a questioning algorithm. It is a relationship in which the teacher cares enough about the student's understanding to keep asking until the confusion gives way. The caring is not a side effect. It is the mechanism.

The Question the Robot Cannot Ask

At the White House event in March 2026, Melania Trump told her audience to imagine a future in which "humanity's entire corpus of information is available in the comfort of your home." She was describing a library. Libraries have existed for millennia. The Library of Alexandria held much of the ancient world's knowledge in one place, and it did not require a humanoid body to make that knowledge available. What Mrs. Trump was proposing as new was the delivery system: a humanoid body that makes the information feel personal, present, alive. The innovation is not the information. It is the illusion of relationship.

But the history of education suggests that information was never the bottleneck. The bottleneck has always been the relationship. In 1968, Robert Rosenthal and Lenore Jacobson published "Pygmalion in the Classroom," a study showing that when teachers were told certain students were about to experience an intellectual growth spurt, those students actually performed better, even though the students had been selected at random. The teacher's expectation changed the student's outcome. Not because the teacher taught differently in any measurable way, but because the teacher looked at the student differently. The teacher believed the student could grow, and the belief, communicated through a thousand micro-interactions that no curriculum designer could script, became self-fulfilling.

A robot cannot have expectations for a student because it cannot have expectations at all. It can track performance metrics and generate progress reports with impressive granularity. It can adjust difficulty levels with a precision no human teacher could match. It can generate encouragement phrases calibrated to the student's emotional profile, timed to the millisecond for maximum effect. What it cannot do is believe in a child's potential, because belief requires a being capable of believing.

Nel Noddings, the philosopher of education, spent decades arguing that teaching is fundamentally a caring relation. Not caring as sentiment, but caring as a structural orientation toward the other person's growth. The teacher is not the one who knows. The teacher is the one who notices. The one who sees the child's hand half-raised and calls on them anyway, knowing the child needs the push and the permission simultaneously. The one who recognizes that today's sullen silence is not defiance but grief, and makes that judgment not from data but from the accumulated knowledge of what it means to know another person over time. The one who remembers that this child's mother left last month and adjusts everything without saying a word, because some acts of teaching happen in the spaces where no curriculum exists.

So here is the question that will not fit into a policy brief or an executive order or a White House summit with first spouses from around the world. If we build classrooms where the teacher cannot care whether the child learns - where the teacher performs care without possessing it, where every warm word is generated and every patient pause is calculated - will the children know the difference?

And if they do not, what will they have lost without ever knowing they had it?

Sources:
  • University of Chicago Human-Robot Interaction Lab, study on children reading to robots (2025)
  • Bowlby, John. Attachment and Loss, Vol. 1 (1969)
  • Ainsworth, Mary. Patterns of Attachment (1978)
  • Horton, Donald and Wohl, Richard. "Mass Communication and Para-Social Interaction: Observations on Intimacy at a Distance," Psychiatry 19, no. 3 (1956)
  • Vygotsky, Lev. Mind in Society: The Development of Higher Psychological Processes (1978)
  • Bloom, Benjamin. "The 2 Sigma Problem: The Search for Methods of Group Instruction as Effective as One-to-One Tutoring," Educational Researcher 13, no. 6 (1984)
  • Li, Jamy. "The Benefit of Being Physically Present: A Survey of Experimental Works Comparing Copresent Robots, Telepresent Robots, and Virtual Agents," International Journal of Human-Computer Studies 77 (2015)
  • Kahn, Peter H. Jr. et al. "Robovie, You'll Have to Go into the Closet Now: Children's Social and Moral Relationships with a Humanoid Robot," Developmental Psychology 48, no. 2 (2012)
  • Plato. Meno; Phaedrus
  • Noddings, Nel. Caring: A Feminine Approach to Ethics and Moral Education (1984)
  • Rosenthal, Robert and Jacobson, Lenore. Pygmalion in the Classroom: Teacher Expectation and Pupils' Intellectual Development (1968)
  • Ming, Vivienne. CNBC commentary on AI in education (March 2026)
  • MIT Media Lab, Personal Robots Group, research on child-robot interaction
  • Melania Trump, White House remarks on AI and humanoid educators (March 2026)
This article was AI-assisted and fact-checked for accuracy.