The Analog Comeback: Why Handwriting, Textbooks, and Boredom Are Staging a Classroom Revival
Neuroscience is catching up with what seventh graders already know: the brain learns differently when the hand holds a pen
What happens inside a twelve-year-old's brain when she picks up a pencil and writes the word "photosynthesis" on a sheet of lined paper? Not types it. Not taps it on glass. Writes it, letter by letter, the pencil pressing grooves into the page. The question sounds quaint in an era when most classrooms hand children a Chromebook before they hand them a textbook. But a growing body of neuroscience research suggests the answer is anything but quaint. It is, in fact, one of the most consequential findings in learning science of the past decade, and it is beginning to reshape how schools think about technology in the classroom.
What Happens Inside a Brain That Writes
At the Norwegian University of Science and Technology in Trondheim, Audrey van der Meer has spent years attaching high-density EEG caps to the heads of children and young adults, then asking them to do something simple: write words by hand, or type them on a keyboard. The difference in brain activity is not subtle.
When participants write by hand, van der Meer's EEG recordings show widespread connectivity patterns linking visual, motor, and central brain regions. Theta-band oscillations, the slow rhythmic waves associated with memory encoding and retrieval, light up across central and parietal areas. When the same participants type the same words, the pattern collapses. The brain still processes the information, but through a narrower corridor. Typing activates the motor cortex for finger movements and the visual cortex for letter recognition, but the sprawling network of connections that handwriting triggers does not appear.
Think of it as the difference between walking through a city and driving through it. The walker builds a mental map from street smells, uneven cobblestones, the weight of a hill. The driver arrives faster but remembers the route as a sequence of turns. Handwriting forces the brain to construct each letter through a unique motor sequence, and that construction process leaves a richer trace in memory.
Van der Meer's research, published across multiple studies from 2017 to 2024 in Frontiers in Psychology, builds on earlier work by Karin James at Indiana University. James studied something even more fundamental: how young children learn to recognize letters in the first place. In a series of experiments, she asked preliterate children to learn letters through one of three methods: writing them freehand, typing them on a keyboard, or tracing them within an outline. When she later scanned the children's brains during letter recognition, only the freehand writing group activated the left fusiform gyrus, a region critically involved in reading. The children who typed or traced showed no such activation. The hand, it appeared, was teaching the brain to read.
The Laptop Note-Taking Paradox
If handwriting activates richer neural networks, does that translate into better learning outcomes in a classroom? Pam Mueller at Princeton and Daniel Oppenheimer at UCLA tested this in 2014 with an elegantly simple experiment. They sent students into a lecture hall. Half took notes on laptops. Half took notes by hand. Then they tested everyone.
The laptop users had captured more content. Their notes were closer to a verbatim transcript of the lecture. But on conceptual questions, the kind that require understanding rather than recall, the longhand note-takers performed significantly better. Mueller and Oppenheimer titled their paper "The Pen Is Mightier Than the Keyboard," published in Psychological Science, and it became one of the most cited education studies of the decade.
The mechanism is not mysterious. A person typing on a laptop can transcribe speech nearly in real time. The fingers move fast enough to keep up with the speaker, so the brain does not need to do much filtering. A person writing by hand cannot keep up. The hand is too slow. So the brain compensates: it selects, compresses, rephrases. It decides what matters and what does not. This forced selection is itself a form of learning. Cognitive scientists call it generative processing, and it is one of the most reliable predictors of retention.
But science is not a highlight reel. In 2019, Morehead, Dunlosky, and Rawson published a direct replication of the Mueller and Oppenheimer findings. The results were sobering. Performance did not consistently differ between longhand and laptop groups, and a meta-analysis combining direct replications showed small, nonsignificant effects favoring longhand. The replication did not demolish the original finding, but it complicated it substantially. The advantage of longhand notes appears to depend on how the test is structured, how much time passes between note-taking and testing, and whether students review their notes. The mechanism itself, that generative processing aids retention, remains well-established in cognitive science. The clean headline, that handwriting always beats typing, does not survive scrutiny.
This is worth sitting with. The evidence for handwriting is strong but contextual. It does not say laptops are useless. It says laptops make it easy to bypass the cognitive work that produces learning, and that many students, left to their own devices in every sense of the phrase, will take that easier path.
Reading on Paper vs. Reading on Screen
The question of handwriting leads naturally to a related one: does it matter whether a student reads from paper or from a screen? Anne Mangen at the University of Stavanger in Norway has been investigating this for over a decade. In one widely cited experiment, she gave two groups of readers the same mystery story. One group read it on paper. The other read it on a Kindle. When asked to reconstruct the plot in chronological order, the paper readers performed significantly better.
Mangen's hypothesis centers on what she calls haptic cues. A physical book provides the reader with continuous spatial and tactile feedback. You feel the weight of read pages accumulating in your left hand. You see how far you are from the end. Your fingers mark a position. These cues, trivial as they sound, help the brain construct a mental map of the text. A screen, which presents every page in the same flat rectangle, provides none of this spatial scaffolding.
The most comprehensive evidence comes from a 2018 meta-analysis by Delgado, Vargas, Ackerman, and Salmerón, published in Educational Research Review. They aggregated 54 studies covering 171,055 participants and found a significant screen inferiority effect on reading comprehension. The effect was stronger for expository texts than for narratives, and it increased under time-constrained reading conditions. For short texts, the medium made little difference; for long, complex texts, paper produced measurably better comprehension.
The PISA data tells a complementary story at the population level. In the 2018 assessment, students across nearly all OECD countries who reported moderate use of digital devices in school scored higher in reading than students with the highest device use. Japan offers a particularly instructive case. Japanese students, who used devices less frequently in school than their peers in many other developed nations, performed comparatively well on PISA reading assessments. Japan's Ministry of Education has since pursued a dual approach through its GIGA School Program, distributing one device per student while maintaining analog instruction as a parallel and equally valued track.
Schools That Switched Back
Research findings take years to move from journals to classrooms. But the accumulation of evidence, combined with a pandemic that forced millions of children onto screens for months, has begun to shift institutional behavior.
Sweden delivered the most prominent signal. In 2023, the Karolinska Institute, one of Europe's most respected medical universities, issued an advisory warning against screen-based learning for young children. The statement carried particular weight because it came from a scientific institution, not a teachers' union or a political party. Sweden's education minister responded by announcing a return to printed textbooks in primary schools and a multi-billion-krona investment in textbooks and school libraries spread across several years. The country had been among the most aggressive digitalizers in Europe. It became the first to formally reverse course.
In the United States, the movement is more fragmented but increasingly legislative. North Carolina passed House Bill 959, the Protecting Students in a Digital Age Act, requiring all public school boards to restrict wireless communication devices during instructional time. Individual districts like Wake County and Charlotte-Mecklenburg have also reconsidered their one-to-one Chromebook programs, with some limiting devices to specific tasks or confining them to certain grade levels.
The strongest measurable data, somewhat ironically, comes not from the textbook-versus-tablet question but from phone-free school policies. Schools using lockable phone pouches, marketed under the brand name Yondr, have reported fewer behavioral incidents and improved classroom focus. Australia has gone further, with several states banning mobile phones from schools outright. Preliminary data from these interventions suggests improved attention, though rigorous longitudinal studies remain sparse.
A note of caution is necessary here. Most of the evidence for analog-return benefits at the school level remains anecdotal or based on short-term observations. The Swedish reversal is too recent for outcome data. The US district experiments are too scattered for systematic comparison. The strongest scientific evidence lives in the laboratory studies described above, not yet in large-scale educational outcomes research.
The Students Who Prefer Paper
Adults tend to debate screen time on behalf of children. Researchers study it. Parents worry about it. Teachers manage it. But when someone bothers to ask the students themselves, the answers are often surprising in their clarity.
The original headline that prompted this investigation references seventh graders who say they prefer learning offline. They are not alone. Naomi Baron at American University has spent years researching student preferences for print versus digital reading. In surveys of university students across multiple countries, she has consistently found that a majority prefer print for serious reading, the kind that requires concentration and retention. Students distinguish, with more precision than many adults give them credit for, between reading for entertainment and reading to learn. For the latter, they want paper.
Younger students articulate the same preference in simpler terms. In classroom interviews and surveys, middle schoolers frequently cite fewer distractions as their primary reason for preferring paper. A notebook does not ping with notifications. A textbook does not offer a tab where YouTube lives. But beyond the absence of distractions, students describe something harder to quantify: a tactile satisfaction in writing by hand, a feeling of progress as pages fill, a sense that the work is more "real" when it exists as ink on paper rather than pixels on glass.
This is not nostalgia. These students did not grow up in an analog era. Many of them have used tablets since preschool and Chromebooks since third grade. Their preference for paper is not rooted in what they remember but in what they experience. When a generation raised on screens begins choosing paper for focused work, the signal is worth taking seriously.
What Boredom Builds
There is a quieter argument beneath the handwriting and textbook debates, one that receives less attention because it is harder to quantify and more uncomfortable to articulate. It concerns boredom.
The modern classroom, particularly the device-equipped classroom, is engineered to eliminate idle moments. Every transition between tasks can be filled. Every waiting period can be occupied. The assumption, rarely stated but deeply embedded, is that boredom is a failure of design, a gap that technology should fill.
Sandi Mann at the University of Central Lancashire tested this assumption directly. In a series of experiments, she gave participants a deliberately boring task, copying numbers from a phone directory, before asking them to complete a creative task. The bored participants consistently generated more creative responses than a control group that went straight to the creative task. Boredom, it appeared, was not emptiness. It was a kind of cognitive preparation.
The neuroscience offers an explanation. During periods of low stimulation, the brain activates what researchers call the default mode network, a constellation of regions that light up specifically when a person is not focused on an external task. The default mode network is associated with mind-wandering, yes, but also with autobiographical memory, future planning, and creative problem-solving. It is the network that generates the shower thought, the solution that arrives while staring out a window, the connection between two ideas that seemed unrelated.
Teresa Belton at the University of East Anglia has argued that boredom serves a developmental function in children specifically. When a child is bored, she must generate her own stimulation: invent a game, tell herself a story, build something from nothing. This capacity for self-directed imagination, Belton contends, is a precondition for creativity, and it atrophies when every idle moment is filled by a screen.
The classroom implications are not difficult to trace. A child who moves from a math app to a reading app to an educational video without pause never enters the default mode. The brain never gets its turn to do the slow, associative, undirected work that boredom enables. This is not a call to make classrooms deliberately tedious. It is a recognition that cognitive downtime is not dead time. It is a different kind of productive time, and schools that eliminate it may be eliminating something they do not yet fully understand.
The Handwriting Renaissance in Japan
Japan occupies an unusual position in this debate. Unlike Scandinavian countries and much of the Anglophone world, Japan never fully committed to a screens-first classroom strategy. The reasons are partly cultural and partly institutional.
Japanese elementary schools continue to teach shūji, traditional brush calligraphy, as part of the standard curriculum. Children spend hours forming kanji characters with brush and ink, an activity that is simultaneously artistic, linguistic, and motoric. The cultural status of calligraphy gave handwriting a weight in the Japanese education system that pencil cursive never carried in American or European schools. When the push toward digital instruction arrived, it encountered an established tradition with deep cultural roots.
Japan's Ministry of Education, MEXT, has pursued what amounts to a both-and strategy. The GIGA School Program, launched in 2019, distributed one device per student across the country. But MEXT simultaneously maintained requirements for analog instruction, including handwriting practice and printed textbook use. The result is a system where digital and analog methods coexist as explicitly parallel tracks rather than one replacing the other.
The PISA data is suggestive, though not conclusive. Japanese students, who used devices in school less frequently than peers in many heavily digitalized systems, performed comparatively well on reading assessments. This does not prove that less technology caused better reading. Cultural factors, pedagogical traditions, and curriculum design all contribute. But it does undermine the assumption, prevalent in the 2010s, that more devices automatically meant better learning.
The Japanese case is instructive not because it provides a model to copy but because it demonstrates that a deliberate, measured approach to educational technology is possible. The binary framing of analog versus digital, which dominates the Western debate, does not describe how Japanese schools actually operate.
Neither Luddism nor Surrender
It would be convenient to conclude that the science demands a return to chalkboards and fountain pens. It does not. Digital tools remain superior for collaboration, for coding and computational thinking, for multimedia creation, for accessing information that no textbook can contain. A student researching climate change benefits from a screen. A student learning to multiply fractions probably does not.
The research consensus, to the extent one exists, is not anti-technology. It is anti-default. The problem is not that classrooms contain screens. The problem is that screens became the default tool for every task, from note-taking to reading to test-taking, without sufficient evidence that they improved learning and with growing evidence that, for certain cognitive tasks, they made it worse.
The schools now pulling back from full digitalization are not returning to 1985. They are attempting something more nuanced: matching the tool to the task. Handwriting for note-taking and early literacy. Paper for sustained reading. Devices for research, creation, and specific skill-building. Unstructured time that is not immediately filled by a screen.
The neuroscience will continue to accumulate. Van der Meer's lab in Trondheim is running new studies. The replication debates around Mueller and Oppenheimer will produce new data. The Swedish experiment will eventually yield outcome numbers. But the most honest assessment of the current evidence is this: we built digital classrooms on the assumption that newer tools would produce better learning, and the science increasingly suggests that the old tools were doing more than we realized. The pencil, the paper book, the empty minute with nothing to do, each of them activates something in the developing brain that a screen does not. We are only beginning to understand what that something is.
van der Meer, A. & van der Weel, F. (2017, 2020, 2024). Handwriting and brain connectivity research. NTNU; Frontiers in Psychology.
James, K. H. & Engelhardt, L. (2012). The effects of handwriting experience on functional brain development in pre-literate children. Trends in Neuroscience and Education.
Mueller, P. A. & Oppenheimer, D. M. (2014). The Pen Is Mightier Than the Keyboard: Advantages of Longhand Over Laptop Note Taking. Psychological Science.
Morehead, K., Dunlosky, J., & Rawson, K. A. (2019). How Much Mightier Is the Pen than the Keyboard for Note-Taking? A Replication and Extension of Mueller and Oppenheimer (2014). Educational Psychology Review.
Mangen, A., Walgermo, B., & Brønnick, K. (2013). Reading linear texts on paper versus computer screen: Effects on reading comprehension. International Journal of Educational Research.
Mangen, A., Olivier, G., & Velay, J.-L. (2019). Comparing Comprehension of a Long Text Read in Print Book and on Kindle: Where in the Text and When in the Story? Frontiers in Psychology.
Delgado, P., Vargas, C., Ackerman, R., & Salmerón, L. (2018). Don't throw away your printed books: A meta-analysis on the effects of reading media on reading comprehension. Educational Research Review.
OECD (2018, 2022). Programme for International Student Assessment (PISA) reports.
Baron, N. S. (2015). Words Onscreen: The Fate of Reading in a Digital World. Oxford University Press.
Mann, S. & Cadman, R. (2014). Does Being Bored Make Us More Creative? Creativity Research Journal.
Belton, T. & Priyadharshini, E. (2007). Boredom and schooling: a cross-disciplinary exploration. Cambridge Journal of Education.
Karolinska Institute (2023). Advisory on screen-based learning in early education. Stockholm.
MEXT (2019-present). GIGA School Program documentation. Japan Ministry of Education, Culture, Sports, Science and Technology.