AI Digest
AUG / SEPT 2026
~ ~ \\ // ~ ~
authored by
Andrew Bloom
05 MAY 2026
Copyright (C) SAFE AI Foundation
Thinking & Cognitive Crisis:
How AI Dependence Is Quietly Hollowing Out the Next Generation


AI is rewriting the relationship between effort and output. In a matter of seconds, a student can generate a polished essay, a solved equation, a researched argument, or a summarized chapter. The technology is remarkable. But something equally remarkable and far more troubling is happening beneath the surface of this convenience: the very cognitive processes that education is designed to build are quietly being bypassed.
This is not a story about cheating. It is a story about atrophy. It is about what happens to a mind that is never required to struggle, to fail, to revise, and to persist. If we allow the next generation to outsource thinking before they have learned to think, we will not simply produce an underprepared workforce. We will produce a generation unable to govern, question, or correct the very systems they depend upon.
THE DATA: A RAPID & LARGELY UNGOVERNED SHIFT
The scale of AI adoption in education has accelerated well beyond the capacity of institutions to respond. According to the RAND Corporation's nationally representative survey, one of the most rigorous studies of AI use in American schools, 54 percent of students and 53 percent of English language arts, math, and science teachers reported using AI for school in 2025. These figures represent increases of more than 15 percentage points compared to one to two years prior. The growth is not incremental. It is structural.
54% of students and 53% of core subject teachers used AI for school in 2025 — up 15+ percentage points in under two years.
Source: RAND Corporation, nationally representative survey, 2025
Yet the institutions guiding that adoption have not kept pace. The same RAND study found that over 80 percent of students reported that teachers did not explicitly teach them how to use AI for schoolwork. As of spring 2025, only 35 percent of school district leaders reported providing students with any training on AI at all. Students are being handed a powerful cognitive tool with virtually no instruction on when, why, or whether to use it, and no guardrails on what it costs them when they do.
Teen use of ChatGPT for schoolwork alone doubled in a single year, rising from 13 percent in 2023 to 26 percent in 2024. The trajectory since then has only accelerated. We are not watching a gradual cultural shift. We are watching our educational institutions being outrun by a technology they do not yet know how to govern.
THE STUDENTS KNOW: THE INSTITUTIONS DON'T
Perhaps the most striking finding in the current research is this: the students themselves are sounding the alarm. In RAND's American Youth Panel, a survey of 1,214 youth conducted in December 2025, 67 percent of students endorsed the statement that the more they use AI for their schoolwork, the more it will harm their critical thinking skills. That figure was up more than 10 percentage points from just ten months earlier. Students are not oblivious to what they are trading away. They are making the trade anyway, in part because no one has built the structures to help them do otherwise.
67% of students believe greater AI use for schoolwork will harm their critical thinking — up 10+ points in under a year.
Source: RAND American Youth Panel, December 2025 (n=1,214)
Their parents share their concern. The RAND survey found that 61 percent of parents agreed that greater AI use will harm students' critical thinking skills. Yet only 22 percent of district leaders shared that concern. The gap between how families perceive this risk and how institutions perceive it is not a communication problem. It is a governance failure.
THE SCIENCE: COGNITIVE OFFLOADING & THE ATROPHY OF THOUGHT
The students' instincts are confirmed by research. The peer-reviewed literature on AI use and cognitive development is now producing a consistent and sobering finding that heavy AI dependence is measurably associated with diminished critical thinking.
A 2025 study published in the journal Societies, conducted by researcher Michael Gerlich at SBS Swiss Business School, surveyed 666 participants across diverse age groups and educational backgrounds using a mixed-method approach. The results were unambiguous: statistical analyses demonstrated a significant negative correlation between AI tool usage and critical thinking scores, with a correlation coefficient of r = -0.68 (p < 0.001). Frequent AI users showed diminished ability to critically evaluate information and engage in independent analysis. Crucially, younger participants exhibited both higher AI dependence and lower critical thinking scores than their older counterparts. In other words, the population most at risk from this dynamic is the one we are educating right now.
r = -0.68 (p < 0.001): A significant negative correlation between frequent AI tool usage and critical thinking ability, with younger users showing the greatest dependence and lowest scores.
Source: Gerlich, Societies (2025), peer-reviewed, n=666
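For readers unfamiliar with the statistic, the r reported above is a Pearson correlation coefficient: the covariance between two variables normalized by the product of their standard deviations, ranging from -1 (perfectly inverse relationship) to +1 (perfectly aligned). The sketch below, using made-up illustrative numbers rather than the study's actual data, shows how such a coefficient is computed:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient: covariance of x and y divided by
    the product of their standard deviations; ranges from -1 to +1."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores, NOT the study's data: higher AI-usage scores
# paired with lower critical-thinking scores yield a strongly negative r.
ai_usage = [1, 2, 3, 4, 5, 6, 7, 8]
critical_thinking = [9, 8, 8, 6, 5, 5, 3, 2]
print(f"r = {pearson_r(ai_usage, critical_thinking):.2f}")
```

A coefficient of -0.68 on real survey data, as Gerlich reports, indicates a strong inverse association, though as with any correlation it does not by itself establish the direction of causation.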
A separate peer-reviewed study of 580 university students, published in 2025, found that greater AI dependence was associated with lower levels of critical thinking, with cognitive fatigue identified as a partial mediating mechanism. In plain terms, the more students leaned on AI, the more mentally exhausted they became by the effort of thinking independently, which in turn made them progressively more likely to lean on AI again. It is a cycle of diminishing cognitive capacity dressed up as efficiency.
This phenomenon has a name in cognitive science: cognitive offloading. It is not new; we observed a version of it with search engines, in what researchers have called the "Google Effect." But AI's role in reasoning and analysis takes the dynamic further. Search engines changed where people stored information. AI is changing whether people reason at all.
Intellectual capabilities essential for success in modern life need to be stimulated from an early age, especially during adolescence. Young people must engage in cognitive effort — not be relieved of it.
— Educational researcher Umberto León Domínguez
In a 2024 study published in Computers in Human Behavior, Stadler, Bannert, and Sailer found that while AI tools reduced students' mental effort, they simultaneously compromised depth of engagement in scientific inquiry. The trade is seductive: less effort, similar-looking output. The hidden cost is that the student does not develop the cognitive infrastructure that the struggle was designed to build.
WORKFORCE CONSEQUENCES: SKILLS THAT DON'T TRANSFER
The implications of this trend do not remain in the classroom. They follow students into the workforce, and the research suggests the problem may be even more serious than it first appears.
Microsoft's 2025 New Future of Work Report is one of the most comprehensive analyses of AI's impact on professional performance. It found experimental evidence with significant implications for education. While AI tools can enable workers to perform tasks they could not previously complete without assistance, those gains appear to be temporary. Workers lose the capability to perform those tasks once AI access ends, indicating no lasting skill development. The scaffolding, in other words, becomes the structure, and when it is removed, nothing stands.
Workers enabled by AI to perform new tasks lose that capability when AI access ends — no lasting skill development occurs.
Source: Microsoft New Future of Work Report, 2025
This is the quiet catastrophe being constructed in our schools today. Students who have learned to produce without learning to think will enter the workforce capable of operating AI tools, but not of questioning them. They will be able to prompt but not evaluate. They will be able to generate but not judge. In fields where human judgment is the last line of defense, such as medicine, law, policy, security, and finance, this is not merely an academic concern. It is a systemic risk.
The Microsoft report further notes that roles requiring AI skills are nearly twice as likely to also demand analytical thinking, resilience, ethics, and digital literacy. The workforce does not simply need people who can use AI. It needs people capable of understanding what AI cannot do, and of knowing when a machine's answer should be doubted, overridden, or rejected entirely. That capability is built through years of cognitive struggle. It cannot be downloaded.
THE METACOGNITIVE GAP: YOU CANNOT EVALUATE WHAT YOU CANNOT RECOGNIZE
There is a deeper problem beneath the dependence problem, and it may be the most dangerous of all. To use AI effectively, to deploy it wisely, correct its errors, and recognize its limitations, a person must first know what good thinking looks like. They must be able to identify a flawed argument, detect a biased framing, recognize a shallow answer, and sense when a conclusion does not follow from its premises.
These are not skills that AI teaches. They are skills that come only from doing the hard cognitive work yourself: from writing a bad first draft and understanding why it fails, from constructing an argument and discovering its weakness, from researching a question and navigating contradictory evidence. Students who bypass this process through AI use do not simply lack knowledge. They lack the metacognitive foundation to know what they are missing.
This creates a compounding epistemic crisis. A student who cannot distinguish between knowing something and having been told something by an AI is not merely underprepared. They are acutely vulnerable to misinformation, to algorithmic manipulation, and to the slow erosion of the capacity for independent democratic deliberation. A citizenry that cannot think critically cannot govern itself wisely. That is not a distant concern. It is a near-term trajectory if current patterns are left ungoverned.
QUANTIFYING THINKING: THE CASE FOR COGNITIVE SAFETY GUARDRAILS
What we are describing is Cognitive Safety: a framework that recognizes intellectual development not as an ancillary educational value, but as a foundational human capacity requiring active, structural protection. Just as we build guardrails to prevent AI from producing harmful outputs in institutional settings, we must build guardrails to prevent AI from displacing the cognitive processes that make humans capable of overseeing those institutions.
This requires us to ask a question that our current frameworks have not yet posed: how do we quantify thinking? How do we measure whether a student is developing genuine intellectual capacity or merely becoming proficient at eliciting it from a machine?
The answer is not to ban AI from education. That ship has sailed, and the technology carries genuine potential when deployed wisely. The answer is to build an assessment and governance infrastructure that makes cognitive development visible, measurable, and protected. This means several things in practice.
It means assessment redesign that privileges demonstrated reasoning over produced output. It means structured AI-free zones in curricula, not as punishment, but as deliberate spaces where the cognitive muscles required for independent thought are exercised and strengthened. It means age-appropriate AI literacy programs that teach students not only how to use AI tools, but why human judgment must remain primary. And it means institutional accountability frameworks that hold schools and districts responsible for cognitive outcomes, not just academic production.
The RAND data makes clear that this architecture is not yet in place. Over 80 percent of students have not been taught how to use AI appropriately. Only 35 percent of districts provide any training whatsoever. The gap between the pace of AI adoption and the pace of institutional governance is not a technological problem. It is a moral one.
Over 80% of students report that teachers did not explicitly teach them how to use AI. Only 35% of district leaders report providing any student AI training.
Source: RAND Corporation, spring 2025
Thinking is not a commodity. It is not a task to be optimized or a burden to be eliminated. It is the mechanism through which human beings develop wisdom, build understanding, form judgment, and exercise freedom. It is the foundation upon which every other safeguard we construct ultimately rests. An AI system constrained by well-designed guardrails is only as safe as the humans overseeing it, and those humans must first know how to think.
~~~ end ~~~
REFERENCES
RAND Corporation. (2025). AI Use in Schools Is Quickly Increasing but Guidance Lags Behind. Nationally representative survey of students, teachers, and district leaders. rand.org
RAND American Youth Panel. (December 2025). More Students Use AI for Homework, and More Believe It Harms Critical Thinking. Survey of 1,214 youth ages 12–29. rand.org
Gerlich, M. (2025). AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking. Societies, 15(1), 6. doi:10.3390/soc15010006
ScienceDirect. (2025). Learners' AI Dependence and Critical Thinking: The Psychological Mechanism of Fatigue. Peer-reviewed study, n=580 university students.
Stadler, M., Bannert, M., & Sailer, M. (2024). Cognitive ease at a cost: LLMs reduce mental effort but compromise depth in student scientific inquiry. Computers in Human Behavior, 160, 108386.
Microsoft Research. (2025). New Future of Work Report 2025. Includes experimental evidence on AI-enabled skill acquisition and transfer.
Pew Research Center data, cited via Youngstown State University: Teens' use of ChatGPT for schoolwork doubled from 13% (2023) to 26% (2024).
León Domínguez, U. (2024). Quoted in: AI's Cognitive Implications: The Decline of Our Thinking Skills? IE University Center for Health and Well-Being.
Disclaimer: The information in this digest is provided "as is" by the SAFE AI FOUNDATION, USA. The use of the information provided here is subject to the user's own risk, accountability, and responsibility. The SAFE AI FOUNDATION and the author are not responsible for the use of the information by the user or reader. The opinions expressed in this article are solely those of the author, not the SAFE AI Foundation. All copyrights related to this article are reserved by the author. Please reference this article if you wish to cite it elsewhere.
Note: The SAFE AI Foundation is a non-profit organization registered in the State of California, and it welcomes input and feedback from readers and the public. If you have insights to add concerning the impact of AI on thinking and cognitive skills, or would like to volunteer or donate, please email us at: contact@safeaifoundation.com


