

Artificial intelligence promises to expand our intellectual reach, but we risk weakening the brain’s hard-won capacity to learn and think, Marco Magrini argues

Having always been fond of technology, I welcomed the arrival of generative artificial intelligence. Ever since Google’s DeepMind developed AlphaFold – the algorithm capable of predicting the structures of more than 200 million proteins, helping to solve biological problems – I have wondered how much AI will accelerate scientific discovery, with a cascade of beneficial applications.
More efficient photovoltaics; new battery chemistry that makes energy storage cheaper and more convenient; carbon-free, safe and plentiful nuclear fusion… The potential of ever-improving AI models could cover many fields – from medicine to materials science, from mathematics to the deep mysteries of physics.
Every new technology carries risks. The idea that machines, once they reach superintelligence, will immediately find a way to erase the human race is likely overhyped. However, one growing problem currently worries me most. The human brain – the most complex “machine” in the universe, as far as we know – draws much of its might from the longest newborn-to-adulthood span in the animal world.
Human brains do most of their developing outside the womb. Newborns have about 86 billion neurons that will accompany them throughout their lives, with a multitude of synaptic connections (some estimates put the number at one quadrillion).
Then, during childhood and adolescence, the brain undergoes an enormous “clean-up” phase, called synaptic pruning. A second, concurrent process is myelination – the addition of fatty myelin to the axons of a neuronal network. Myelin, which accelerates signal transmission along the axon by acting like insulation on a wire, is indispensable for language, working memory, reasoning and much more, including motor skills.

These two processes aren’t automatic. A newborn needs sensory stimuli; a toddler needs to experience the physical world; a child needs focused repetition; an adolescent needs to strive to learn. Connections that are engaged through mental effort are strengthened and preserved. Connections that aren’t used are marked as inefficient and physically eliminated – the pruning. Without effort, without ever exiting the comfort zone, even the process of myelination falters.
Here comes the worry. Generative AI has rapidly taken hold in high schools and universities. According to measurements by OpenRouter, chat accounts for just a portion of AI use. Last year in May – at the peak of exam season – ChatGPT generated 97.4 billion tokens in a single day (tokens are a measure of AI output; one is equivalent to about four English characters).
In June, as schools closed, the average number of daily tokens fell by 53 per cent. OpenRouter data also reveal that it has 2.5 million users, while ChatGPT has 800 million. Yet the trend is clear – students have outsourced, at least in part, their mental effort to one of many algorithmic models.
Since the 1970s, electronic calculators have eroded people’s numerical skills. GPS navigators have worsened their spatial prowess. Social media has somewhat altered public discourse. However, the possibility that future brains will be even partially impaired by the offloading of mental exercise to chatbots is far more disturbing.
Such rapid (and apparently widespread) adoption of chatbots by students is reshaping the world of education. To start with, since AI detectors are neither foolproof nor cheap, radical measures are being taken. “In our machine learning class for graduate students,” Thomas Poggio, an MIT professor and AI pioneer, told me recently, “we have abandoned the usual mid-year written exam and problem set. Now the exam is oral only.”
Needless to say, AI has other drawbacks, starting with its staggering energy consumption, both for its training and for inference, as its real-time output is called. It is estimated that by around 2030, AI data centres will need an unimaginable 1,000 terawatt hours per year.
The infamous Three Mile Island nuclear plant in Pennsylvania will soon be restarted to accommodate part of Microsoft’s energy needs. It’s a good reason to hope that AI models will indeed come up with novel, clean-energy solutions and patch the problem they themselves created. However, while that is theoretically possible, I don’t see how any AI system can patch over the looming problem of the damage that they might inflict on developing human brains.
If an entire generation is allowed to shortcut the torturous process that actually builds human intelligence, we may witness a future with powerful tools but with too few people capable of wielding them. How then will we solve the many remaining mysteries of nature and mend the damage we’ve inflicted on our fragile world?




