The Reverse Flynn Effect and the Decline of Intelligence
How Our Modern World Is Making Us Dumber
Hi everyone,
This is the first in a two-part series exploring the concept of intelligence, both as it applies to us and our brains, and as it applies to our tools, or what we call “AI.” In this post, I’ll break down the basics of the IQ debate and what neuroscience tells us about general intelligence (g). In the follow-up, I’ll examine François Chollet’s treatment of intelligence for AI scientists aiming to build AGI, drawing heavily from his excellent paper On the Measure of Intelligence.
The notion of intelligence underpins nearly every discussion about the past, present, and future of AI. And yet, the field of AI often sidesteps the question: What is intelligence, really? My goal in these posts is to shed some light on that question.
Now, let’s turn to our intelligence—how we measure it, the debate over whether it’s changing (are we getting dumber?), and what role the brain plays in it all.
What is Intelligence? From Human Minds to Machine Learning
Psychologist Linda Gottfredson drafted a widely cited definition of intelligence in a 1994 Wall Street Journal statement signed by 52 leading experts. The original declaration contained 25 conclusions, each highlighting an essential aspect of intelligence. The definition has since been distilled—here, by Iain McGilchrist in The Matter With Things:
Intelligence is a very general mental capacity which, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience. It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings—‘catching on,’ ‘making sense’ of things, or ‘figuring out’ what to do.
Fair enough. A definition that broad and catch-all is unlikely to be wrong, but it might be too sprawling to serve as a precise tool for analysis.
For a sharper take, we turn to François Chollet, a researcher at Google, who defines intelligence in a way that directly serves AI and AGI discussions:
The intelligence of a system is a measure of its skill-acquisition efficiency over a scope of tasks, with respect to priors, experience, and generalization difficulty.
Though it anticipates the second post in this series, Chollet’s definition is useful here, too. Skill acquisition is a fundamental concept: if a cognitive system—natural or artificial—cannot acquire new skills, it can’t be considered intelligent. Likewise, generalization is crucial. Intelligence isn’t just about solving known problems—it’s about adapting to new, unfamiliar situations where prior experience offers no immediate advantage.
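To make these two ideas slightly more concrete, here is a minimal Python sketch of one very loose way to read “skill-acquisition efficiency”: count how much experience a learner needs before it reaches a target skill level on tasks it has never seen. The task generator, the `experience_needed` helper, and the simple gradient-descent learner are hypothetical stand-ins for illustration, not anything taken from Chollet’s paper.

```python
# A toy sketch (not Chollet's formal definition) of skill-acquisition
# efficiency: how much experience a learner needs before reaching a
# target skill level on tasks it has never seen before.
import random

def make_task(seed):
    """A 'task' here is just learning a hidden linear rule y = a*x + b."""
    rng = random.Random(seed)
    return rng.uniform(-2, 2), rng.uniform(-2, 2)  # hidden (a, b)

def sgd_step(params, x, y, lr=0.1):
    """A generic learner: one step of gradient descent on squared error."""
    pred = params[0] * x + params[1]
    grad = 2 * (pred - y)
    params[0] -= lr * grad * x
    params[1] -= lr * grad

def experience_needed(learner_step, task, skill_threshold=0.95, max_steps=1000):
    """Count the training examples consumed before a crude skill score
    on a fresh probe point crosses the threshold."""
    a, b = task
    params = [0.0, 0.0]
    for step in range(1, max_steps + 1):
        x = random.uniform(-1, 1)
        y = a * x + b
        learner_step(params, x, y)            # one unit of experience
        xp = random.uniform(-1, 1)            # probe on an unseen point
        err = abs((params[0] * xp + params[1]) - (a * xp + b))
        if 1.0 / (1.0 + err) >= skill_threshold:
            return step
    return max_steps

# Average experience needed across a scope of unseen tasks: lower is
# "more intelligent" under this (very rough) reading of the definition.
tasks = [make_task(seed) for seed in range(20)]
avg = sum(experience_needed(sgd_step, t) for t in tasks) / len(tasks)
print(f"average experience to reach skill threshold: {avg:.1f} examples")
```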
Chollet’s definition aligns with other AI-inspired formulations of intelligence. Consider the 2007 summary by Shane Legg and Marcus Hutter, who analyzed roughly 70 definitions from across psychology and AI:
Intelligence measures an agent’s ability to achieve goals in a wide range of environments.
Here we see the same core ideas: skill acquisition framed as “achieving goals” and generalization embedded in “a wide range of environments.”
The definitions here, whether from psychology or AI, expose an essential tension: is intelligence fundamentally about competence—mastering tasks, recognizing patterns, and applying knowledge effectively? Or is it about adaptability—the ability to efficiently learn new skills and generalize across tasks? The answer has deep implications—not just for how we understand human intelligence, but for what we expect from AI.
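To see that tension in miniature, consider a deliberately tiny Python sketch, my own toy example rather than anything drawn from Gottfredson, Chollet, or Legg and Hutter: a “competent” system that has memorized answers to the problems it was built for, next to an “adaptive” one that infers the underlying rule and so copes with inputs it has never seen.

```python
# A toy contrast between competence and adaptability (purely illustrative).
KNOWN = {2: 4, 3: 9, 5: 25}        # problems the competent system was built for

def competent_system(x):
    """Perfect on known problems, helpless on anything new."""
    return KNOWN.get(x)             # returns None for unseen inputs

def adaptive_system(x, examples=tuple(KNOWN.items())):
    """Infers the rule (here, squaring) from examples, then generalizes."""
    if all(y == a * a for a, y in examples):
        return x * x
    return None

for x in (3, 7):                    # 3 is familiar, 7 is novel
    print(x, competent_system(x), adaptive_system(x))
# -> 3 9 9
# -> 7 None 49
```

The point is not the arithmetic: the competent system is flawless inside the situations it was built for and useless outside them, while the adaptive one pays a small cost up front to infer the rule but keeps working when the inputs change. That, in caricature, is the trade-off the definitions above are circling.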