The Fiction of Generalizable AI

Why Intelligence Isn’t a Linear Scale—and Why True Generalization Remains Unsolved

Erik J Larson
Apr 26, 2025


Hi everyone—this post ties together several previous discussions on the “I” in AI and the deeper differences between minds and machines. My discussion here is informed by François Chollet’s excellent 2019 paper “On the Measure of Intelligence” as well as a provocative book I recently read for a Liberty Fund event, John Kay and Mervyn King’s Radical Uncertainty: Decision Making Beyond the Numbers. My goal is to explain where AI currently stands, what we are really trying to do when we talk about “AGI,” and why we are not even remotely getting the story straight.

Let’s begin.

Introduction: The Missing "I" in AI

The field of AI has somehow managed to evolve into a technological behemoth without ever clearly explaining what it means by artificial intelligence. More specifically: what is the "I" in AI?

From its inception in the 1950s, the promise of AI was loudly touted, but the yardstick of progress was curiously narrow. It was always the engineering of particular skills: playing chess, passing Turing’s playful imitation game over teletype (chat), or later, recommending movies or products to buy.

Even by 2007, researchers noted in a major survey that "to the best of our knowledge, no general survey of tests and definitions [of intelligence] has been published." The field seemed content to showcase prowess in specialized tasks: board games, supply chain optimization, translation engines, facial recognition—systems that, as was well-publicized, might sometimes pick out wolves from dogs based not on features of the animal, but simply on whether there was snow in the background.
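The wolf-versus-snow failure is easy to reproduce in miniature. The sketch below is entirely hypothetical—the feature names and data are invented for illustration—but it shows the mechanism: a one-feature “decision stump” trained on examples where snow co-occurs perfectly with wolves will select the background, not the animal, and then misclassify any wolf photographed without snow.

```python
# Hypothetical illustration: a one-feature "decision stump" trained on
# wolf/dog examples described by two invented binary features. In this
# training set, snow co-occurs perfectly with the "wolf" label, while the
# animal's own features are noisy -- so the stump latches onto the background.

train = [
    ({"snout_length": 1, "snow": 1}, "wolf"),
    ({"snout_length": 0, "snow": 1}, "wolf"),  # oddly angled wolf photo, but in snow
    ({"snout_length": 1, "snow": 0}, "dog"),   # long-snouted dog on grass
    ({"snout_length": 0, "snow": 0}, "dog"),
]

def best_stump(data):
    """Pick the single feature whose value best predicts the label on the training set."""
    features = data[0][0].keys()
    def accuracy(f):
        return sum((x[f] == 1) == (y == "wolf") for x, y in data) / len(data)
    return max(features, key=accuracy)

def predict(feature, x):
    return "wolf" if x[feature] == 1 else "dog"

feature = best_stump(train)
print(feature)  # "snow" -- it separates the training set perfectly (snout_length does not)

# A wolf photographed indoors fools the snow-based stump:
print(predict(feature, {"snout_length": 1, "snow": 0}))  # "dog"
```

The stump is maximally “skilled” by the training metric, yet it has learned nothing about wolves—exactly the pattern the survey’s examples describe.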

Through all this, commercial opportunities on the web exploded. But scientific work on what we actually mean by "intelligence"—and how we would know if it was meaningfully increasing—stalled out, and for the most part, simply didn’t exist.

What the 2007 survey by Shane Legg and Marcus Hutter did accomplish, however, was to summarize roughly 70 disparate definitions into one helpful statement:

Intelligence measures an agent’s ability to achieve goals in a wide range of environments.

Let’s unpack that.

The first condition—the ability to achieve goals—invokes task-specific skills: beating Kasparov at chess, optimizing a supply chain, or increasing sales of energy drinks at a Walmart.

The second condition—in a wide range of environments—points toward what Chollet calls "generality and adaptation." Notice how a "wide range of environments" would seem to preclude simply building in priors or manipulating data to achieve some particular objective.

In other words, it sits somewhat at odds with the first condition: it implies that learning is intrinsic to intelligence itself, and that the task to be performed might not be known in advance, requiring the AI not merely to apply existing skills but to adapt to genuinely new tasks.

Notice: there’s a tension between these two parts of the definition. Maximizing goal achievement in known domains can often be accomplished by overfitting—building in strong priors, manipulating data, or tailoring solutions to very narrow circumstances. But true generalization resists this. It demands flexibility without foreknowledge of specific tasks.
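To make the tension concrete, here is a deliberately tiny, hypothetical sketch (the task and names are mine, not from the post). A “memorizer” maximizes goal achievement on every case it was built for; a rule that captures the underlying regularity sacrifices nothing on those cases but also handles inputs it has never seen.

```python
# Hypothetical sketch: two "agents" for the task "double the input".
# The memorizer is perfectly skilled on known cases (condition one);
# only the rule generalizes to novel inputs (condition two).

training_pairs = {1: 2, 2: 4, 3: 6}

def memorizer(x):
    # Perfect on every case seen during "training", silent otherwise.
    return training_pairs.get(x)

def rule(x):
    # Captures the underlying regularity behind the examples.
    return 2 * x

print(memorizer(2), rule(2))    # 4 20? no -- 4 4: both succeed in a known environment
print(memorizer(10), rule(10))  # None 20: only the rule adapts to novelty
```

On the training distribution the two are indistinguishable, which is why skill benchmarks alone cannot tell you whether a system has generalized.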

Chollet highlights that the Legg and Hutter definition nicely mirrors the traditional split in human psychometrics between crystallized intelligence (acquired skills, like math or vocabulary) and fluid intelligence (the flexible capacity to solve novel problems). Even in the absence of a settled scientific theory of "intelligence," this framing—task performance plus generalization—puts us on reasonably firm conceptual ground.
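For readers who want the formal version: in their companion paper “Universal Intelligence: A Definition of Machine Intelligence,” Legg and Hutter compress the definition into a single expression, the universal intelligence of an agent \(\pi\), which averages the agent’s expected reward over all computable environments, weighted by each environment’s simplicity:

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi
```

Here \(E\) is the set of computable environments, \(K(\mu)\) is the Kolmogorov complexity of environment \(\mu\), and \(V_\mu^\pi\) is the expected total reward agent \(\pi\) achieves in \(\mu\). A high score requires doing well across many environments at once—the formula literally encodes “goals in a wide range of environments,” though, being uncomputable, it serves as a conceptual anchor rather than a practical test.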

© 2025 Erik J Larson