The Limits of AI: What Neuroscience Reveals About the Way We Reason

Exploring the tripartite framework of inference—induction, deduction, and abduction—and how advances in neuroscience illuminate what’s missing in artificial intelligence.

Erik J Larson
Jan 07, 2025

[Image: a serene, snowy Russian landscape with white bears roaming among ice cliffs and frosty trees under a soft blue sky.]

Hi everyone,

In this post I want to return to the inference framework I introduced in The Myth of Artificial Intelligence, only now let’s look at a growing body of evidence in neuroscience that sheds new light on inference as we actually do it. We can use this to think more deeply about AI. It’s a new year; let’s try.

In This Post

Induction, deduction, and abduction are distinct modes of reasoning. Advances in neuroscience reveal that our brains not only treat these differently but also lateralize them—engaging the left and right hemispheres in unique ways. Remarkably, our brains grasp these distinctions even before we formalize them in calculations or logic.

Increasingly, brain research illuminates how we reason, how inference works, and what’s fundamentally missing in AI today. I’m excited to extend the ideas from The Myth of Artificial Intelligence with new insights from neuroscience, and I’m hopeful that exploring these connections will bring us closer to understanding both the unique power of human intelligence and the challenges we face in replicating it. Welcome to a deeper exploration of the tripartite inference framework: induction, deduction, and abduction.

This post builds on my earlier piece, Left Brain Rules. Let’s dig in.

The Three Modes of Inference

Let me sketch this briefly, as I’ve covered it extensively in the book. At least since Aristotle, we’ve known about deduction: reasoning from stated premises to a conclusion using a rule.

Deduction

The classic example is the syllogism:

All people are mortal.
Socrates is a person.
Therefore, Socrates is mortal.

Here we call the first two statements the premises and the final statement the conclusion. To bring this closer to how it appears in AI work, we can expose more of the logic:

If you are a person, you are mortal.
Socrates is a person.
Therefore, Socrates is mortal.

Deduction is reasoning at its most structured and precise. It begins with general principles or rules and applies them to specific cases to arrive at a conclusion. Unlike induction, which deals in probabilities, deduction guarantees the truth of the conclusion if the premises are true. The reasoning is watertight: if the premises are correct, the conclusion follows with certainty. Deduction is foundational to logic, mathematics, and science, where the validity of conclusions must be beyond doubt.

Deduction is not creative, nor is it generative. It doesn’t discover new ideas or hypotheses. Instead, it ensures that our reasoning is consistent and that conclusions logically follow from what we already know. Deduction is the tool we use to validate claims, to verify the soundness of theories, and to apply rules systematically.
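
To make that concrete, here is a minimal sketch of deduction as mechanical rule application, written in Python. It simply encodes the Socrates example above as data; nothing about it is specific to any real system.

```python
# A minimal sketch of deduction as rule application (modus ponens).
# The rule "if X is a person, X is mortal" plus the fact "Socrates is
# a person" yields the conclusion with certainty: no probabilities involved.

facts = {("person", "Socrates")}
rules = {"person": "mortal"}  # premise predicate -> conclusion predicate

def deduce(facts, rules):
    """Apply each rule to each matching fact and collect the conclusions."""
    conclusions = set()
    for predicate, subject in facts:
        if predicate in rules:
            conclusions.add((rules[predicate], subject))
    return conclusions

print(deduce(facts, rules))  # {('mortal', 'Socrates')}
```

Notice that nothing new is discovered here: the conclusion was already latent in the premises and the rule.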

Induction

Induction works in the opposite direction from deduction: it starts with observations and infers a general rule or principle. While deduction guarantees the truth of the conclusion if the premises are true, induction deals in probabilities. It tells us what is likely true based on patterns in the data.

The classic example of induction is:

  • I’ve seen 100 swans, and all of them are white.

  • Therefore, all swans are white.

Here, the conclusion isn’t guaranteed to be true—it’s entirely possible that the next swan you see will be black. Induction is at the core of how we learn from experience: it’s the brain’s way of generalizing patterns from limited evidence. In AI, this maps closely to machine learning, where models generalize from very large datasets to predict future patterns. For example, recognizing a cat in an image is often the result of an inductive process—training on thousands of images of cats and learning statistical patterns that generalize to new examples.
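
As a toy illustration (the swan counts are invented), the simplest statistical form of induction is just generalizing a frequency from observed cases:

```python
# A toy sketch of induction: generalize from observed cases to an
# expectation about the next case, with no guarantee it will hold.
observed_swans = ["white"] * 100  # every swan seen so far has been white

def inductive_estimate(observations, value):
    """Estimate the probability of `value` from past frequencies."""
    return observations.count(value) / len(observations)

p_white = inductive_estimate(observed_swans, "white")
print(f"Estimated probability the next swan is white: {p_white:.2f}")  # 1.00
# Yet a single black swan falsifies the rule "all swans are white."
```

Machine learning does this at enormous scale and with far richer statistics, but the logical move is the same: from observed cases to an expectation about the next one.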

However, induction has limits. It’s only as good as the data it works with, and it cannot, on its own, generate entirely new ideas or explanations. I’ll explain the third type, abduction, briefly here and plan to cover it in depth in a later post.

The term “abduction” was coined by C.S. Peirce in the 19th century and captures the near-ubiquitous but seldom discussed phenomenon of our minds creating new hypotheses. “Hypothesis generation” could be highfalutin science, like relativity theory, or it could be ordinary insight in the course of a day, as when we infer that the workers, rather than an intruder, your sister, or aliens, left a half-drunk can of Pepsi on the kitchen counter. We are constantly positing new ideas to explain what we see in front of us. This typically takes the form of cause and effect: I observe an effect, such as tracks in front of me left by a large animal with claws, and posit a cause, such as a grizzly bear roaming down the trail a few hours ago (I might then formulate a plan: choose another destination for the picnic, say).
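
To show the logical shape, here is only a rough sketch, with hypotheses and scores invented for illustration. Abduction is often modeled as inference to the best explanation: rank candidate causes by how plausible they are and how well they would explain the observation.

```python
# A rough sketch of abduction as inference to the best explanation.
# Plausibility and explanatory-fit scores are invented for illustration.

observation = "large clawed tracks on the trail"

hypotheses = {
    "a grizzly bear passed a few hours ago": {"prior": 0.10, "explains": 0.95},
    "a neighbor's dog passed through":       {"prior": 0.60, "explains": 0.10},
    "someone faked the tracks":              {"prior": 0.05, "explains": 0.90},
}

def best_explanation(hypotheses):
    """Rank candidate causes by plausibility times explanatory fit."""
    return max(hypotheses, key=lambda h: hypotheses[h]["prior"] * hypotheses[h]["explains"])

print(best_explanation(hypotheses))  # 'a grizzly bear passed a few hours ago'
```

Notice what the sketch leaves out: the hard part is coming up with the candidate hypotheses in the first place. Here they were written down by hand, and that generative step is precisely what abduction names.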

The logical nuts and bolts of induction, deduction, and by now even abduction are “old hat” in the relevant academic and research communities. Logical reasoning has been studied extensively for thousands of years by mathematicians, scientists, and philosophers. We know a lot about inference and have formalized the types into logical forms that show their differences and allow us to talk about them “scientifically.” What we haven’t known until recently is how our brains treat the types of inference, and we’re just beginning to see the limits of AI as a consequence of a failure to consider this broader picture of inference.

We’re at the cusp of new ideas, perhaps, but there are a few things we already know that will seem “new” to the status quo in the scientific and AI communities. Our brains rely on prediction and must also deal with surprise. That’s a basic statement about survival, but weirdly it has much to do with inference. The AI community hasn’t caught up to this yet. Read on.

Induction, Redux

Induction thrives on the normal and expected. It’s what tells us that traffic downtown is generally worse between 4 and 6 pm, or that rainfall tends to increase during the winter months where I live in Texas. This makes our lives easier. Thanks to induction, we don’t need to engage in complicated reasoning—or worse, have profound insights into the nature of weather patterns—just to plan a vacation. Induction operates smoothly within the realm of patterns, probabilities, and the predictable.

Induction is strongest when working within the statistical bell curve—averages and norms we’ve measured and come to expect. Induction “wants” the world to conform to its expectations. When faced with exceptions or anomalies, it’s less useful, and as Nassim Nicholas Taleb famously claimed in his 2007 book The Black Swan, it can be dangerous—when the future doesn’t look like the past, induction can be a cognitive barrier rather than an assistant. Indeed, the farther we move into the unexpected, the worse induction performs. A system relying solely on induction would prefer those anomalies didn’t exist at all—or that they quickly disappeared.
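
A small numerical illustration (all the numbers are made up) shows how confidence built from the bell curve says nothing about the anomaly:

```python
# A toy illustration of induction breaking down outside the expected range.
# Historical commute times cluster tightly; the estimate they support is
# useless for a rare disruption. All numbers are invented.
import statistics

past_commutes = [32, 35, 31, 34, 33, 36, 32, 34]  # minutes, on normal days

mean = statistics.mean(past_commutes)
stdev = statistics.stdev(past_commutes)
print(f"Inductive expectation: {mean:.0f} +/- {stdev:.1f} minutes")

# A bridge closure produces a commute the past simply doesn't contain.
anomaly = 140
print(f"Observed: {anomaly} minutes, about {(anomaly - mean) / stdev:.0f} standard deviations out")
```

The estimator isn’t wrong about the past; it’s just silent about a future that doesn’t resemble it.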
