Colligo

From Data to Thoughts: Why Language Models Hallucinate and the Quest for Understanding

The Limits of Language Models and Paths to Real Cognition

Erik J Larson
Oct 29, 2024

Hi everyone,

I’m back in the thick of my love-hate relationship with AI. Tonight, I’m offering a sneak peek into a project that tackles a pressing issue: the hallucination problem in language models. Why do these systems sometimes go off the rails? The burning question is: how do we cut down the error rate?

Let’s dig in. It starts with the right critique and the right ideas.

Summary

One of the major challenges with large language models (LLMs) today is their tendency to “hallucinate”—that is, to generate responses that sound plausible but are factually off the mark. Despite all the advancements we’re seeing in AI, we’re nowhere close to fixing this issue, and there are deep-seated reasons why. Here’s what’s really going on when LLMs hallucinate and what it would take to move from pattern-based language generation to genuine understanding.


Why Hallucinations Happen: No “Truth Model,” Only Probability

LLMs hallucinate because they lack a model of “truth.” They don’t check facts against anything; as many people now understand, they work probabilistically. When an LLM responds, it isn’t “thinking” in terms of what’s true or false, and it’s silly to expect it to somehow “find” truth in the data.

What I sense is often overlooked is how an understanding of the underlying technology should drive follow-on research toward genuine improvements and solutions. From what I can see, the major companies are merely scratching the surface, relying primarily on post-processing filters and reinforcement learning. These are reactive, patchwork fixes rather than substantial investments in research that addresses the hallucination problem at its core.

Some background.

A large language model (LLM) calculates the probability of each word w given the context C, represented as P(w | C), and then selects the most likely next word. This setup is effective for generating fluent language but inherently lacks fact-checking. When we encounter hallucinations, it’s because the model fills in gaps with what sounds plausible rather than what is verifiable.
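As a rough sketch of what this looks like in code, here is a minimal next-token example using the Hugging Face transformers library and GPT-2. The model, library, and prompt are my choices for illustration, not anything named in this post, and greedy argmax selection is only the simplest decoding strategy.

```python
# A minimal sketch of P(w | C) and greedy next-token selection.
# GPT-2 via Hugging Face `transformers` is used purely as an illustration.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

context = "The capital of France is"
inputs = tokenizer(context, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# P(w | C): a probability distribution over the whole vocabulary
# for the token that comes after the context.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)

# Greedy decoding: pick the single most likely next token.
# Nothing here consults a fact base; the winner is simply whatever
# was statistically likely after this context in the training data.
top_id = int(torch.argmax(next_token_probs))
print(tokenizer.decode([top_id]), float(next_token_probs[top_id]))
```

The point of the sketch is what is absent: at no step is there a lookup, a cross-check, or any representation of whether the completion is true.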

Without a built-in “truth model”—something that cross-checks, validates, or confirms accuracy—the system is effectively a great “guesser.” It will often sound authoritative but lack the foundational layer that ensures what it’s saying is actually correct. This is baked into the architecture. The model doesn’t even know what a “word” is; it computes over tokens (GPT-4o, for instance, has many non-word tokens, which is one reason it can occasionally impress when asked to be creative).
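To make the token/word point concrete, here is a small illustration using the GPT-2 tokenizer. This is again my choice of example; GPT-4o’s tokenizer is different, but the sub-word principle is the same.

```python
# A small illustration of the token/word distinction, using the GPT-2
# tokenizer from `transformers` as a stand-in for modern tokenizers.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

for text in ["hallucination", "colligo", "Paris"]:
    ids = tokenizer.encode(text)
    pieces = tokenizer.convert_ids_to_tokens(ids)
    # A single English word often maps to several sub-word tokens,
    # none of which is the "word" as a human reader sees it.
    print(text, "->", pieces)
```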
