Metric Fixation and the Limits of AI
What Jerry Z. Muller's critique of "metric fixation" tells us about the delusions of data-driven intelligence.
Hi everyone,
This comes quickly on the heels of my last post, but I want to write something that can do double duty as part of a chapter I’m working on for the book. The connection here is Jerry Z. Muller's excellent book The Tyranny of Metrics. Muller lays out a crucial argument: the modern world has become obsessed with what he calls “metric fixation.” This fixation rests on a seductive premise: that what can be measured can be improved.
Muller defines metric fixation through three key beliefs:
It’s possible and desirable to replace judgment with numerical indicators of comparative performance.
Making such metrics public holds institutions accountable—and is always a good idea.
Motivating people is best achieved by tying rewards and penalties to those metrics.
He shows that even when this mindset clearly leads to failure, it persists. Why? Because the world is messier than a spreadsheet. As the old dictum goes: not everything important can be measured, and not everything we measure is important.
What Does This Have to Do With AI?
A lot.
But to see it clearly, we need to zoom out and examine the modern faith in data. Muller notes:
“The modern digital world creates more data—data that becomes ever less useful—while gathering it sucks up more and more time and resources.”
His focus is on institutional dysfunction—health care, education, government, law. But the logic is strikingly relevant to artificial intelligence. Muller's list of the side effects of metric fixation reads like a checklist of problems in AI: