Comte's Fallacy and AI
AI enthusiasts view progress as a straight line to success. But whether we're making progress in AI is not so simple.
Hi everyone,
I had an exchange with a friend recently that began as a broader discussion about progress, but on reflection I find it fits the AI question just as well. I call the idea that we are undeniably making "progress," discernible in history as a roughly linear progression of better and better ideas and outcomes, Comte's Fallacy, after Auguste Comte, the famously optimistic 19th-century thinker who coined the term "sociology."
AI futurists almost always assume Comte's Fallacy when discussing AI today and speculating about it tomorrow. Yes, we have a robust literature spanning the decades of AI's odd history, all those worries about bad outcomes: killer robots, the control problem, Skynet. That's a side issue here. Comte's Fallacy for AI is about intelligence, and the fallacy is the belief that we're replacing systems with less "intelligence" with ones with more, in a roughly linear fashion, and that the next stop, not far away now, is human-level intelligence and beyond.