The Death of "Ontology"

Why ChatGPT is killing off traditional modes of AI (plus... spooky... it's Halloween)

Erik J Larson
Nov 01, 2023

[Image: a manual laborer]

Hi all,

I recently finished a speaking tour in Europe and would like to send a few thoughts compiled from all that discussion, speaking, and interviewing.


My first job in AI was at a company called Cycorp. The founder, Doug Lenat, just passed away. He was a Carnegie Mellon professor who then started a decades-long project to represent "commonsense" knowledge in a formal language that a computer could read. The project, ironically, grew out of a view then prevalent among AI and computer scientists: that neural networks and other inductive approaches, which use data rather than concepts, were "brittle." All statistical approaches to AI, which is to say all of machine learning, were thought by a contingent of stalwart AI types (sometimes referred to as "good old-fashioned AI" researchers) to be brittle because they couldn't capture the actual concepts we use to think and communicate. A neural net might "recognize" a face, but it didn't have the "concept" of FACE. This was quite a debate before the web.
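To make the contrast concrete, here is a toy sketch, in Python rather than Cyc's actual CycL language, of what hand-coded, concept-level knowledge looks like. Everything the system "knows" is a symbol a person typed in, and inference is an explicit walk over those symbols:

    # Toy GOFAI-style knowledge base: humans write the facts and the
    # inference rule; the system "knows" only what the symbols entail.
    ISA = {"Socrates": "Human"}                          # instance -> class
    SUBCLASS = {"Human": "Mammal", "Mammal": "Animal"}   # class -> superclass

    def isa(entity, category):
        """Does entity fall under category? Walk up the class hierarchy."""
        cls = ISA.get(entity)
        while cls is not None:
            if cls == category:
                return True
            cls = SUBCLASS.get(cls)
        return False

    print(isa("Socrates", "Animal"))    # True: Human -> Mammal -> Animal
    print(isa("Socrates", "Building"))  # False: no hand-coded path exists

Cyc's ontologists spent decades writing assertions like these, at vastly greater scale and expressiveness, and the coverage problem never went away.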

The web proved that gathering data and applying machine learning techniques yielded superior performance on a number of central tasks in information extraction and natural language processing, such as entity extraction, co-reference resolution, and sentiment analysis, among others.
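For a sense of what the statistical side looks like in practice, here is a minimal sketch of entity extraction using spaCy's pretrained pipeline. This is my illustration, and it assumes spaCy and its small English model are installed (python -m spacy download en_core_web_sm). Nothing here is hand-coded: the model's behavior comes entirely from annotated training data:

    # Entity extraction with a pretrained statistical pipeline.
    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("Doug Lenat founded Cycorp in Austin in 1994.")
    for ent in doc.ents:
        # Typical output: "Doug Lenat" PERSON, "Cycorp" ORG,
        # "Austin" GPE, "1994" DATE (exact labels depend on the model).
        print(ent.text, ent.label_)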

For all practical purposes, the debate raging among AI scientists was resolved definitively by about 2010: the idea of hiring smart people to hand-code "knowledge" in a computer-readable language was quite limited. It had its day, to be sure, but it wasn't a path to AGI. It wasn't a path to anything other than hiring philosophers.

My own career transitioned from the manual code-it-all-in approach to training and developing systems based on the provision of data. The reason the web was so central to this shift in AI was simple: there's a lot of data! I mean text messages, tweets, comments, and blogs, and, for image recognition, JPEGs and so on. We simply didn't have this volume of data before the World Wide Web. What we learned was that the old-school way of thinking about AI, the view that statistical approaches using only data are brittle, was just wrong. This started the problem.
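Here is an equally minimal sketch of what "provide data, not rules" means, this time training a model rather than loading one: a toy sentiment classifier learned entirely from labeled examples. My illustration, assuming scikit-learn is installed; real systems train on web-scale corpora:

    # A sentiment classifier with no hand-coded concept of "sentiment":
    # it learns word statistics from labeled examples.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    texts = ["great product, love it", "terrible, waste of money",
             "works perfectly", "broke after one day"]
    labels = ["pos", "neg", "pos", "neg"]

    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(texts, labels)

    print(model.predict(["love how well it works"]))  # likely ['pos']

Swap in more data and a bigger model and nothing about the approach changes; that scalability is exactly what the web supplied.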


ChatGPT finished it. As far as we know, it has literally no concept, in the ontologists' sense, of anything. It doesn't "know" that living humans have heads (to take a famous example), or that houses are structures where people live. It doesn't "know" anything. But the lesson, and it's a hard one to swallow, is that we don't really need all that manual effort in the first place. It's like digging with a spoon when someone hands you a shovel. Do you keep digging with the spoon?
