
Tackling Hallucinations in AI: A New Approach

A scientist's perspective on reducing LLM errors—an idea ready for angel investment.

Hi everyone,

I’ve been a bit haunted over the last week (maybe it was Halloween) by an idea that popped into my head for tackling the hallucination problem in large language models. In computer science, the term “hallucinate” may be evocative, but at the core, “hallucinations” are just errors. As humans, we favor technologies that (a) have low error rates for a given application and (b) produce errors that are manageable, not catastrophic.

One error in ten thousand might be tolerable if you lose ten cents on a widget. But if it starts a nuclear war, we want that rate as close to zero as possible.

The approach I outline in this brief audio clip builds on my Ph.D. work, and I’m looking to form a small team to explore a proof of concept. If you’re interested in contributing to this project, including investing in it, please reach out. Let’s see where we can take this.

Erik J. Larson
