Hi everyone,
I’ve been a bit haunted over the last week—maybe it was Halloween—by an idea that popped into my head for tackling the hallucination problem in large language models. In computer science, “hallucinate” is an evocative term, but at their core, hallucinations are just errors. As humans, we favor technologies that (a) have low error rates for a given application and (b) produce errors that are manageable, not catastrophic.
One error in ten thousand might be tolerable if you lose ten cents on a widget. But if it starts a nuclear war, we want that rate as close to zero as possible.
The approach I outline in this brief audio clip builds on my Ph.D. work, and I’m looking to form a small team to explore a proof of concept. If you’re interested in contributing to this project, or in investing in it, please reach out. Let’s see where we can take this.
Erik J. Larson