Context, Drift, and the Illusion of Intent
Why Language Models Still Don’t Know What You Mean
Hi everyone,
In this post, I introduce a longstanding thorn in the side of AI scientists working to extend the power of large language models. I haven’t settled on a perfect name for it yet, but we can gloss it as the persistence/adaptability tradeoff: we want models that can fluidly adapt to changing contexts, but we also want them to remember the conte…