Discussion about this post

Richard Parker

So, at some point, no matter how impressive the LLM, the principle of "GIGO" (garbage in, garbage out) is always with us... Thanks for a very interesting article: I'll need to chew it over a bit before the acronyms "stick", but this stuff is conceptually fascinating (and very current).

Gerben Wierda

Very good. I don't think fine-tuning is entirely out of the picture: there will be hundreds of thousands of fine-tuned models. It will be a combination of pre-training by the OpenAIs (with competition from open-source LLaMA etc.), expensive but important fine-tuning, and in-context learning (prompts). Maybe GOFAI could actually be a source for fine-tuning and not only for ICL.
