Discussion about this post

Understanding Intelligence:

So far, AI doesn't come close to fulfilling its marketed promises, and that is what critics usually point out. LLMs have no understanding of the world, and they *do not* pass the Turing test (give them a common-sense test that humans pass at 95% or better, watch the best models score around 50%, and there you go, the fraud is spotted). That said, there is no doubt that there is pattern-recognition intelligence in AI. For example, I use AI music models, and they show impressive mathematical understanding of, and proficiency in, the most important transformations in music. That is truly valuable. BUT... if someone tries to sell the model as an unbelievable songwriter that will replace musicians, I'll say no, on its own it *sucks*.

Jana Novohradska:

The biggest obstacle to achieving technologically excellent AI (mature, optimally configured LLMs) as fast as possible is the idea that we are approaching AGI. I understand that this idea was created and hyped up because it sells better (attracts more investment). But it creates unrealistic expectations (missed ROI, more badly implemented tech, etc.) and confusion (some vulnerable people fall for the illusion that a chatbot is capable of compassion, kindness, intimacy, etc.).

I think we urgently need to educate the general public about LLMs via government-led initiatives, to give the effort the authority it needs. The public imagination is hooked on AGI (humans have always searched for something greater than ourselves, which has its pros and cons; this is one of the cons). We need an anti-anthropomorphic AI dictionary that translates hype terms into their actual meaning, e.g. ChatGPT "hallucination" = error, "autonomous" = a high degree of automation, etc., to be used when reading mainstream media articles.

