5 Comments

I have also observed that people often interchange the concepts of narrow AI and AGI without realising it, and their thinking is biased towards AGI even when they acknowledge that only narrow AI exists. There is an innate human tendency to be biased towards potentially better outcomes (technologically, AGI is generally perceived to be more sophisticated and therefore better). This is fuelled and made worse by, e.g., the adoption of anthropomorphising international AI definitions, such as the OECD's use of Stuart Russell's definition: an AI system "infers" rather than performs inference, and is "autonomous," a term even the ISO standard acknowledges is a misnomer. Voilà, mainstream media pick up these little hooks and spread the anthropomorphic view of AI. People are confused about what AI is and what it can do...

Hi Jana,

I completely agree! It's funny that we even had to move to "AGI," since the goal of AI from the get-go was human-level intelligence or beyond. As you suggest, "AGI" is a tacit concession that artificial intelligence research and development is application-specific, that is, "narrow AI." Why else would we have to say "AGI"? I suppose the LLMs have a certain generality, as conversational AI is a big tent. But even here, we have nothing even approaching human-level intelligence. We can't put an LLM in a self-driving car--that takes all sorts of AI and non-AI components, and it still isn't reliable enough for public safety--and LLMs are actually poor at passing the Turing Test (ChatGPT 3.5 performed worse than ELIZA, the 1960s program that mimicked a Rogerian psychotherapist, and ChatGPT 4.0 did slightly better than ELIZA but was still dismal, at about 40% success--wow). Thanks as always for your comments!

Selling narrow AI with degrading accuracy as AGI (or as general-purpose AI, GPAI)... the trade of the decade, but hardly as profitable as the Catholic trade of selling forgiveness (truly zero production cost for forgiveness) 😉 The FTC is on the case, and for the purpose of issuing their CIDs, "AI" is even whatever is claimed to be AI. https://www.ftc.gov/news-events/news/press-releases/2023/11/ftc-authorizes-compulsory-process-ai-related-products-services

The myths of AI are being institutionalised. That is a dangerous step further down a road that has, in the past, led to outcomes such as the public execution of Giordano Bruno as a heretic.

Thank you, Eric. I have read this piece only once so far and will re-read it. All of these are amazing technological advancements. The only issue is that some AI scientists and developers, and their investment-seeking teams, hype them up as the route to AGI, which is then sold as the "wonder-weapon" to "save" the "reduce costs, increase profits" ad infinitum business model, a model that has reached and passed its sell-by date (it now produces even such simple items as umbrellas that aren't fit for purpose for a single use). What good is an umbrella that can't protect you from the rain even on its first use? And what good is a manufacturing model that produces such products? I think AI has become the proverbial quest to save the sinking ship. This hinders innovation and the useful applications of AI, and sinks us further into the depths of a Titanic... rather than leaving us on a floating door with a whistle attached. This hype/myth/fanaticism isn't about AI or even AGI; it is about saving the doomed Titanic.

I keep going back to this keynote by Prof. Holzinger, delivered at the Technical University of Košice (TUKE) in Slovakia:

https://www.researchgate.net/publication/328309811_From_Machine_Learning_to_Explainable_AI
