Hi Erik. I agree that AGI is an illusion. I have been reinterpreting C. S. Peirce's #Pragmaticism for quite some time. I claim we need expert conceptualizers of #TheWealthOfGlobalization who follow me as the #HomoPragmaticist pioneer (or somebody who preceded me) as the #PathCreator.
Under your LinkedIn post “… the role of AI and science,” I shared “Philanthropy for #AGlobalSystem ( https://gmh-upsa.medium.com/philanthropy-for-aglobalsystem-f64dcf099c0e )” is the way to revolutionize science.
I've actually been thinking lately, and you may have better insight into this, whether generative AI is more a model of how memory works than any form of intelligence. I mean, in a way, we could see it as storing and recovering information in a neural network. Hallucinations could be the same as false memories experienced by humans (the famous Mandela effect). In the end, we all can picture images in our heads, "hear" sounds, and remember sentences and short clips, so a better memory could also do those things to a larger extent, right?
There are some problems I guess with this insight, but I find it interesting enough to have entertained the idea a bit.
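The "storing and recovering in a neural network" idea has a classic toy version worth playing with: a Hopfield network, a textbook model of associative memory. It stores patterns, recovers them from corrupted cues, and can also settle into spurious states it was never taught — loosely analogous to the false memories mentioned above. This is only an illustrative sketch of that older model, not a claim about how LLMs actually work; all names below are my own.

```python
# Toy Hopfield associative memory: store bit patterns, recover them
# from noisy cues. Spurious attractors = rough analogy to "false memories".
import numpy as np

def train(patterns):
    """Hebbian learning: build a weight matrix that stores the patterns."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)  # no self-connections
    return w / len(patterns)

def recall(w, cue, steps=10):
    """Recover a stored pattern from a cue by repeated sign updates."""
    s = np.array(cue, dtype=float)
    for _ in range(steps):
        s = np.sign(w @ s)
        s[s == 0] = 1  # break ties deterministically
    return s

rng = np.random.default_rng(0)
stored = rng.choice([-1, 1], size=(2, 64))  # two 64-bit "memories"
w = train(stored)

noisy = stored[0].copy()
flip = rng.choice(64, size=8, replace=False)  # corrupt 8 of 64 bits
noisy[flip] *= -1

recovered = recall(w, noisy)
print(np.array_equal(recovered, stored[0]))  # typically the noisy cue falls back into the stored memory
```

Two patterns in 64 units is well under the network's capacity, so recovery from a mildly corrupted cue is reliable; overload it with many patterns and the spurious, "remembered but never stored" states start to dominate.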
At this point, Cassandra would raise her hand in the classroom and shout, “I know... we will need opaque chatbots and metaworld games presented as AGI or conscious AI to spread propaganda about how well we all have it.” 😥🤯😱 Not even the Greek gods fought their fate 😉🫣 but philosophers continued their quest for the truth, even during the golden age of the sophists.
So predictions are an important part of ROI investment decisions. Prudence would dictate, as any decent trader knows, cutting your losses and moving on. But instead of cutting our losses, we are going further down the rabbit hole, bypassing opportunities to further develop technologically excellent solutions to real-world problems - such as the contaminated food chain of the general public (not the food chain of the privileged few). What do we need conscious AI for when most people will be food intolerant and diabetic, with compromised mental health and heart problems?
Current technology and its foreseeable paths are amazing. E.g., laser agribots ...
We could decrease the use of pesticides in food production chains... that would be a good position to be in. But we aren’t focusing on it, we aren’t talking about its potential ... instead we devote headlines to conscious AI!!! 😭😳😭
Encoding anthropomorphised, misnomer-littered legal definitions into international law ... is a bad position to be in. And presenting it as an international consensus of democratic, free-world parties makes us all look like hypocrites and puts us in other reputationally damaging positions - in front of the “other camp”.
The purpose and use of sophist arguments is to cover up lies - things that don’t add up. The ROI on technology that requires huge investments and long timelines doesn’t add up. Overselling is nothing new; it resulted in many AI winters. What is new is the barrage of sophist arguments and misinformation about AI... the further down the rabbit hole we go, the bigger and more expensive the sophist arguments.
Fending off sophist arguments is a losing game - tiring and not an efficient use of our resources. It is a positioning problem. We are in a bad position.
I do wonder what sophist argument Kurzweil will come up with in 2029... unless he has already come up with a way to persuade the general public that a narrow-AI chatbot is conscious.
When you start asking questions like “who would sign off on that?” ... a bunch of people come to mind! 🫣
I went to a conference in Cambridge, and not everything is about ChatGPT!
https://link.springer.com/chapter/10.1007/978-3-031-47994-6_45
Interesting Jana. Can you say more about this? I like the direction.
The problem is that we are being led by snakes...