15 Comments
May 23·edited May 24Liked by Erik J Larson

The philosopher Iris Murdoch once defined the human being as the animal that makes images of itself and then comes to resemble those images.

Ever since Turing was fatefully misunderstood, intellectuals have mythologized the ability to compute as the height of human intelligence, and so humanity has in too many respects become more and more computer-like.

If you've talked with a customer service representative following a bureaucratically begotten script on the other end of a customer service phone line, you've experienced a human being occupying the same uncanny valley as LLMs. (This isn't the representative's fault, of course.)

author

Indeed, Eric. It's so hard to combat.... I'm ever searching for ways to get the point across, and this helps. Thanks.

May 23Liked by Erik J Larson

Thank you, Erik, for explaining why I've been feeling so "off" after interactions with the tool! Until now I thought "uncanny valley" referred only to visuals.

There are so many aspects to unpack, it's almost overwhelming: e.g., the model using words like "I", "understand", "conscious effort", etc.

Another super confusing aspect: any time the model says it's doing something (e.g., chain of thought), that in no way means it's actually doing anything of the kind! It often seems as if it really is, but in fact it works simply by conditioning on the whole session's context and calculating the next token, over and over again.

This is extremely tricky for a lot of humans to grasp, because it’s totally unlike what we do when we use chain of thought.
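The mechanism described above can be sketched in a few lines. This is a toy illustration, not any real model: `next_token_probs` is a hypothetical stand-in for a trained network's forward pass, and the point is only that whatever the emitted text *says* ("let me think step by step..."), the loop underneath is the same: score the context, emit one token, append, repeat.

```python
# Toy sketch of autoregressive generation. The "model" here is a
# hard-coded lookup table standing in for a real forward pass.
def next_token_probs(context):
    table = {
        (): {"Let": 1.0},
        ("Let",): {"me": 1.0},
        ("Let", "me"): {"think": 1.0},
        ("Let", "me", "think"): {"...": 1.0},
    }
    return table.get(tuple(context), {"...": 1.0})

def generate(context, steps):
    for _ in range(steps):
        probs = next_token_probs(context)
        # Greedy decoding: append the most probable next token.
        context = context + [max(probs, key=probs.get)]
    return context

print(generate([], 4))  # ['Let', 'me', 'think', '...']
```

Even when the generated tokens spell out a "reasoning process," nothing in the loop changes; the narration of thinking and the mechanism of generation are entirely separate things.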

author

Totally agree, Ondřej. LLMs in my view are a kind of "alien intelligence" made possible by pushing data through transformer architectures (roughly tokenized as words, but not quite co-extensive with words in a natural language). The problem comes in the "fall off" rate of the intelligence, which is somewhat ill-defined because researchers haven't yet had to think about it. But--here's the deal--LLMs get MOST of what you may ask of them correct. The problem is that what they get wrong, they assert categorically and with a kind of confidence we usually ascribe to cult leaders or dictators, and their mistakes are aptly labeled as "hallucinations" because too often, they simply make no sense. HOW CAN THE AI COMMUNITY build COGNITIVE SYSTEMS ON TOP OF THAT FOUNDATION?!!!!!! (Apologies for the all caps, I want folks to get this point!). So I think we're in a dead zone that SEEMS like it's really fecund and promising. Very strange times, my friend. Thanks for your comments!
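The point that tokens are "roughly words, but not quite co-extensive with words" can be made concrete with a sketch. Real tokenizers (BPE and friends) learn their vocabularies from data; the vocabulary below is made up purely for illustration, using greedy longest-match splitting.

```python
# Illustrative greedy longest-match subword tokenizer.
# VOCAB is invented for this example -- real LLM vocabularies
# are learned from a corpus and contain tens of thousands of entries.
VOCAB = {"un", "believ", "able", "a", "b", "e", "i", "l", "n", "u", "v"}

def tokenize(word):
    tokens, i = [], 0
    while i < len(word):
        # Take the longest vocabulary entry matching at position i.
        for j in range(len(word), i, -1):
            if word[i:j] in VOCAB:
                tokens.append(word[i:j])
                i = j
                break
        else:
            raise ValueError(f"no token matches at position {i}")
    return tokens

print(tokenize("unbelievable"))  # ['un', 'believ', 'able']
```

One English word becomes three tokens, and none of them is itself a word; this gap between token and word is part of why intuitions about "what the model read" so often mislead.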


My experience with AI images is that they can produce detailed photographic quality images... but utterly and repeatedly fail to understand how many fingers a human being should have, or how many strings go on a guitar, or the layout of white/black keys on a piano, etc. The AI is a lot like a tech entrepreneur: much better at making things look impressive than getting the details correct.

author

I agree with you, but I suppose "tech entrepreneur" is too broad for my tastes--and, I would suggest, for yours as well, unless you don't use a computer, smartphone, QWERTY keyboard, electricity, vehicles, etc. But taking your point for what I think you meant: there are a lot of shallower "posers" in the entrepreneur space now, partly because new products have such a short life span. Either a product gets bought by a big tech company because it's useful, or it doesn't find a place in the market because it's not. Couldn't agree more about AI images--the details are notoriously ridiculous (and often show violations of copyright).

founding
May 23Liked by Erik J Larson

The humanoid robots are as creepy as anything one could see in a wax museum. Wax museums were a thing in what century?

I had to laugh at the ChatGPT conversation. When I am challenged by someone to explain my theses, I almost always resort to answering in bullet points. I now see why that can drive people nuts.


Self-reflection is as refreshing as a cold shower, isn't it? Sorry, your comment... just made me chuckle. I like these self-awareness moments myself 😊


erik - you continue to find and expose flaws. regardless, do you think big tech will just patch and continue, rather than seek ways to satisfy your deeper questions, or try something new -- innovation?

thank you for keeping on pushing.

new question: do you work with quantum computing and ai?

author

Hi Peter,

Thanks much. I don't do much with Quantum Computing. It's a bit like what we used to call "cold fusion," I'm afraid. Anything new on that front, though? If so, let me know and I'll update what I think!

author
May 23·edited May 23Author

Well, you're gonna get an earful, Peter, I hope you're ready! Thinking about "AI" and quantum applications and all the rest: I've worked in several subfields of AI, mostly what we used to call "knowledge representation and reasoning" (KR&R), and later information extraction using statistical or machine learning approaches. In my last research scientist role I worked with foundation models (BERT), their precursors, and later LLMs. Here's the "earful": computers were forged in the crucible of war, for the numerical calculation of artillery tables, and were later used for census taking and other actuarial pursuits. That they would somehow become "smart" or "intelligent" is more than a little mysterious given what they are--they are, literally, machines for adding binary digits. It seems that ultimately the quest for intelligence, properly construed, will fail. Computers are logic machines, and serve very technological purposes. I think all the progress, and all the lack of it, will always just add up to that. I do have to say, it is immensely interesting to watch the culture grapple with larger and larger data machines, and what they can do (but for whom?). I hope this makes sense!


That makes perfect sense--the adding-up part. The market of greatness: on the supply side, people up-selling these machines as greater than us (or as approaching greater-than-us any time now); on the demand side, people buying into the hype/myth. This market has been around for thousands of years with different commodities (selling God's forgiveness, selling God's blessings, etc.). It is a negative societal (collective) occurrence hindering the proper and societally beneficial use of a useful technology: digital data query and retrieval tools.

As for the immensely interesting part--I think it is a common, recurring pattern of market behavior. Someone comes to a marketplace with pears amongst all the apples and oranges and starts shouting, "I have the fruit that will guarantee you success (with all its material perks), and it is projected to make you live far beyond your most daring wishes (let's say 110)." People flock around the stand and start fighting over the pears (the digital divide deepens; panic and hysteria take hold; FOMO, fear of missing out). Some "true believers" proclaim they can feel the pronounced miraculous effects straight away; some "realists" report the true experience--the taste of a pear; some cautious optimists express positive feedback without voicing the plain observation that it is just a pear. The hype cycle dies out. Not much changes, and the market moves on to "bananas".

What I find fascinating is the answer to the question of why we humans (the larger group) keep searching for something or someone greater than us, and along the way keep falling prey to talented sales-humans (the smaller group), who know about this and prepare temporarily successful, period-adjusted sales pitches. LLMs should be talked about like dishwashers or washing machines, as far as I am concerned--those are far more exciting and useful. Too much ado about a digital tool, because some of us are trying hard to upsell it as humankind's route to "greatness", and for the time being are also succeeding at it.


Thanks Erik, for being like Euclid 😊


"There is no royal road to geometry," as Euclid said. If more people were like Euclid, and fewer like Ptolemy I, the history of humankind could have been different. It is a game of personal intellectual integrity: those who know shouldn't fool those who don't, and those who don't know should not pretend they do.


A very interesting and truly original reflection from many perspectives, which touches on different aspects of the "strangeness" we often perceive when we interact with machines. It is interesting to note that later scholars have pointed out that we "slide" into the Uncanny Valley when computers become anthropomorphized enough to instill certain sensations. This point, which might seem obvious, is actually very important precisely because it underlies the points you highlighted, and I think that finding which new stimuli can heighten this sense of weirdness (like the cases highlighted in this issue) is also very important from a research perspective. Furthermore, on this topic, recent authors have proposed an absolutely fascinating rethinking of the Uncanny Valley. Here is the link to their study: https://www.sciencedirect.com/science/article/pii/S0747563224001225
