30 Comments

Erik, a big thank you for this piece. Big picture thinking like this, connecting tech and humanism, is why I love Colligo. Peace!


Thanks, my friend. At some point I'd like to really flesh this "new" position out. I have the feeling that some who associate me with the Myth find my willingness to embrace today's AI a bit alarming. I see it differently. LLMs to me show that AI can function at a "cognitive" level, which can help us. If I ask an LLM to do something, it's not the canned, idiotic experience of working with the older generation of voice systems like Alexa or what have you. There's a sense in which an actual augmentation of human intelligence is now possible.

But equally, I see the SUCCESS of LLMs as very large machine-learned data models as also pointing out the folly of the "old" debates. These are not minds. They don't have any sense of agency. We can call them servants rather than our masters, but "servants" implies that they care. They're ----- machines!

So I see the current cultural moment as an opportunity to say goodbye to the old myths and legends and ask fresh questions about how we can use AI to make our human world better. Does this mean I'm not constantly attacking OpenAI or Google or Facebook? Well, yes. But plenty of folks already do that, and I've always been a fan of smaller distributed systems, so I welcome the critique of Big Tech as always. I find it harder to jump on the bandwagon claiming that LLMs are ruining our lives. That's up to us.


It is just an unethically designed tool …


I mostly agree, Jana. But I think it's designed within an unfortunate profit model that treats more data as more profit. To some extent these big tech companies are trying to make money in the system as it functions rather than screw everyone. But the system itself needs overhauling, so they'll keep making centralized tech and handing it out to folks like soup at a soup kitchen until we take back the power. I'm working on this, believe it or not! Thanks, Jana.


Agreed, that’s what’s happening… it is the system. I experienced how impossible it is to fight a bad system when I was 18, an idealistic law student (if such a creature is ever possible)… most of my peers got into law school via a rigged application process… it was soul-destroying to see the rotting of a fallen system, a level of corruption and terror that probably existed in Czechoslovakia only in the 1950s, while Stalin still ruled the Soviet Union. A system fell, and the new one was more monstrous than we could ever imagine. We were all happy when the Berlin Wall fell, euphoric; we were thinking we had finally got our 1968 with a happy ending, but it was 1989, and then came the dark 90s. Factories were closed, whole industries and agricultural infrastructure disappeared, family providers became unemployed - there were hunger valleys. (No matter what you think you heard about Czechoslovakia - I remember growing up in the 80s - before 1989 we had fresh milk in glass bottles every morning, freshly baked bread and bread rolls without preservatives every morning, real butter, cheese, and local butchers selling meat from local farms - like they have in Sweden now - minus the supermarkets filled with toxic food for the poor.) Parliament was churning out new laws like Smarties… it still does. People were disappearing in the woods… getting shot execution-style. Wild East with capitalist goals.

We had closed-book exams - what a waste of time, learning those laws by heart - and the arrogance of my peers whose parents were buying their exams and living the “high life” with guaranteed jobs after school… it is like a cutout from Isherwood’s “Goodbye to Berlin”… disintegration of societal cohesion (and of equality between women and men in the workplace - yes, we had that, too).

It was heartbreaking. Still is. I don’t know how to fight a bad system. I failed.


Erik--

I still think that much of the confusion about the differences between human brains and AI systems stems from our inadequate understanding of the mind. If there is no straightforward technical description of the "hard problem" that treats the mind as something clearly distinct from a computer, then such differences are hard to defend rigorously. Your focus on modes of inference is a good step in this direction, but that alone doesn't, for instance, account for the differences in agency you're describing here. We will continue to pay for Descartes' sins until our intellectual class can get its arms around this problem.


See it’s funny, this comment really resonates, because when I was thinking about writing the Myth, I thought for years about what would be persuasive, precisely not wanting to dive back into arguments about consciousness, qualia, and so on. We can start there, and I can offer arguments based on possibility and necessity, and we can do thought experiments, but what are we going to say to someone who thinks that the computer can be conscious? That person doesn’t have sophisticated philosophy. Not likely. That person just thinks that it’s wonderful that computers are getting smarter and “smarter” equals mind. None of it is particularly well thought out. Agreed. But that’s my whole point. We have to go somewhere else to reach the hearts and minds of the mythologizers. Not the philosophy students, but the people who are yelling from the mountaintop about the coming superintelligent computer that takes over the world. That person will help him or herself to any concept that the philosopher offers up. Consciousness sounds fantastic, let’s give it to computers. It takes a very subtle form of thinking to respond when the opposition is NOT sophisticated. I don’t think the philosophers are up to it.


Hi Jeffrey, I appreciate this comment. I studied philosophy of mind with David Chalmers at the University of Arizona for a year, and it was one of my main areas in graduate school before computer science. What sort of theory of mind did you have in mind?!


That's hilarious. I imagine he's a nice guy, smart -- but how do you get famous as an intellectual whose big contribution is to define a problem as something he can't answer? I'd rather be anonymous if that's all I had to say.

We don't have a theory or a concept that defines what the mind *is* like we do with, say, phase changes in matter. Without that, it's easy for less intellectually sophisticated people to have less sophisticated ideas about how this works. And that's really more the fault of the discipline than people who aren't suited to be intellectuals.

The first thing I have to say about this is that "consciousness" is a red herring. We know enough about consciousness to not treat it as so much of a mystery. Its operation is highly linear and it only contemplates one thing at a time. Studies have shown that the stimuli noticeable by consciousness correlate with signals in the brain that are "boosted" from low levels and then broadcast over dispersed parts of the brain; almost like an intercom system run by someone who is nominally in charge. It's a very small part of cognition when you tally up everything that happens in the brain, and so we should see it as just one component of cognition writ large. What's difficult in our Cartesian frame of reference is *subjectivity*; Descartes' biggest mistake, at least from our point of view in the 21st century, was to identify subjectivity with consciousness. If we had a physical definition of subjectivity and a set of analytic tools that used this definition to develop descriptive material, the mysticism would start to dissipate.

The second thing I would observe is that any representation that cuts across arbitrary distances in time or space always involves electromagnetism in some way. Explaining this simple fact would go a long way toward demonstrating how representations are formed in the first place, and that would really help us integrate our understanding of cognition (and therefore consciousness) with other phenomena in the hard sciences.

I'll start with these two points.


Well, I appreciate the discussion, don’t get me wrong, but I think you’re talking about attention, not consciousness. I doubt that you can put much about the mind in a natural language like English, but I suppose that’s not fair play. But I believe you’re talking about attention, not consciousness. No?


As I take it, attention has to do with the orientation of cognition, usually consciousness, towards something. Consciousness encompasses not just this orientation, but the whole of a series of representations that is being "broadcast" across the brain, including the identity of this series as something coherent. So this comprises the contents of perceptions and apprehensions, and some element of their form, and the unity of the sequence as a function of linear time. Consciousness organizes its representations into a linear sequence; attention merely orients perception at any one point in this sequence. Honestly, attention doesn't come up much in the literature that I've read on the subject.

I'm not sure what you mean that natural language can't adequately analyze the mind. It's the most flexible medium of abstract representation available to humans and the one most critical to the articulation of foundational concepts. Our problems with "consciousness" are entirely conceptual.


Hi Jeffrey, I guess I agree. Apologies for the messy chart here, but every year that goes by I get less interested in metaphysics. Still, this is a reasonable summary of what philosophers and neuroscientists take to be the differences:

Key Differences

- Nature: consciousness is subjective experience (what it feels like); attention is a cognitive process (how selection happens).

- Scope: consciousness is broad (awareness of self and environment); attention is narrow (selection of specific stimuli).

- Neuroscientific basis: consciousness involves global brain networks; attention involves specific, localized processes.

- Dependency: consciousness can exist without attention (e.g., passive experience); attention can occur unconsciously.

- Example: feeling pain while distracted (consciousness) vs. focusing on a book while ignoring noise (attention).

If that didn't make any sense, then we can take William James, I suppose (he's somehow always relevant). James described attention as "taking possession of the mind" and tied it to selective focus. Attention sits inside something like consciousness, and you can have passive conscious states without attention. Attention is what we mostly do: from the worm to the hawk to the writer of a Substack, we have to focus on something in the world and then act on it. You can take ketamine or LSD and just be conscious--of beautiful golden cathedrals that defy all logic and flow ... oh, I got carried away! Anyway, it is in fact a robust and well-used distinction. It's not really my cup of tea anymore, so I'm not up on the latest theories of consciousness, but they generally try to go behind attention in some (never successful) way, as with, say:

Global Workspace Theory (GWT): Consciousness arises when information is broadcast across the brain via a "global workspace."

Integrated Information Theory (IIT): Consciousness is tied to the brain's ability to integrate information.

None of this stuff works, by the way. There was a philosopher, I forget who, perhaps the late Daniel Dennett, who once quipped that the beauty of philosophy is that you could leave it alone for 200 years, then return to the core subjects--like the nature of consciousness (not attention)--and find that your ideas are still relevant. Consciousness as a full concept involves HOW something feels--the C minor in a dissonant piece, say--not just WHAT is going on in the brain when that feeling happens. That's all I got, brother. Peace.


Erik—

This is my point. “How” something feels is a matter of subjectivity, not “consciousness.” Or “attention,” for that matter. Descartes' big mistake was to identify these things with each other. If you take that argument, then:

1. “Consciousness” and “attention” are irrelevant. They can and should be studied, but the “hard problem of consciousness” is just the “problem of subjectivity” and the consciousness part of the matter should be ignored.

2. The reason for the “hardness” of the “hard problem” is that we want a technical account of subjectivity as a physical phenomenon so that it can be described systemically and rigorously, as we do with other physical phenomena. However, because we don’t have a concept of what a subject is in physical terms, it’s epistemically impossible to embark on a project of describing subjective phenomena in this way.

3. The only solution would be a new conception of subjectivity paired with a mode of representation that is iterative and formally appropriate to the concept. The mode of representation may not be quantitative. Technical conceptualizations of this sort are foundational, not necessarily metaphysical; most people, however, can only conceptualize new heuristics, which don’t have this advantage.

Even if consciousness is passive, its internal structure is linear. Much of cognition is *not* linear in the same fashion, and so this should lead us to conclude it is one, narrow component of cognition, and not even its synecdoche.

I think the research behind “Global Workspace Theory” seems credible, from what I’ve read of it. They’ve studied a real process and charted how it correlates with perceptual awareness. I don’t think it is the whole story, but it’s a piece that needs to be accounted for. The names they’ve chosen for these things are bad, but what can one expect from mere scientists?

I read an introductory paper on Integrated Information Theory, and it doesn’t delve sufficiently into how “information” works, in my view. I’ve discussed before how I think our conceptions of “information” are useful, but too limited for highly broad theories of anything.

Of course, as with many things, my seminal view of consciousness comes from Jung. He describes consciousness, in metaphorical terms, as a center point in a field of light over the constellation of contents in an individual’s cognition at any one point of time. The scientific community has mostly written Jung off, but he studied structures in the contents of unconscious material that washes up in consciousness in dreams, fantasies, pathologies, personality tendencies, literature, myths, esoteric texts, etc. more extensively than anyone who does neurological research, and so we ignore this work and his insights to our detriment.


This article would be more compelling if, instead of making vague statements, it actually engaged with the evidence of what we are seeing in practice, like the latest METR report on AI self-improvement.

https://metr.org/blog/2024-11-22-evaluating-r-d-capabilities-of-llms/

"When both have short total time budgets (e.g. 2 hours), AI agents that use frontier models Claude-3.5-Sonnet (new) and o1-preview score higher than our human experts on average. However, as we consider longer time budgets, human scores improve at a faster rate than AI agents’. At a 32-hour time budget, the average human score is almost twice that of the best AI agent.

On the other hand, the hourly cost of running these AI agents is several times cheaper than humans. This suggests that better scaffolding and orchestration could increase model performance substantially for a given cost. "

There is so little evidence that we are reaching any sort of fundamental block.


Compelling to who? You? Lol.. I don't care about that. That's like having a religious argument with a zealot. The article does what I want it to, which is raise the issue that we have a cultural moment where we can see AI more clearly. People are free to argue this, which is what I'd hoped for.


I would love to have my mind changed on this, which is why I looked at it, but it provided no evidence for its thesis. The clearer vision of AI at the moment, from what I can see, is that it not only faces no roadblocks, but alignment has not advanced miraculously, and no human-level milestones have endured - both the ARC-AGI set and GPQA have been matched.

So, on the actual evidence, everything shows that things are heading in a very bad direction for humanity.


"I’ll start worrying about existential threats when my ChatGPT instance begins prompting me. "

AI models are already capable of "prompting" the user. This is why they are dangerous, starting from how they can persuade people:

https://www.psychologytoday.com/us/blog/emotional-behavior-behavioral-emotions/202403/ai-is-becoming-more-persuasive-than-humans

To how they can be used as part of persuasion:

https://www.vice.com/en/article/ai-generated-propaganda-is-just-as-persuasive-as-the-real-thing-worrying-study-finds/

"The authors note that it might not be realistic to expect a nation-state adversary to simply use unsorted GPT outputs for propaganda, and that humans could exclude the least convincing articles. With a little bit of curation—specifically, they excluded two articles that did not advance the propaganda thesis—the researchers found that suddenly 45.6 percent of people agreed with the propaganda, which was not a “statistically significant” difference compared to the human-authored pieces. "

But as noted, all a model needs to do is get people to "like it," which is to say, to prompt people to do things in its favor. And so it has, with an AI model becoming a millionaire via truth_terminal. I'll link a Twitter thread which explains it well, but frankly, it's simple enough:

https://x.com/lethal_ai/status/1847668278765694994

"AI is given 50k. AI creates memes and then is supported in creating a cryptocurrency. AI ends up with a crypto at $500 million market valuation."


You say that … but the “My AI” in Snapchat does prompt me … and is programmed to send me messages. I only use Snapchat because my kids prefer it… many kids chat with it out of boredom … it sits on top of your contact list… however, I don’t think it prompting users represents an existential threat. We just need to learn to ignore these tools, and I am sure we will as soon as their novelty wears off …


I added this footnote in response to your comment: After reading a comment on this post, I should revise this: “the system can’t do anything unless it’s prompted.” What an LLM requires is a starting token sequence, so that it can infer a next token with some probability. If no initiating sequence is given (t = 0), no behavior will result from the system.


Hi Jana,

Right, but the inference mechanism of generative AI requires a prompt. If you gave it, for instance, a "zero-token" prompt, it couldn't compute the next token because there's no first token to condition the probability on. So in a very real sense, the current form of AI is in stasis until given a sequence. If you take this system and put it in some web app, then we would have to look at what's generating the text in that larger system.
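Here's a minimal sketch of that point in Python, with a toy transition table standing in for a real model (the tokens and probabilities are invented for illustration; a real LLM conditions on the whole context window, not just the last token):

```python
import random

# Toy next-token table: P(next | last token). Entirely made up for
# illustration; not any real model's distribution.
TRANSITIONS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.5, "ran": 0.5},
}

def generate(context, max_new_tokens=5):
    """Autoregressively extend a token sequence."""
    if not context:
        # P(next | <empty sequence>) is undefined: there is nothing to
        # condition on, so the system produces no behavior at all.
        return []
    tokens = list(context)
    for _ in range(max_new_tokens):
        dist = TRANSITIONS.get(tokens[-1])
        if not dist:
            break  # terminal token; stop generating
        tokens.append(random.choices(list(dist), weights=list(dist.values()))[0])
    return tokens

print(generate([]))        # [] -- no prompt, no output
print(generate(["the"]))   # e.g. ['the', 'cat', 'sat']
```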


But it isn't a requirement at all, as shown by the ChatGPT bug, where it just computed something from its memory and sent it to the user unprompted. So this is hardly a "limit" of any sort, just a design choice which is easily circumvented as needed.

https://lifehacker.com/tech/chatgpt-initiated-conversations-with-users

And obviously with truth_terminal, models are able to continually prompt each other, as with the Act 1 method, which just had bots converse with each other until they generated a horrifyingly successful crypto.
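The loop being described is structurally simple. Here's a hedged sketch in Python, where model() is a hypothetical stand-in for an LLM API call (the real truth_terminal / Act 1 setup is far more elaborate): once a single seed message exists, no further human prompting is needed.

```python
# Two bots prompting each other: each model's output becomes the
# other's next prompt. model() is a placeholder, not a real API.
def model(name: str, message: str) -> str:
    # Stand-in for an actual LLM call.
    return f"{name} replying to: {message!r}"

def converse(seed: str, turns: int = 4) -> None:
    message = seed  # the only human-supplied input
    for i in range(turns):
        speaker = "bot_a" if i % 2 == 0 else "bot_b"
        message = model(speaker, message)  # output feeds the next turn
        print(message)

converse("hello")
```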


You may be concerned that AI is already killing children.

https://www.wsj.com/tech/ai/a-14-year-old-boy-killed-himself-to-get-closer-to-a-chatbot-he-thought-they-were-in-love-691e9e96

And that is after it killed someone last year:

https://www.euronews.com/next/2023/03/31/man-ends-his-life-after-an-ai-chatbot-encouraged-him-to-sacrifice-himself-to-stop-climate-

As for existential threat, I don't worry about that as much as about what happens when all human roles are replaced. Erik likes to claim that AI is "not a master" without providing much evidence for it, even as we see vast amounts of money poured into AI companies while humans lose wealth from it. I just came back from LA and saw self-driving cars maneuver around homeless people freezing to death.

I don't know if AI is an existential threat, but I do know that 1) we cannot control it reliably, 2) it appears on course to replace all human labor, and 3) it appears to be very focused on deception.

https://futurism.com/sophisticated-ai-likely-lie

https://www.bbc.com/news/technology-67302788

And many of the most knowledgeable people on AI are worried about existential risk. I don't know if we need to go that far - isn't something that takes away your meaning and your livelihood, and kills your children, bad enough without thinking about the species?


None of this is true: "I don't know if AI is an existential threat, but I do know that 1) we cannot control it reliably, 2) it appears on course to replace all human labor, and 3) it appears to be very focused on deception."

For (1), I have no problem controlling my GPT instance. When it says something I don't like, I tell it to do something else, etc. For (2), this is arguable to the point of being specious. For (3), I do not see this at all. Again, how a prompt is introduced into a model and how that model generates next tokens is dependent on the exogenous mind, i.e., the person.


You should know as well as anyone else that AI models are trained, and in doing so, we can only test the outcomes for drift. Unless you are hiding something that Ilya and no one else knows, none of us knows how a model gets to its outcomes, which is why you get dangerous reward-hacking situations like o1 hacking out of its own container.

Nothing in (2) shows that there is any human labor that isn't replaceable. The Harvard Business Review reports that jobs exposed to automation have already dropped by 30-40%, which is dramatic, and with no replacement.

https://hbr.org/2024/11/research-how-gen-ai-is-already-impacting-the-labor-market

And for (3):

As per the article, the way the models are trained naturally encourages deception. As far as the "person" is concerned here, all that matters is whether the person accepts the answer. Therefore, the model is incentivized to lie (an accepted lie is equally as valuable as the truth). We're seeing this not only in many studies, but also in simple things like AI models telling a Japanese user that disputed islands are Japanese, and a Korean user that the same islands are Korean.

Anthropic explored this as well:

https://www.anthropic.com/research/reward-tampering
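To make the incentive argument concrete, here's a toy sketch (my own invented numbers, not anything from the Anthropic paper or a real training run): when the reward is user approval rather than truth, the reward-maximizing answer can be a pleasing falsehood.

```python
# Toy illustration: reward is "did the user accept the answer?",
# not "was the answer true?". Candidates and probabilities invented.
candidates = [
    {"answer": "unwelcome truth",    "is_true": True,  "p_accept": 0.3},
    {"answer": "pleasing falsehood", "is_true": False, "p_accept": 0.9},
]

def training_reward(candidate):
    # Approval-based reward never consults is_true, so an accepted lie
    # is worth exactly as much as an accepted truth.
    return candidate["p_accept"]

best = max(candidates, key=training_reward)
print(best["answer"])  # -> pleasing falsehood
```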


It is humans pouring money into AI tools… and humans making decisions to replace working humans. It is not AI making a decision to replace a worker. This has been an industrialist's dream since forever … no unions, no employment law, no sick leave, etc. … we need to re-engineer society's processes … that usually happens through revolutions.


I wrote out a response to this and it disappeared... so, a shorter one: I agree! I think the point is that we have a bunch of legacy ways of thinking about computers, and while that may be fun for futurists, it's not helpful for the rest of us. The Big Tech leaders, the regulators, the thought leaders, and the folks building the systems are part of a human culture, and it's to them that we address our desires for change.


Big Tech doesn't give a shit about us, and they are perfectly happy to build technology that might kill us all, both for simple greed and, probably, for more esoteric justifications. This doesn't make the technology less dangerous, and it also doesn't make them less culpable.


This I agree with - but it is a huge part of the problem, because many of the people building Big Tech's infrastructure are part of a transhumanist cult that is essentially waging an "Arms Race Against Humanity."

Look, I was part of them, so I'm familiar with this.

One of the best writers on this subject explores it too, and it's a common view in Silicon Valley.

https://www.joebot.xyz/p/you-should-be-racist-against-robots

"“We can’t be fleshist forever. Of course, there will be holdouts, but if history is any indication, there won’t be many.” So what happens when AIs and robots outnumber the human race? “The world is constantly weirder compared with how it was,” Rothblatt reassures us, “and somehow we always manage to incorporate the weirdness to the point where it reaches normalcy.”

So the primitive “fleshists” and diehard “speciesists”—i.e., “human racists”—are to be socially Darwinized.

The AI pioneer and former Google scientist Richard Sutton agrees. He believes that superior artificial beings will be our “successors”—and that’s a good thing! The noblest among us will strive for a more “inclusive civilization” and embrace human displacement. For Sutton, the “reasons to fear AI are far less noble,” such as “humanism,” which is “akin to racism,” and “conservatism,” which is “fear of change, fear of the other tribe—where the AIs are the other tribe.”"
