Discussion about this post

Gerben Wierda

The idea that intelligence has to do with problem-solving (and the imagining and planning that come with it) isn't necessarily wrong, I think. What is wrong is to equate 'problems' with '*logical* (i.e. discrete) puzzles'.

We humans are pretty poor at discrete logic, yet we have for ages equated being smart with being good at logical reasoning. In reality, as Andy Clark has said, we're better at frisbee than at logic. Navigating a difficult, not-yet-known situation (both physically and socially) is the forte of our intelligence. We have taken solving logical puzzles as the measure not because we are good at it, but because we are bad at it and thus find it difficult. This is not that weird: we are bad at it in an absolute sense, but relatively we are the species on this planet that is best at it, and that little bit of skill has made a difference (next to having relatively large brains). It has given us the reliability (and independence from time and space) that comes with discreteness. So, being able to do logic is a real bonus for our intelligence.

The arrival of the perfect logic machine (the Turing machine) was seen as the logical step towards superintelligence. The current neural-net AIs use a different kind of mechanism, namely (analog) 'weights'. These are supposed not to be discrete (they are 'real' numbers), but as long as we approximate them with (discrete, logical) integers on Turing machines, we are fooling ourselves technically in roughly the same way that LLMs fool us by approximating meaning with token-order predictions.

Where in humans logic arises out of analog machinery (as it does in digital processors: a transistor is a very analog thing), with digital computers we try to let analog (and even chaotic) behaviour arise out of massive amounts of discrete logic. That is fundamentally a doomed route; however, if we accept the enormous inefficiency, we can push that envelope and create some useful tools. But AGI on digital technology? No way.

Note that Google has already partly given up on 'floats': their latest Gemini uses a data type called 'int8' instead of float32, float16, or bfloat16. This probably lets them fit many more (but far less precise) parameters into the same memory. In the end, the expressive power of these models is bounded by the total number of bits across all the parameters (not by the parameter count alone). Many researchers assume that only a limited number of bits is needed, but that assumption rests on the regularity of the analog signals, and one can seriously doubt its validity in real biological systems.
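To make the bits-versus-parameters trade-off concrete, here is a minimal sketch of symmetric int8 quantization in Python. It is purely illustrative (the `quantize_int8`/`dequantize` names are my own, and Gemini's actual quantization scheme is not public): the same storage budget holds four int8 parameters in place of one float32 parameter, at the cost of precision on each weight.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float32 weights onto 256 integer levels with one shared scale."""
    scale = np.abs(weights).max() / 127.0  # largest-magnitude weight maps to +/-127
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the integer codes."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=1000).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# Storage shrinks 4x: 1 byte per parameter instead of 4...
print(q.nbytes, w.nbytes)  # 1000 vs 4000 bytes
# ...while each weight is only recovered to within half a quantization step.
print(float(np.abs(w - w_hat).max()), float(scale / 2))
```

The round-trip error is bounded by half the scale step, which is exactly the "far less precise" part of the bargain: the information per parameter drops from 32 bits to 8.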

Nice article, though. Kundera's book was indeed a wonderful read.

Eric Dane Walker

This is fascinating. My understanding of kitsch might be different from yours — I think I'm more influenced by the analyses belonging to a certain tradition of art history and criticism (Greenberg et al.) — so I'm not yet sure I'm on board. Also, it just takes me a while to process. But let me say: this sent my mind in motion, and I sense there's something really interesting here, and I'm looking forward to thinking this through.
