Discussion about this post

Jeffrey Quackenbush:

You could add two more vital types to this typology:

The Poseur puts up a veneer of sophistication on any given topic and has many ready-at-hand opinions on it, but, when pressed to engage with the topic in a deeper way, reveals only a very superficial understanding of it.

The Crank is like the Poseur, but instead of projecting sophistication, he insists on his originality and has spent lots of time thinking about some specific, narrow theory; unfortunately, the Crank lacks the talent and critical thinking skills to actually produce original ideas and connect them to an established body of knowledge. He becomes bitter and dissociated from reality.

Maybe for the Poseur you could include the "species" of raccoons on certain Caribbean islands that are just common raccoons in an unusual niche:

https://en.wikipedia.org/wiki/Island_raccoon

For the Crank, you have to pick an extinct species that never existed in the first place, like the "Hunter Island penguin":

https://www.smithsonianmag.com/smart-news/extinct-penguin-actually-never-existed-first-place-180964556/

Shon Pan:

We would not be the first species to bring about our own extinction, for starters: in fact, given evolution, we have wiped out most of our previous iterations (the same can be said of a lot of other animals).

While quite chill for us, that was not great for, say, Homo erectus. That said, I do not think that creating deepfake clones of human beings as ghosts in the machine (something entirely possible right now) counts as anything like a "descendant of humans" while killing our actual children.

The argument that existential risks exist does not mean that there aren't other risks. That is like saying that bacteria that kill you don't also cause infection, bad smells, and local organ failure first. Of course they do: but it is the death of the entire person that is important to note.

The argument that "humans are just too clever to die" is a "hope" argument. Hope is not a strategy, and the evidence is strongly on the negative side. Nobel Prize winners have emphasized this, but one does not need to appeal to authority, only to evidence.

So yes, if we are to have a future, the answer is to acknowledge and deal with the risks of AI, with evidence and awareness.

I want denial to be right. I want the risks to be silly. But the evidence is what matters, and dealing with reality is what we have to do in order to survive.

I am posting Yoshua Bengio's excellent reply to the mundane "don't worry, it's all fine" arguments:

https://yoshuabengio.org/2024/07/09/reasoning-through-arguments-against-taking-ai-safety-seriously/
