You could add two more vital types to this typology:
The Poseur puts up a veneer of sophistication on any given topic and has many ready-at-hand opinions on it, but, when pressed to engage in a deeper way on the topic, has only a very superficial understanding of it.
The Crank is like the Poseur, but instead of projecting sophistication, he insists on his originality and has spent lots of time thinking about some specific, narrow theory; unfortunately, the Crank lacks the talent and the critical thinking skills to actually have original ideas and connect them to anything in an established body of knowledge. He becomes bitter and dissociated from reality.
Maybe for the Poseur you could include the "species" of raccoons on certain Caribbean islands that are just common raccoons in an unusual niche:
https://en.wikipedia.org/wiki/Island_raccoon
For the Crank, you have to pick an extinct species that never existed in the first place, like the "Hunter Island penguin":
https://www.smithsonianmag.com/smart-news/extinct-penguin-actually-never-existed-first-place-180964556/
One more, in honor of Tim Walz:
The Dipshit engages in embarrassing and inconsiderate behavior because they have trouble processing social cues and are generally careless in social situations. What distinguishes the Dipshit from the jerk, the asshole, etc. is their obliviousness.
For extinctions, the paired extinction should be a species that died out not from a concerted effort (to hunt it, to remove a pest, etc.), but from neglect. Here's a candidate for this -- Labrador Duck:
https://projectupland.com/waterfowl-hunting-2/labrador-duck/
Awesome! I'm picking up what you're laying down.... Yeah, you know, I sometimes wonder, with all our myth-making, fears of heaven and hell, killer robots, alien invasions, and asteroids, if there's a 'view from nowhere,' as Nagel famously put it. Like a cosmic referee's chair where we can finally decide what really deserves our panic, like Taylor Swift or $6 coffee, I suppose, or whether we should be building bunkers for sentient Roombas. Maybe we'll even get clarity on which is more apocalyptic -- alien invasions or $25 avocado toast. It seems one thing about our species is crystal clear: we are REALLY bad at predicting the future, but unlike the chipmunk or what have you, we seem perennially obsessed with trying, facts and track record be damned.
On the contrary. Humans are amazing at predicting the future using self-conscious representations. No other species can even begin to approach what humans can do. It's a really hard thing to do! In fact, we've been so successful that many now take for granted our achievements and the comfortable life they afford, and that's where the trouble starts -- when a large mass of the population is unwilling to take responsibility for themselves. Trump is the ultimate synecdoche for this attitude. Be the worst version of yourself and you'll face no consequences.
Fantastic. As the youth say, Jeffrey, lol. I'm going to include them! Funny how these derogatory names actually carry so much meaning.
Yes, feel free to add these to your typology.
We would not be the first species to bring about our own extinction, for starters: in fact, given evolution, we have wiped out most of our previous iterations (the same can be said of a lot of other animals).
While quite chill for us, that was not great for, say, Homo erectus. That said, I do not think that creating deepfake clones of human beings as ghosts in the machine (something entirely possible right now) counts as anything like a "descendant of humans" while killing our actual children.
Arguing that existential risk exists does not mean that there aren't other risks. That is like saying that a bacterium that kills you doesn't also cause infection, bad smell, and local organ failure first. Of course it does; but it is the death of the entire person that is important to note.
The argument that "humans are just too clever to die" is a "hope" argument. Hope is not a strategy and the evidence is strongly on the negative side. Noble prize winners at this time have emphasized on it - but one does not need to appeal to authority, only evidence.
So yes, if we are to have a future, the answer is to acknowledge and deal with the risks of AI, with evidence and awareness.
I want denial to be right. I want the risks to be silly. But the evidence is what matters, and dealing with reality is what we have to do in order to survive.
I am posting Yoshua Bengio's excellent reply to the mundane "don't worry, it's all fine" arguments:
https://yoshuabengio.org/2024/07/09/reasoning-through-arguments-against-taking-ai-safety-seriously/
As for the idea that superbugs are more lethal than AI's replacement of humanity -- that is the only one really worth addressing (Taylor Swift is not a serious input).
First, custom superbugs are likely getting an uplift from AI, and this is indeed the main present concern of the AISI.
Therefore, as a risk, it should be addressed alongside other AI risks. But additionally, even in a worst-case scenario, most likely at least some isolated humans would survive (unlike AI, superbugs will never develop a full self-replication and upgrade industry, nor would superbugs be enabled by humans as part of an economic system).
In the very worst outcome, at least other biological life with values very similar to ours -- family, love, and so on -- would survive. The same could not be said of us.
But basically, nothing else could cause the irreversible extinction of all life the way AI could. Not asteroids, and certainly not cyberattacks.
And I think it is hard not to see the fervor and evil of the AI leaders who push this while acknowledging the harms it might cause. Insofar as they are a combination of greedy douchebags, uncaring jerks, and people apocalyptic in their yearning against the human condition, they are far more a threat to the human species than any "doomer."
I mean, they explicitly speak to a desire to end humanity.
https://jacobin.com/2024/01/can-humanity-survive-ai