Polanyi. I read and reread Personal Knowledge while preparing to write my book. The key is what he calls "articulations," which are basically marks--things we can write down--like words, computer code, or anything else. He argues that some knowledge can't be articulated.
I just finished writing a huge post on bureaucracy so I'm out of steam for now! Thank you, David, and thanks again to Eric for writing the post. I'm really encouraged that Colligo is attracting talent and good ideas.
This was outstanding, and gets to the heart of things: “what computers can’t do” (to borrow the title of Hubert Dreyfus’s classic). I’m hoping someone will bring to bear on AI the insights of Michael Polanyi, as Dreyfus brought Heidegger to bear on Anglo-analytic philosophy of mind and its bastard child, the computational theory of mind. Not just the “tacit knowledge” stuff for which Polanyi is most famous, but his larger body of work, in particular Personal Knowledge. By way of making such an enterprise palatable, it may be pointed out that Polanyi’s central concern is to explain how *scientific* knowledge is possible. So there’s no humanistic hanky-panky that has to first be apologized for before the AI nerds will take notice. I haven’t been paying much attention to the current debate, but I suppose this could be a very fruitful time for revisiting some quarrels that arose in the confrontation between phenomenology and positivistic theories of knowledge in the 20th century.
I think I should know more about Polanyi's "personal knowledge" -- I do think that a lot of representations, including speech, ultimately presume intentionality that AI does not have. Working on an essay about this. Well, fixin' to work, as they say in Texas.
I appreciate the compliment, Matthew. It's especially meaningful coming from you, someone I've thought along with for years, ever since my old mentor, the philosopher of technology Albert Borgmann, introduced me first to Heidegger and then to your NA article, "Shop Class as Soulcraft."
It's funny you should mention Polanyi. My well-worn copy of Personal Knowledge is sitting in front of me right now. That work and Merleau-Ponty's Phenomenology of Perception, while not the proper topics of my dissertation, are its sources of inspiration; the dissertation can be classified, roughly, as a phenomenology and semiology of mathematics. I ask what mathematical knowledge is and how it's possible, and I argue that the answers must make essential reference to the enminded human body, because mathematical proof (and the understanding it articulates) is possible only in action and perception, and it is essentially addressed, in virtue of its diagrammatic form, to the human being's point-of-view-ish-ness.
Albert Borgmann, wow!!!!! As a young man -- I was clerking at the Federal Circuit, fresh out of law school -- I read Crossing the Postmodern Divide. I drove from Sun Valley, Idaho, to Missoula to have coffee with Borgmann and talk about Kant's Critique of Judgment.
I love it! Thanks for sharing this, David.
Albert's was a quietly great and beautiful soul. I went to his funeral in Missoula in May. It was overwhelming in the best way. Even on the occasion of a death, life was spilling out all over the place.
Count me a Borgmann fan as well. What a cozy little circle we have here! I didn’t know he had died. RIP. He came to Charlottesville once, at the invitation of my Institute, and was a real mensch. Very warm.
Eric, your dissertation sounds fascinating. At the risk of continuing to drop names, have you by any chance come across a book by Jacob Klein, titled “Greek Mathematical Thought and the Origin of Algebra”? I ask because he argues (if I’m remembering correctly) that the big breakthrough was conceiving “number” abstractly, whereas previously (somehow) 3 wasn’t conceivable apart from 3 rocks or whatever. It may bear on your action-embedded approach to mathematical entities. For Klein, this fundamental shift entailed a fundamentally different way of human being-in-the-world. (In this company, I figure it’s OK to use the dashes!)
Matthew, I have read Klein; "Greek Mathematical Thought" is on my bookshelf. It's a wonderful book that's been formative for my thinking, in that he brought home for me the notion that there are deep interconnections between the development of notations, the development of conceptualizations, and the development of human being-in-the-world.
To my mind — and struggling to see this clearly was very fruitful for me — Klein remains somewhat stuck in the picture that understanding happens "in the head" and that notations are merely useful scaffoldings for the real action, which gets completed in the head.
Inspired by Merleau-Ponty, I argue, to the contrary, that mathematical conceptualizations are completed *in and through* our reading and writing of perceptually grasped diagrammatic metaphors for them. Mathematical thinking, in brief, is partly constituted by reading and writing, and such writing can *show us* things that written natural language can only *tell us* about.
So while Klein and I agree that conceptualization depends on notation, we disagree about the nature of that dependence.
Anyway, I've lapsed into academic hair-splitting. My apologies. Let me thank you again for commenting on my piece and for the exchange.
Very good.
Just musing out loud without a lot of rigour:
It seems to me there is more than a shallow analogy between what the 'digital humanities' do with respect to the humanities and what LLMs do with respect to language. And — if only because this was written by a Philosophy PhD candidate — we could build on Uncle Ludwig's observation that (in P.M.S. Hacker's version) "meaning lies hidden in correct use".
The data in LLMs is currently 'used' to create text that is satisficing (for humans), but human understanding of text requires meaning at a level bound up with our 'way of life'. LLMs use the data completely differently, and so, while there is meaning in LLMs, it is that separate meaning (and not ours) that LLMs understand.
If, as a thought experiment, we left the LLMs to their own devices, a society of LLMs — creating and using each other's data to 'train' themselves — would diverge from anything we humans could understand, because the anchoring to what counts as 'correct' by human lights (i.e. that 'judgement'), which is currently supplied by the training data, would be lost. Because such a society of LLMs would have nothing to anchor it, I think it would dissolve into gibberish. Hmm, there is a story in that...
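To make the thought experiment a bit more concrete, here is a toy sketch in Python. To be clear, nothing in it is a real LLM or anyone's actual method: a character-level bigram model stands in for the language model, and the corpus, seed, and sizes are invented purely for illustration. The idea is just to retrain the model on its own output for a few 'generations' and watch how far it drifts once the human-anchored text is out of the loop.

```python
# Toy sketch, not a real LLM: a character-level bigram model retrained on its
# own output for several "generations", to illustrate the drift described in
# the thought experiment above. All data and parameters are arbitrary,
# illustrative choices.

import random
from collections import Counter, defaultdict

def train_bigram(text):
    """Count character-to-character transitions in the text."""
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, length, seed_char):
    """Sample a string of the given length from the bigram counts."""
    out = [seed_char]
    for _ in range(length - 1):
        options = counts.get(out[-1])
        if not options:                      # dead end: restart from the seed
            out.append(seed_char)
            continue
        chars, weights = zip(*options.items())
        out.append(random.choices(chars, weights=weights)[0])
    return "".join(out)

def distinct_bigrams(text):
    """Crude diversity measure: how many distinct character pairs survive."""
    return len(set(zip(text, text[1:])))

# "Human-anchored" corpus: ordinary English, repeated so the model has
# something to chew on.
human_text = (
    "the quick brown fox jumps over the lazy dog while the slow grey cat "
    "watches from the fence and the children play in the garden "
) * 20

random.seed(0)
corpus = human_text
for generation in range(6):
    model = train_bigram(corpus)
    # The next round trains only on this round's output: the human anchor
    # is gone after generation zero.
    corpus = generate(model, len(human_text), corpus[0])
    print(f"generation {generation}: distinct bigrams = {distinct_bigrams(corpus)}")
```

In runs of this kind, rare patterns tend to get lost from one generation to the next, since each new model only ever sees the previous model's samples; whether that ends in literal gibberish or merely in impoverished repetition depends on the details, which is exactly where the story would be.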
Philosophers are sometimes underestimated, and there are many prejudices against them. To quote Luke Muehlhauser (2016), who is very open and honest about it:
I’ve been vaguely aware of these two different positions on Dreyfus for the last few years, and before I read Dreyfus (1965) for myself, I suspected the [people who claimed Dreyfus made poor arguments, was badly misinformed about the state of AI, and was rightly ignored by the AI community were] right because I didn’t feel optimistic about the likely value of a critique of AI from a continental philosopher who wrote his dissertation on Heidegger.
But now having read “Alchemy and Artificial Intelligence” for this investigation, I find myself firmly in the latter camp. It seems to me that Dreyfus’ 1965 critiques of 1960s AI approaches were largely correct, for roughly the right reasons, in a way that seems quite impressive in hindsight.
Not all hope is lost for a philosopher...
This might be helpful: in financial transactions, in which nothing (legal entities, money, contracts) can be seen, it is often necessary to draw the transaction, with chalk or a marker, by hand. If you can't draw it, you don't get it.
I, for one, would like to see someone think through the role of AI in medicine. Especially from a political or institutional angle. With "healthcare" tending toward monopolization — that is, tending toward control by a distant, powerful few — what is the place of anonymous, Silicon-Valley-begotten, statistical-inference-making machines?
Very nicely done, Eric! Apart from the economics, which I do get, the digital humanities project seems to me a bizarre exercise in objectification. Let's discover something both objective and unavailable to the minds of our subjects, or ourselves, prior to the running of our algorithm. And that is a humanistic form of knowledge, how?