What an interesting post! It set my subsidiarist sensibilities fluttering.
If I'm understanding you, you're arguing for an arrangement whereby the algorithms that determine what news or social media posts I see on my device are trained on what I and my fellow community members let them be trained on. And what we deem training-worthy — what we let the algorithms train on, what data we choose to share — will be determined by criteria shaped more by our local interests than, say, by what is in the interest of a monstrous profit-seeking entity. And the distinction is not simply between local and global: there are intermediary spheres governed by algorithms trained less and less locally.
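To be sure I'm picturing the same arrangement, here is a minimal sketch in Python of how I read it. Every name and number here is my own illustrative assumption, nothing from your post: members share only the updates they consent to share, each sphere averages what it receives, and the spheres blend with the local one weighted most heavily.

```python
# A minimal sketch (all names and weights are my own illustrative
# assumptions, not the post's design) of "spheres" of training:
# each sphere averages only the updates its members consented to share,
# and the local sphere dominates the blend.
import numpy as np

def sphere_update(consented_updates):
    """Average the model updates that members chose to share."""
    return np.mean(consented_updates, axis=0)

def blended_update(local, intermediary, global_, weights=(0.6, 0.3, 0.1)):
    """Blend sphere updates, weighted toward local interests.

    The weights are illustrative; the point is only that the local
    sphere dominates and the global sphere contributes least.
    """
    w_loc, w_int, w_glo = weights
    return w_loc * local + w_int * intermediary + w_glo * global_

# Toy usage: three local members, one intermediary body, one global one.
local = sphere_update([np.array([1.0, 0.0]),
                       np.array([0.8, 0.2]),
                       np.array([0.9, 0.1])])
intermediary = sphere_update([np.array([0.2, 0.8])])
global_ = sphere_update([np.array([0.0, 1.0])])
print(blended_update(local, intermediary, global_))
```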
Have I understood you? Assuming so, let me register a question.
One of Hayek's concerns about centralized control over distribution (of any good) was that such control was liable to unaccountable capture by private interests. Hayek wasn't simply concerned with efficient signaling and information flow. He was concerned that public and common interests would be subverted.
My question is this: if local interests are shaping the informational landscape, do you think there's any less risk of those local interests being private interests as opposed to public and common interests?
Perhaps I should sketch the background animating my worry.
One conception of individual freedom has it that you're free as long as you're not interfered with, and your being interfered with is justifiable only when you're interfering with someone else. This is freedom-as-non-interference, a cornerstone of the classical liberal understanding of governmentality.
Another conception has it that you're free as long as you're not subject to arbitrary, unchecked power. This is freedom-as-non-domination, a cornerstone of the classical republican understanding of governmentality.
For example, suppose you're standing on some private property that is not your own, but the owner gave you permission to be there, and the owner's not interfering with you. According to the first conception, you're free. According to the second, you're not. For you're subject to arbitrary power: the owner can revoke his permission at any time, for any reason, and you have no real recourse in the matter.
This is basically our situation vis-à-vis tech companies. As things look now, the online space serving as our de facto public square is in fact not public — it is not a public thing, not a res publica. So there is an important part of civil life taking place only because a few oligarchs let it. A user might be free from interference (because she's not interfering with anyone else), but that doesn't mean she's free from the whims of the oligarch whose property she's currently enjoying.
It's the loss of republican freedom — the loss of reasonable public control over the forces that make the most difference to a people's way of life — that worries me and animates my original question. I can see how your suggestion might actually restore some republican freedom, and I can see how it might not. So I'm asking what you think.
I share your doubts.
Here's my two cents:
We can define at least three levels of exchange between us humans: the public sphere, the private sphere (the few people I trust) and the personal one (inside one's head). There is another, the subconscious, but it does not explicitly 'communicate'.
As we all know from experience, only the personal one can be made fool-proof (though it takes a lot of effort).
Making it 'digital' does not change the problem.
So I would say the way forward is for each of us to select which 'community' we trust with our data for collective training, or, better still, for each of us to have our own personal AI and to decide which external data sources it can be updated (trained) from.
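To make that concrete, here is a hypothetical sketch (the class and method names are entirely mine, purely illustrative) of a personal AI that only ever trains on data from sources its owner has explicitly allow-listed, where that trust can be revoked at any time:

```python
# Hypothetical sketch of a personal AI that trains only on data from
# sources its owner explicitly trusts; all names here are illustrative.
class PersonalAI:
    def __init__(self, trusted_sources):
        self.trusted_sources = set(trusted_sources)

    def allow(self, source):
        """Opt in: let this external source contribute training data."""
        self.trusted_sources.add(source)

    def revoke(self, source):
        """Opt out again at any time."""
        self.trusted_sources.discard(source)

    def update(self, source, batch):
        """Train on a batch only if its source is trusted; else drop it."""
        if source not in self.trusted_sources:
            return False  # ignored: not part of my private sphere
        self._train(batch)
        return True

    def _train(self, batch):
        pass  # placeholder for whatever learning rule the model uses

# Usage: I trust my reading circle, not the ad network.
ai = PersonalAI(trusted_sources={"reading-circle"})
print(ai.update("reading-circle", ["shared notes"]))   # True: accepted
print(ai.update("ad-network", ["sponsored posts"]))    # False: rejected
```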
Behind all that lurks this question: does intelligence get better if you pool it?
That also leads to the question: is there such a thing as 'super-intelligence'?
The past 50 years have seen the rise of 'big science', where projects involve large groups of researchers.
At the same time, the productivity of science has diminished drastically.
All scientific revolutions have stemmed from one mind looking at things differently. While that process is ongoing in a mind it is very difficult to express, as it is incomplete, and trying to express it can even hinder it by restricting the 'mind flow'.
My view is that the 'creative' aspect can only take place within one mind (at the subconscious level; see my other comment). A community of minds is then better suited to formalising it through external exchanges, as each mind will raise questions as it adopts (adjusts its worldview to) the new view.
Very interesting - as an approach (as I understand it) to structure (semantically) and federate/distribute "knowledge" "democratically", i.e. without opaque, biased, self-serving moderators. But the question remains what the value of such "knowledge" is - is that value just our personal judgement, or a majority vote? I am reminded of the age-old scientific method, which works by tacit consensus among a peer group of accepted authority - but it remains confined to those with sufficient training, knowledge and research experience, in silos of specialization. Then any one of those silos can overwhelm us - become a life-task to the exclusion of anything outside it - so we miss almost everything that really is important in life. But who can judge what that is, even in an enumerative (so not deeply analytic) sense? That has always been shaped, explicitly or implicitly, by an exclusive elite of "cultural leaders", ideally well-versed in matters of the world BUT ALSO philosophy (which I understand as science outside the silos) AND spirituality (which informs us about purpose and meaning) - but, in practice, personifying the aspirations arising from our own inadequacy and frustration. A cryptic quote I noted today on X comes to mind: does the slave dream of being free, or of becoming a slave-owner?
So, yes, federated and democratic learning has its value - but only if it has mechanisms that assure the quality of knowledge, as the scientific method does, and respects meaningful spiritual goals. Putting all that together could shape culture in a new way.
Yet another interesting idea, and I share EDW's question.
Linked to that, I have a very simple yet deep question in mind that could offer a way to start re-describing what is going on in 'AI', and that everyone can answer (and I would really like everybody to give their answer):
When you 'think' about something to try to find an 'answer', do you use words?
Wow, lovely question!! I think I do sometimes, but I’d probably say more often it’s in “concepts”… but come to think of it, those probably somehow consist of words too… I’ll try to observe myself.
Please do and let us know.
Ok so I noticed something - sometimes, when I’m tired, I fail to find the word describing an object that I’m asking my wife to pass me. We’ve made this into sort of a joke where I purposefully say a totally nonexistent word and she still manages to guess what I mean based on the location / surrounding environment etc. What’s surprising to me there is that I know exactly what I mean but cannot remember the word. How can I know what I mean without the word, though? Am I thinking of it by somehow visualizing it, maybe? Or thinking in terms of the actions I need it for? (E.g. in the case of the kitchen knife: the thing that cuts my food)… or some abstraction?
I suppose exploring exactly those moments when people struggle to find words might be very interesting from this perspective. :)
Very neat, clear-cut, simple evidence that we use words to express our thoughts but don't need them to think!
We use words (or signs) to express our thoughts in a way that can be shared with others.
We also use them, inside our brain, to structure our thoughts: sharing them with ourselves, listening to what we are saying and adjusting it if we are not 'satisfied' with that expression.
But our thoughts arise in our brain without them. They are subconscious.
Consciousness is 'listening' to our subconscious mind, selecting the most active 'idea' popping up from it and trying to make sense of it by expressing it with words in a rational manner.
Words are labels, stored in memory, attached to concepts and invoked when those are activated (your example shows a problem at exactly that level). Language is used to group those concepts into a 'rationale'.
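A toy way to picture that label/concept split (entirely my own illustration, not a validated cognitive model): if concepts and their word-labels live in separate stores, the concept lookup can succeed while the label lookup fails, which is exactly the tip-of-the-tongue situation described above.

```python
# Toy illustration (my own assumption, not a validated cognitive model):
# concepts and word-labels are separate stores, so a concept can be
# active while the label lookup fails -- the tip-of-the-tongue case.
concepts = {
    "kitchen-knife": {"cuts": True, "location": "kitchen drawer"},
}
labels = {
    "kitchen-knife": "knife",
}

def think_of(concept_id):
    """Activate a concept: this works even without its word."""
    return concepts[concept_id]

def say(concept_id, tired=False):
    """Retrieve the label; when 'tired', the lookup fails (returns None)."""
    if tired:
        return None  # the word won't come, though the concept is active
    return labels.get(concept_id)

active = think_of("kitchen-knife")       # I know exactly what I mean...
word = say("kitchen-knife", tired=True)  # ...but the word doesn't come
print(active, word)
```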
I believe the deepest question (ever?) is whether language is enough to account for human intelligence. LLMs have brought it to light, and it is unsettling the greatest minds (including mathematicians).
Creativity is a core element of that question (as in finding answers, seeing 'the light'; not art). And the way Edison would fall asleep holding a ball in his hand while thinking about a question, with the sound of the ball dropping waking him up with an answer, is very interesting in that respect (and I use it; it works).
Thinking about your own thinking is available to all of us equally, yet it is mostly dismissed in the field of AI.
To extend my initial question: try to 'catch' an idea rising in your brain in real time. It is a challenge, but you can 'tune' your brain into doing it. Have fun!
Thank you for such a wide-ranging comment!! I really like the idea of consciousness listening to raw thoughts, that’s super interesting.
As you mentioned LLMs - one way LLMs are super puzzling for me is that I have always thought of words and thinking as coupled together. So seeing words come out of an LLM so consistently, thanks to the enormous training set, was (and still is) extremely confusing for me, because I cannot help but imagine some kind of mind there. (For the record, I don’t believe there is anything but mathematics, but it’s blowing my mind nevertheless - or maybe all the more.)
Thank you for the rising idea catching exercise! I’ll try!
This is a great point! Words and thinking seem coupled together in our cognitive experience, I agree. I don't really have the intuition that there's a bona fide mind there, but I do find it interesting that there's latent emergent information in the models that can be elicited by prompt changes.
Ugh, this was a sad read - in the sense that the alternative sounded so much healthier, more privacy-friendly and overall better than what we have now.
My worry here is: is there any motivation to make this a reality? The current state is probably much more profitable for big tech, so they have no incentive to come up with something different…? And it sounds like a LOT of work, so I’m not sure it could be enthusiast-driven?
The next few years are going to be a very transformative time in our history. This is not a niche or technical issue. It goes right down to our sovereignty as individuals. Our freedom of thought and expression.
We need healthy alternatives. We need smart and courageous people to find those solutions. I really appreciate this article's attempt at that. It made me realize how much work there is to be done.