I wonder if AI will eventually destabilize itself. What you describe amounts to a massive reduction in creativity and an extreme narrowing of perspective and intent.
And as for ChatGPT: as the internet is flooded with more and more automated content, there will be less and less original human content, and these systems will start training on their own output. That incestuous configuration has been shown (I don't remember the source) to degrade information and eventually produce gibberish.
Then there is also the erosion of trust in internet content, of course, and the fundamental impossibility of ascertaining the veracity of automated content.
Big data is mining human ingenuity and eroding trust in an unsustainable way. Nature, by contrast, does not work with centralised intelligence. If there were an evolutionary case for it, it would have happened and succeeded. We are currently watching its failure evolve, in my opinion.
To your very point: https://arxiv.org/pdf/2305.17493.pdf
From the abstract:
"What will happen to GPT-{n} once LLMs contribute much of the language found online? We find that use of model-generated content in training causes irreversible defects in the resulting models, where tails of the original content distribution disappear. We refer to this effect as model collapse...We build theoretical intuition behind the phenomenon and portray its ubiquity amongst all learned generative models."
Perfect! Thanks much for the reference.
Thanks for this. Everything you say is true. There ARE some things happening, however, that may have the effect of democratizing the exploitation of AI. Meta's release of the LLaMA pre-trained model last spring was a bit of a bombshell, and it sparked a huge surge of innovation and exploration in applying pre-trained models to targeted uses through "fine-tuning", which can often be done for hundreds of dollars, not thousands or millions. Mozilla has released a commercially available pre-trained model, and recently Meta released LLaMA 2 with a new license that allows commercial use. So there is some level of industry pressure, or at least interest, in democratizing/amortizing the costs of these models. It doesn't change your larger point about from-scratch training costs, but these last few months have been very interesting with regard to making the costs tractable for the garage developer.
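For anyone who hasn't seen what this kind of cheap fine-tuning looks like in practice, here is a minimal sketch using Hugging Face's peft library with a LoRA adapter config. The model name and hyperparameters are illustrative assumptions, not a vetted recipe (the LLaMA 2 checkpoint is gated behind Meta's license on the Hub), but it shows why the cost is so low: only a tiny fraction of the weights is ever trained.

```python
# Hedged sketch of LoRA fine-tuning setup with Hugging Face peft.
# "meta-llama/Llama-2-7b-hf" is illustrative; access requires accepting
# Meta's LLaMA 2 license on the Hugging Face Hub.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

config = LoraConfig(
    r=8, lora_alpha=16,                   # size of the low-rank update
    target_modules=["q_proj", "v_proj"],  # attach adapters to attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of the 7B weights

# A normal transformers Trainer loop on your task data would follow; the
# frozen base plus small adapters is what keeps the compute bill modest.
```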
Thanks for a wonderfully productive metaphor. It helps to understand the young "experimenters" who think they're garage tinkerers when they submit complex prompts to GPT. In fact they're more like students in the '60s who punched cards and submitted the card deck to the University Computing Center, then waited a day for the printout. Back then we understood that we were servants of a big machine. The modern equivalents haven't figured out their position yet.