AI 2023. Meet ChatGPT. - page 182

 
Sergey Gridnev #:

The researchers argue that generative AI tools can start producing low-quality answers to user queries when their models are trained on "synthetic data", i.e. content generated by other AIs, rather than on the unique human-created content that made their answers valuable in the first place.

Other AI researchers have coined their own terms for this feedback loop. In a study published in July, researchers from Stanford and Rice universities called it "Model Autophagy Disorder" (MAD), in which an AI's "self-consuming" cycle of learning from content created by other AIs can leave generative AI tools "doomed" to see the "quality" and "diversity" of the images and text they create degrade.

Jathan Sadowski, a senior researcher at the Emerging Technologies Research Laboratory in Australia who studies AI, has labelled this phenomenon "Habsburg AI", arguing that systems trained on the output of other generative AI tools can end up producing distorted, degenerate responses.

While the specific implications of these phenomena remain unclear, some technology experts believe that "model collapse" can make it difficult to determine the original source of the information on which an AI model is trained. As a result, providers of accurate information, such as the media, may decide to restrict their content to prevent it from being used to train AI. Ray Wang, CEO of technology research firm Constellation Research, suggested in an essay that this could give rise to an "era of public information darkness."
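The "collapse" mechanism the researchers describe can be illustrated with a toy numerical sketch (not from the article): fit a simple model to data, sample from it, fit the next "generation" only to those samples, and repeat. Each refit loses a little of the original variance, so the "diversity" of the output drifts toward zero. Here the "model" is just a normal distribution, a deliberate oversimplification of an LLM.

```python
# Toy sketch of "model collapse": each generation of the model is
# trained only on samples produced by the previous generation.
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "human" data from a standard normal distribution.
data = rng.normal(loc=0.0, scale=1.0, size=50)

stds = []
for generation in range(300):
    # "Train" the model: estimate mean and std from the current data.
    mu, sigma = data.mean(), data.std()
    stds.append(sigma)
    # The next generation sees only the current model's output.
    data = rng.normal(loc=mu, scale=sigma, size=50)

print(f"std at generation 0:   {stds[0]:.3f}")
print(f"std at generation 299: {stds[-1]:.3f}")  # far smaller: diversity lost
```

No new information enters the loop, so the estimation noise of each refit compounds and the spread of the data shrinks generation after generation, which is the statistical core of the degradation described above.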


https://www.ixbt.com/live/sw/kollaps-modeli-chatgpt--iskusstvennyy-intellekt-predstavlyaet-ugrozu-dlya-samogo-sebya.html

That's right. People who understand ML technology should not be surprised: this kind of model degradation was predictable in advance. In fact, a few months ago, before the general public noticed this phenomenon, we already discussed the high probability of LLM "degeneration" here in this thread.

However, it is naive to think that this is the end of the AI era. More likely it is an intermediate stage, to be followed by a new technology. When that will happen is unknown, but it definitely won't be an LLM: a language model cannot, in principle, avoid this fate. If it is not trained on a huge volume of new content (which, by the way, must be constantly filtered and censored, a huge and expensive job), it will inevitably become obsolete within a few years; and if it is trained, it will absorb content created by other LLMs, and the degradation will begin. So we are probably in for the end of the LLM era, not the end of AI. Besides, a language model is not real AI, although it looks very much like it.

So what will distinguish real AI from an LLM? Well, I would hazard a guess that it will have feedback from the real world: the same feedback we have, thanks to which our brains are "cleaned" of all the nonsense they are so eager to generate.))))

 
Interestingly, the rudiments of critical and logical thinking are present in LLMs; they are easy to detect by chatting with ChatGPT. What if an LLM were specifically trained to think critically and analyse information? In the end, it should probably be possible to achieve self-learning, where the LLM, filtering the available data, starts removing all the unnecessary data on its own. To some extent it already does this while generating a response to a query. If this mechanism were applied during training and improved, perhaps the problem of LLM degradation could be solved. Although I still think an LLM is not real AI, but who knows...
 

The idea is as follows: an LLM trained to analyse information would compose its own training data set. That is, the set would consist not of the original data provided to it, but of derived data, processed by its internal mechanisms of criticism and logic.

Self-learning on derived data.
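The loop described above can be sketched in Python. Everything here is hypothetical and not from the post: `generate`, `critic_score`, and the threshold are stand-ins for the model's generation step, its internal criticism/logic mechanism, and a quality cut-off, respectively.

```python
# Hypothetical sketch of "self-learning on derived data": the model
# generates candidates, its own critic scores them, and only the data
# that survives criticism becomes the next training set.

def generate(model_name, n):
    """Stand-in for the LLM producing n candidate texts."""
    return [f"candidate-{model_name}-{i}" for i in range(n)]

def critic_score(text):
    """Stand-in for the internal criticism/logic mechanism.
    Dummy heuristic: candidates ending in an even digit "pass"."""
    return 1.0 if text[-1] in "02468" else 0.3

def curate(candidates, threshold=0.5):
    """Keep only the derived data that passes the critic."""
    return [t for t in candidates if critic_score(t) >= threshold]

def self_training_round(model_name, n=10):
    raw = generate(model_name, n)   # original output
    derived = curate(raw)           # derived (criticised, filtered) data
    return derived                  # this would feed the next training run

training_set = self_training_round("v1")
print(len(training_set), "of 10 candidates survived criticism")  # 5 of 10
```

The open question, of course, is whether a critic trained by the same model can reject its own systematic errors; in this sketch the critic is independent of the generator only by construction.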

 
By the way, criticism and logic are undoubtedly among the basic functions of intelligence. Imagine these functions amplified hundreds of times by the computing power of a supercomputer... The progress of AI largely lies here: in strengthening the functions of thinking, not in expanding the knowledge base. Although one does not preclude the other.
 
After thinking about it, I came to the conclusion that criticism (from the Greek kritike, the art of judging), like the ability to think logically, is impossible without a connection to reality and without the huge layer of human subjectivity produced by the interaction of human nature with the surrounding world. The resulting "product" is so unique and complex that an LLM can only mechanically "guess" some of its fragments from the colossal volume of collected information (the model).

In general, the conclusion is disappointing. AI is a "miscarriage" of human intelligence and, as such, will never be able to fake our subjective "compost" well enough to keep us always satisfied. That is, in our eyes it will remain unfinished.
 
It would seem: think about it, AI will not be able to accurately guess all the nuances of unique human life experience, and therefore its logic and criticism will always be lame... So what?!

So it will never become a social homeostat, and it will never be entrusted with the functions of public administration. And that means a lot for understanding the future that awaits us.
 

I continue to work on the material on the topic of the so-called "AI Uprising". I can't give an approximate deadline or completion date; the work is difficult. I can say that the final result will differ greatly from my initial expectations, and there will be NO sci-fi futurology seasoned with the "sauce" of Musk's fairy tales. I will ruthlessly destroy the myths, the way I destroy parasites in the house. The parasites, in this case, are the delusions and fantasies around AI that easily get into people's heads and are sometimes difficult to get rid of. And now is the time when getting rid of them is most needed.

It was a revelation to me that we are all under the influence of various "parasitic ideas", which in turn open the door to "parasitic people" in our lives. Most interestingly, I have identified such a "parasite" in the popularised version of the AI idea. It must be extracted and "dissected" with the full power of the scientific approach. For this, one must turn to classical science and the opinions of recognised scientists, which is what I am doing.

At the end of this post I will say that man has finally created a mirror reflecting the main secret of his evolutionary uniqueness among species, and for a while he froze, looking at himself in it. Does man understand how his intelligence works? Of course not; that is still a long way off. For now, man stares at AI like a monkey at a mirror, and like the monkey, his emotions run high. We need to shut down the delusion generator and think... Which is what we're going to do.

 
In general, if anyone wonders why I am doing this and what my motivation is, the answer is simple: I am extracting from my mind the parasite idea that for decades poisoned my brain with colourful illusions. Enough.
 
Peter Konow #:
In general, if anyone wonders why I am doing this and what my motivation is, the answer is simple: I am extracting from my mind the parasite idea that for decades poisoned my brain with colourful illusions. Enough.

What's the bottom line, terminators won't take over the earth?

 
Maxim Dmitrievsky #:

What's the bottom line, terminators won't take over the earth?

To answer this question, one must objectively assess, from a scientific point of view, the probability of such a development. Otherwise it is impossible to get rid of the question, however silly it may sound.