The researchers argue that generative AI tools can produce low-quality answers to user queries when their models are trained on "synthetic data" rather than the unique human-created content that made their answers valuable in the first place.
Other AI researchers have coined their own terms for this learning method. In a study published in July, researchers from Stanford and Rice universities called it "Model Autophagy Disorder" (MAD), in which an AI's "self-consuming" cycle of learning from content created by other AIs can leave generative AI tools "doomed" to degrade in the quality and diversity of the images and text they create.
Jathan Sadowski, a senior researcher at the Emerging Technologies Research Laboratory in Australia who studies AI, has labelled this phenomenon "Habsburg AI", arguing that systems trained heavily on the output of other generative AI tools can become "inbred mutants" that produce increasingly distorted responses.
While the specific implications of these phenomena remain unclear, some technology experts believe that "model collapse" can make it difficult to determine the original source of the information on which an AI model is trained. As a result, providers of accurate information, such as the media, may decide to restrict their content to prevent it from being used to train AI. Ray Wang, CEO of technology research firm Constellation Research, suggested in an essay that this could give rise to an "era of public information darkness."
That's right. Anyone who understands ML technology should not be surprised. Such model degradation was entirely predictable. Incidentally, a few months ago, before this phenomenon caught the general public's attention, we already discussed the high probability of LLM "degeneration" here in this thread.
However, it is naive to think that this is the end of the AI era. More likely it is an intermediate stage, to be followed by a new technology. When that will arrive is unknown, but it definitely won't be an LLM. A language model cannot avoid this fate in principle: if it is not trained on a huge volume of new content (which, by the way, must be constantly filtered and censored, an enormous and expensive job), it will inevitably become obsolete within a few years; and if it is trained, it will absorb content created by other LLMs and degradation will begin. So we are probably facing the end of the LLM era, not the end of AI. Besides, a language model is not real AI, although it looks very similar to one.
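The degradation mechanism described above can be illustrated with a deliberately tiny toy experiment (a sketch, not a real LLM): "train" a trivial model, here just a Gaussian fit, on data, then train each new generation only on samples drawn from the previous generation's model. Estimation noise compounds from generation to generation, so the fitted distribution drifts away from the original human data and typically loses diversity.

```python
import random
import statistics

def fit_gaussian(samples):
    """'Train' a trivial model: estimate mean and stdev of the data."""
    return statistics.mean(samples), statistics.pstdev(samples)

random.seed(0)
# Generation 0: the "human-created" data, a standard normal distribution.
data = [random.gauss(0.0, 1.0) for _ in range(1000)]
mu, sigma = fit_gaussian(data)

stdevs = []
for generation in range(10):
    stdevs.append(sigma)
    # Each new "model" sees only the previous model's synthetic output.
    data = [random.gauss(mu, sigma) for _ in range(1000)]
    mu, sigma = fit_gaussian(data)

# Estimation errors accumulate with no fresh real data to correct them;
# over many generations the fitted spread drifts (typically shrinking).
print(stdevs)
```

With only ten generations and a thousand samples each, the drift is modest; shrink the sample size or extend the loop and the collapse becomes far more pronounced. The point is structural: nothing in the loop ever pulls the model back toward the original distribution.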
So what would distinguish real AI from an LLM? Well, I would hazard a guess that it would have feedback from the real world - the same feedback we have, and thanks to which our brains are "cleaned" of all the nonsense they are so eager to generate.))))
The idea is as follows: an LLM trained to analyse information composes its own training data set. That is, the set consists not of the original data fed to it, but of derived data - data processed by internal mechanisms of criticism and logic.
Self-learning on derived data.
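The proposal above can be sketched as a loop: generate candidate outputs, pass them through an internal "critic", and retrain only on what survives. Everything here is hypothetical and deliberately toy-sized - the "model" is a bag of statements, the "critic" a stand-in consistency check - it only shows the shape of self-learning on derived rather than raw data.

```python
import random

def critic(statement: str) -> bool:
    """Hypothetical internal criticism: keep only statements that pass a
    crude sanity check (here: no negation marker, standing in for real
    logic/consistency checking)."""
    return "not" not in statement

def generate(model: list, n: int) -> list:
    """Derive candidate statements from the current model."""
    return [random.choice(model) for _ in range(n)]

random.seed(1)
# Raw input data, including a contradiction the critic should weed out.
model = ["sky is blue", "sky is not blue", "water is wet"]

for _ in range(3):
    candidates = generate(model, 10)          # model's own derived output
    derived = [s for s in candidates if critic(s)]  # filtered by criticism
    if derived:
        model = derived  # self-train only on vetted, derived data

print(model)
```

The design point is that the training set for each round is the critic's output, not the original corpus - which is exactly what separates this scheme from the naive self-consumption that leads to collapse; its quality obviously stands or falls with the quality of the critic.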
I continue to work on material about the so-called "AI Uprising". I can't give even an approximate deadline or completion date - the work is complicated. I can say that the final result will differ greatly from my initial expectations, and that there will be NO sci-fi futurology seasoned with the "sauce" of Musk's fairy tales. I will ruthlessly destroy myths, the way I destroy parasites in the house. The parasites, in this case, are the delusions and fantasies around AI that easily get into people's heads and are sometimes hard to get rid of. And now is the time when getting rid of them is most needed.
It was a revelation to me that we are all under the influence of various "parasitic ideas", which in turn open the door to "parasitic people" in our lives. Most interestingly, I have identified such a "parasite" in the popularised version of the AI idea. It must be extracted and "dissected" with the full power of the scientific approach. For that, one must turn to classical science and the opinions of recognised scientists, which is what I am doing.
At the end of this post I will say that man has finally created a mirror reflecting the main secret of his evolutionary uniqueness among species, and for a time he froze, looking at himself in it. Does man understand how his intelligence works? Of course not - that is still a long way off. For now, man stares at AI like a monkey at a mirror, and like the monkey, his emotions run high. We need to shut down the delusion generator and think... which is what we're going to do.
In general, if anyone wonders why I am doing this and what my motivation is, the answer is simple: I am extracting from my own mind the parasite-idea that poisoned my brain with colourful illusions for decades. Enough.
What's the bottom line, terminators won't take over the earth?
> What's the bottom line, terminators won't take over the earth?
To answer this question, one must objectively assess the probability of such an outcome from a scientific point of view. Otherwise it is impossible to put the question to rest, no matter how silly it sounds.