What's going to change fundamentally? AI generates pictures? The whole Internet was full of pictures long before any AI. It generates texts and articles? The Internet is already full of the output of copywriters and other "writers". Video content will become even more contrived? Adverts even more impressive? Hmmm... Given how accustomed society already is to this flood of information, changing the pressure of the flow changes nothing fundamental. Yes, there will be even more bots on social networks and dating sites, subtly imitating the behaviour of real people. Will scammers thrive even more with their fakes? Fake product reviews, fake subscribers, fake everything. And how does society react? The perceived value of such content is already dropping rapidly, and from here on it will simply turn to mud.
You say AI will be great at solving certain problems? It is already being used wherever it is needed, in the form of specialised solutions. The decoding of the human genome, in all its detail, for example, began long before this talking trinket appeared.
Semantic search of information on the Internet is good, but again: will the technology rid users of paid advertising, censorship and propaganda? Do you seriously believe that?
Valid arguments.
About specialised solutions: their reliability is beyond the reach of LLMs, at least at the current moment. It is reliability (according to OpenAI's chief developer) that is the key obstacle to mass adoption of AI in professional fields.
And the problem of the public generating useless content (which I do believe is a problem) is only going to grow. Add to that the circular reuse of generated information between AI and humans, with no fresh input. The consequences of this are hard to predict.
About neural nets being useless. Someone asks you: "What do you think of when you hear 'I am you and you are me'?" Murat Nasyrov's song started playing in my head. And if I didn't know it? Well, I am you and you are me. I don't know. Nothing special. Nothing comes to mind.
Now let's ask the neural network:
That's a great idea. Stupid AI fantasises more beautifully than I do.
you are what you eat :-) don't try to draw THIS !!!! :-)
Having watched twice the interview with Ilya Sutskever, OpenAI's chief developer, in which he recounted the general history of how his views formed and of the decisions that set OpenAI on the path of creating large language models, I noted that Ilya had not studied human intelligence at all. He was interested in machine learning technology, and was especially fascinated by the idea of predicting the next word. After reading the published paper "Attention Is All You Need", he immediately embraced the technology and took the development in new directions. Later in the interview he touched on questions of human intelligence only in passing, talking more about models, scaling and improvements.
In the end, I came to the conclusion that Ilya is a man who recognised the potential of the new technology in a timely manner and acted on it, without much philosophical speculation. Perhaps that is why he was among the first to move in a direction that was promising at the time.
Practice has shown that a statistical copy of intelligence can be built from texts alone, without understanding the nature of intelligence. But a statistical copy must have limits to its development. Moreover, it lacks the dynamics of intelligence and responds only passively when addressed. And dynamics cannot be conveyed through text; live communication is needed.
I would like to assume that the next AI will be created on the basis of a deep understanding of intelligence, but I don't know how far imitation without understanding can go.
you are what you eat :-) don't try to draw THIS !!!! :-)
That's a cool request))))) I'll have to give it a try
I asked you: what fundamental changes do you expect? Content generation was already under way; it will speed up and get cheaper. So what next? Is humanity suffering from a lack of content? I see the problem from the other side: there is so much content that it is hard to find what you are looking for.
Consumerism accompanies every field. I don't understand why you see it as the primary problem.
Because everyone runs into mass-produced junk content all the time (it grows first of all around the topics that are in demand). And as its quantity grows relative to meaningful, useful content, the useful content drowns in it and dissolves. Society will be less and less able to tell junk from valuable content (it is already happening; look around). Mass-produced junk devalues its whole sphere on average, in its entirety. That is, everything the modern person interacts with most actively turns into a stream of useless adverts and time sinks.
Example: how spam killed e-mail. The same fate has now befallen phone calls (many people simply switch to whitelist mode so that unknown numbers cannot get through). Social networks, too, have very quickly degenerated into a garbage dump.
Yesterday I raised the question of the limits of implementing a statistical version of "intelligence" without actually understanding the principles of its workings. This is a legitimate and relevant question. Surely there must be natural limits to mindlessly copying one of the most complex systems in the world?
Counting the link weights across gigabytes of "chopped-up" text chains makes it possible to assemble an interactive model that outwardly reproduces many cognitive functions. The chains inherently contain knowledge, thoughts and relationships, so no separate knowledge base is needed. Moreover, the knowledge inside the model needs no ordering. Identifying the connections between chains (during training) then allows complex associative constructions. Thanks to these links, a wide range of contexts is available to the model. What else could you need, it would seem? Here is AI. Statistics defeats the mystery of the black box the way Alexander the Great cut the Gordian knot.
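To make "counting the link weights of text chains" concrete, here is a toy sketch of the idea, reduced to its simplest form: count how often each word follows another, then predict the most frequent follower. This is only an illustration of the statistical principle (the corpus and function names are invented for the example); real LLMs learn vastly richer weights over token sequences, but the spirit is the same.

```python
from collections import defaultdict, Counter

# Toy corpus: the "chopped-up text chains" are just adjacent word pairs.
corpus = "the next word predicts the next word from the model".split()

# Count the "link weights": how often each word follows each other word.
follow_counts = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    follow_counts[prev_word][next_word] += 1

def predict(word):
    """Return the most frequent follower of `word` seen in the corpus."""
    counts = follow_counts[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))  # prints "next" ("next" follows "the" twice, "model" once)
```

No knowledge base, no ordering, no understanding: the "knowledge" is entirely in the counted links, which is the author's point about statistics sidestepping the black box.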
However, the secrets of the nature of intelligence are not revealed. The shell is reproduced and only superficially resembles the original. We know that the authors do not want to stop and intend to go further; they are confident the approach has a future. Let us consider the immediate difficulties they will face:
Due to its static nature, the model is subject to rapid information "ageing". The world changes, the planet keeps spinning, and retraining the model requires significant resources and oversight. New facts, new topics, celebrities, fashion, politics... Every day the world is a flurry of events, while the copy stays frozen in the past.
How quickly does the model "age" without updating, and how much does updating cost with today's methods? Sure, it can be connected to the Internet. But does that give it a re-learning experience? No. A model connected to the Internet does not retrain itself. It works as a semantic interface, "chewing" information from websites, but it does not absorb new data from them. Perhaps the technology simply does not provide for this.
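The distinction drawn here (browsing is not learning) can be sketched in a few lines. Everything below is a hypothetical illustration, not any real system's API: the dictionary stands in for frozen model weights, and `fetch_from_web` is a stub for a live search step. Retrieved text only enters the prompt for one query; the weights never change, so nothing is absorbed.

```python
# Stand-in for the model's parameters, fixed at training time.
FROZEN_WEIGHTS = {"training_cutoff": "2021"}

def fetch_from_web(query):
    # Stub for a live search/browse step (invented for illustration).
    return "Snippet retrieved today about: " + query

def answer(query, model_weights):
    context = fetch_from_web(query)        # fresh facts enter here...
    prompt = context + "\nQuestion: " + query
    return "Answer based on " + prompt     # ...but only for this one prompt

before = dict(FROZEN_WEIGHTS)
answer("latest news", FROZEN_WEIGHTS)
assert FROZEN_WEIGHTS == before  # weights unchanged: no learning happened
```

The fresh information lives only inside a single prompt; once the conversation ends, the model is exactly as "frozen in the past" as before, which is what makes it an interface rather than a learner.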
Thus we can conclude that an LLM "ages" quickly and, at the end of its "life", turns into a semantic speech interface carrying a ballast of outdated information, with a constant need to consult the Internet to double-check its answers.
Will developers be able to overcome these problems within the confines of statistically copying a system they don't understand? I doubt it, but time will tell.
Stanford researchers have used generative artificial intelligence (AI) to create a simulated city made up of different characters, each with unique personalities, memories and behaviours.
To see for yourself how the AI lives in a virtual world: https://reverie.herokuapp.com/arXiv_Demo/
Source - https://to.pp.ru/articles/stanford-researchers-create-mini-westworld-simulation-ai-characters-make-plans-memories