AI 2023. Meet ChatGPT.

 
Ilya Filatov #:

What is going to change fundamentally? AI generates pictures? The whole Internet was already full of pictures long before any AI. It generates texts and articles? The whole Internet is already full of the output of copywriters and other "writers". Video content will become even more perverse? Adverts even more impressive? Hmm... Given that society has already grown used to this whole flood of information, changing its pressure does not fundamentally change anything. Yes, there will be even more bots on social networks and dating sites that very subtly simulate the behaviour of real people. Scammers will thrive even more with their fakes? Fake product reviews, fake subscribers, fake everything. And how does society react to this? The perceived value of such content is already dropping rapidly today, and from here on it will simply turn to mud.

You say AI will be great at solving certain problems? It is already being used wherever it is needed, in the form of specialised solutions. For example, the decoding of the human genome, in all its detail, began long before this talking trinket.

Semantic search for information on the Internet is good, but again, will the technology rid users of paid advertising, censorship and propaganda? Do you seriously believe that?

Propaganda... censorship... adverts... scammers... faith.

The steam engine was used to move Jews closer to the ovens at Auschwitz.

You are fixated on moral issues; development is a broader concept.
 
Ilya Filatov #:

What is going to change fundamentally? AI generates pictures? The whole Internet was already full of pictures long before any AI. It generates texts and articles? The whole Internet is already full of the output of copywriters and other "writers". Video content will become even more perverse? Adverts even more impressive? Hmm... Given that society has already grown used to this whole flood of information, changing its pressure does not fundamentally change anything. Yes, there will be even more bots on social networks and dating sites that very subtly simulate the behaviour of real people. Scammers will thrive even more with their fakes? Fake product reviews, fake subscribers, fake everything. And how does society react to this? The perceived value of such content is already dropping rapidly today, and from here on it will simply turn to mud.

You say AI will be great at solving certain problems? It is already being used wherever it is needed, in the form of specialised solutions. For example, the decoding of the human genome, in all its detail, began long before this talking trinket.

Semantic search for information on the Internet is good, but again, will the technology rid users of paid advertising, censorship and propaganda? Do you seriously believe that?

Valid arguments.

As for specialised solutions: their reliability is beyond what LLMs can reach, at least for now. It is reliability (according to OpenAI's chief scientist) that is the key obstacle to mass adoption of AI in professional spheres.

And the problem of the public generating useless content (and I do believe it is a problem) will only grow. Add to that the circular reuse of generated information, passed back and forth between AI and humans without being refreshed. The consequences of this are hard to predict.

 
Mass-market junk accompanies every field. I don't understand why you consider it a primary problem.

Books are not devoid of pulp fiction, graphomania, hype and outright slag. Each has its readers.

Music is not devoid of pop, potboilers and outright slag. Each has its listeners.

Fine art is not devoid of pop, graphomania and outright slag. Each has its own contemplator.

Cinema is not devoid of Russian comedies. Each has its own viewer.

Humour is not devoid of Petrosyan, post-irony and outright slag. Each has its own pensioner or schoolboy.

Deepfakes perfectly replace wretched computer graphics, but they are not devoid of outright slag either.

In all these spheres, and others, there is an element of fraud. ChatGPT, like any other innovation, will also acquire its own percentage of cheaters. That is a completely normal, expected phenomenon, and countermeasures will naturally be introduced.

So someone put Salma Hayek or Emma Watson on a porn site. The actresses play very convincingly, the deepfake is well done. So what? Everyone was scared, but it had no effect. Porn sites are full of such videos. No dirt came of it.

I don't see a separate problem here. An uncontrolled virus is a problem. When your loved ones are dying of the virus in droves, that's a problem. When the morgue is full, there is nowhere to put the bodies and there are not enough bags. When a co-worker catches the virus too. That is a problem, and it was real, pervasive, widespread and international. Did ChatGPT and Midjourney have a hand in it? No. Man managed to create a real problem without any neural network.

The only problem with AI, or with neural nets in general, would be giving them control of nuclear missiles. The pure Terminator plot is the only real potential problem.

 

About neural nets being useless. Someone asks you, "What comes to mind when you say 'I am you and you are me'?" I had Murat Nasyrov's song playing in my head. But what if I didn't know it? Well, I'm you and you're me. I don't know. Nothing special. Nothing comes to mind.

Now let's ask the neural network:


That's a great idea. Stupid AI fantasises more beautifully than I do.

 
Ivan Butko #:

About neural nets being useless. Someone asks you, "What comes to mind when you say 'I am you and you are me'?" I had Murat Nasyrov's song playing in my head. But what if I didn't know it? Well, I'm you and you're me. I don't know. Nothing special. Nothing comes to mind.

Now let's ask the neural network:


That's a wonderful idea. Stupid AI fantasises more beautifully than I do.

you are what you eat :-) don't try to draw THIS !!!! :-)

 

Having twice watched the interview with Ilya Sutskever, OpenAI's chief scientist, in which he recounted in general terms how his views formed and which decisions led OpenAI down the path of creating large language models, I noted that Ilya did not study human intelligence at all. He was interested in machine-learning technology and was especially fascinated by the idea of predicting the next word. After reading the published paper "Attention Is All You Need", he immediately embraced the technology and took development in new directions. Later in the interview the developer touched on questions of human intelligence only in passing, talking more about models, scaling and improvements.
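To make the "predicting the next word" idea concrete, here is a minimal toy illustration of my own (not OpenAI's code): every position in a text becomes a training example whose target is simply the word that follows.

```python
# Toy illustration of the next-word-prediction objective:
# each prefix of the text is one training example, and the target
# is just the word that comes next.
text = "the cat sat on the mat".split()

examples = [(text[:i], text[i]) for i in range(1, len(text))]
for context, target in examples:
    print(f"context: {' '.join(context):<18} -> predict: {target}")
```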

In the end, I came to the conclusion that Ilya was a man who recognised and realised the potential of the new technology in a timely manner. He did it without much philosophical speculation. Perhaps that is why he was one of the first to move in a direction that was promising at the time.

Practice has shown that it is possible to create a statistical copy of intelligence on the basis of texts, without understanding the nature of intelligence. But a statistical copy must have limits to its development. Moreover, it lacks the dynamics of intelligence and reacts only passively when addressed. And dynamics cannot be conveyed through text; that requires live communication.

I would like to assume that the next AI will be created on the basis of a deep understanding of intelligence, but I don't know how far imitation can go without understanding.

 
Maxim Kuznetsov #:

you are what you eat :-) don't try to draw THIS !!!! :-)

That's a cool request))))) I'll have to give it a try

 
Ivan Butko #:
You are fixated on moral issues; development is a broader concept.

I asked you: what fundamental changes do you expect? Content generation was already happening; it will merely speed up and become cheaper. So what next? Is humanity suffering from a lack of content now? I see the problem from the other side: there is so much content that it is hard to find what you are looking for.

Ivan Butko #:
Mass-market junk accompanies every field. I don't understand why you consider it a primary problem.

Because everyone runs into mass-market junk all the time (it grows first of all on the topics that are in demand). And as its quantity grows relative to meaningful, useful content, the useful content drowns in it and dissolves. Society becomes less and less able to distinguish junk from valuable content (it is already happening; look around). Mass-market junk devalues its whole sphere on average. In other words, everything the modern person interacts with most actively turns into a stream of useless adverts and time-wasters.

Example: how spam killed e-mail. Now the same fate has befallen phone calls (many people simply switch to whitelist mode so that unknown numbers cannot get through). Social networks, too, have very quickly descended into an utter garbage dump.

 

Yesterday I raised the question of the limits of implementing a statistical version of "intelligence" without actually understanding how the original works. This is a legitimate and relevant question. Surely there must be natural limits to mindlessly copying one of the most complex systems in the world?

Counting the link weights across gigabytes of "chopped-up" text chains allows an interactive model to be assembled that outwardly reproduces many cognitive functions. The chains inherently contain knowledge, thoughts and relationships, so no special knowledge base is needed. Moreover, the knowledge inside the model needs no ordering. Identifying the connections between chains (during training) allows for further complex associative constructions, and thanks to these links a wide range of contexts is available to the model. What else could you need, it would seem? Here is AI. Statistics defeats the mystery of the black box, like Alexander the Great cutting the Gordian knot.
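As a crude sketch of what "counting the link weights of chopped text chains" means, here is a toy bigram model of my own (a real LLM learns transformer weights, not raw co-occurrence counts): it tallies how often each word follows another and then samples continuations from those tallies.

```python
import random
from collections import defaultdict, Counter

corpus = "the cat sat on the mat and the cat ate the fish".split()

# "Link weights": for every word, count which words follow it and how often.
chains = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    chains[prev][nxt] += 1

def generate(start, length=8):
    word, out = start, [start]
    for _ in range(length):
        followers = chains.get(word)
        if not followers:
            break
        # Sample the next word in proportion to how often it followed this one.
        word = random.choices(list(followers), weights=list(followers.values()))[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))
```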

However, the secrets of the nature of intelligence remain unrevealed. What is reproduced is a shell that only superficially resembles the original. We know the authors do not want to stop and intend to go further; they are confident the approach has a future. Let's consider the nearest difficulties they will have to face:

  • The model lacks processes. It is "frozen" at the moment of training (in this case, 2021). It has no internal dynamics.
  • Updating/changing information/rules requires a lot of work on the part of specialists and hired staff, and does not always bring the desired results.

Due to its static nature, the model is subject to rapid information "ageing". The world is changing, the planet is spinning, and retraining the model requires significant resources and control. New facts, new topics, celebrities, fashion, politics.... Every day the world is in a flurry of events and the copy is frozen in the past.

How quickly does the model "age" without updating, and how much does updating it cost with today's methods? Sure, it can be connected to the internet. But does that amount to re-learning? No. A model connected to the internet does not retrain itself. It works as a semantic interface, "chewing" information from websites, but it does not "absorb" new data from them. Perhaps the technology simply does not provide for that.
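A minimal sketch of that point (the names `search` and `model` are hypothetical stand-ins, not a real API): the retrieved text is only pasted into the prompt for one answer, while the model's weights stay frozen.

```python
def answer_with_web_search(question: str, search, model) -> str:
    """Retrieve fresh web snippets and hand them to a frozen model."""
    snippets = search(question)                     # fetch up-to-date pages
    prompt = (
        "Use the following web excerpts to answer.\n\n"
        + "\n---\n".join(snippets)
        + f"\n\nQuestion: {question}\nAnswer:"
    )
    # The weights are never updated here: nothing is "absorbed";
    # the new information lives only in this one prompt.
    return model.generate(prompt)
```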

Thus, we can conclude that an LLM "ages" quickly and, towards the end of its "life", turns into a semantic speech interface carrying a ballast of outdated information and constantly needing to turn to the Internet to double-check its answers.

Will developers be able to overcome these problems within the confines of statistically copying a system they don't understand? I doubt it, but time will tell.



 

Stanford researchers have used generative artificial intelligence (AI) to create a simulated city made up of different characters, each with unique personalities, memories and behaviours.

To see for yourself how AI lives in a virtual world - https://reverie.herokuapp.com/arXiv_Demo/


Source - https://to.pp.ru/articles/stanford-researchers-create-mini-westworld-simulation-ai-characters-make-plans-memories
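Roughly, each simulated character keeps a growing memory stream and feeds its recent memories back into a language model when deciding what to do next. The sketch below is my own simplification of that idea (the `llm` callable is a hypothetical stand-in), not the Stanford code.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    persona: str
    memories: list = field(default_factory=list)

    def observe(self, event: str) -> None:
        # Everything the character experiences goes into its memory stream.
        self.memories.append(event)

    def act(self, llm) -> str:
        # Recent memories are fed back into the language model
        # to decide the character's next action.
        recent = "\n".join(self.memories[-5:])
        prompt = (f"You are {self.name}. {self.persona}\n"
                  f"Recent memories:\n{recent}\n"
                  f"What do you do next?")
        return llm(prompt)
```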