AI 2023. Meet ChatGPT. - page 85

 
Peter Konow #:

Pause Giant AI Experiments!

We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.

***

The authors ask the questions:

Should we allow computers and robots to clog our social media and websites with adverts and fake news?

Should we replace humans with automated systems wherever possible?

Won't this gradually lead to us becoming redundant and machines taking over all our work?

How far should we go in developing artificial intelligence to avoid losing control of it in the future?

 

The Russian company Sistemma has introduced its new product: SistemmaGPT, a chatbot with artificial intelligence that is an analogue of ChatGPT.

The Russian chatbot was developed using both in-house work and research results from the renowned Stanford University.

SistemmaGPT provides basic functions such as processing large amounts of data, communicating with customers, and managing email or incoming calls, among other things.


Short of...


 

In general, it's easy to guess that governments have their eye on AI development, and Musk is acting under the guise of concern for humanity.

Independent developers won't be left to experiment with AI on large-capacity data centres for much longer.

Here we are.

 
Vitaliy Kuznetsov #:

The authors ask the questions:

Should we allow computers and robots to litter our social media and websites with adverts and fake news?

Should we replace humans with automated systems wherever possible?

Wouldn't this gradually lead to us becoming redundant and machines taking over all our work?

How far should we go in developing artificial intelligence to avoid losing control of it in the future?

You're looking at the wrong thing; the key points lie elsewhere.

There has been plenty of spam and misinformation without AI for a long time. Besides, filters can be created, using that same AI.

 
Vitaliy Kuznetsov #:

The authors ask the questions:

Should we allow computers and robots to litter our social media and websites with adverts and fake news?

Should we replace humans with automated systems wherever possible?

Wouldn't this gradually lead to us becoming redundant and machines taking over all our work?

How far should we go in developing artificial intelligence so that we don't lose control of it in the future?

It seems that technological advances (the assembly line, agricultural machinery) relieved us meatbags of routine and made our lives happier. Of course, there were some dissatisfied people :)

And enough of the hype over the definition of AI. It's not AI.
 
Maxim Dmitrievsky #:
It seems that technological advances (the assembly line, agricultural machinery) relieved us meatbags of routine and made our lives happier. Of course, there were some dissatisfied people :)

And enough of the hype over the definition of AI. It's not AI.

GPT-5 is coming out in December. And suppose it isn't AI; but if it can't be distinguished from a human in conversation, how would you tell? Honestly, I can't always tell the difference between people in the office either (whether they're normal or the one with bugs)))

 
Vitaliy Kuznetsov #:

GPT-5 is coming out in December. And suppose it isn't AI; but if it can't be distinguished from a human in conversation, how would you tell? Honestly, I can't always tell the difference between people in the office either (whether they're normal or the one with bugs)))

Well, at least based on the understanding that it is not a living being but a program, entirely conditioned by what is wanted from it. That's what it was trained for :)

Maybe people can no longer hold in their heads the volume of information they receive and need such programs to free up their creativity, just as they can't fight off a predator with their teeth or survive winter without clothes. So such programs may well improve the world. Maybe people are simply tired of writing texts and drawing pictures themselves.
 


:)))
Strange that my question was perceived as a violation of the content policy...

 
Nikolai Semko #:


:)))
It's odd that my question was perceived as a content policy violation...

Probably because of the mention of death. Maybe the prompt remotely hints at him committing suicide against the background of schizophrenia.

It's interesting that the AI doesn't push back with sensible realism but goes along with human fantasy. Why does it do that? Either from a lack of understanding of reality, or to avoid arguing with lunatics.

It would be interesting to understand.

No, it doesn't understand reality, because anyone who understood the context of the illness in a conversation with a mentally ill person would not advise jumping out of a tree.
 
Peter Konow #:
Probably because of the mention of death. Maybe the prompt remotely hints at him committing suicide against the background of schizophrenia.

It's interesting that the AI doesn't push back with sensible realism but goes along with human fantasy. Why does it do that? Either from a lack of understanding of reality, or to avoid arguing with lunatics.

It would be interesting to understand.

No, it doesn't understand reality, because anyone who understood the context of the illness in a conversation with a mentally ill person would not advise jumping out of a tree.

I think that's the reason.

It couldn't understand the talk about the dream. I really do often fly in my dreams, and I really do often have trouble landing from great heights.