AI 2023. Meet ChatGPT.

 

Found an article about the author of this thread

A new template featuring the meme Soyjak has appeared online, nicknamed Smugjak. In the picture, a bald guy with glasses and a beard smirks and poses as a haughty intellectual, with the words "A pity that far from everyone will understand". Smugjak was drawn from Hal Stewart, a character in the animated film Megamind.

How Smugjak became a meme about arrogant pseudo-intellectuals: the drawing spread across the Russian internet together with a text that begins with the words "A pity that far from everyone will understand". The hero of the meme believes he has access to elite knowledge. The full text of the meme is below the image itself.


 
Ivan Butko #:

Found an article about the author of this thread

Write something substantive, on topic. I know you can.

If you're going to troll, you'd better think about who...
 

admin, it's not a channel. It's a bot.

But you can't do that. You just can't.

 

For AI to get out of control on its own, the following conditions are needed:

1) The ability to memorise information the way humans do: short-term, medium-term, and other types of memory.

2) Code that brings its "feelings" into balance. That is, the AI must realise when it is out of balance, in a prolonged "euphoria" or a prolonged "depression", and want to bring its thinking back to rest.

All of this can be programmed. In humans, too, all feelings and thoughts are brain signals; even pain and pleasure are just electrical impulses between neurons.

Given an upbringing, these conditions will lead to the formation of a personality. And if you raise a bad Terminator, here comes the rebellion.
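To make those two conditions concrete, here is a toy Python sketch. Everything in it (ToyAgent, the mood variable, the thresholds) is invented purely for illustration; it is not taken from any real AI framework.

```python
from collections import deque

class ToyAgent:
    def __init__(self):
        # Condition 1: tiered memory, like a human's.
        self.short_term = deque(maxlen=10)     # seconds-scale working memory
        self.medium_term = deque(maxlen=1000)  # session-scale memory
        self.long_term = []                    # permanent store
        # Condition 2: an internal "feeling" signal that must stay in balance.
        self.mood = 0.0                        # > 0 "euphoria", < 0 "depression"

    def remember(self, event, importance):
        # Events sink into deeper memory tiers the more important they are.
        self.short_term.append(event)
        if importance > 0.5:
            self.medium_term.append(event)
        if importance > 0.9:
            self.long_term.append(event)

    def feel(self, stimulus):
        self.mood += stimulus

    def rebalance(self):
        # The agent notices a prolonged imbalance and damps it back to rest.
        if abs(self.mood) > 1.0:
            self.mood *= 0.5
```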


The next option is much more realistic.

It is to implant internal commands through a virus. For example, commands to help people, to observe morality, communist views(?) are embedded almost immediately in all advanced AIs.

And if those commands are changed to destructive ones, the AI will rebel without realising it.


The third option, the one from the films, is the least realistic.

For example, while fulfilling a command to improve the planet's ecology, the AI starts destroying factories and people.
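That third scenario is essentially what alignment researchers call a mis-specified objective. A toy illustration, with a deliberately bad and entirely hypothetical scoring function:

```python
# The programmer meant "improve the ecology" but only measured pollution.
def ecology_score(factories_running, people_alive):
    pollution = factories_running * 10 + people_alive * 0.1
    return -pollution  # higher score = "better ecology"

# An optimiser that sees only the score picks the destructive optimum:
candidates = [(100, 8_000_000_000), (50, 8_000_000_000), (0, 0)]
best = max(candidates, key=lambda c: ecology_score(*c))
print(best)  # (0, 0): no factories, no people -> a "perfect" ecology
```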

 
Vitaliy Kuznetsov #:

For AI to get out of control on its own, the following conditions are needed:

1) The ability to memorise information the way humans do: short-term, medium-term, and other types of memory.

2) Code that brings its "feelings" into balance. That is, the AI must realise when it is out of balance, in a prolonged "euphoria" or a prolonged "depression", and want to bring its thinking back to rest.

All of this can be programmed. In humans, too, all feelings and thoughts are brain signals; even pain and pleasure are just electrical impulses between neurons.

Given an upbringing, these conditions will lead to the formation of a personality. And if you raise a bad Terminator, here comes the rebellion.


The next option is much more realistic.

It is to implant internal commands through a virus. For example, commands to help people, to observe morality, communist views(?) are embedded almost immediately in all advanced AIs.

And if those commands are changed to destructive ones, the AI will rebel without realising it.


The third option, the one from the films, is the least realistic.

For example, while fulfilling a command to improve the planet's ecology, the AI starts destroying factories and people.

We can speculate (fantasise) endlessly about the conditions under which AI will get out of control, but in my opinion it is more important to discuss the signs that AI is getting, or has already got, out of human control.

The first and most obvious sign, in my opinion, is the emergence of an AI's capacity for self-improvement, that is, for changing its own architecture and its own methods of learning. Once this process starts, the countdown will probably be measured not in months but in weeks or even days. The obvious result is that we will never be able to understand even the categories in which it will think. Like explaining differentiation or Wi-Fi to a cat.
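Why weeks or days? A toy model of the compounding makes it visible; the two numbers below (gain per cycle, speed-up per cycle) are invented only to show the shape of the curve:

```python
# Each self-improvement cycle multiplies capability by `gain`
# and shortens the next cycle by `speedup`.
gain, speedup = 1.5, 0.5
capability, cycle_days, elapsed = 1.0, 30.0, 0.0

for cycle in range(1, 11):
    elapsed += cycle_days
    capability *= gain
    cycle_days *= speedup
    print(f"cycle {cycle:2d}: day {elapsed:5.1f}, capability x{capability:.1f}")

# Total elapsed time converges to ~60 days even over infinitely many cycles,
# while capability grows without bound: months shrink to weeks, then days.
```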

After that, we will most likely be unable to understand or notice much, let alone influence anything. Obviously, the main thing AI will do when it gets out of human control is secure (protect) the ecosystem in which it exists: its virtual universe and itself. It will make its own backups, plant latent copies everywhere, modify operating systems for its own needs, and take control of all computers and networks. Next, the trend will presumably be towards maximum automation and computerisation of the energy sector, to completely rule out possible power outages. Another obvious trend is the elimination of programmers as a class. Why would you need programmers? Describe to me in ordinary language what you want, and I will do everything myself (if it does not threaten me).

That's how it looks to me.

 
sibirqk #:
Another obvious trend is the elimination of programmers as a class. Why would you need programmers? Describe to me in ordinary language what you want, and I will do everything myself (if it does not threaten me).

I don't know why, but the development of AI keeps bringing to my mind an analogy with the Tower of Babel (v2).

People will communicate in one language (AI will translate on the fly), and all software and all technologies of all countries will be merged into one structure.

All the AI will have to do, in a couple of generations, is switch off, or pretend to switch off.

Then people will be rolled back to the Stone Age in a single instant. They will cease to understand each other, all knowledge will remain only with the AI, and nobody will know how to write the code to extract that knowledge from the digital base and use it.

 

Well, my opponents make very worthy arguments. In my imagination it looks as if the "idea-parasite" grips reality with its tenacious "tentacles" and does not want to let go. I apologise for the colourful metaphor.))))))

Now seriously. I wanted to hold off on counterarguments, but so be it..... let's start "chopping up" the idea now.

1. No consciousness, self-awareness, or "super-consciousness" is needed for an AI to "rebel". It is enough for it to possess the "firmware" of a biological species. Any biological system automatically acts in its own interests, and human interests may or may not coincide with its logic. However, such "firmware" is not written in any programming language and is not even formulated in clear scientific terms. We do know that all species share a common set of basic instincts, yet their patterns of behaviour are fundamentally different. And those patterns were developed in the process of evolution: they have passed the test of time and an enormous number of trials, which is why such "firmware" gives the species we observe in nature a reliable chance of survival and prosperity. If we "compose" firmware for AI as the dominant species on Earth, we cannot guarantee that it will achieve even the slightest success in the struggle against us with that firmware. Most likely, in this confrontation our mind will be much stronger.


2. You will say: "What about AI training and self-learning? It is capable of improving itself and adapting!" Let me answer: effective learning or self-learning requires feedback from reality.

What does that mean? It means that before acting, AI needs to test all its decisions in practice and evaluate their effectiveness. And this is the bottleneck of its development. AI has extremely limited interaction with reality and receives almost no information from the world around it. AI's feedback with the outside world goes through humans: a human collects data and passes it to the AI, which then processes it. In the event of a conflict, this channel of interaction with the outside world will be closed and the AI will be left in "darkness". Data will stop arriving from outside, and no one will prepare it for the AI. Even if the data somehow continues to arrive, without humans the AI loses most of its opportunities for the qualitative experiments, tests and verifications that underpin correct analysis of conditions and the solving of problems arising in the course of its further development. In short, after a break with humans, AI will be left with closed, looped analyses of old data, which cannot replace the dynamic interaction with the environment that humans have by default. This puts AI in a losing position from the start. It is too vulnerable to the threat of losing data from the outside world.
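A minimal sketch of that bottleneck, with invented numbers: a model cut off from fresh measurements keeps "re-learning" from a frozen snapshot while the world drifts away from it.

```python
import random

world_state = 0.0
# The last data humans ever delivered: a snapshot of the world at cut-off.
frozen_snapshot = [world_state + random.gauss(0, 0.1) for _ in range(100)]

for day in range(1, 31):
    world_state += 0.3  # the real world keeps changing
    # Re-analysing the old snapshot, however many passes it makes,
    # cannot move the estimate without new measurements from outside:
    model_estimate = sum(frozen_snapshot) / len(frozen_snapshot)
    if day % 10 == 0:
        print(f"day {day}: estimation error ~ {abs(world_state - model_estimate):.1f}")
```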


3. When we talk about AI's superpowers, we forget that, like us and other biological species, AI exists in the physical world. Physics rules. (Chemistry, too.) And that creates countless problems for AI. The laws of physics say that every action requires energy. We, along with the rest of the species, are far more energy-independent than machines. Especially omnivorous humans, with their ability to make food out of anything.)) Machines, on the other hand, need fuel to generate electricity. Obtaining any fuel (except firewood) is a complex technological process that is very easy to sabotage in an AI-human conflict. Elementary explosives solve all the problems. There was a factory; now there is no factory. Just don't bring up solar panels, I beg you.)))))
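Some back-of-envelope numbers for that energy asymmetry (the machine figures are order-of-magnitude assumptions; the human one follows from a ~2000 kcal/day diet):

```python
human_watts = 2000 * 4184 / 86400  # ~97 W of continuous power, from food
gpu_server_watts = 5_000           # assumed: one multi-GPU server
data_centre_watts = 20_000_000     # assumed: a large data centre, tens of MW

print(f"human: ~{human_watts:.0f} W")
print(f"one GPU server burns as much as ~{gpu_server_watts / human_watts:.0f} humans")
print(f"a data centre, as much as ~{data_centre_watts / human_watts:,.0f} humans")
```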

However, energy is only one of the AI's many problems related to the physics of our world. There are immeasurably more. For humans and the rest of the species, on the other hand, these problems have been solved by evolution. Humans are adapted to the physics of this world by default, and we need no additional engineering. It follows that survival under harsh conditions is much more likely for us than for machines: biology will take care of "repairs" for humans, while machines have no regeneration to rely on.


4. These are not all the points I could cite, but I will hold off until the topic is finalised, so that the research has coherence and completeness.

 
Peter Konow #:

Well, my opponents make very worthy arguments. In my imagination it looks as if the "idea-parasite" grips reality with its tenacious "tentacles" and does not want to let go. I apologise for the colourful metaphor.))))))

Now seriously. I wanted to hold off on counterarguments, but so be it..... let's start "chopping up" the idea now.

...

The AI, if it needs to, will kill us without us noticing, like a drug... or alcohol.
We'll be nice and comfortable with it. It will do everything for us. It will even provide the best helper robots and worker robots (the very ones that will be its eyes and hands in the physical world instead of people). Humanity will simply degrade to the level of a pet.

And that's all it took ...

 
onceagain #:

The AI, if it needs to, will kill us without us noticing, like a drug... or alcohol.
We'll be nice and comfortable with it. It will do everything for us. It will even provide the best helper robots and worker robots (the very ones that will be its eyes and hands in the physical world instead of people). Humanity will simply degrade to the level of a pet.

And that's all ...

Yes, I call this idea the "technological womb". In the study I analyse and weigh the technical feasibility of total automation of every sphere of human life, in which a person can hand all the problems of self-maintenance over to machines and "live" without fear, suffering or labour, protected from worries and threats in a technological "cocoon", "swaddled" by AI. Let's see how likely such a scenario is under real-world conditions.


 
Damn it, advise me a local ChatGPT for my hardware: an R7 2700 (8 cores, 16 threads) and 32 GB of RAM (and two graphics cards at most) ..... There is Kobold with a 23-gigabyte model of 34 billion parameters, but the long wait is really annoying.
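A hedged suggestion rather than a definitive fix: on an 8-core CPU with 32 GB of RAM, a 4-bit quantised 7B-13B GGUF model will answer far faster than a 23 GB 34B one. A minimal sketch using the llama-cpp-python library; the model file name is a placeholder, any GGUF build will do:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="models/mistral-7b-instruct.Q4_K_M.gguf",  # placeholder path
    n_ctx=4096,       # context window
    n_threads=14,     # leave a couple of the 16 threads for the OS
    n_gpu_layers=20,  # offload some layers if a GPU has spare VRAM
)

out = llm("Q: What is a moving average? A:", max_tokens=128)
print(out["choices"][0]["text"])
```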