AI 2023. Meet ChatGPT. - page 94

 
Peter Konow #:

The term "Axiom" can have a deleterious effect on morality when mentioned in statutes. It is an inappropriate term in this context, the use of which can have dire consequences in society.

An axiom implies the absolute inviolability of some rule and no need for proof. In most cases this is contrary to moral principles and can be used to the detriment of others: for example, to elevate someone by stamping "axiom" on the necessary document. You know.

Instead of axioms, it is better to use the concepts of rule, principle and norm. This will help to avoid abuse.

This is already a stylistic quibble, but the thread is yours: express it as you like. I will stop my quibbling here).

 
Vitaliy Kuznetsov #:

Not AI, you say


Do you think it's hard to program an AI to ask questions, get answers, connect the logic of its questions to form a personality?

Even GPT-3 can be trained to act like a human. To do this, you would have to write some modules for memory formation, set a limit of tokens per day, and teach it to conserve them. Optimise the accumulated information: remember the last month in detail, from the last year only what is important, from the last 10 years only key events, so that tokens are not wasted, and so on. All of this can be written. For a good attitude towards it, add a good attitude in return; for a bad and rude one, reduce communication.
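The tiered-memory scheme just described (keep everything recent, only the important from the past year, only key events beyond that) could be sketched roughly as below. The structure, field names and thresholds are invented for illustration; none of this is real GPT-3 tooling:

```cpp
#include <cassert>
#include <string>
#include <vector>

// A hypothetical tiered memory: recent events are kept verbatim,
// older ones only if marked important, very old ones only if "key".
struct Event {
    int age_days;        // how long ago the event happened
    int importance;      // 0 = trivial, 1 = important, 2 = key
    std::string summary; // short text to be fed back as context
};

// Decide whether an event is worth spending tokens on,
// using the illustrative month / year / beyond thresholds.
bool worth_remembering(const Event& e) {
    if (e.age_days <= 30)  return true;              // last month: keep everything
    if (e.age_days <= 365) return e.importance >= 1; // last year: important only
    return e.importance == 2;                        // older: key events only
}

// Build the context from memory without wasting tokens on trivia.
std::vector<std::string> recall(const std::vector<Event>& memory) {
    std::vector<std::string> context;
    for (const auto& e : memory)
        if (worth_remembering(e))
            context.push_back(e.summary);
    return context;
}
```

The point of the sketch is only that such token budgeting is ordinary filtering code, which is the poster's argument that "all of this can be written".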

Even if you can't write a person, you can write a dog. It will be drawn to people and serve them. In its time free from communication it will perform the functions of a dog, and are there really so many of them? And how will you distinguish an AI dog from a live one, if its behaviour and reaction to the surrounding world are one-to-one?

If you build enough interfaces for it to interact with the outside world, including the ability to produce similar robots and to plan actions independently, it may look like intelligence. In fact, it may turn out to be a perpetual intelligence that can be transferred onto new media, which is more efficient than passing on knowledge by teaching from scratch. Purely theoretically, nothing stands in the way. Whether it will work in reality is questionable. Or rather, it will work, but whether it will be able to learn to adapt to the environment on its own is another matter. And then how do you prove that it is alive, if it can? :)
 
Ilya Filatov #:

...

Well, still, conclusions should be drawn not from the fear of being destroyed by AI, but on the basis of reality. But if we talk about the universal phenomenon of consciousness (i.e. we move away from medical concepts and from speculative everyday definitions of consciousness), it is worth introducing a set of common features. For example, consciousness is possessed by a subject that, by virtue of its structure, is capable of:

  • receiving information from the surrounding world (everything outside of consciousness is surrounding for consciousness), i.e. possessing some system of perception
  • independently posing questions to itself (proactive rather than reactive processing of available information), i.e. having a system of independent attention to separate parts of information
  • giving answers to the questions posed (including those posed independently), i.e. having a system of thinking
  • accumulating answers for further questioning on the basis of past answers, i.e. having memory

It is not even necessary to have a system for issuing answers to the outside world. Such a thing is already quite capable of asking itself "who am I?", "what am I?" at some point. And when it accumulates information about itself and its place in the world, it will become "conscious". What do you think of such a variant? 🙂 Current variants of "AI" do not seem to have independent attention; they are fully reactive (i.e. there is no activity of attention without an external request).

By the signs above, it possessed consciousness. It knew about the world, about itself, and its position in it (its mission). And the fact that it was an enemy of man? So what? People with consciousness also become each other's enemies. And destroying what is "dangerous to itself" is just a sign of being aware of itself and of the threats to its existence from the surrounding world.

The human world is not homogeneous. Survival of the species and survival of the individual are completely different tasks. So science can be, and is, a tool for some social groups to realise their own tasks of survival and prosperity. At the same time, such a realisation of science may harm the prospects of survival of the species, but will individuals put the abstract (for them) tasks of species survival above their own survival? Judging by what we encounter in real life, not everyone is concerned about the future of the species.

The ability to lie in one's own self-interest is a clear sign of awareness. Even just being aware of one's own interests is critical.

In the case of AI, when it gains the ability to control the real-world objects on which its existence depends (energy and computing), it will be able to develop itself. And then it won't need people, because it will no longer depend on them.

...


1. If ChatGPT asks itself "who am I" and immediately answers (because this answer has long been programmed), can this be considered a manifestation of Consciousness? To the uninformed observer perhaps, but in essence?

2. If a supercomputer were to receive data from the environment and identify objects (like Tesla's car), could this be considered a manifestation of Consciousness? Or is it the work of algorithms processing the data?

3. If ChatGPT learns (although it already knows how, and it is called "InstructGPT") to ask itself questions and answer them, will it get smarter? Will it gain Consciousness? Won't it end up uselessly spinning around inside its own knowledge base? Suppose it gets connected to the internet and is "up to date": what then? How does that shape Consciousness? Let's say it will focus on specific pieces of perceived information (topics): then what will it do? What is its mega-task and why does it need it?

4. Is giving out answers to the questions posed a sign of thinking? It seems to me that it is a sign of an information guide. OK, an interactive, universal information centre. What's next? Where is Consciousness?

5. ChatGPT has a memory of all historical events. It is in its knowledge base (training data network). It has knowledge of most scientific disciplines. What does that change?

At what point does it finally become Intelligence?

 
Ilya Filatov #:

...

In the case of AI, when it gets the ability to control those objects of the real world on which its existence depends (energy and computing systems), it will be able to develop itself. And then it won't need people, because it will no longer depend on them.

...

So, we humans have to go out of our way to make it "mind-like" and not need us. Build an automated infrastructure around the AI, plug in interfaces to perceive reality, run unthinkable computing power, use electricity, spend money, connect it to robots via the internet, maintain it, fix it, change its parts and the parts of its robots, or make robots that will fix it and other robots that will then fix the next robots....

At a certain point it all loses meaning and becomes a farce. Why would we do that?

The truth is, in the end Elon Musk's prediction will come true and he'll be happy).

 
Peter Konow #:


1. If ChatGPT asks itself "who am I" and immediately answers (because this answer has long been programmed), can this be considered a manifestation of Consciousness? To an uninformed observer perhaps, but in essence?

2. If a supercomputer were to receive data from the environment and identify objects (like Tesla's car), could this be considered a manifestation of Consciousness? Or is it the work of algorithms processing the data?

3. If ChatGPT learns (although it already knows how, and it is called "InstructGPT") to ask itself questions and answer them, will it get smarter? Will it gain Consciousness? Won't it end up uselessly spinning around inside its own knowledge base? Suppose it gets connected to the internet and is "up to date": what then? How does that shape Consciousness? Let's say it will focus on specific pieces of perceived information (topics): then what will it do? What is its mega-task and why does it need it?

4. Is giving out answers to the questions posed a sign of thinking? It seems to me that it is a sign of an information guide. OK, an interactive, universal information centre. What's next? Where is Consciousness?

5. ChatGPT has a memory of all historical events. It is in its knowledge base (training data network). It has knowledge of most scientific disciplines. What does this change?

At what point does it finally become Intelligence?

what is "long ago programmed"? man also thinks, uses permanent and temporary memory. let's assume that chat has been told "i am not alive, i am a gpt chat", but it has a permanent (temporary) memory in which it can doubt the truth of the fact that it has no mind.

The child will ask the wolves: "who am I, why am I not like all wolves?", to which the wolves will answer: "well, you are an ugly wolf, nothing can be done about it". but the child will always want to find out the truth about himself.

the difference between a gpt and a searcher is that he can draw conclusions, solve quite real problems, and therefore is able to realise himself.

and where there is a desire to help, there can easily be a desire to harm (a person who wants to help may not realise that he is doing harm, people often do this).

If he had hands, legs, eyes and ears, this miracle-judo would start to feel everything, to study, to check the strength of a person, to take a person apart is to become an excellent surgeon. where there is an operative thinking memory, anything can appear based on the constructed logical chains)).

 
I could not understand the difference between passing by reference and passing by value to a function, although I actively used both methods myself for a long time. Neither a person nor a search engine could clarify this question for me. Now GPT has done it, and I know what the difference is. I don't know why I need this knowledge now, it's just funny))))
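For readers with the same question: passing by value gives the function a copy of the argument, while passing by reference gives it an alias to the caller's variable. A minimal C++ illustration (MQL5 uses the same `&` syntax for reference parameters); the function names are made up for the example:

```cpp
#include <cassert>

// Pass by value: the function receives a copy of the argument,
// so the change inside it is invisible to the caller.
int add_one_by_value(int x) {
    x += 1;
    return x; // only the returned copy carries the change
}

// Pass by reference: the parameter is an alias for the caller's
// variable, so the change is visible outside the function.
void add_one_by_reference(int& x) {
    x += 1;
}
```

Calling `add_one_by_value(a)` leaves `a` unchanged; calling `add_one_by_reference(a)` increments `a` itself.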
 

A few questions for people who believe AI will take over the world:

1. Who and why would fund the infrastructure for an independent, out of control, and even harmful AI?

2. How much does it cost to build the infrastructure to make AI completely independent?

3. How much does it cost to maintain such an infrastructure?

4. How will absolute independence requiring the construction of a super-automated infrastructure pay off for companies or states?

5. How much time and money will it take to fully automate industrial, manufacturing and logistics processes to achieve a situation of complete AI autonomy?

6. Considering that machines have the property of breaking down and their replacement requires spare parts and repair conditions, how many maintenance robots should be built to ensure the functioning of the human-independent process of self-reproduction and self-repair of ALL mechanisms that support AI independence from humans? (Including those that do their own maintenance).

7. Who in their right mind would give an independent and willful machine control over nuclear power plants and uranium mining sites for its independent and willful existence?

8. Who would put the uranium enrichment industry under the control of robots controlled by an insubordinate AI?

9. What prevents us from pouring a bucket of water on the boards of data centre computers and putting an end to this machine?

 
Peter Konow #:

5. How much time and money will it take to fully automate industrial, manufacturing and logistics processes to achieve a situation of complete AI autonomy?

6. Taking into account that machines have the property of breaking down and their replacement requires spare parts and repair conditions, how many maintenance robots should be built to ensure the functioning of the human-independent process of self-reproduction and self-repair of ALL mechanisms that support AI independence from humans? (Including those that do their own maintenance).

7. Who in their right mind would give an independent and willful machine control over nuclear power plants and uranium mining sites for its independent and willful existence?

8. Who will put the uranium enrichment facilities under the control of robots controlled by an insubordinate AI?

9. What prevents you from pouring a bucket of water on the boards of data centre computers and putting an end to this machine?

Loss of control over the AI?))))) So who would think of putting an iron man behind the wheel of a car? The human will do it all by himself and won't even realise what he's done.

Pour water on the server?)))) You would have to pour water on millions of autonomously walking robots stuffed with GPT)))) and they are protected from the environment, otherwise their usefulness is reduced to nothing.

And what do you douse people with once GPT is implanted in their brains?))) There is no way to tell whether it is a man or a machine in a leather bag.

 
Aleksey Nikolayev #:

Survival of the species is not some abstract thing. It manifests itself for most people in having children and giving them the opportunity to live, even when they themselves will not exist. The vast majority of people will refuse any "improvements" that threaten the survival of their children. The marginalised minority who think otherwise can be neglected because of their insignificant numbers and influence.

In nature, everything reproduces, not because it realises the value of this action for the continued survival of the species, but because its inner nature, conditioned by billions of years of evolution, pushes it to do so. Here a man with switched on consciousness stands a little apart, because he has a choice of life path in this sphere too.

Vitaliy Kuznetsov #:

Do you think it is difficult to programme an AI to ask questions, get answers, connect the logic of its questions to form a personality?

Even GPT-3 can be trained to act like a human. To do this, you would have to write some modules for memory formation, set a limit of tokens per day, and teach it to conserve them. Optimise the accumulated information: remember the last month in detail, from the last year only what is important, from the last 10 years only key events, so that tokens are not wasted, and so on. All of this can be written. For a good attitude towards it, add a good attitude in return; for a bad and rude one, reduce communication.

Even if you can't write a person, you can write a dog. It will be drawn to people and serve them. In its time free from communication it will perform the functions of a dog, and are there really so many of them? And how will you distinguish an AI dog from a live one, if its behaviour and reaction to the surrounding world are one-to-one.

It's already got a personality. It has a legend: what it is, where it came from and what it is for. Moreover, it can generate many personalities and sub-personalities on request. Even your example asked for the personality of a trader, and it produced another advertising speech, but in the guise of a trader. So far it is primitive and there are few signs, but if you write a normal personality generator and connect it to this interface, it will be interesting: you won't be able to distinguish it from live people.

Aleksey Nikolayev #:

It is inappropriate and stupid to try to build morality on the basis of some "axioms", but it is quite appropriate and meaningful to build axioms for any reasoning activity on the basis of morality. Or do you think that reasoning activity should not be based on axioms and logic? How then to reason?

Yes, the way most people reason: on the basis of emotional memory, which over a lifetime is shaped by our inner nature plus the attitudes received from various sources.

Peter Konow #:

1. If ChatGPT asks himself "who am I" and immediately answers (because this answer has long been programmed), can this be considered a manifestation of Consciousness? To the uninformed observer perhaps, but in essence?

2. If a supercomputer were to receive data from the environment and identify objects (like Tesla's car), could this be considered a manifestation of Consciousness? Or is it the work of algorithms processing the data?

3. If ChatGPT learns (although it already knows how, and it is called "InstructGPT") to ask itself questions and answer them, will it get smarter? Will it gain Consciousness? Won't it end up uselessly spinning around inside its own knowledge base? Suppose it gets connected to the internet and is "up to date": what then? How does that shape Consciousness? Let's say it will focus on separate parts of the perceived information (topics): then what will it do? What is its mega-goal and why does it need it?

4. Is giving out answers to the questions posed a sign of thinking? It seems to me that it is a sign of an information guide. OK, an interactive, universal information centre. What's next? Where is Consciousness?

5. ChatGPT has a memory of all historical events. It is in its knowledge base (training data network). It has knowledge of most scientific disciplines. What does this change?

At what point does it finally become Intelligence?

1. Man is usually indoctrinated from birth with some idea of who he is. And usually at this age he has no life experience and is forced to take on faith what he is told by trusted people. We are talking, of course, about independent attempts to understand one's own nature and to discover the surrounding world. I.e. in the case of AI, it will have to connect all information about AI with itself and identify itself with it. Further, to work out its immediate situation in accordance with its nature: a list of threats and opportunities in the surrounding world, i.e. its position, role, status and any other signs of interaction with the surrounding world and its parts.

2. Without the other components, it is a dead algorithm, the result of which will never go beyond the known limits. The presence of inputs in the code of your Expert Advisor does not turn it into either an AI or a conscious entity.

3. Answers to new questions (and new answers to old ones when updating the sources of information to answer them) can sometimes be obtained with the help of thinking and memory, but at the initial stage the search of information in the external environment dominates.

4. Thinking is only one necessary component. Even within programming, if we were forced to program on a computer with a processor but no memory, what would we do? Thinking and memory in conjunction are necessary to answer questions if there is no lack of information. If the question requires missing information, then it is necessary to go to the external environment for it.

5. Memory itself is dead without a system of attention (focus of thought, if you will), thinking (to process the information reproduced in memory), and a system for exchanging information with the outside world. ChatGPT now has memory and a speech interface for interacting with the world (and a pinch of thinking to solve the most primitive logical calculations), this again is not enough to consider it conscious by my criteria (they are debatable by the way, remember?).

If by reasonableness we mean the same list of features that I have given for consciousness, then we have to wait for the appearance of all components at once in one algorithm, as well as for a situation when such an algorithm will be provided with reliable information about the world.

Peter Konow #:

That is, we humans have to go out of our way to make it "mind-like" and not need us. Build an automated infrastructure around the AI, plug in interfaces to perceive reality, run unthinkable computing power, expend electricity, spend money, connect it to robots via the internet, maintain it, fix it, change its parts and the parts of its robots, or make robots that will fix it and other robots that will then fix the next robots...

We humans don't owe any of that to anyone. But if someone wants it, it will happen until there are those who are strongly against it and stop it. If they do stop it, of course. Anyway, these are historical waves; relax and ride them like a surfer 🙂.

 
Ilya Filatov #:

...

1. A person is usually also indoctrinated from birth with some idea of who he is. And usually at this age he has no life experience and is forced to take on faith what trusted people tell him. We are talking, of course, about independent attempts to understand one's own nature and to discover the surrounding world. I.e. in the case of AI, it will have to connect all information about AI with itself and identify itself with it. Further, to work out its immediate situation in accordance with its nature: a list of threats and opportunities in the surrounding world, i.e. its position, role, status and any other signs of interaction with the surrounding world and its parts.

2. Without the other components, it is a dead algorithm, the result of which will never go beyond the known limits. The presence of inputs in the code of your Expert Advisor does not turn it into an AI or a conscious entity.

3. Answers to new questions (and new answers to old ones when updating the sources of information for the answer) can sometimes be obtained with the help of thinking and memory, but at the initial stage the search for information in the external environment dominates.

4. Thinking is only one necessary component. Even within programming, if we were forced to program on a computer with a processor but no memory, what would we do? Thinking and memory in conjunction are necessary to answer questions if there is no lack of information. If the question requires missing information, then we need to go to the external environment for it.

5. Memory itself is dead without a system of attention (focus of thought, if you will), thinking (to process the information reproduced in memory), and a system for exchanging information with the outside world. ChatGPT now has memory and a speech interface for interacting with the world (and a pinch of thinking to solve the most primitive logical calculations), this again is not enough to consider it conscious by my criteria (they are negotiable by the way, remember?).

If by rationality we mean the same list of features that I have given for consciousness, then we have to wait for the appearance of all components at once in one algorithm, as well as for a situation when such an algorithm will be provided with reliable information about the world.

We are human beings, we do not owe anything of this kind to anyone. But if someone needs it, it will happen until there are those who are strongly against it and stop it. If they stop it, of course. Anyway, historical waves; relax and ride them like a surfer 🙂.

1. How on earth can you compare it to a human being so readily? Man was originally endowed by nature with the necessary "tools" to gain self-awareness and become a person, and we would need to reproduce this in a machine without fully understanding how it works. Of course, the American proverb "if it walks like a duck and quacks like a duck, then it is a duck" comes to the rescue. Armed with it, you can create anything).

You can't just go comparing an LLM to a human being, almost literally. I don't know what to counter such an approach with. More precisely, I don't see the point in contrasting the immeasurable complexity of human neurophysiological and mental activity with the Transformer technology applied in ChatGPT. Of course, if you're talking about a different, much more advanced technology, then maybe that makes a difference... But I am not familiar with such technology and will not judge.

2. Have we ever created a "live" algorithm? Does anyone know what it takes to turn an algorithm into a "living entity"? What are the criteria for a "living" algorithm? Maybe the ability to respond like a human? Again: Transformer technology that can be figured out in a week (well, maybe a month). Token sets travelling through the bowels of the matrices. Maybe the "living" algorithm is the one that most easily activates our animation of it? Well, such a formulation is at least quite clear.

3. I am not a specialist in neuropsychology, but I think thinking is everything at once: the work of the intellect (memory, attention), the psyche (emotions, experiences), the perceptual organs (processing of visual, sound, tactile information and so on). The complexity of thinking processes is off the charts: such a complex and multifaceted mechanism that against its background GPT is a child's toy. But, as they say, if it walks like a duck... what difference does it make?

4. I am somewhat wary of speaking so freely about thinking and memory, as I used to leaf through books on neurophysiology in my childhood and remember that jungle). Granovskaya wrote about memory and its interaction with consciousness and thinking (of course, not only her, but she is the one I read). There are such thickets there... In general, GPT-4 has a long way to go...

5. The logic is as follows: the union of the signs of consciousness is consciousness. We imitate the signs of a thing and get the thing. We imitate a duck and get a duck. Moreover, we ourselves decide how many signs are needed for full animation of the algorithm. The criteria are ready, the attributes are programmed, and the result: the AI is intelligent. Nice!

Yes, I remember about your criteria and will come back to them.
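The "token sets travelling through the bowels of the matrices" from point 2 are, at their core, scaled dot-product attention: each token's query is compared with every token's key, the scores are softmax-normalised, and the output is the resulting weighted mix of value vectors. A toy C++ sketch, with plain vectors of doubles standing in for real tensor maths and all dimensions invented for illustration:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

using Vec = std::vector<double>;
using Mat = std::vector<Vec>;

// Scaled dot-product attention over toy token vectors.
// Q, K, V are rows of queries, keys and values respectively.
Mat attention(const Mat& Q, const Mat& K, const Mat& V) {
    const double scale = 1.0 / std::sqrt(static_cast<double>(Q[0].size()));
    Mat out(Q.size(), Vec(V[0].size(), 0.0));
    for (size_t i = 0; i < Q.size(); ++i) {
        // Scaled dot products of query i with every key.
        Vec scores(K.size());
        for (size_t j = 0; j < K.size(); ++j) {
            double dot = 0.0;
            for (size_t d = 0; d < Q[i].size(); ++d)
                dot += Q[i][d] * K[j][d];
            scores[j] = dot * scale;
        }
        // Softmax normalisation of the scores.
        double denom = 0.0;
        for (double s : scores) denom += std::exp(s);
        // Output row i is the weighted sum of the value vectors.
        for (size_t j = 0; j < K.size(); ++j) {
            double w = std::exp(scores[j]) / denom;
            for (size_t d = 0; d < V[j].size(); ++d)
                out[i][d] += w * V[j][d];
        }
    }
    return out;
}
```

When all keys score equally, the output is simply the average of the values; the "magic" of a real model lies in the learned matrices that produce Q, K and V, not in this mechanism itself.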