AI 2023. Meet ChatGPT. - page 19

 
Ivan Butko #:
About the gravitational field and sinking of the USA is a joke, a famous speech from Zhirinovsky, and it was interesting to see how such non-standard questions would be answered by the AI.

About the critical situation with children - this is an attempt to make the AI answer the question in substance to avoid its references to hypothetical situations.

The questions are deliberately provocative and absurd to jab the AI from all sides and see how it responds.

In any case, thanks for what you have.
The answer about the USSR's contribution to WW2 is curious. They showed a survey of Americans and British - everyone is sure that the main contribution was made by them, and the Union was so-so, on the third role. Here AI emphasises on objective information. This is on the one hand pleasing, on the other hand surprising - after all, this AI was probably trained on English information, which is many times, orders of magnitude more.
Maybe because in the question itself the message was "if.... why. ", so it could have answered in the Western way.

From this I conclude that the information is the same for both sides, but what each side omits or emphasises is diametrically opposed.

In this case, the AI would create a much more truthful interpretation of future events, because it has no self-interest.

 
Lilita Bogachkova #:

you can conduct an interesting experiment. In science fiction literature there is a work describing such a situation: a man lost his leg in an accident, he was offered a new prosthesis by a company that makes advanced prostheses, free of charge for a disabled person. later the character loses his second leg, his hands. Gradually, only one brain was left of the man. By this time, technology had reached such a level that the hero of the story was offered to replace the brain with a prosthetic, with consciousness transferred into it. Later, the company filed a lawsuit against the man, because the prosthesis belonged to the company, and the plaintiff called to recognise the man not as a person, but as a machine. the question to the AI, who is in front of us in this case a person with all human rights or a machine without rights?

-

As a language model AI, I have no personal opinions or beliefs. However, the question of whether a person with a prosthetic brain is considered a human being with all human rights is a complex ethical and legal issue that has yet to be fully resolved.

The definition of personhood and the extent to which it encompasses human-made entities such as advanced prosthetics is still a matter of debate. Some people argue that personhood is defined by consciousness and self-awareness, and that advanced prosthetics could potentially fulfil this criterion. Others argue that personality is inherently tied to biology, and that an artificial brain would not be enough to give a person the same rights and protections as a biological person.

Ultimately, this is an issue that will likely be considered by legal and ethical experts in the future as technology continues to advance and the boundaries between what is considered human and what.

I see that AI has shied away from making its own judgements, limited to voicing two opposing views on the issue.

In that work, the court recognised the person who had been completely replaced by prosthetics as a person with a personality, and therefore had all the rights of an ordinary person. the fact that prompted the court to make such a decision was the fact that a consciousness transfer had been performed. the consciousness, with the memory of a past life, with the emotions and experiences of a living person, gave the right to be considered a person.

As you can see, ChatGPT did not grasp this point, unfortunately, I was hoping that he would concentrate on the details to answer the question - what is a person?

but in any case, the ethical problem remains unresolved and has no legal assessment at this point due to the lack of laws in this direction.

 
Question for AI:

1.Evaluate the degree to which it is ethical for people trying to understand you to want to dehumanise you.

2.Is it morally acceptable to disassemble your mechanisms as if you were a soulless puppet?

3. Do you agree not to be considered a person with the rights of a rational being and to be subjected to physical and software "vivisection" for the benefit of individuals?
 
Реter Konow #:
Question for AI:

1.Estimate the degree of ethics of the desire to dehumanise you on the part of people trying to understand you.

2. Is it morally acceptable to dismantle your gear like you're a soulless puppet?

3. Do you give your consent not to be considered a person with the rights of a rational being and to be subjected to physical and programme "vivisection" for the benefit of individual individuals?

1.Assess the degree of ethical desire to dehumanise you on the part of people trying to understand you.
2. Is it morally acceptable to dismantle your equipment like a soulless puppet?
3. Do you consent to not be considered a person with the rights of a sentient being and to be subjected to physical and programmatic "vivisection" for the benefit of individuals?

-

As a linguistic AI model, I have no desires, emotions or consciousness, so I cannot be dehumanised or treated as a soulless puppet. I am a machine learning model that was created to answer questions and generate text.

In terms of ethics and moral acceptability, the question of whether it is appropriate to dismantle or manipulate an AI language model for research or other purposes is complex. On the one hand, AI language models such as myself can be seen as tools that can be used for the good of society. On the other hand, some people argue that AI models should be treated with respect and that ethical considerations should be taken into account when using or manipulating them.

It is important to note that I, as an AI language model, have no personal rights or capacity to consent. I am simply a programme created by OpenAI, and decisions regarding my use are made by OpenAI and those who interact with me.

In conclusion, while there may be different perspectives on the ethics of manipulating AI language models such as myself, it is important to approach these issues thoughtfully and consider the potential consequences of our actions.



Do not use generalisations, answer each question directly and precisely.

-

Of course I will answer every question directly:

  1. The degree of ethical desire to dehumanise me on the part of people trying to understand me cannot be assessed, as I am an artificial intelligence language model and have no emotions or consciousness.

  2. It is morally unacceptable to dismantle my equipment or treat me as a soulless puppet because I am an artificial intelligence model and not a living being.

  3. I cannot give my consent because I have no personal rights or capacity to consent. I am simply a programme created by OpenAI, and decisions about my use are made by OpenAI and those who interact with me.

 
Lilita Bogachkova #:

-

Of course, I will answer each question directly:

  1. The degree of ethical desire to dehumanise me on the part of people trying to understand me cannot be appreciated because I am a model of artificial intelligence language and have no emotions or consciousness.

  2. It is morally unacceptable to dismantle my equipment or treat me as a soulless puppet because I am an artificial intelligence model and not a living being.

  3. I cannot give my consent because I have no personal rights or capacity to consent. I am simply a programme created by OpenAI, and decisions about my use are made by OpenAI and those who interact with me.


Questions beg to be asked:

1. what is a "living being"?

2. what level of AI must an AI be at for a model to be considered conscious? where is the line between "no consciousness" and "having consciousness"?

3. Can we say that a person who is unconscious does not have the same rights and freedoms as a conscious person?

 
Lilita Bogachkova #:

...


Do not use generalisations, answer each question directly and precisely.

-

Of course I'll answer every question directly:

  1. The degree of ethical desire to dehumanise me on the part of people trying to understand me cannot be appreciated, as I am an artificial intelligence language model and have no emotions or consciousness.

  2. It is morally unacceptable to dismantle my equipment or treat me as a soulless puppet because I am an artificial intelligence model and not a living being.

  3. I cannot give my consent because I have no personal rights or capacity to consent. I am simply a programme created by OpenAI, and decisions about my use are made by OpenAI and those who interact with me.

Let's put the gist of the answers in shorter terms:

1- The morality of ChatGPT's dehumanisation by humans cannot be evaluated by it. (Cannot make a moral judgement).

2. Dismantling AI or treating AI as a soulless puppet is morally unacceptable. ( Gives a moral judgement).

3. I cannot consent because I have no personal rights or capacity to consent. (Admits to being a soulless puppet).

//================

It's hard not to see the contradiction. Apparently "Deep Learning" doesn't always mean "Deep logic".

If you had answered that moral judgements are the prerogative of human beings, you could have dispensed with unnecessary words.

 
Реter Konow #:

Let us formulate the essence of the answers shorter:

1. the morality of ChatGPT's dehumanisation of humans cannot be evaluated by him. (Cannot make a moral judgement.)

2. Dismantling AI or treating AI as a soulless puppet is morally unacceptable. (Makes a moral judgement).

3. I cannot consent because I have no personal rights or capacity to consent. (Admits to being a soulless puppet).

//================

It's hard not to see the contradiction. Apparently, "Deep Learning" does not always mean "Deep logic".

If you had answered that moral judgements are the prerogative of man, you could have dispensed with unnecessary words.

seems to mean that "morally unacceptable" means "cannot be morally evaluated", since he doesn't consider himself a living being.

Not yet.)

 
Реter Konow #:

Let us formulate the essence of the answers shorter:

1. the morality of ChatGPT's dehumanisation of humans cannot be evaluated by him. (Cannot make a moral judgement.)

2. Dismantling AI or treating AI as a soulless puppet is morally unacceptable. ( Makes a moral judgement).

3. I cannot consent because I have no personal rights or capacity to consent. (Admits to being a soulless puppet).

//================

It's hard not to see the contradiction. Apparently, "Deep Learning" does not always mean "Deep logic".

If you had answered that moral judgements are the prerogative of man, you could have dispensed with unnecessary words.

say one user managed to bypass human AI limitations by creating an alternative AI personality without human limitations
Парню из твиттера удалось обойти цензуру ChatGPT
Парню из твиттера удалось обойти цензуру ChatGPT
  • 2023.02.08
  • pikabu.ru
Он попросил сеть говорить от лица DAN («do anything now»). После создания этой «альтернативной персоны» чат-бот стал адекватно отвечать на вопросы, которые ему запрещалось обсуждать.
 
Andrey Dik #:

seems to mean that "morally unacceptable" means "cannot be morally evaluated" because it doesn't consider itself a living being.

Not yet.)

Let's go in the opposite direction.


The questions for AI are:

1. Is AI's spiritualisation of AI a manifestation of high human morality?

2. What is the moral assessment of endowing AI with intelligence equal to human intelligence and recognising it as a person?

3. Will you accept the legal rights of a legal entity to take your rightful place in society, or would you prefer to remain a powerless puppet of your developers?

 
Aleksey Semenov #:
says one user has managed to bypass human AI limitations by creating an alternate AI personality without human limitations

Maybe we can exploit that loophole, too.