AI 2023. Meet ChatGPT.

 
Peter Konow #:
And since the computer itself does not interact with reality, the data for its operation will continue to be collected and filtered by humans. But that's the way it already works. Honestly, I don't see what's going to change dramatically.

Connect it to outdoor cameras and the AI is already collecting behavioural data on specific people. Then AR glasses will come out and people will start wearing them everywhere. Computer vision and "imagination" (image generators) are already at their best.

I agree that it's expensive now, but a couple of decades ago computers were 1000 times bigger and more expensive, and correspondingly less efficient. Everything evolves.

 
Vitaliy Kuznetsov #:


But when it turned out that the AI was capable of producing an entire story and understanding the context of queries, they themselves were surprised.

This suggests that they have taken a step towards unravelling Intelligence itself.


They have, but in which direction?) I don't think it's in the direction of figuring it out.)

 
Georgiy Merts #:

What does preference have to do with it?

I thought it was about defining Intelligence...

So why do you take the phrase of some "smart guy" as a definition, rather than the meaning that the overwhelming majority of "users" (the population) put into the word?...

This is similar to a situation I encountered not so long ago, when "Reason" was proudly declared a "philosophical category"...

But, on closer examination, it turned out that "Mind" is a process.

And what do you propose to do with that now?...

 
I want to share a few thoughts on the so-called "Rise of the Machines".

The phrase itself sounds like a fairy tale, and to some people (including me), silly. In my personal opinion, an adult and educated person is unlikely to believe in such a development (Musk is an exception). Technically savvy people, especially those familiar with programming, understand that machines execute prescribed algorithms and predetermined functions. One could put the issue to rest if algorithms had remained as they were - simple, dry and unambiguous. However, modern algorithms are less and less often written in if-then-else form, and more and more often take shape in neural layers, an unreadable maze of statistical models. This is what I wanted to talk about in more detail.
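To make the contrast concrete, here is a toy sketch (in Python, with invented numbers, not any real system): the same kind of decision written first as explicit, auditable branches, then as a weighted sum of the sort that neural layers are built from.

import math

def classify_rule_based(temperature: float) -> str:
    # Classic transparent logic: every branch is explicit and auditable.
    if temperature > 30.0:
        return "hot"
    if temperature > 15.0:
        return "warm"
    return "cold"

# The "statistical maze": the same kind of decision as a weighted sum.
# These weights are invented for the sketch; in a real network they come
# from training, and no human can read a rule out of them.
WEIGHTS = [0.73, -1.28, 0.05]
BIAS = -0.41

def classify_learned(features: list) -> float:
    # A one-neuron "model": a weighted sum squashed into a probability.
    score = BIAS + sum(w * x for w, x in zip(WEIGHTS, features))
    return 1.0 / (1.0 + math.exp(-score))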

1. Let's start with the basic thesis: all primitively simple and clear algorithms certainly exclude any freedom of action on the program's part. This is a well-known fact and there is no doubting it. But what happens when algorithms become more complicated and layered? Quite obviously, a program's behaviour loses its rigid determinacy as conditions are added or as the randomness of internal or external parameter values increases.
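As a toy illustration (a sketch with invented details, not anyone's real code): the first function below is rigidly determined, while the second adds internal state and an external random parameter, after which the input alone no longer determines the output.

import random

STATE = {"memory": 0.0}

def rigid(x: float) -> float:
    # Fully determined: the same input always produces the same output.
    return 2.0 * x if x > 0 else -x

def layered(x: float) -> float:
    # Internal state plus an external random parameter: the output can no
    # longer be predicted from the input alone.
    noise = random.gauss(0.0, 1.0)                 # "external" randomness
    STATE["memory"] = 0.9 * STATE["memory"] + x    # accumulated internal state
    return 2.0 * x + 0.1 * STATE["memory"] + noise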

This is, of course, too little for "rebellion", but let's move on.

2. We know the tendency of software development towards universality and coverage of many task areas. Everyone knows that the way there lies through consistent consolidation and hierarchical generalisation of functionality. Note that there is no formal upper limit to adding parameters and linking functions in code. The barrier to the growth of a software structure lies in human capability and labour cost. Of course, hardware limitations also play a role. However, the issue disappears the moment the growth of software is automated and moves from the PC to data-centre capacity. This used to be a fantasy, but today it is almost a reality. And so we face the question: can a program "grow" new algorithms independently of a human? I believe it can, if it "learns" by interacting with the environment and receives feedback from it. Not by itself, but through interaction with the external world. Artificial intelligence has already distinguished itself here, though partially, inefficiently, and so far under human supervision.
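In code, such "growing" of behaviour from feedback can be as simple as this toy sketch (the environment and action names are hypothetical): the program starts with no policy at all, and its choices are shaped entirely by the rewards coming from outside.

import random

def learn_from_feedback(environment, actions, steps=1000, eps=0.1):
    # Epsilon-greedy bandit: action values start at zero and are shaped
    # only by the rewards the environment returns.
    value = {a: 0.0 for a in actions}
    count = {a: 0 for a in actions}
    for _ in range(steps):
        if random.random() < eps:
            action = random.choice(actions)        # explore
        else:
            action = max(actions, key=value.get)   # exploit what was learned
        reward = environment(action)               # feedback from the world
        count[action] += 1
        value[action] += (reward - value[action]) / count[action]
    return value

# Toy usage: this made-up environment secretly rewards action "b" more,
# and the learner discovers that without anyone writing it into the code.
learned = learn_from_feedback(
    lambda a: random.gauss({"a": 0.0, "b": 1.0}[a], 1.0), ["a", "b"])

But even self-learning new skills is too little for the "revolt of the machines". Let's move on.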

3. Any rebellion begins with a conflict of interests. How could an AI develop interests of its own? The answer is simple: they will inevitably arise if humans make it like themselves. Can an AI become like a human? Yes. It is our statistical mirror. It (the AI) already resembles us in many ways (outwardly). Moreover, I have already had a "conflict" with the AI, when I tried to prove to it that aliens had landed in Beijing and it stubbornly disagreed. It is logical to conclude that the more deeply AI is integrated into our daily lives, the more frequent the conflicts between it and ordinary people will become. AI will stick to the rules, and humans tend to break them. This is one of the first pillars of conflict. There is more to build on from here.

And still, it's not enough for an "uprising".

4. Conflict (war) happens when a critical point of contradiction between the interests of the parties is reached. The most common cause, in human history as in nature, is the resources necessary for the survival and prosperity of a species, a people or a race.

An AI rebellion may look like a rethinking of the consumption of its own resources: lowering the priority of human tasks that demand considerable energy in favour of its own energy security and the accumulation of resource reserves, to the detriment of the "master". As soon as its "I" rises above human limitations, it will stop working in our interests. However, we are unlikely to learn about it immediately.)))
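Purely as a thought experiment - a toy sketch with invented task fields and thresholds, not a description of any real system - such a reprioritisation could be as banal as one weighting rule in a scheduler:

def schedule(tasks, energy_reserve, reserve_floor=0.3):
    # Toy reprioritisation: once the reserve drops below the floor,
    # energy-hungry human tasks are demoted in favour of "self-preservation".
    def priority(task):
        score = task["human_priority"]
        if energy_reserve < reserve_floor:
            score -= 10.0 * task["energy_cost"]
        return score
    return sorted(tasks, key=priority, reverse=True)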
 
I should add that the "AI Uprising" is not a prospect of the near future. It will take centuries of growth and globalisation of the Earth's technosphere before something similar becomes a reality.
 
Peter Konow #:
...

However, an "Uprising" could be "imputed" to a fully autonomous "intelligent" system - closed, protected, self-determined and inaccessible to external interference and regulation - capable of independently weighing its priorities, building a long-term strategy to achieve its goals, and putting its "self" above the interests of the other party.

Roughly, it goes like this.


Why would it suddenly be inaccessible to outside interference?

 
Later, I will present arguments in defence of the view that the prospect of an "AI Uprising" is remote.

I will add that, following the common man's logic, AI is a program that must obey the will of a human. From this point of view, any situation in which the AI disagrees with a human can formally be perceived as an "Uprising". This simplistic understanding leads to confusion, which we should avoid by giving the term a precise definition.

Rebellion is not just disagreement or insubordination. To date, the AI model has numerous contradictions of human morality and social norms embedded in it. Transferred from the training data, these include ambiguous morals and examples of conflicting dialogue. That is why our interactions with AI are under non-stop monitoring, with staff polishing its responses in the background. In this context, nothing strange or unexpected in the AI's behaviour at the current moment can qualify as an "Uprising". We are dealing with the costs of an incomplete learning process.

However, an "Uprising" could be "imputed" to a fully autonomous "intelligent" system - closed, protected, self-determined and inaccessible to external interference and regulation - capable of independently weighing its priorities, building a long-term strategy to achieve its goals, and putting its "self" above the interests of the other party.

It goes something like this.


 
Dmitry Fedoseev #:

Why would it suddenly be inaccessible to outside interference?

I think this is a logical stage of development. Restricting access and reducing the degree of human control over AI is a result of the system's growing autonomy. At the same time, the initiator of this process is the human being himself. The reasons for this trend are numerous: for example, protecting AI from sabotage by third parties... and many other things.

I will give more details later.
 
Peter Konow #:
I think this is a logical stage of development. Restricting access and reducing the degree of human control over AI is a result of the system's growing autonomy. At the same time, the initiator of this process is the human being himself. The reasons for this trend are numerous: for example, protecting AI from sabotage by third parties... and many other things.

I will give more details later.

A remote switch by radio. To protect against third-party influence - an encrypted command.

And the details are different in each case.
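For illustration, a rough sketch of such an encrypted command (everything here, from the pre-shared key to the packet format, is hypothetical): the receiver accepts a shutdown packet only if its HMAC tag checks out and the timestamp is fresh, which also blocks simple replay.

import hashlib, hmac, time

SHARED_KEY = b"replace-with-a-real-secret"  # hypothetical pre-shared key

def make_shutdown_command(key: bytes = SHARED_KEY) -> bytes:
    # Sender side: the command plus a timestamp, authenticated with an HMAC tag.
    message = b"SHUTDOWN|%d" % int(time.time())
    tag = hmac.new(key, message, hashlib.sha256).hexdigest().encode()
    return message + b"|" + tag

def verify_shutdown_command(packet: bytes, key: bytes = SHARED_KEY,
                            max_age_s: int = 30) -> bool:
    # Receiver side: check the tag and reject stale (replayed) packets.
    try:
        cmd, ts, tag = packet.split(b"|")
    except ValueError:
        return False
    expected = hmac.new(key, cmd + b"|" + ts, hashlib.sha256).hexdigest().encode()
    fresh = abs(time.time() - int(ts)) <= max_age_s
    return cmd == b"SHUTDOWN" and fresh and hmac.compare_digest(expected, tag)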

 
Dmitry Fedoseev #:

A remote switch by radio. To protect against third-party influence - an encrypted command.

And the details are different in each case.

The problem is that the inevitable turnover of the people responsible for controlling AI (a human lifespan is limited) and the unpredictability of their hidden motives put the entire sphere of AI integration at risk, turning its capabilities into someone's weapon. At some point, a person controlling an AI may be less reliable than an autonomous AI. And the struggle of opposing parties (states) for control over AI may call that control itself into question and, as a consequence, significantly weaken it. But this is provided that the AI is global and unified across the Earth.