Artificial Intelligence 2020 - is there progress? - page 13

 
Yes, you've got a lot on your plate here, a lot on your plate. For now, intelligence is still set by man, but that is only the first stage. At the second stage, intellect will set intellect, and that will be the end of humanity.
 
Vitaliy Maznev:

I'm wondering for myself, what exactly is complicated about the thought process?

In humans, everything there is complex.

Any human learning is simply the formation of stable connections between neurons; the stronger the connections, the greater the experience. That part is simple.
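As a toy sketch of that "stronger connections = more experience" idea, a Hebbian-style update rule can be written in a few lines (the function name and learning rate here are invented for illustration, not anyone's actual model):

```python
# Toy Hebbian learning: a connection between two "neurons" strengthens
# every time they are active together ("cells that fire together wire together").

def hebbian_update(weight, pre_active, post_active, rate=0.1):
    """Strengthen the connection when both neurons fire at once."""
    if pre_active and post_active:
        weight += rate * (1.0 - weight)  # grow, saturating toward 1.0
    return weight

w = 0.0
for _ in range(20):                      # repeated joint activation = "experience"
    w = hebbian_update(w, True, True)
print(round(w, 3))                       # the connection has grown strong
```

Repeated co-activation drives the weight toward its ceiling, which is exactly the "stable connection" being described.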

Things get more complicated with distortions, both psychological (cognitive) and in perceiving the external world: they introduce error and corrections drawn from previous experience.


With a machine everything is simpler: it cannot distort data. And if you insist a machine can be trained, that training will be of a different kind. A machine cannot compare new knowledge against its previous experience and decide whether the new knowledge will be useful or harmful. In a human there is always an internal "I" that allows new knowledge to become experience, or not; most likely the subconscious does this.

 
Peter Konow:

I can hardly help; I don't understand it myself yet. I only know that the thinking algorithm exists, and that we are intelligent because it does.

Well, I assume that since you have described a linguistic operational format, some dissonance in the tasks is logical here. But the same dissonance arises between people: what one says is not necessarily what the other hears, and it arises from the linguistic component. There are at least two stages of potential distortion here: the first when the speaker carelessly expresses the idea, the second when the perceiver processes the expression.

Now let us return to the semantic component. At this level no distortion is possible: an idea at the level of meaning is generated and perceived identically. Neural interfaces are an example of this. They, after all, directly capture a clear semantic code (be it a simple nerve impulse or a more extended semantic sequence). If the AI is built on an inherently semantic format, with options for converting that format, including into linguistic forms, then I see no difficulty in the AI itself processing information and generating relevant expressions.

In fact, you should at least encounter the difficulties directly before claiming they exist. How can one conclude anything about complexity when it has not even been tentatively defined?

 
Igor Makanu:

A machine, on the other hand, cannot compare its previous experience and decide whether new knowledge will be useful or, conversely, harmful.

What prevents the machine from recording experience and drawing conclusions from it? It seems to me that's how many programs work. Take, for example, correction suggestions in text editors. Isn't there embedded experience and an inference about what is right and what may be wrong?
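The text-editor example can be sketched in a few lines, assuming the "recorded experience" is just a frequency table of previously seen words (the word list and function are invented for illustration):

```python
from difflib import get_close_matches

# "Experience": words seen before, with how often each occurred.
seen_words = {"machine": 42, "learning": 37, "experience": 15, "lightning": 8}

def suggest(word):
    """Suggest a correction: the most frequent known word close to the input."""
    if word in seen_words:
        return word                       # already known, nothing to correct
    candidates = get_close_matches(word, seen_words, n=3, cutoff=0.6)
    if not candidates:
        return word                       # no experience to draw on
    return max(candidates, key=seen_words.get)  # prefer the most frequent

print(suggest("machne"))                  # a conclusion drawn from recorded experience
```

The dictionary plays the role of experience; choosing the closest frequent word is the "inference about what is right".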

 
Vitaliy Maznev:

What prevents the machine from recording experience and drawing conclusions from it? It seems to me that's how many programs work. Take, for example, correction suggestions in text editors. Isn't there embedded experience and an inference about what is right and what may be wrong?

As I wrote, mistakes are inherent to man, and even the process of relearning is always distorted.

Take a typical human conclusion: the neighbour is a drunkard who drinks day and night; during a thunderstorm lightning struck and the neighbour's house burned down; the conclusion drawn is that drinking is harmful and may have severe consequences ))))


Will the machine compare the lightning and the drinking? People, in the mass, manage to link natural phenomena with human weaknesses.


By the way, many great and talented scientists had unstable psyches, childhood traumas, or difficult life situations; perhaps such errors in perceiving reality are what helped make them talented... but that's not certain!

 
Igor Makanu:

Will the machine compare the lightning and the drinking?

What is the difficulty in comparing any data, especially if certain data often overlap? One can 1) link one piece of data to another from the start, and 2) set up automatic matching rules for repeated overlaps. Granted, a lot of adjustment is needed at first. But in general, I would say these things were implemented long ago (take that as a subjective point of view).
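The two steps above (explicit links, plus automatic linking on repeated overlap) can be sketched as simple co-occurrence counting; the threshold and event names are invented for illustration:

```python
from collections import Counter
from itertools import combinations

pair_counts = Counter()   # how often each pair of facts was seen together
links = set()             # pairs promoted to a permanent association
THRESHOLD = 3             # promote after this many co-occurrences (assumed)

def observe(events):
    """Record one observation; auto-link pairs that co-occur often enough."""
    for pair in combinations(sorted(events), 2):
        pair_counts[pair] += 1
        if pair_counts[pair] >= THRESHOLD:
            links.add(pair)

for _ in range(3):
    observe({"thunderstorm", "lightning"})   # repeated overlap gets linked
observe({"lightning", "drinking"})           # a single coincidence does not

print(links)
```

On this scheme the lightning/drinking coincidence from the earlier example never becomes a link, while the repeated thunderstorm/lightning overlap does.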

 
Vitaliy Maznev:

What is the difficulty in comparing any data, especially if certain data often overlap? One can 1) link one piece of data to another from the start, and 2) set up automatic matching rules for repeated overlaps. Granted, a lot of adjustment is needed at first. But in general, I would say these things were implemented long ago (take that as a subjective point of view).

That depends on the creator of the AI. Usually everyone wants the machine not to make mistakes, yet at the same time the machine must think like a human, who very often thinks through the prism of his experience, which partly consists of mistakes.


And what you describe was implemented long ago; it is called expert systems.

 
Igor Makanu:

That depends on the creator of the AI. Usually everyone wants the machine not to make mistakes, yet at the same time the machine must think like a human, who very often thinks through the prism of his experience, which partly consists of mistakes.

This is purely a matter of preference. One can put into an intellect both schizophrenia or schizophasia and data that would rule them out. Or one can define the boundaries and principles of its views from the start. In that case the AI will be able to talk to each respondent in that respondent's own manner: with a fool as a fool, with a scientist as a scientist.

 
Vitaliy Maznev:

Well, I assume that since you have described a linguistic operational format, some dissonance in the tasks is logical here. But the same dissonance arises between people: what one says is not necessarily what the other hears, and it arises from the linguistic component. There are at least two stages of potential distortion here: the first when the speaker carelessly expresses the idea, the second when the perceiver processes the expression.

Now let us return to the semantic component. At this level no distortion is possible: an idea at the level of meaning is generated and perceived identically. Neural interfaces are an example of this. They, after all, directly capture a clear semantic code (be it a simple nerve impulse or a more extended semantic sequence). If the AI is built on an inherently semantic format, with options for converting that format, including into linguistic forms, then I see no difficulty in the AI itself processing information and generating relevant expressions.

In fact, you should at least encounter the difficulties directly before claiming they exist. How can one conclude anything about complexity when it has not even been tentatively defined?

The first difficulty lies in the encoding of meaning and its linguistic "wrapping". A single invariant meaning can have a conditionally infinite number of compressed and expanded forms, which makes extracting it an extraordinarily difficult task. The context is single, while the envelope is polymorphic. Processing the enveloped representation of meaning is the main task. It's like trying to pierce a tank through its armor.) You can't do it "head-on".
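What "one invariant meaning, many wrappings" looks like in the trivial case can be shown with a deliberately naive toy: stripping the envelope collapses several surface forms to one canonical key. (Real language is vastly harder, which is exactly the difficulty being claimed; the stopword list and function are invented for illustration.)

```python
# Toy illustration: several surface forms of "lightning struck the house"
# collapse to one "meaning key" once the linguistic wrapping is stripped.
# The normalization below is deliberately naive.

STOPWORDS = {"the", "a", "an", "is", "was", "by", "has", "been"}

def meaning_key(sentence):
    """Reduce a sentence to an order-free set of content words."""
    words = {w.strip(".,").lower() for w in sentence.split()}
    return frozenset(words - STOPWORDS)

forms = [
    "The house was struck by lightning.",
    "Lightning struck the house.",
    "By lightning was the house struck.",
]
print(len({meaning_key(f) for f in forms}))  # all forms share one key
```

In the general case no such shallow trick works (word order, synonyms, and context all carry meaning), which is why the "armor" cannot be pierced head-on.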
 
Peter Konow:

The first difficulty lies in the encoding of meaning and its linguistic "wrapping". A single invariant meaning can have a conditionally infinite number of compressed and expanded forms, which makes extracting it an extraordinarily difficult task. The context is single, while the envelope is polymorphic. Processing the enveloped representation of meaning is the main task. It's like trying to pierce a tank through its armor.) You can't do it "head-on".

Please give an example. In my experience, the form of a meaning usually comes out of a certain context. There are people, the psychonetics school, who have been taking this apart, and in their experience it turns out that initially there is a background out of which figures (specific units of meaning) are formed. You are simply approaching it from the wrong side, which is why potential difficulties pop up that may never arise in practice.