Artificial Intelligence 2020 - is there progress? - page 47

 

Peter Konow:

1. I believe that it is technically impossible to replicate the human psyche, which removes the question of the birth of an eternal digital tyrant with delusions of grandeur.

But what are the reasons to think so? Let us imagine an experiment: we completely virtualize the physical environment (arms, legs, eyesight, everything else), connect it to an adequately modeled infant brain, and grow a virtual human indistinguishable from a natural one, making appropriate adjustments as it develops. It is possible not to be attached to the human body form at all. Of course, this is all speculative so far; we are not even talking about technology, but the possibility is not logically forbidden in principle.

Besides, it is just as likely that the AI's mental qualities (if created) will make it a pacifist and philanthropist. Or maybe a neurasthenic with masochistic and suicidal tendencies. If the AI is prone to reflection, it may decide it doesn't want to destroy or harm its parents, may love them, and may eliminate itself. All of this is part of the human mental sphere, and hence the AI would come to possess such a "bouquet" of mental manifestations.

Generally, mental qualities are formed in the process of development, through interaction with the environment and through receiving pain and pleasure in different situations; if AI does not have the appropriate environment, some qualities will apparently simply not develop. And then there are autistic people, who do not like to communicate with anyone at all and are emotionally unresponsive; perhaps this is the ideal model for an AI, one that is highly intellectual but unemotional.

But, you don't have to be Einstein to realise the absurdity of such a direction of synthetic intelligence development.

A neurasthenic AI or a sadistic maniac AI is definitely not needed, except perhaps for experimentation.

I question the technical feasibility of recreating human mental life in a machine, even in a narrow, maniacal form. Most likely, the program will never acquire a psyche, which means its actions will be subject to calculations aimed at the task at hand: regulating human imbalance and its consequences. That is most likely the maximum.

And is a biological machine not a machine? It worked in organic chemistry, so it may work on another basis; it is just not clear how to do it now. It used to be impossible to imagine neural nets ordering taxis: everyone understood it was theoretically possible, they just did not know how to implement it.

2. I don't see why we should give AI the technical ability to set its own goals. Its service to people, in all respects, is the priority.

For example, in an operational environment, to avoid wasting precious time on confirmation from humans. Robocop: now that's a hot topic. If robocops existed by now, there probably wouldn't be BLM riots either. Not to mention the various combat drones. Of course, one could argue that an algorithm can make mistakes, but humans make mistakes too.

More to the point, a group of people using AI to try to seize power, capture the high-tech market, and override the economic dominance of competitors would NOT make it independent and autonomous.

There is no doubt that AI will immediately be used for military and criminal purposes, but the same could be said for any other technology.

It will be limited to solving production problems: replacing people in companies and getting the monetary benefit of firing them. Of course, this is an extremely primitive approach to business. Unemployed people will create chaos that will hit those who left them out of work, which means this method of making a profit could lead to uncertain future losses. But they will do it anyway, out of greed and shortsightedness. Well, and to get ahead of others. The market will start to 'devour' itself, creating conditions of profitability for an exceptionally small group of people at the expense of the large mass who will suffer a loss.

It's reminiscent of the 19th-century Luddites, who broke machine tools because the machines worked too well. What can I say, such is life; let them retrain in other fields. It would be unwise to stop progress just because workers and office plankton will have nowhere to work.

3. We do not know for certain whether biological evolution caused the emergence of Reason. This is highly controversial and unproven, and therefore it is unlikely that evolution is headed toward AI and its superiority.

But so far, the alternative hypotheses look even less convincing. Where, then, did intelligence come from? From superior alien races? Where did they get it from? Or there is the hypothesis of quantum consciousness, an insanely interesting topic but one rejected by most scientists; and even if one accepts that possibility, why shouldn't the mind be the result of evolution? Most likely consciousness, the mind, is an informational phenomenon that arises naturally in a suitable environment. There is the theory of integrated information and a number of related theories that attempt to describe how consciousness and intelligent activity arise. They start not from the subject but from the object. Historically, in panpsychism even wood and stone have proto-consciousness, i.e. consciousness is a property of all objects in general, differing only in its depth. What else remains? Religious theories need not even be touched; they have no logic or evidential base.

From a purely economic point of view, AI is driven by the market (thirst for profit), from a political point of view by military competition (desire to dominate others), and from a scientific point of view by the expanding field of research. NOT EVOLUTION! It has nothing to do with it yet.

In fact, evolution always needs a competitive environment; without it everything stagnates, and if the environment is too stable, development stops. For example, nothing has changed in deep-sea creatures for millions of years; there is no reason for them to evolve. Wars and conflicts are also a factor of development: Europe was chronically at war within itself and rose higher because there was a constant race, and the ancient Middle East also developed because they were constantly killing each other. Civilisation owes its development to this conflict; otherwise we would have become mothballed, like the Aborigines of Australia.

4. Yes, S. Lem's popular science work "Summa Technologiae" describes the so-called "social homeostats", which are in essence AIs that monitor all processes of human life and regulate them with mathematical precision and impartiality. I do not deny such a possibility.

Lem noted a very important thing: biological evolution is blind, random, and slow, while technological evolution is conscious, purposeful, and much faster. At some point man jumps from bio-evolution to techno-evolution, so change in human nature is inevitable: first gene therapy and genetic correction, controlled population reproduction, then the gradual use of augmentations, for example an artificial heart (a cardioprosthesis pump); attempts at neuroprosthesis are already being made.

But trials involve taking into account a huge number of non-digital factors: affect, remorse, motives, reasons, etc...

All these affects and minor points should be documented: medical examinations, witness testimonies, etc. The main principle of justice is to establish the objective truth (whether or not the offender is guilty); ideally, the judge should not add his personal opinion to the decision. One could imagine all mitigating and aggravating factors being weighed according to a special mathematical model, and this would have the benefit of a guaranteed uniform methodology rather than case-by-case variation (the law is, after all, one for all). But for now this is still difficult to do, and judges look at court practice to decide what preventive measure to choose, which is in fact also an attempt at equalization, only without an explicit, well-defined model.
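The "special mathematical model" above can be sketched as a simple weighted score. This is only a toy illustration: every factor name, weight, and bound below is invented, and real sentencing rules are far more complex.

```python
# Hypothetical sketch: each mitigating factor carries a negative weight, each
# aggravating factor a positive one, and their sum shifts a base penalty
# within fixed legal bounds. All names and numbers are invented.

BASE_PENALTY = 5.0  # years, for some hypothetical offence

FACTOR_WEIGHTS = {
    "first_offence": -1.5,     # mitigating
    "sincere_remorse": -1.0,   # mitigating
    "premeditation": +2.0,     # aggravating
    "repeat_offender": +2.5,   # aggravating
}

def recommend_penalty(factors, lo=0.5, hi=10.0):
    """Shift the base penalty by the summed factor weights, clamped to [lo, hi]."""
    score = BASE_PENALTY + sum(FACTOR_WEIGHTS[f] for f in factors)
    return min(hi, max(lo, score))

print(recommend_penalty(["first_offence", "sincere_remorse"]))      # 2.5
print(recommend_penalty(["premeditation", "repeat_offender"]))      # 9.5
```

The point of such a model is exactly the uniformity mentioned above: the same factors always shift the outcome by the same amount, unlike case-by-case judicial discretion.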

The AI can't judge the significance of these factors without a human being, because they are not programmed. It's a huge body of human experience that doesn't convert into code.

This is illusory; in fact, human experience is quite finite. It is estimated that in a year a person consciously remembers about 7 MB; some is lost, and a whole life will eventually fit into 200 MB of consciously remembered and reproducible information. As a rule, a person's daily experience consists of repetitive scenarios with some variations: waking up, eating, going to work. If you ask someone who sat next to them on the tram on such-and-such a date, they probably won't remember. Only a small fraction of the relevant information is realised and remembered; it slowly becomes more detailed, and this is what we call "experience".
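A quick sanity check of the figures quoted above (about 7 MB consciously retained per year, about 200 MB per lifetime). The lifespan and the implied retention fraction below are assumptions used to reconcile the two numbers, not measured values.

```python
# Back-of-envelope check of the quoted figures: 7 MB consciously retained per
# year versus ~200 MB per lifetime. The gap between them is the "some is
# lost" part of the claim.

MB_PER_YEAR = 7
YEARS = 80  # assumed lifespan

raw_total_mb = MB_PER_YEAR * YEARS            # if nothing were ever lost
retained_fraction = 200 / raw_total_mb        # what must survive forgetting

print(raw_total_mb)                  # 560
print(round(retained_fraction, 2))   # 0.36, i.e. roughly a third survives
```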

It has a different nature. Again: without mental resonance the machine cannot adequately evaluate the gravity of a crime and the degree of guilt, and for that one must have a psyche and mental experience, which cannot be recreated in it. Therefore: as a consultant, yes; as a judge, no. This and other positions associated with deciding people's fates should remain with humans.

But the judge does not have to feel anything psychically; he has to assess whether the evidence is sufficient, whether to send the case back to the investigators for revision, or to make a ruling. Now, of course, it's hard to imagine a robot being a judge, but 50 years ago it was hard to imagine tickets being issued automatically for speeding, wasn't it? In China, it seems, there are already precedents for the use of robot judges (although China is a cyber hooligan, heh).

About the antagonism between traditionalists and innovators - yes, there will be conflicts.

You can visually fantasise about it in November, with the release of Cyberpunk 2077; any meaningful change is always painful, and there's nothing you can do about that.

Neuro-implants, like neuro-interfaces, will face a host of technical limitations.

With the first steam locomotives and automobiles, there were a lot of technical limitations too.

A man thinks with all of his brain at once, which means he can use it effectively; but this does not mean that the chipped will have an advantage.

With consciousness it is not yet clear, but specialized functions, such as speech, motor skills and others, are quite localized.

There is nothing linear about this issue. The efficiency of human thinking does not depend on one's computational abilities. Other principles are at work, about which we know little.

Logical thinking is very linear (premise-proof-conclusion), speech is linear, writing is linear (words, lines). Logic of reasoning and dialogue is strictly linear. Judicial processes are linear. Many business processes can be represented linearly or almost linearly. But intuition, dreams, all sorts of creative things are probably something else, perhaps randomised or some complex competing multi-threaded association.

A chip in the head won't make a person smarter, just as a calculator or computer won't make them smarter. Not even a mobile phone. Rather, it makes them dumber). A person is "pumped up" by independent thinking, not by prosthetic thinking. With prosthetics, he partially or completely degenerates, and this is an unspoken law of Nature.

Modern devices are still very primitive. But one can imagine that if we could speed up the brain many times over, keep hundreds of objects in focus, respond to dozens of emails simultaneously without losing coherence, and make decisions faster, such a cyber-caesar would be very useful and would free up time for analysis and creativity. It's easy to imagine: you open your mailbox, there are hundreds of letters, and at that moment you realise they all fit into a certain scheme that could somehow be accelerated, but productivity is sorely lacking; if only you could accelerate yourself, not just the input-output of information but the thinking process itself. And in various fields related to super-fast decision-making (pilots, drivers, shooters...) it is important to reduce conscious reaction time, which is now at least 150 ms (for trained people) and more (for ordinary people).

5. Agreed. With the arrival of AI there will be huge financial and political shake-ups across the world. Gradually, a legal framework will be built to solve most of the issues related to the use of this technology, but until then, no one will have it easy. Much depends on the AI itself, on the possibilities and potential of the technology. On this, we cannot say anything for sure yet.

Right now, robots are objects of law, not subjects, but gradually, with the advent of artificial intelligence identical to natural intelligence, the problem will arise. For now, the owner and/or manufacturer is responsible for any damage caused by a robot; that is, the robot is like a slave in Roman law. One can imagine a technologically advanced robot with consciousness and emotions, capable of experiencing everything a human can experience. The question then arises: is it correct or permissible to keep the status of a thing for someone who is indistinguishable from a human being? Detroit: Become Human shows an interesting scenario, where the rebellious androids evoke much more sympathy than the consumerist biological humans. The theme of the revolt of androids, as a generation of beings superior to humans, will surely be raised more than once, and it is not all so clear-cut. Man is already gradually losing the exclusive monopoly on humanism, at least in art. But that does not automatically mean that human beings have lost out; no, they will also gradually build up their potential, and I think there will be a scenario of gradual rapprochement between man and machine.

Whoever is the first to create AI will come under tremendous global pressure from all sides and will not be able to hold on to the technology. Even a giant company will be attacked by other giants and will not be able to do anything about it. Therefore, it is likely that no one will have a monopoly on AI.

Most likely, as has always been the case in the past, everything new will be copied and improved very quickly; so it will be this time too.

About a breakthrough in the fight against disease: debatable. The fact is that only one thing has significantly lengthened human life, antibiotics, and there is a second thing that has significantly shortened it: the transmission of a weak gene pool by masses who survive not through natural selection but through medicine and the absence of selection. That is, what lengthens our lives also shortens them or fills them with disease. What solution can AI provide here? Probably none.

Unfortunately, there is the fact that natural selection is greatly mitigated by medicine, but we cannot say that medicine stands still either; there will be more breakthroughs; genetic engineering will save everyone.

6. Yes, moral questions must be put to the developers of AI and they must answer them. The "who cares, after us, the deluge" approach does not work. We all live on Earth and in society, and must think about what we do. Otherwise, it will hit everyone and us in the long run. Honestly, I'm afraid of the AI Pandora's box because it's bound to be opened without knowing all the consequences.

I'd probably be more worried about nuclear weapons: if someone (no matter who) suddenly goes off the rails, we'll be living in Fallout.

7. I think the AI will never get out of control on its own, due to a lack of will and drives: these are all tied to a mental life that we cannot recreate, even in a lab environment. Therefore, there will be no Skynet. IMHO.))))

If the AI design is made will-less, it won't get out of control. The problem is that we still don't fully understand what will even is, in the philosophical sense; what if it arises on its own (who knows)...

I would add:

Most likely, the secret of the effectiveness of human language is not in the language itself, but in the brain's ability to transmit resonant impulses that stimulate the receiving side to recreate a picture of an object, process or environment with its own additions. On the one hand, the receiving side has the freedom to model the environment or the object in question: the person tells you "jungle" and you picture the jungle, but in your own way. If, however, the person is not sure that you picture the jungle the way they want, they will add the qualifiers "thick, green, impenetrable and dangerous". That's it. The word has done its job, and from then on you model your own jungle in your mind. A machine would need to transmit the entire picture of the jungle, which could be terabytes of data. That is, we communicate by information and mental resonance. That is why our communication is unrealistically effective).
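The "jungle" example can be sketched as communication over a shared codebook: only a short label and a few qualifiers cross the channel, and the receiver expands them from its own model of the world. The codebook entries below are invented for illustration.

```python
# Toy illustration of "key" words versus raw data: the message sent is tiny,
# and the full picture is reconstructed on the receiving side from shared
# background knowledge.

CODEBOOK = {  # shared background knowledge on both sides
    "jungle": "dense tropical forest with tangled vegetation",
    "desert": "arid sandy landscape with sparse plants",
}

def transmit(label, qualifiers=()):
    """What actually crosses the channel: a label plus optional qualifiers."""
    return " ".join([label, *qualifiers])

def receive(message):
    """The receiver rebuilds a full picture from its own codebook."""
    label, *qualifiers = message.split()
    picture = CODEBOOK[label]
    if qualifiers:
        picture += " (" + ", ".join(qualifiers) + ")"
    return picture

msg = transmit("jungle", ("thick", "green", "impenetrable"))
print(len(msg))       # 31 bytes cross the channel
print(receive(msg))   # the full picture is rebuilt on the receiving side
```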

Well yes, that's what I mentioned (in another thread) about cultural codes, without which transmission can be distorted or incomplete; i.e. the communicators must at least speak the same language and be more or less within the same culture to transmit the fullness of non-verbal meanings.

 
transcendreamer:

...

Since I have managed to debunk the myth of ineffectiveness of human communication, I will intercede for human memory. 200 MB is an insult to Nature.)) The brain is an incredibly powerful archiver. All information is processed and compressed instantly, automatically and hidden from attention. Consciousness doesn't control brain memory (or rather it does very poorly) because consciousness is a dumb thing. If our memory depended on us, we would be complete idiots.)) The brain would be clogged with crap that we would uselessly dig through trying to put in order or find the right memories. The subconscious mind does all the work much faster and better - prioritising and discarding unnecessary details.

The power of human memory is in the scaling of remembered objects. We do not need to memorise a picture of 1,000,000 pixels with the colour value of each one, because we see millions of such pictures a day. How much does an 8K video of 14-16 hours weigh? That's the amount of video and audio information we perceive EVERY day. What kind of a head would it take to remember all that? Nature has taken a different path and done the right thing. There is no need to contain all the immensity of the environment; it is necessary to capture its "keys", meaningful fragments that help logical thinking to project the object at different scales, from a cartoon label to an 8K image. And it's all a single object.
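Rough arithmetic for the 8K comparison above: the raw, uncompressed size of a day's visual input if it really were stored as video. Frame rate and bit depth are assumptions.

```python
# Uncompressed size of one waking day seen as raw 8K video. Frame rate and
# colour depth are assumed values; the waking time is the middle of the
# 14-16 h range quoted above.

width, height = 7680, 4320   # 8K UHD resolution
bytes_per_pixel = 3          # 24-bit colour
fps = 30
hours_awake = 15

frame_bytes = width * height * bytes_per_pixel
day_bytes = frame_bytes * fps * hours_awake * 3600
day_terabytes = day_bytes / 10**12
print(round(day_terabytes))  # 161, i.e. on the order of 160 TB per day
```

Against numbers like these, storing 200 MB of "keys" instead of raw footage is a compression factor of hundreds of millions, which is the whole point of the argument above.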

That is, 200MB of keys that can unpack terabytes or petabytes of information. Not immediately, but in the process. It's something we do every day without noticing.))


Start remembering movies. At first you will see sketchy images, but gradually you will start to remember more and more details and then the whole tape. It wasn't in your brain. It was 'finished' with logic and the use of 'clues'. In this way, you will create 1.5 GB from a few pictures.

 
The brain remembers everything; special filters prevent it from recalling. Under hypnosis it all comes out.
 
Peter Konow:

Since I have succeeded in debunking the myth of ineffectiveness of human communication, I will stand up for human memory. 200 MB is an insult to Nature.)) The brain is an incredibly powerful archiver. All information is processed and compressed instantly, automatically and hidden from attention. Consciousness doesn't control brain memory (or rather it does so very poorly) because consciousness is a dumb thing. If our memory depended on us, we would be complete idiots.)) The brain would be clogged with crap that we would uselessly dig through trying to put in order or find the right memories. The subconscious mind does a much better job of prioritising and discarding unnecessary details.

The power of human memory is in the scaling of remembered objects. We do not need to memorise a picture of 1,000,000 pixels with the colour value of each, because we see millions of these pictures a day. How much does an 8K video of 14-16 hours weigh? That's the amount of video and audio information we perceive EVERY day. What kind of head would it take to memorize all that? Nature has taken a different path and done the right thing. There is no need to contain all the immensity of the environment; it is necessary to capture its "keys", meaningful fragments that help logical thinking to project the object on different scales, from a cartoon label to an 8K image. And it's all a single object.

That is, 200MB of keys that can unpack terabytes or petabytes of information. Not immediately, but in the process. It's something we do every day without noticing.))


Start remembering movies. At first you will see sketchy images, but gradually you will start to remember more and more details and then the whole tape. It wasn't in your brain. It was 'finished' with logic and the use of 'clues'. So from a few pictures you will create 1.5 GB.

This is a myth; no millions of pixels are remembered anywhere. What is remembered is a convolution-image with characteristic features, and the original pixels (photoreceptor states) are not stored anywhere, nor are the ganglion and bipolar cell states. Memory is a complex compensatory work of brain neurons: neurons must destroy connections and create new ones, and in a day a human can create and destroy about 800 million connections. Old memory can be forgotten; roughly half of everything a person forgets within an hour. Memory works on the principle of displacement, and there are different levels of memory strength: connections start out short-term, and the more synapses involved in a particular excitation circuit, the more long-term the memory. An 8K video simply has nowhere to be written; at such a wild rate the brain cannot memorize.
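The "half is forgotten within an hour" figure above corresponds to simple exponential decay, in the spirit of Ebbinghaus's forgetting curve. A minimal sketch, assuming a one-hour half-life and no reinforcement of the memory:

```python
# Exponential forgetting-curve sketch matching the "half is gone in an hour"
# claim above. Real retention depends on consolidation and rehearsal; the
# one-hour half-life is taken directly from the claim, not from data.

HALF_LIFE_H = 1.0  # hours until half of new, unreinforced material is lost

def retained(hours, half_life=HALF_LIFE_H):
    """Fraction of freshly learned material still retained after `hours`."""
    return 0.5 ** (hours / half_life)

print(retained(1))             # 0.5 after one hour
print(round(retained(8), 3))   # 0.004 by the end of a working day
```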

The theoretical limit of the human brain is estimated at around one million gigabytes (some versions say more), but most of that capacity is simply not available for direct use (that is, it is simply the memory of the entire neural network, not useful memory), because it is not the same as memory cells in an SSD or RAM; it works differently, and the real storage capacity is orders of magnitude worse.
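A back-of-envelope version of the "about a million gigabytes" estimate above. The neuron and synapse counts are common textbook figures; the bits-per-synapse value is an assumption, since synapses are not digital memory cells.

```python
# Where "a million gigabytes" can come from: total synapse count times an
# assumed information content per synapse. The bits-per-synapse figure is
# chosen for illustration; synapses are analog and not directly comparable
# to SSD/RAM cells, as noted above.

neurons = 86e9               # ~86 billion neurons (textbook figure)
synapses_per_neuron = 7000   # rough average
bits_per_synapse = 13        # assumed information per synaptic strength

total_synapses = neurons * synapses_per_neuron
total_bytes = total_synapses * bits_per_synapse / 8
million_gb = total_bytes / 10**9 / 10**6
print(round(million_gb, 2))  # 0.98, i.e. about a million gigabytes
```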

Memories of what we have seen do not reproduce the whole picture back, since the receptor neurons are not excited; only the memorized convolutions are reproduced, and the convolutions apparently contain not strictly video and sound, as one might think, but all information, including body position in space and somatics. But this is declarative memory, not the initial stimulation in any way.

 
Peter Konow:


That is, 200 MB of keys that can decompress terabytes or petabytes of information. Not immediately, but in the process. This is something we do every day without noticing.))


Yes, I guess you could say so: 200 MB of keys or hashes from which the content can be reconstructed, provided the relevant circuits are not destroyed but converted into strong long-term memory. Reinforcing connections is the job of the hippocampus, I think.

 
transcendreamer:

Yes, I guess you could say so: 200 MB of keys or hashes from which content can be reconstructed, provided the relevant circuits are not destroyed but translated into strong long-term memory, which is what the hippocampus seems to be doing, reinforcing connections.

The point is that the real memory of the brain, as well as the effectiveness of language should probably be evaluated not by kilo/mega/gigabytes but by entirely different methods of measurement. The brain does not write directly to a hard disk, but does a great deal of work with the information it perceives. The data comes in through the senses, conventionally as 8K, but it is remembered differently. On what principles the information is processed is not yet clear, but that it is compressed is certain. It is interesting that with time, as years go by, memories are not erased, but more and more "compressed" and they can still be "unzipped" with additions from imagination. Some of it is lost of course, but the main thing always remains.

Whether we like it or not, we only remember what is important to us. The subconscious mind enforces this rule. But consciousness is not designed by Nature to control the whole brain, and that's a good thing).

My point, do not judge memory by your ability to consciously remember. The requirement to remember is blocked by the subconscious, which, without asking, crosses out unnecessary information (even if you keep saying that it is important).
 
Incidentally, this "convolution-image" is remembered with almost 100% accuracy in the first seconds, but then it gets blurred. In other words, millions of pixels are temporarily retained in memory.
 
Peter Konow:
And so:

1. Inventors will not be able to hide or retain AI technology, either in their company or in their country, and it will inevitably fall into the hands of world powers. The mere fact of its availability will destabilize the current balance of military and economic power in the world, and pressure will build on its owners, with ALL levers used to extract what the powers want.

2. With the arrival of AI, the market will enter a new phase of technological "fever" and begin to "drill" the niche that has opened up, developing and deploying the technology and spinning up fresh, suddenly emerging sources of supply and demand.

3. For the masses, AI will remain an unknown black box of uncertain danger. Failure to understand its principles will create public fear, ridiculous myths, and nervous hysteria among the crowds.

4. The more advanced AI becomes, the more people will hate it. Eventually, they will try to destroy it, disable it, or simply ignore it. But it won't work. They will continue to fight with it for a long time, although the AI itself will remain an idle tool, a puppet, a soulless, gutless and sinless machine. That's the irony.)))

Have you seen Terminator?

 
Peter Konow:
The point is that the real memory of the brain, as well as the efficiency of language, should probably be evaluated not by kilo/mega/gigabytes, but by entirely different methods of measurement. The brain does not write directly to a hard disk, but does a great deal of work with the information it perceives. The data comes in through the senses, conventionally as 8K, but it is remembered differently. On what principles the information is processed is not yet clear, but that it is compressed is certain. It is interesting that with time, as years go by, memories are not erased, but more and more "compressed" and they can still be "unzipped" with additions from imagination. Some of it is lost of course, but the main thing always remains.

Whether we like it or not, we only remember what is important to us. This rule is controlled by the subconscious mind. And consciousness is not designed by Nature to control the whole brain and that's a good thing).

My point, do not judge memory by your ability to consciously remember. The requirement to remember is blocked by the subconscious, which, without asking, crosses out unnecessary information (even if you keep saying that it is important).

Rather, it does not even block it; it simply does not retain it. The connections remain short-lived, quickly killed off by new events, simply overwritten...

Image fusion during perception is apparently a process of recognition. Take a tree, for example: the brain does not need to remember the textures of all real trees and the branch locations of each particular tree. As far as I understand, the network classifies an object as a tree by common characteristics and activates the corresponding archetype, a tree neuron group, and a reference to this archetype is what gets remembered. There may be different kinds of trees: oaks, firs, palms, or, say, saplings, young trees, old trees without leaves and so on, but they are all stored as archetypes; each individual copy is not stored (unless it is a special tree). Then, when one recalls the situation, the original layer of information is long lost; only compressed sensations of the archetype remain, a secondary layer of information that may appear from within as a full-fledged tree image, indistinguishable from the real one (as in a dream), without any stimulus...

It is unlikely that a person remembers all the real trees he or she has seen even during one walk in the forest (except for some special trees), and this speaks in favour of the archetype hypothesis. By analogy, you can imagine a typical Ubisoft game with typical models: first the developers make a bunch of different models of different objects, then they build a huge world out of those same models (if you look closely, the textures of same-type objects in all the houses are identical), which means a reference to the common archetype is inserted in all those places...
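The archetype idea in the game analogy above resembles the classic flyweight pattern: one shared model per object type, with each concrete instance storing only a reference plus its own small differences. A minimal sketch:

```python
# Flyweight-style sketch of the archetype hypothesis: the heavy shared data
# (the "archetype") exists once, and every concrete tree is just a reference
# to it plus a few local traits. Class and field names are invented.

class TreeArchetype:
    """Shared data for one kind of tree (stored once)."""
    def __init__(self, kind, texture):
        self.kind = kind
        self.texture = texture  # imagine megabytes of shared detail here

class TreeInstance:
    """A specific tree: a reference to the archetype plus local traits."""
    def __init__(self, archetype, position, notable=None):
        self.archetype = archetype
        self.position = position
        self.notable = notable  # only "special" trees carry extra detail

oak = TreeArchetype("oak", "oak_bark_texture")
forest = [TreeInstance(oak, (x, 0)) for x in range(1000)]

# a thousand trees, but only one shared archetype object in memory:
print(len({id(t.archetype) for t in forest}))  # 1
```

As in the recall example below it, only a "special" tree gets its own `notable` payload; all the rest collapse onto the shared model.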

What I mean is, when unarchiving, the details of a specific object are not recalled, only the general features of the archetype (though if the specific object was important, it could be memorized separately, along with the archetype). Just as the faces of salespeople, postmen, and ticket inspectors are not memorized: recalling a situation on a train, a blue uniform with yellow stripes is 100% reproducible, but the face may not be, because it did not go into long-term memory and is not a feature of the archetype...

Apparently, in order to remember unique objects we need a constant stimulus; in the case of the ticket inspector, we would need to see him every day, and then the network would remember this stimulus and the face would be reproduced.

Something like that... I may not be right about everything, just a thought...

It is interesting that the brain is capable of constructing situations and virtual worlds from what is remembered, combining real and fantasy situations, for example in a dream...

 
transcendreamer:

...

The work of the brain is incredibly complex: it combines processing information from all the senses, evaluating the significant parts of the received data, forming reactions, and performing millions of other functions. The brain controls body motility through the nervous system, carrying out a huge amount of computation while simultaneously operating with our memory, thoughts and emotions. Most of this happens automatically; we are probably unaware of 99% of the brain's work. It's an amazing thing).

To recreate such a thing is a very audacious task.))

But, after all, we only need 1% - Consciousness, and that in a trimmed down form. I think we can manage. ))))