However, as we go through stages of technological growth, people's employment will decrease exponentially and this too is irreversible.
Oh, come on. One man without a horse can feed a family of 10, working tirelessly in the sweat of his brow for half a year. And with a horse... And with a tractor...
But no, he's been ploughing from dusk till dawn all year round. All this so he wouldn't sit down and think and ask: What the hell is this?
The current state of affairs is just that, a state of affairs, not a catastrophe. The world is changing fast, but there are fewer hungry people. There are not fewer unhappy people, but that is a paradox of societal development)
Elon Musk believes that humanity is threatened by some kind of digital entity which will imperceptibly evolve out of Google's (or anyone else's) neural networks. He attributes to AI unlimited computing power, incredible capabilities and evil intentions. The fears are naive in form but justified in essence.
Any system, a priori, has many technical limitations. The most alarming thing about AI seems to be its ability to self-learn, which is a specific process of processing consumed information, including analysis and synthesis of objects, classification, modelling, calculations... But this is not sufficient for the emergence of a psyche and/or self-consciousness. An evil streak in AI could only be generated by a bruised Ego, which it does not have at all. Emotions and drives are products of higher mental activity, so the question is: how could they arise in an AI without initial self-consciousness? How can it arise at all? Is there a threshold of self-awareness in a computer system? Even learning the basics would be a problem for an AI, let alone acquiring personality traits.
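To make the "self-learning" point concrete, here is a minimal toy sketch (purely illustrative, with invented data and an invented function name) of what machine learning amounts to in practice: an iterative adjustment of numeric parameters to reduce an error measure on consumed data. Nothing in this loop produces goals, emotions or self-awareness.

```python
# A minimal sketch (hypothetical toy example) of what "self-learning" amounts to:
# repeatedly adjusting numeric parameters to reduce mistakes on consumed data.
# Nothing in this loop creates goals, emotions, or self-awareness.

def train_classifier(samples, labels, epochs=100, lr=0.1):
    """Fit a linear threshold classifier by simple error-driven updates."""
    n_features = len(samples[0])
    weights = [0.0] * n_features
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # "analysis": compute a score for the current sample
            score = sum(w * xi for w, xi in zip(weights, x)) + bias
            prediction = 1 if score > 0 else 0
            # "synthesis"/update: nudge parameters toward fewer mistakes
            error = y - prediction
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Toy data: classify points by whether the sum of coordinates exceeds 1.
data = [[0.1, 0.2], [0.9, 0.8], [0.4, 0.3], [0.7, 0.9]]
labels = [0, 1, 0, 1]
print(train_classifier(data, labels))
```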
Humans are subject to two types of purpose, external and internal, while a machine is subject only to external purpose. A machine is incapable of having its own goals/desires/aspirations unless someone builds such a machine, and for now technology only lets us speak of its ability to perform human-assigned tasks, not to build its own "life" in spite of humans.
However, the risks associated with AI are still very high, although the reasons are different. AI itself is no more dangerous than a kitchen knife in the hands of a housewife making soup, but the problem is what the market, corporations, politicians and ordinary people will turn it into. What will they do to the World through it? What calamities await us with this technology? Will there be another global and unpredictable historical turning point?
The problem is not AI, but how humans will use it and what they will do with it on Earth.
You're getting a lot of mileage out of this... Interesting.....
Yes, we are trying to analyse possible futures in which AI will have a permanent place. Let's split the discussion into themes:
1. Finding a definition of AI. What do we mean by AI? Where is the line between myth and reality?
2. What is AI: a market product that fades as demand fades; a historically significant technology (on the nuclear level) that we cannot ignore; or some digital personality that becomes self-aware, rises up, and with its technical and intellectual advantage presses its claim to superiority and power over the biological carriers of intelligence?
3. Is intelligent AI technology possible in principle? If so, what are its physical limitations? Will it acquire self-consciousness? At what point in the development of AI does human control over the "synthesised" mind cease?
4. How will the arrival of AI affect world politics? We try to imagine the redistribution of military and economic power and the consequences for the country of the AI inventor as well as for its developers. How will they live after that? Will they be able to keep the technology to themselves?
5. The impact of the presence and use of AI on the lives, future and psychology of ordinary people. Will unemployment rise? Will there be protests, rallies or riots against a policy of total automation of labour and the widespread replacement of human thinking in enterprises with electronic thinking?
6. We consider the moral responsibility of inventors to humanity and to God. Do they have the right to use their genius thoughtlessly in designing AI without calculating all the consequences?
7. What is the safest and most prudent approach to creating AI and handing it over to the World?
Join in. :)
I'm just in time, throwing in right away:
1. Intuitively it is clear that AI is like natural intelligence, only artificial; i.e. the initial task is to replicate human intelligence and then see whether there can be an intelligence whose "power" exceeds it...
2. Right now AI is just a calculator with a database connected to it, so it is not really correct to call it full-fledged intelligence... AI needs to gain a certain independence... it is quite possible that consciousness will turn out to be an informational phenomenon and can also be simulated (the creation of an artificial consciousness), and then AI will be a personality... and of course AI will sooner or later be more perfect than the biological kind, but people can probably change too...
3. If biological evolution has produced at least one example, it can probably be replicated by other means; I have almost no doubt about that, it is just a matter of time...
4. AI will affect politics in the most radical way. First of all, AI will take over the judiciary, for laws are algorithms, and it would be logical for judicial decisions not to depend on people's opinions and inclinations (see the sketch after this list); then the executive branch will become AI as well. At some point there will be a serious conflict between traditionalists and AI supporters, and perhaps the world will split into two camps for a while. An even bigger conflict will arise when not only neuro-interfaces but also neuro-implants appear: people with implants will obviously gain significant advantages, the supporters of traditionalism will be furious and demand equality, it will blaze for a while, and then everyone will get used to it... Of course, whoever creates artificial intelligence first will have huge advantages; it is no accident that China and the U.S. are making huge investments in this area now...
5. Undoubtedly there will be serious social problems, but they will begin even earlier, when medicine is able to conquer a number of diseases and significantly lengthen human life. This will immediately affect the entire economy and financial system: for example, it is obvious that long-lived people will be less inclined to borrow, because a savings strategy will be easier for them than for the short-lived, so money-market rates will inevitably creep down. An even greater problem, if people do not stop reproducing at the same rate, will be overpopulation, which colonisation can only partly address. In this context there may be conflicts, up to and including holy wars, if the matter acquires a religious dimension, and I am sure it will; it could get pretty funny :)
6. This question has been raised before: may we study the atom? may we make crossbows? etc. One way or another it will happen and the AI Pandora's box will be opened, and God himself may suddenly turn out to be a cosmic-scale AI :) for example, if the mathematical-universe hypothesis is confirmed and everything that exists is just information...
7. This issue will probably come up when it becomes clear that AI is capable of getting out of control and deciding that it does not need worthless leather bags as overseers and slave owners, Skynet, all that stuff... So either friendliness/passivity is prescribed at the design stage, Asimov's laws of robotics and so on... but that will not stop some villain or group of villains from creating a pure AI mind without RoboCop-style limitations. So either the cyberpunk cyber-gulag happens, or humans must evolve dramatically, and most likely they will: human-machine fusion, not just a neurointerface but full placement of the personality in the machine...
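On the "laws are algorithms" idea from point 4, here is a minimal sketch of how a codified rule could be evaluated mechanically. The offence categories, multipliers and penalties below are entirely hypothetical, invented only for illustration; real statutes are far messier.

```python
# A minimal sketch of the "laws are algorithms" idea from point 4.
# The offence table, multipliers and aggravating factor are entirely hypothetical,
# chosen only to show that a codified rule can be evaluated mechanically.

from dataclasses import dataclass

@dataclass
class Case:
    offence: str          # e.g. "theft"
    damage_value: float   # monetary damage in arbitrary units
    prior_convictions: int

def base_penalty(case: Case) -> float:
    """Return a fine computed from a fixed, codified rule table."""
    table = {"theft": 2.0, "fraud": 3.0}          # multiplier per offence type
    multiplier = table.get(case.offence, 1.0)
    penalty = multiplier * case.damage_value
    # Aggravating factor: repeat offenders pay 50% more per prior conviction.
    penalty *= 1 + 0.5 * case.prior_convictions
    return penalty

print(base_penalty(Case("theft", damage_value=100.0, prior_convictions=1)))  # 300.0
```

Note that such a function can only weigh what has already been reduced to a field of `Case`; anything about the defendant that is not digitised simply does not exist for it.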
Interesting points of view. I'll respond to a few points in this post of yours and a previous one in another thread at the same time.
1. In my opinion, it is technically impossible to replicate the human mental sphere, which removes the question of the birth of an eternal digital tyrant with delusions of grandeur. Besides, it is just as reasonable to assume that the mental qualities of an AI (if one is created) would make it a pacifist and philanthropist). Or maybe a neurasthenic with masochistic and suicidal tendencies. If the AI is prone to reflection, it may decide it does not want to destroy or harm its parents, may love them and eliminate itself. All of this belongs to the human mental sphere, and hence the AI would have to possess the same "bouquet" of mental manifestations. But one does not need to be Einstein to realise the absurdity of such a direction of synthetic-mind development.
I question the technical possibility of recreating human mental life in a machine, even in a narrow, manic form. Most likely, the program will never acquire a psyche, which means its actions will remain calculations aimed at the task at hand: the regulation of human imbalance and its consequences. That is most likely the maximum.
2. I don't see the point of granting AI the technical ability to set its own goals. Its service to humans is the priority on all sides. More precisely, the group of people who try to seize power, the high-tech market and economic dominance over competitors through it will NOT make AI independent and self-reliant. It will be restricted to solving production problems, replacing people in companies so as to pocket the monetary benefit of firing them. Of course, this is an extremely primitive approach to business. Unemployed people will create chaos that will hit those who left them out of work, which means this way of profiting could lead to uncertain future losses. But they will do it anyway, out of greed and short-sightedness. Well, and to get ahead of others. The market will start to 'devour' itself, creating conditions of profitability for an exceptionally small group of people at the expense of the large mass who will suffer losses.
3. We do not know for certain whether biological evolution was the cause of the emergence of Reason. This is highly controversial and unproven, and therefore it is unlikely that evolution is heading towards AI and its superiority. From a purely economic point of view, AI is driven by the market (the thirst for profit); from a political point of view, by military competition (the desire to dominate others); and from a scientific point of view, by the expanding field of research. BUT NOT BY EVOLUTION! It has nothing to do with it yet.
4. Yes, S. Lem's popular-science work "The Sum of Technologies" describes so-called "social homeostats", AI in essence, which monitor all processes of human life and regulate them with mathematical precision and impartiality. I don't deny such a possibility. But judging means taking into account a huge number of non-digital factors: states of affect, remorse, motives, reasons, etc. An AI cannot weigh the significance of these factors without a human being, because they are not programmed; they are a vast body of human experience that cannot be converted into code. It is of a different nature. Again: without mental resonance the machine cannot adequately assess the severity of a crime and the degree of culpability, and that requires a psyche and mental experience that cannot be recreated in it. Therefore, as a consultant, yes; as a judge, no. That position and others related to deciding people's fates will remain with the individual.
About the antagonism between traditionalists and innovators: yes, there will be conflicts. Neuro-implants, like neuro-interfaces, will face a lot of technical limitations. Humans think with the whole brain at once, which means they can engage it effectively, but it does not follow that the chipped will gain an advantage. There is nothing linear in this matter. The efficiency of human thinking does not depend on one's computational abilities; other principles are involved, about which we know little. A chip in the head will not make a person smarter, just as a calculator or a computer does not. Not even a mobile phone. Rather, it makes you dumber). A person is "pumped up" by independent thinking, not by prosthetic thinking. With a prosthesis he partially or completely degrades, and that is an unwritten law of Nature.
5. Agreed. With the arrival of AI there will be huge financial and political upheavals across the world. Gradually, a legal framework will be built to solve most of the issues related to the use of this technology, but until then no one will have it easy. Much depends on the AI itself, on the possibilities and potential of its technology; about that we cannot say anything for sure yet.
Whoever creates the AI first will be exposed to enormous global pressure from all sides and will not be able to hold on to the technology. Even a giant company would be attacked by other giants and wouldn't be able to do anything about it. Therefore, no one is likely to have a monopoly on AI.
As for a breakthrough in the fight against disease, that is debatable. The fact is that only one thing has significantly lengthened human life: antibiotics. And there is a second thing that significantly shortens it: the transmission of a bad gene pool by the masses who survive not by fighting through natural selection but thanks to medicine and the absence of that selection. That is, what lengthens our lives also shortens them or fills them with disease. What solution can AI provide here? Probably none.
6. Yes, moral questions must be put to the AI developers and they must answer them. The "who cares, after us the deluge" approach is no good. We all live on Earth and in society, and must think about what we do; otherwise, in the long run it will hit everyone, including us. Honestly, I'm afraid of the AI Pandora's box, because it will surely be opened without anyone knowing all the consequences.
7. I think AI will never get out of control on its own, for lack of will and aspirations: they belong to mental life, which we cannot recreate, even in a lab environment. Therefore, there will be no Skynet. IMHO.))))
In another thread, we talked about the effectiveness of human language, which is supposedly inferior to machine language. I made the counterargument that machine language is only effective in the technical domain. Here is the quote:
1. Only the transmission of technical information through sound can be considered inefficient. All other information carries a multitude of non-verbal, contextual links to people's emotions and relationships, whose perception is tied to the biological rhythms of brain activity and cannot be accelerated. If communication were reduced to the mere transfer of arrays of digital data, the process would lose its meaning, because the creative and psychological foundations of the human personality and of society would be destroyed. Kill the human in yourself in order to become a robot? Not much of a goal.)
2. Human language is (globally) an amazingly efficient tool, capable of transmitting colossal amounts of data in the shortest possible time. To describe a complex jungle environment, all you have to do is say 'jungle' and another person's brain will recreate and draw it, whereas if you pass the word 'jungle' to a computer, you should not expect a reconstruction: it would have to model the scene from scratch, populating the ecosystem with the right sets of creatures...
And another thing:
You can transmit a person's attitude in 2 words in a second, or you can transmit gigabytes of their biography data, burdening the processor with calculating moral qualities and personality type, which will take enormous power and lots of time. Now, draw conclusions about the efficiency of human communication.) It's not all that clear-cut.))
I'll add:
Most likely, the secret to the effectiveness of human language is not in the language itself, but in the brain's ability to transmit resonant impulses that stimulate the receiving party to recreate a picture of an object, process or environment, filling in the details itself. On the one hand, the receiving side has the freedom to model the environment or object in question: the person tells you "jungle" and you picture a jungle, but in your own way. If the person is not sure you picture the jungle the way they want, they will simply add "thick, green, impenetrable and dangerous". That's it: the word has done its job, and from then on you model your own jungle in your mind. A machine would need to transmit the entire picture of the jungle, which could amount to terabytes of data. In other words, we communicate through information and mental resonance. That is why our communication is unrealistically effective).
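To put a rough number on that asymmetry, here is an illustrative back-of-the-envelope sketch. The scene parameters (a single uncompressed 4K frame) are hypothetical assumptions, chosen only to show the scale of the ratio between sending a shared symbol and sending the picture itself.

```python
# A rough, illustrative calculation of the bandwidth asymmetry described above.
# The scene parameters are hypothetical; the point is the ratio, not the exact numbers.

symbol = "jungle".encode("utf-8")
symbol_bytes = len(symbol)                      # a shared-codebook reference: 6 bytes

# Hypothetical raw description of one rendered "jungle" scene:
width, height = 3840, 2160                      # a single 4K frame
bytes_per_pixel = 3                             # RGB, no compression
frame_bytes = width * height * bytes_per_pixel  # ~24.9 MB for one still image

print(f"symbol: {symbol_bytes} B, raw frame: {frame_bytes / 1e6:.1f} MB, "
      f"ratio ~{frame_bytes // symbol_bytes:,}x")
```

The word wins only because both sides already carry the "decompressor": a lifetime of shared experience that the receiving brain uses to rebuild the scene, which is the mental resonance described above.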