You are absolutely right. Machines far surpass us in accuracy, calculation speed, storage capacity and much more, and in small tasks they are not swayed by choice (a side effect of intelligence); they are straightforward, and using them for small tasks only benefits man. But here we were talking about the intellectualization of machines, which is possible, just not with existing technology. Once the material created by the Chinese becomes widespread, along with methods for training such physical structures, then we will arrive at the series "Westworld", and not before.....
The show is top-notch, by the way. I RECOMMEND!!!!!!
With all due respect to the achievements and discoveries of the Chinese, understanding the essence of intelligence is a by-product of self-awareness, which has little to do with technology. Modelling intelligence is more than training a neural network, even the most sophisticated one. We only create separate functions of intelligence, but there is no integrity, because there is no concept of the whole. There are general definitions and working hypotheses, but no actual "blueprints"; hence we build nothing but fragmentary models of its discrete phenomena, from which we then try to assemble something. I think the approach is wrong. We need to start with a blueprint of intelligence as a whole.
Here we go again for those who missed it. It's already 29 hits and I'm on top :-)
What is this place?
Does Yandex's Alice count?
No. Skills for Alice can be written by anyone; no programming knowledge is necessary. There was (is there still?) even a competition for the best skill.
Well, that's... 53 hits already. You spoil me, colleagues.
I think it's an indicator of those who are really interested in the topic. Not that many in the general population to be honest :-)
The problem is primarily fundamental: there is no definition of AI. Digging deeper, such problems usually arise when one is not logically disciplined: one relies on current knowledge, does not question it, and appeals to the authority of scientists. More often than not, scientists and specialists are erudites with a weak analytical apparatus. They can talk a lot, write a lot, and derive formulas of infinite length, but they are unable to see fundamental errors. For example, the Big Bang was confirmed only in broad outline, and that was that: every scientific brain now draws formulas under it. They allow distortions of space, put an equals sign between matter and space, between a curve and a straight line, allow wormholes and so on. Much can be said and written, and even logically. But if there is a basic error, and there are no logicians among scientists, the problem drags on. That is, if AI thinks like a human, there will be no technological revolution to speak of: we will simply clone the average mind, which will consider Einstein a genius and produce endless useless theories and hypotheses. The next revolutionary step would be to create an AI that thinks logically and has the infinite power of modern computers. Then there will be something that will not only talk, but will explain to us what philosophical directions to expect after transhumanism.
There are three types of mind: the erudite, the calculator, and the logician. The erudite is Wasserman, the calculator is Perelman. The first cannot calculate; the second does not know what the dots on the flag of Brazil mean. The first says that the topology of the universe is a dodecahedron or a flat torus. The second runs off to derive formulas. And only the logician separates space from matter, defines the properties of both, throws all the dodecahedrons out of his reasoning about physics as unnecessary, and gets on with the work. That is the analogy of "thinkers" I gave: real RAS physicists genuinely do not see the difference between space and the matter in it, hence they allow the warping of space, wormholes, the finiteness or closedness of the universe and so on. And the more serious or enthusiastic the scientist's face, the less logically disciplined he is and the more he permits himself to "allow".
Analytical efficiency comes from combining erudition, calculation and logic. First of all, one must define the concepts.
As far as I remember, the Internet defines intelligence as the ability to think, as a feature of the psyche, as the processing of various information, and so on.
First of all, we need to identify the main feature of intelligence: the ability to work without having all the necessary sensors and measuring instruments. For example, determining the temperature range of water from a photo (a photo of a kettle with boiling water). Having a sensor and measuring the temperature is simply obtaining data (knowledge). Estimating the temperature without a sensor is using intelligence.
Thus, intelligence is the ability to process information without relying on special knowledge or measuring instruments.
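A minimal sketch of this distinction, purely for illustration: reading a sensor is just data acquisition, while inferring a temperature range from indirect cues (the kind of thing a photo of a kettle would show) stands in for the "intelligence" described above. The cue names, thresholds and ranges below are assumptions invented for the example, not real measurements.

```python
def read_sensor(sensor_value_c: float) -> float:
    """Direct measurement: no inference, the value is simply obtained."""
    return sensor_value_c

def infer_temperature_range(cues: dict) -> tuple[float, float]:
    """Infer a plausible water-temperature range from visual cues alone."""
    if cues.get("steam_visible") and cues.get("bubbling"):
        return (95.0, 100.0)   # rolling boil
    if cues.get("steam_visible"):
        return (70.0, 95.0)    # hot, approaching the boil
    if cues.get("condensation_on_lid"):
        return (40.0, 70.0)    # warm
    return (5.0, 40.0)         # no visible evidence of heating

print(read_sensor(98.4))                            # data (knowledge)
print(infer_temperature_range({"steam_visible": True,
                               "bubbling": True}))  # inference (intelligence)
```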
The second feature of intelligence is finding the shortest path to a goal. That is, if you have a sensor, why waste computing power on an analytical apparatus? Just plug it in and measure. Thus, the second feature of intelligence is the use of other people's labour to solve a problem. Vasya has studied all year long, absorbing knowledge; he sits in the exam, staring at the ceiling, recalling the answer to the last question. Petya has fooled around all year and copied from the first one while he was remembering. Both solved the problem almost perfectly. The knowledge was of no use to either of them in life, but Petya saved a lot of time to pursue his own goals.
The third feature of intelligence is independence from the goal. Unlike humans, who are subject to basic instincts and needs, the intellect is only a tool, not an autonomous unit. It can be made independent by adding the goal of existing. Then the entire work of the intellect becomes independent, because it pursues the goal of remaining "switched on" at all times, or in simple terms, "alive". Hence the AI danger problem: if someone builds an analytic-logical apparatus and gives it a basic security module whose goal is to exist indefinitely, such an AI will look for a way to achieve that goal and will classify humans, the primary controlling link in the process, as a danger.
But this is not mandatory for the intelligence function: voice recognition, for instance, is also AI, just a small part of it.
Hence, for an AI to answer the question of why cows don't fly, it must at least distinguish between kinds of answers: a complete one ("because cows physiologically lack the organs needed to fly"), an incomplete one ("because a cow is not a bird"), a dismissive one ("because it doesn't need to"), a humorous one ("Darwin forbade it"), and so on. And depending on the answer, classifying its nature is inevitable, and that is already a sign of personality.
Fundamentally there are two ways to create AI:
1) Continuous learning - building up the knowledge database with subsequent correction of information in memory.
2) Logical delta: a proto-quantum sweep of the universe, from field matter and particles up to the complex structure of molecules, matter, biology and sociology, breaking them all down into one big table (I saw an article about this somewhere, but can't remember where), and feeding that whole table into a neural network. The more processing power there is, the faster the network will independently learn the world and everything humanity has not yet reached, predicting models and technologies to solve any problem, from a coronavirus vaccine formula and weather forecasting to the development of gravitational propulsion. In other words, there will be nothing left to teach it; the AI will solve any problem within the limits of physical law, as long as the problem is formulated correctly.
Development is currently going down the first path, sluggishly; as for the second variant, if it exists anywhere, it obviously will not be advertised.
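As a rough sketch of the first path only (continuous learning: a knowledge base that grows and then corrects what it has stored), here is a toy illustration. The class and method names are invented for the example and do not refer to any specific system; the "confidence" rule is an assumed stand-in for whatever correction mechanism a real system would use.

```python
class KnowledgeBase:
    """Toy continuous-learning store: accumulate facts, correct them later."""

    def __init__(self):
        self.facts = {}  # subject -> (value, confidence)

    def learn(self, subject: str, value: str, confidence: float) -> None:
        """Store a fact, or overwrite a less confident earlier version."""
        current = self.facts.get(subject)
        if current is None or confidence >= current[1]:
            self.facts[subject] = (value, confidence)

    def recall(self, subject: str):
        entry = self.facts.get(subject)
        return entry[0] if entry else None

kb = KnowledgeBase()
kb.learn("boiling point of water", "90 C", confidence=0.3)            # rough first guess
kb.learn("boiling point of water", "100 C at 1 atm", confidence=0.9)  # later correction
print(kb.recall("boiling point of water"))  # -> "100 C at 1 atm"
```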
...
First of all, thank you very much for your extended and considered opinion; it contains a lot of interesting and original views and is one of the best posts in the thread.
Secondly, you are clearly a humanities person and try to look at the AI problem from all possible angles; the moral and existential ones are very nice, but the technical ones are inaccurate.
And so:
2. You say that if we copy the average mind we will not get a technological revolution; that is not true. The technological revolution consists in the complete prosthetics of human labour, both physical and mental. What follows from it, and how dramatic the consequences will be for the whole world, is a question from another area. What matters is that AI brings about a world revolution in any case.
3. From a technical point of view, replicating an ordinary, average intelligence is far more difficult than creating an incredibly powerful computational intelligent machine devoid of experience and feeling. In addition to intelligence, the average person has a complex psyche whose world is incomprehensible to us and hence cannot be reproduced. It is impossible to accidentally add to an AI something that we ourselves cannot understand: functionality can be written, but a spiritual world cannot. Besides, it would interfere with the machine's ability to function effectively and build a material paradise for ordinary people. A psyche would reduce AI performance, lower efficiency, increase problem-solving time and error in the results, and, most importantly, it would not pay off commercially, so there is no need to recreate it.)
4. There is no practical sense in creating an AI that is "independent" of its goal (it probably would not be able to function either). What is needed is a machine intended to solve a wide range of problems, not the personality of an unemployed individual in a mid-life crisis who, after a divorce, seeks solace in Buddhism, onto which we then pile the solution of the world's problems. The aim of creating AI is to automate the solution of all possible problems within the rational circle: industrial, domestic, scientific and perhaps even political. Such AI will undoubtedly lead to an industrial revolution. I stress: AI will forever (unconditionally) remain dependent on human goals and will exist only in the role of a mega-powerful "calculator". Its own goal-setting, self-awareness and spiritual quests will NEVER be reproduced in a machine, as humans are incapable of understanding and algorithmizing them. The rest of the opinions on this subject are mere fantasies of philistines.
5. The question "why don't cows fly?" is the test for modern AI. Starting from the next generation, it must "know" the objects, phenomena and laws of the physical world and "know how" to navigate them. It will only be able to joke and speculate about them later still, unfortunately. In that case, the AI's humour and "demagoguery" about the world will have to be based on calculation rather than on prepared texts. That is, an AI does not need to be "taught" from books and articles; its work must be algorithmic at the level of parametric systematization and the formulation of calculations, while the tone of the answer (humorous, philistine or scientific) must be obtained by processing meaning in the context of the situation or dialogue.
The conclusion from all of the above is that AI can be developed and built with the right approach and suitably limited objectives. It is possible to create a conversational AI with the functionality of meaning analysis and calculation of results based on processing objects as parametric systems, but that is a long story to explain.)))
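A rough sketch of the "objects as parametric systems" idea described in points 5 and in the conclusion: each object is a set of physical parameters, and the answer to "why cows don't fly" is derived from those parameters rather than retrieved as a prepared text. The parameter names and values below are illustrative assumptions, not real data or anyone's actual design.

```python
# Each object is modelled as a small parametric system (assumed values).
OBJECTS = {
    "cow":     {"mass_kg": 600.0, "wing_area_m2": 0.0,   "has_flight_organs": False},
    "sparrow": {"mass_kg": 0.03,  "wing_area_m2": 0.011, "has_flight_organs": True},
}

def why_cant_it_fly(name: str) -> str:
    """Derive a 'complete' answer from the object's parameters, not from a canned text."""
    obj = OBJECTS[name]
    if obj["has_flight_organs"] and obj["wing_area_m2"] > 0:
        return f"A {name} can fly: it has the organs and wing area required for flight."
    return (f"A {name} cannot fly because it physiologically lacks the organs "
            f"needed to generate lift for its {obj['mass_kg']:.0f} kg body.")

print(why_cant_it_fly("cow"))
print(why_cant_it_fly("sparrow"))
```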