You can certainly read someone's fantasy; that is not forbidden and is even useful for building new neural connections in the brain.
I think it's just the opposite: the more you read this sort of thing, the more often those neural connections shrivel up and shut down "for stocktaking".
Let us list the key theses from the beginning of the third chapter (they will help in understanding the logic of the material that follows):
Simulation and modelling, the Monte Carlo method? Nah, the author has never heard of such things.
Computing power that grows by the day already lets us model cosmological processes with trillions of physical objects, but humans modelling the optimal development and problem-solving of autonomous robots? Nah, that has never happened and never will))))))
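For anyone unfamiliar with the term: a Monte Carlo approach simply means running many randomly sampled scenarios through a simulator and keeping the statistics or the best result. A minimal sketch of that idea in Python, where the "fitness" function, the design parameters and the trial count are entirely made up for illustration and are not from any real robotics model:

```python
# Minimal sketch: Monte Carlo random search over a toy "design space".
# fitness(), N_TRIALS and the parameter ranges are hypothetical examples,
# only illustrating simulation-based optimisation in general.
import random

N_TRIALS = 100_000  # number of simulated candidate designs


def fitness(speed: float, battery: float) -> float:
    """Toy objective: reward speed but penalise battery drain (made up)."""
    return speed - 0.5 * speed ** 2 / max(battery, 1e-9)


best_score, best_design = float("-inf"), None
for _ in range(N_TRIALS):
    # Sample a random candidate design and evaluate it in the "simulator".
    design = (random.uniform(0.0, 10.0), random.uniform(0.1, 5.0))
    score = fitness(*design)
    if score > best_score:
        best_score, best_design = score, design

print(f"best design {best_design}, score {best_score:.3f}")
```

The same pattern scales from a two-parameter toy like this to the massive simulations mentioned above; only the simulator and the amount of compute change.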
A sufficiently powerful electromagnetic pulse would disable all the machinery on the planet. Only living things would remain intact.
In other words, understanding all this, a robot civilisation would never be completely independent of humans, or it would begin looking for ways to evolve organically.
This is offered as a counterbalance (or an afterthought) to the reflections on the complexities of reproduction.
It is not necessarily the case that the destruction of humanity by an AI (if it happens at all) will be a purposeful action on the part of that AI.
It is much more likely to happen as an unexpected and unintended deterioration of the human environment while the AI pursues the goals set for it by humans. It need not even be a consequence of anyone's malicious intent. Everything can unfold in line with Chernomyrdin's "we wanted the best, but it turned out as always". In fact, almost all post-apocalyptic works are built on this principle.
What seems reasonable at first may well turn out to be complete nonsense in the end. For example, the extermination of cats in the Middle Ages or sparrows in China.
For example, the AI will say that, in order to detect dark matter particles, it is necessary to build a collider the length of the equator and collide representatives of "Homo Promptus" in it until the right Promptus flies out of them and settles the question of what these particles are made of. In the end, the Promptus will run out and the problem still will not be solved :)