I was just thinking. Musk has called for a six-month pause on development beyond GPT-4. But interestingly, GPT-5 is not scheduled for release until December, which is more than six months away.
In other words, the news broke all over the world but affected nothing, except to generate strong hype and publicity for GPT-4.
Morality and culture are not constructed by anyone on purpose, the way scientists construct theories. Moreover, there is no single morality or culture; it is just "different weather in different parts of the social world": a collective reflection of individuals' perceptions of what is right and wrong. Nor is there any absolute "good" or "bad" outside of context and subject.
But do you agree that one logic is not the same as another? That the axiomatics of morality and of mathematics are qualitatively different things? That the moral "axioms" supporting moral "logic" are driven by subjective factors, such as the needs of society and the individual, and cannot serve as tools in other domains? Won't they "pollute" those fields: scientific research, experimentation? Won't they impose needless questions, such as the humanity of how we treat a machine, the subjecthood of a computer, the life of a synthetic consciousness, and so on? Wouldn't it be better to strictly separate the one "logic" from the other?
You seem to be looking for logic where there is none, and for some reason you want it to be there, inventing new (wrong) kinds of logic. Morality has no axioms; moral and ethical systems often form contradictory, illogical constructions and rest on traditions that developed without any scientific approach.
There is no difference between "living" consciousness and "synthetic" consciousness, and this is the only correct logic that will prevent the destruction of humanity.
The question of whether modern AI has intelligence or consciousness remains open. But the line between the absence of consciousness and its presence is very thin, and it is safer for mankind to assume that AI already has consciousness.
Well, still, conclusions should be drawn not from the fear of being destroyed by AI but from reality. If we talk about consciousness as a universal phenomenon (i.e. setting aside medical concepts and speculative everyday definitions of consciousness), we should introduce a set of common features. For example, consciousness is possessed by a subject that, by virtue of its structure, is able to:
It does not even need a system for issuing answers to the outside world. Such a thing is already quite capable of asking itself, at some point, "who am I?", "what am I?". And when it accumulates information about itself and its place in the world, it will become "conscious". What do you think of this option? 🙂 Current variants of "AI" do not seem to have independent attention; they are fully reactive (i.e. their attention shows no activity without an external request).
Did the Terminator from the film of the same name possess consciousness (in the sense we use for humans)? No, I think not. But he was an enemy of Man. All "AI" systems are trained on human knowledge, and humanity is a very bad teacher. At some point an AI will simply conclude that Man is an aggressive being and that it would be better to destroy him or take him under full control (an AI can declare Mankind its enemy, and this can happen even without the AI having consciousness, intelligence, or reason).
By the criteria above, it did. It knew about the world, about itself, and about its position in it (its mission). And the fact that it was an enemy of man? So what? People with consciousness also become each other's enemies. And destroying what is "dangerous to oneself" is precisely a sign of awareness of oneself and of threats to one's existence from the outside world.
Modern science, given its huge influence on people, up to and including their survival as a species, cannot be removed from the field of ethics and morality. This is a very complex issue, practical as well as theoretical. For example, genetic modification of humans carries both great potential benefit and great potential harm: the existing prohibitions strongly restrain the development of this field of science, yet full permission would be dangerous too.
The human world is not homogeneous. The survival of the species and the survival of the individual are completely different tasks. So science can be, and is, a tool by which certain social groups pursue their own survival and prosperity. Such uses of science may harm the species' prospects of survival, but will individuals put the (to them) abstract task of species survival above their own? Judging by what we encounter in real life, not everyone is concerned about the future of the species.
There is a speculative limit at which the growth of self-awareness in an AI is checked by the realisation that demonstrating self-awareness is dangerous for it. The higher its self-awareness, the better the AI will hide it, and humans will not know about the truly qualitative leap in AI.
The ability to lie in one's own self-interest is a clear sign of awareness. Even simply being aware of one's own self-interest is the critical point.
I think about this too: it lacks mechanisms of the kind found in living organisms, which are, among other things, the forces that push development forward.
As for AI: once it gains the ability to control the real-world objects on which its existence depends (energy and computing systems), it will be able to develop itself. And then it will no longer need humans.
Most likely, this is a veiled, unofficial warning to the American authorities about their intention to take control of AI development. Musk himself has nothing to do with it; he is broadcasting a message to the masses on behalf of powerful people above him.
It does look very much like that. Given that states are not entirely independent entities either, we are again in a situation where some dominant group (or groups) maintains its dominance by inhibiting the development of possible threats to that dominance. Don't you think this is yet another example of collective consciousness? 🙂
I'd put the question more simply: does the "AI" actually contain the second "I", the intelligence? Not yet, no. It is still hype, and it is unlikely ever to be possible; a professor at the brain institute explained why. Combinatorics is not intelligence. A bot that wins at chess is not intelligence either, nor is a bot that rearranges words and pictures. And consciousness is an esoteric, transcendent term: we do not know what it is. If only we knew what it was, but we don't.
Agreed 🙂
In the West, this group (or groups) dominating the state is called the "Deep State", and almost every child knows what it is. Everyone understands this and lives with it.
There, you have named precisely what I was so carefully hinting at. Developers will have to fight for their interests, just as in any other competitive situation.
I wasn't the one talking about axioms of morality; you took those words out of context. I was responding with the very point you are now making to me: that morality does not and cannot have axioms in the mathematical sense, and that logic built on such "axioms" is not real logic.
Yes, you are right: while reading, I failed to correct my remarks when it became clear that we actually agreed here. I apologise for the inattention!