The goal is to run program autosynthesis based on a common Object model.
This is probably what is called genetic programming. There, too, you can't do without an explicit description of the language in the form of a BNF grammar or a tree (which is essentially the same thing).
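The grammar-plus-tree idea can be sketched concretely. Below is a minimal, purely illustrative genetic-programming setup in Python; the toy grammar, the fitness target and the population size are my own assumptions for the sketch, not anything from the original. Candidate expressions are grown as derivation trees from a tiny BNF-like grammar and scored against a target function:

```python
import random

# A toy grammar for arithmetic expressions over one variable x.
# Each non-terminal maps to a list of possible productions.
GRAMMAR = {
    "expr": [("bin",), ("term",)],
    "bin": [("expr", "op", "expr")],
    "op": [("+",), ("-",), ("*",)],
    "term": [("x",), ("1",), ("2",)],
}

def generate(symbol="expr", depth=0, max_depth=4):
    """Expand a non-terminal into a random derivation tree (nested tuples)."""
    if symbol not in GRAMMAR:
        return symbol                       # terminal: return as-is
    productions = GRAMMAR[symbol]
    if depth >= max_depth:                  # force termination near the depth cap
        productions = [p for p in productions if p == ("term",)] or productions
    production = random.choice(productions)
    return tuple(generate(s, depth + 1, max_depth) for s in production)

def to_source(tree):
    """Flatten a derivation tree back into a string of code."""
    if isinstance(tree, str):
        return tree
    if len(tree) == 1:                      # unwrap single-child nodes
        return to_source(tree[0])
    return "(" + "".join(to_source(t) for t in tree) + ")"

def fitness(tree, target=lambda x: x * x + 1):
    """Mean squared error of the candidate against a target function (lower is better)."""
    try:
        f = eval("lambda x: " + to_source(tree))
        return sum((f(x) - target(x)) ** 2 for x in range(-5, 6))
    except Exception:
        return float("inf")                 # penalize anything that fails to evaluate

random.seed(42)
population = [generate() for _ in range(200)]
best = min(population, key=fitness)
print(to_source(best), fitness(best))
```

A full genetic-programming loop would add crossover and mutation of subtrees over many generations; the point here is only that the BNF grammar is what makes random generation and recombination produce syntactically valid programs by construction.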
Today I'll try to describe the steps of synthesizing a simple label from a pixel "proto-environment" and a step-by-step scenario of its sequential complication.
Augmented:
The goal is to identify a pattern in the complication of software objects, for its subsequent automation.
Part 4.2
This part of the concept deals with the so-called "Difficulty Pattern" (aka "Developmental Pattern"), a scheme supposedly hidden in the subconscious and serving as an "instruction" for assembling things. Formulating and implementing it would likely unlock a new algorithmic "Grail": automatic program synthesis and a next-generation AI engine. The game is worth the candle.
As is tradition, let's announce the original thesis:
Next (in the next part) we'll talk about the Label, but keep the above theses in mind, as they unambiguously hint at some answers to the questions of "autosynthesis" of software Objects. They mean that the Being of Objects is bound by strict rules of the birth, existence and development of Systems and Means, and that we cannot create "just anything" and hope for a result. It is already obvious that the possible ways of realizing programmatic autosynthesis are limited.
In the next part, we will consider the "birth environment" and the stages of "codo-enrichment" of the Label during its transformation from an unstructured pixel set into an interactive software tool.
People have long been interested in popular science questions about the threat of so-called "artificial intelligence":
Science-fiction writers, for the most part, inclined toward dire prognoses and drew chilling stories about the victory of soulless computing and mechanical forces over discouraged and overwhelmed humans. While the wave of popularity of machine-uprising theories gathered momentum, scientists were divided in their opinions. Some, smiling sceptically, called it scaremongering; others seriously proclaimed that AI would be our last invention. Some believed that we and computers would live in peace; others (such as the very impressionable marketing entrepreneurs who dream of going to Mars) were so carried away that they began, like prophets, to call on the masses via the internet and television to think about the inevitable end.

Meanwhile, IT companies continued to develop actively and openly toward the "ominous" abyss of the so-called "technological singularity", beyond which our life will change so much that we will degenerate into the unknowable. With the excess of theories, opinions and technologies that appeared, it became difficult for anyone wishing to understand the subject to know whom and what to believe. The answer, in my opinion, should be sought among software programmers: according to the familiar scenario, the "victorious march of the machines over mountains of corpses" must begin with the writing of some special code, which is then loaded into quantum hardware or a supercomputer and, inside it, becomes self-aware. It is logical to assume that the emergence of digital consciousness depends on certain genius programmers spending their working days behind the dusty desks of the offices of "evil" corporations, and they should know better than anyone whether there are reasons to be afraid.
Realizing that many fears of AI were created by popularizers to heat up the Market and stimulate sales of thematic products (games, books, movies and... brain chips), I would still like to understand HOW code could actually threaten humanity, and whether it is possible to write such code at all.
Answering, even in general terms, is very difficult. First, one has to set fiction aside and formulate the question:
Let's not be cheeky and answer first; instead, let's ask Evolution. Does it not possess the complication algorithm? Has it not been using it for hundreds of millions of years? Is our ecosystem not proof that Evolution possesses the as yet unreachable Grail of Life?
Now, let us look at human creations. Are we not constantly complicating our technology? Are we not creating more complex, diverse and multifunctional devices? How do we know how to complicate and improve anything? Don't we have the same complication algorithm that Evolution has? Didn't Evolution "put it in us"? So maybe the Evolutionary complication mechanism and the one we use to make more complicated phones/computers and stools is one and the same?
Based on this logic, we have an a priori complication algorithm, but we either don't know it or can't articulate it clearly.
Afterword:
I decided to devote this part to explaining the meaning of my research. I will continue the step-by-step analysis in the next part.
A good philosophical topic, unfortunately I cannot answer in detail now, but in brief:
Artificial consciousness implies (at least at the level of theoretical reasoning) the possibility of an "artificial" will as well. It is obvious that when we reach the point of creating full-fledged artificial consciousness to endow new robots with it, we will simply remove the will, or give them a will aimed exclusively at serving. We will then get an intelligent, obedient executor rather than a full-fledged, psychically autonomous personality like the one in Pelevin's latest text (if we disregard the obvious references to the "deep people"), so a rebellion of Skynet-style terminator machines simply won't happen.
An alternative hypothesis is that the development of will and autonomy inevitably occurs as the system becomes more complex. Then we get scenarios like Detroit: Become Human, where androids surpass humans themselves in humanity, or Cyberpunk 2077, in the storyline with the intelligent machines of Delamain Taxi. In that case there will either be a need for artificial containment of smart machines' self-development, or an ethical problem of inclusion and recognition of android rights. In fact, the ethical problem arises already at the creation stage: how acceptable is it to create a being who will probably suffer from the awareness of being locked in the iron prison of a production facility? However, the same problem exists in the birth of biological human beings today; it is just that no one asks children whether they want to live in this world.
On the question of the self-complication of systems: apparently some kind of non-Turing automata model is needed to adequately explain the emergence and self-development of the psyche, one without a central processor at all, like memcomputing. Of course, Turing completeness implies that a sufficiently powerful machine can emulate absolutely any environment, including, why not, the human nervous system starting from the embryo with a full simulation of its surroundings, but this is probably not a very effective way.
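For illustration only, here is about the simplest possible model of computation without a central processor: an elementary cellular automaton, where every cell updates in parallel from its immediate neighbours. Rule 110 in particular is known to be Turing-complete, which makes it a convenient toy stand-in for the kind of decentralized computation mentioned above (the ring size and step count below are arbitrary):

```python
# Rule 110 elementary cellular automaton: every cell updates in parallel
# from its own state and its two neighbours. There is no central processor,
# yet Rule 110 is known to be Turing-complete.
RULE = 110

def step(cells):
    """Apply one synchronous update to a ring of 0/1 cells."""
    n = len(cells)
    out = []
    for i in range(n):
        left, me, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        pattern = (left << 2) | (me << 1) | right   # 3-bit neighbourhood index
        out.append((RULE >> pattern) & 1)           # look up the rule bit
    return out

cells = [0] * 30 + [1]          # a single live cell on a ring of 31
for _ in range(15):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```

The familiar triangular pattern that emerges is produced entirely by local interactions, which is the intuition behind "self-development without a central processor".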
I think it's better to start with a simple system and move towards complexity, analysing each step. So I decided to take the Label as a base and watch how it evolves into an increasingly complex object: to analyze the code we add to it and check whether there is a scheme, a repeating pattern, in our actions.
The description of the process of conscious complication should be accompanied by programmatic and philosophical notions, so as to generalize and look for the rules we ourselves adhere to. Perhaps we will arrive at an understanding of what code could, in theory, perform similar actions.
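As a sketch of what such "steps of complication" might look like in code (all class and stage names here are my own illustrative invention, not the author's actual implementation), notice that each stage only adds structure around what the previous stage already had:

```python
# Hypothetical staged "codo-enrichment" of a Label, stage by stage:
# stage 0: raw pixels; stage 1: geometry; stage 2: text; stage 3: behaviour.
# At every stage we only ADD structure around what already exists,
# which is the repeating pattern the text proposes to look for.

class PixelBlob:                      # stage 0: an unstructured pixel set
    def __init__(self, pixels):
        self.pixels = pixels          # e.g. a list of (x, y, colour) tuples

class Shape(PixelBlob):               # stage 1: pixels gain a bounding geometry
    def __init__(self, pixels):
        super().__init__(pixels)
        xs = [p[0] for p in pixels]
        ys = [p[1] for p in pixels]
        self.bounds = (min(xs), min(ys), max(xs), max(ys))

class TextShape(Shape):               # stage 2: geometry gains a textual meaning
    def __init__(self, pixels, text):
        super().__init__(pixels)
        self.text = text

class Label(TextShape):               # stage 3: meaning gains interactive behaviour
    def __init__(self, pixels, text):
        super().__init__(pixels, text)
        self.handlers = {}

    def on(self, event, handler):
        self.handlers[event] = handler

    def fire(self, event):
        if event in self.handlers:
            self.handlers[event](self)

lbl = Label([(0, 0, "black"), (5, 2, "black")], "OK")
lbl.on("click", lambda l: print("clicked:", l.text))
lbl.fire("click")                     # prints "clicked: OK"
```

Whether this add-a-layer pattern really is the "complication algorithm" is exactly the question the article poses; the sketch just makes the candidate pattern visible.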
We must first answer the question of what consciousness is. So far this has not gone well; there is even a term for it in modern philosophy, the "hard problem of consciousness".
In my view, if there is any way to solve this problem, it will most likely be found along the path of Wittgenstein's philosophy of ordinary language. So I continue to insist on a constructive formalization of language. Essentially, we need to do for the language of human-computer communication roughly what the invention of Lojban or Ithkuil did for the language of communication between humans.
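As a toy illustration of what "constructive formalization" could mean in practice (the command vocabulary below is entirely made up for the sketch), one can define a controlled language in which every admissible utterance matches a fixed pattern, so the machine's interpretation is unambiguous by construction:

```python
import re

# A toy "constructive formalization" of a command language: every utterance
# must match one of a few fixed patterns, so there is no ambiguity for the
# machine to resolve. The vocabulary is purely illustrative.
PATTERNS = [
    (re.compile(r"^create (\w+) named (\w+)$"), "CREATE"),
    (re.compile(r"^set (\w+) of (\w+) to (\w+)$"), "SET"),
    (re.compile(r"^delete (\w+)$"), "DELETE"),
]

def parse(utterance):
    """Return (action, arguments), or raise on anything outside the grammar."""
    for pattern, action in PATTERNS:
        m = pattern.match(utterance)
        if m:
            return action, m.groups()
    raise ValueError("utterance is outside the controlled grammar: " + utterance)

print(parse("create label named hello"))    # ('CREATE', ('label', 'hello'))
print(parse("set text of hello to world"))  # ('SET', ('text', 'hello', 'world'))
```

The price of such a language, as with Lojban or Ithkuil, is that the speaker carries the burden of precision; the benefit is that nothing is left for the listener to guess.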
72 cases, 24 new special cases, a non-linear writing system, matrix grammar, morphosyntax, boustrophedon and special phonetics: that is what is needed for the coolest trading sects (so that the Chekists and Freemasons cannot steal the Grail).
I don't agree with this view. To put it bluntly, Consciousness is a broken, thrice-twisted Object "processor", littered with a thousand tons of emotional junk, barely functioning and corroded. We just need to extract the System-processing and complication mechanism from it, and leave the rest to the Thinkers and Psychiatrists)
Sounds like a suggestion to extract the wetness from water)