Neural network

 
registred >> :
I love this one though. :) How much I suffered with it in my time, before forex. :)

Have your agonies brought the long-awaited result of increasing your personal financial well-being by successfully creating and applying a neural-network Expert Advisor? Or maybe it's all bullshit. :)

 
StatBars >> :

>> What are the outputs and inputs?

How many teacher (target) values is it best to use to train a neural network? For example, suppose there are four possible actions the network can respond with. If I use 4 discrete values, I notice the error is greater than if I smooth them toward mean values. How many is optimal? In the attached file, the actual outputs are on the left and the values the network was trained on are on the right.
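Purely as an illustration of the contrast being asked about (and not how StatBars actually encoded the targets), here is a minimal Python sketch of hard targets for four actions versus the same targets pulled toward the mean; the number of actions and the smoothing factor are assumptions:

```python
import numpy as np

N_ACTIONS = 4  # assumption: four possible network responses

def hard_targets(actions):
    """One output per action: the chosen action gets 1.0, the rest 0.0."""
    t = np.zeros((len(actions), N_ACTIONS))
    t[np.arange(len(actions)), actions] = 1.0
    return t

def smoothed_targets(actions, eps=0.1):
    """The same encoding pulled toward the mean, so the teacher values are
    less extreme; eps is an illustrative smoothing factor, not a recommendation."""
    return hard_targets(actions) * (1.0 - eps) + eps / N_ACTIONS

actions = np.array([0, 3, 1])      # example sequence of teacher actions
print(hard_targets(actions))       # rows like [1.  0.  0.  0.]
print(smoothed_targets(actions))   # rows like [0.925 0.025 0.025 0.025]
```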


 
Burgunsky >> :

Have your agonies brought the long-awaited result of increasing your personal financial well-being by successfully creating and applying a neural-network Expert Advisor? Or maybe it's all bullshit. :)


Kohonen - yes, sometimes it helps. BackProp needs a teacher. I tried training with one, and the result was negative. You have to know where to get a teacher. All the articles I've read on this turned out to be nonsense, so I threw it in the trash. Maybe I'm wrong and someone else here can tell you about backprop. By the way, there are some interesting things in Makarenko and Golovko - for example the lectures on neuroinformatics at MEPhI; I advise you to read them.

 
registred wrote >>

You have to know where to get a teacher.

Can't you write it yourself?

 
Swetten >> :

Can't you write it yourself?


If you have suggestions for the network outputs, i.e. for the teacher, please share your view of the situation. I have not been able to get any results. It seemed cool at first, then it went downhill.

 
A quick question for the public.
I've tried using a script to look for inconsistent input vectors. If an input vector coincides with another vector exactly or within a given deviation, I check what the teacher says on those bars; if it gives directly opposite results, the inputs are inconsistent. The inputs are the AO indicator readings, the teacher is the analogue of MRR from Ivanov's diploma. Now, if the deviation is set not to zero but a bit larger (e.g. 0.5), the script finds a lot of contradictory vectors; if the deviation is larger still, it finds even more, and so on. In other words, it turns out that each vector is completely individual. So how, in such a case, can one try to combine vectors into groups of similar vectors, as a Kohonen net does?
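For reference, a minimal Python sketch of the kind of check described above - it is not the actual script; the pairwise comparison, the component-wise deviation measure and the +1/-1 teacher encoding are all assumptions:

```python
import numpy as np

def find_contradictory_vectors(inputs, targets, max_deviation=0.5):
    """Report pairs of input vectors that coincide (or differ by at most
    max_deviation in every component) while their teacher values point in
    opposite directions. Purely illustrative, not the script from the post."""
    contradictions = []
    n = len(inputs)
    for i in range(n):
        for j in range(i + 1, n):
            deviation = np.max(np.abs(inputs[i] - inputs[j]))  # worst component-wise gap
            if deviation <= max_deviation and targets[i] * targets[j] < 0:
                contradictions.append((i, j, float(deviation)))
    return contradictions

# Two nearly identical input vectors with opposite teacher signals are flagged.
X = np.array([[0.10, 0.20], [0.15, 0.25], [2.00, 3.00]])
y = np.array([+1, -1, +1])
print(find_contradictory_vectors(X, y, max_deviation=0.5))  # -> [(0, 1, ~0.05)]
```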
 
Burgunsky >> :
A quick question for the public.
I've tried using a script to look for inconsistent input vectors. If an input vector coincides with another vector exactly or within a given deviation, I check what the teacher says on those bars; if it gives directly opposite results, the inputs are inconsistent. The inputs are the AO indicator readings, the teacher is the analogue of MRR from Ivanov's diploma. Now, if the deviation is set not to zero but a bit larger (e.g. 0.5), the script finds a lot of contradictory vectors; if the deviation is larger still, it finds even more, and so on. In other words, it turns out that each vector is completely individual. So how, in such a case, can one try to combine vectors into groups of similar vectors, as a Kohonen net does?

It's not about whether each vector is individual or not. It's about how the vector itself is composed. Understand that a neural network is just a tool for interpreting data; it is not a "smart artificial intelligence" that can be fed whatever you want and will figure everything out on its own. An NN works according to the most primitive rule of the conditioned reflex: stimulus -> reaction. So whether you show it AO signals, any other indicator, or your horoscope for that matter makes no difference: if the initial signal does not contain useful information, the net will not extract anything from it.

Just think: you have shown it, say, twenty samples of AO. Now imagine how many different market situations could have produced such a sequence (or "almost such" in terms of correlation). Even if there are only two of them (the limiting case), the probability that they lead to diametrically opposite outcomes is far from negligible. And what if there are more? And in general, how would the network know whether you showed it AO or AC, or a chart of solar activity? This is where the so-called "inconsistent vectors" come from - from the raw data. The supposed inconsistency actually means that the network, or rather the mathematical model it implements, simply cannot make a decision in this situation for lack of sufficient arguments.

Do not waste your time training networks on the bare price and linear indicators such as AO - that stage is behind us, and a great number of experiments have shown it to be unprofitable at best. Dig into the non-linear side, isolate the principal components, and so on. The network will only work successfully when it analyzes meaningful data - and only the programmer's head can make sense of the data (not necessarily on its own: various technical tools can be brought in to help).
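As one illustration of the "isolate the principal components" suggestion (a sketch, not a recipe from this thread), raw indicator features could be projected onto their leading principal components before being fed to a network; the data shape and the number of components below are assumptions:

```python
import numpy as np

def principal_components(X, n_components):
    """Project raw feature vectors onto their leading principal components,
    so the network sees a few decorrelated combinations that carry most of
    the variance instead of the raw indicator values."""
    X_centered = X - X.mean(axis=0)
    cov = np.cov(X_centered, rowvar=False)          # feature covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)          # eigh: covariance is symmetric
    order = np.argsort(eigvals)[::-1]               # largest variance first
    W = eigvecs[:, order[:n_components]]
    return X_centered @ W                           # reduced inputs for the network

# Example: compress 10 correlated raw features down to 3 network inputs.
X_raw = np.random.randn(500, 10)
X_net = principal_components(X_raw, n_components=3)
print(X_net.shape)  # (500, 3)
```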

 
to alsu: Is non-linear data unsorted data? Can you give an example of non-linearity? I'm not sure how to apply it in this case. In general, my mathematical model of the network turned out to be somehow linear, since after tuning it only has two variants of outputs.
 
Burgunsky >> :
Is non-linear data unsorted data?

I would rather say data that reflects, to some extent, the underlying essence of the processes taking place rather than their external manifestations. What does a trader have to begin with? There is a news flow, a price flow and a volume flow (and, indeed, the data are not consistent). The last two objects are already "mathematized", so to speak - expressed in numbers - while with news the problem is harder: first they must be obtained, and second they must somehow be formalized (that is a separate topic; there was even a thread about it here recently).

So our task is to show all this to a neural network in a digestible form. Imagine a parrot that has been taught for a hundred years to respond in a certain way to quite specific phrases, for example "buy", "sell" and "sitikuri". Obviously, it would be reasonable to give it sentences like: "the price has behaved so-and-so over the last N days (it rose from here to there, then it fell, then - pay special attention, Popka - it formed such-and-such a pattern), the deal volumes were such-and-such, the market reacted in such-and-such a way to the news published at that time, and all of this happened against the background of such-and-such a trend". After the aforementioned hundred years a picture of the market situation will form in its parrot head - perhaps even unconsciously - and it will give the correct answer with a certain probability. If instead we tell it "the price was this yesterday, this the day before, and so on for a month", it simply won't know what to hang on to, because the task of singling out the meaningful elements in a homogeneous information flow will be too heavy a burden for its tiny brain. At best it will have only a vague idea of what to look for when making a decision, and if we scold it too diligently for wrong answers (read: training with a teacher :) ), it will be completely lost and will leave the learning process knowing nothing.
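To make the analogy concrete (this is only a sketch of what a "digestible form" might look like, not alsu's actual setup; every field name and scaling is an assumption), one could hand the network a compact feature vector of summarized facts instead of a raw price stream:

```python
import numpy as np

def build_feature_vector(prices, volumes, news_score, pattern_flags):
    """Assemble one compact, 'digestible' description of the recent market.

    prices        : recent close prices, oldest first
    volumes       : deal/tick volumes over the same window
    news_score    : a formalized news reaction in [-1, 1]
    pattern_flags : dict of boolean pattern detections, e.g. {"breakout": True}
    """
    prices = np.asarray(prices, dtype=float)
    returns = np.diff(prices) / prices[:-1]
    return np.array([
        returns[-1],                                 # latest bar's move
        returns.mean(),                              # background trend over the window
        returns.std(),                               # how choppy the window was
        float(np.mean(volumes)),                     # typical deal volume
        float(news_score),                           # formalized news reaction
        float(pattern_flags.get("double_top", False)),
        float(pattern_flags.get("breakout", False)),
    ])

x = build_feature_vector(
    prices=[1.301, 1.305, 1.299, 1.310],
    volumes=[1200, 900, 1500],
    news_score=0.3,
    pattern_flags={"breakout": True},
)
print(x)  # one 7-component vector instead of a raw price series
```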


Another analogy: how do you and I use our eyes to recognize objects? Roughly speaking, our consciousness does not analyze the set of pixels coming straight from the retina; it receives a ready-made visual image for analysis and recognition. In other words, to understand what we see, the corresponding areas of the brain supply, along with the image itself, a list of features to pay attention to - the data arrive already prepared for the final analysis, already carrying a certain semantic load.


To pick out the essential and discard the unimportant is what I understand by non-linear processing.

 
It is useful to think about the following question. Suppose we have a neural network that has been trained to give correct answers 90% of the time based on certain input signals (Soros, eat your heart out). Is such a network valuable by itself - could someone else simply take it and profit from it? Obviously not, because the information about which inputs are required and how to interpret the outputs is stored not in the network but in the head of its creator. Thus, although the network is trained, on its own it turns out to be useless. Once again: an NS is just a tool (imho, no better or worse than any other available one); having it and knowing how to use it are very different things.