Article: Price forecasting with neural networks - page 9

 
PraVedNiK, maybe it's time to move from a single neuron to a proper network? It's a somewhat different thing, and everything about it works differently.
 
Vinin, you once wrote that you have dealt with neural networks, and even that your Championship EA is an implementation of one. In other words, compared to me you are an expert. Advise me what to read to get to grips with this huge topic. The goal is not just to understand the principles of network operation and design, but to understand them deeply enough to write in MQL both the network itself (which I hope to tackle once I understand the subject) and all the infrastructure related to its training.
 
Yurixx:
Vinin, you once wrote that you have dealt with neural networks, and even that your Championship EA is an implementation of one. In other words, compared to me you are an expert. Advise me what to read to get to grips with this huge topic. The goal is not just to understand the principles of network operation and design, but to understand them deeply enough to write in MQL both the network itself (which I hope to tackle once I understand the subject) and all the infrastructure related to its training.

I don't consider myself a specialist, but I can always build a network when necessary.
 
Vinin:
PraVedNiK, maybe it's time to move from a single neuron to a proper network? It's a somewhat different thing, and everything about it works differently.
Is there any reason to switch to a multilayer network?... Actually, everything has a graphical meaning:

A perceptron is a line dividing 2 classes: the green balls are "price will most likely go up",
the red ones are "price ... down". The problem is that there is a messy area where the balls
are all interspersed. Some clever people /including on this forum/, having read the books
of Shumsky and others, will suggest: switch to a multilayer network so as to get more of these dividing lines.
You may do it that way, or you may instead put a filter into your DiRoLnoDoLgo EA:
High[1]<High[2] && Low[1]<Low[2] && iOsMA... and High[1]>High[2] && Low[1]>Low[2] && iOsMA... ,
and it will remove about 2/3 of those painful points - and what is LEFT, see the figure:

After that it is easier to draw the dividing line. That is what DiRoLnoDoLgo is about: at least partially removing this mess. The results of forward analysis /over the last 5 months/ turned out to be quite good: gross profit = +16 figures, expected payoff = +2 figures /almost/, profitability = 30.
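To make the geometry concrete, here is a minimal Python sketch. The points, labels, and filter threshold are all invented for illustration - this is not the actual DiRoLnoDoLgo filter or anyone's real data. It only shows the idea: a single perceptron is just a dividing line, and pre-filtering the ambiguous "messy area" points can make what is left linearly separable.

```python
# Illustrative sketch: a single perceptron is a dividing line w.x + b = 0;
# filtering out ambiguous points first can make the rest linearly separable.

def perceptron_train(samples, epochs=100, lr=0.1):
    """Classic perceptron learning rule on 2-D points labelled +1/-1."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for (x1, x2), y in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1
            if pred != y:                      # misclassified: nudge the line
                w[0] += lr * y * x1
                w[1] += lr * y * x2
                b += lr * y
                errors += 1
        if errors == 0:                        # converged: data were separable
            break
    return w, b

def classify(w, b, p):
    return 1 if w[0] * p[0] + w[1] * p[1] + b > 0 else -1

# Hypothetical "balls": greens (+1, price up) cluster high, reds (-1) cluster
# low, plus two mixed-zone points that ruin linear separability.
data = [((2.0, 2.5), 1), ((2.5, 3.0), 1), ((0.5, 0.5), -1), ((1.0, 0.8), -1),
        ((1.5, 1.5), 1), ((1.6, 1.4), -1)]   # last two: the "messy area"

# The filter (a stand-in for the High/Low + iOsMA conditions in the post):
# drop points too close to the messy zone before training.
filtered = [s for s in data if abs(s[0][0] - 1.55) > 0.2]

w, b = perceptron_train(filtered)
print([classify(w, b, p) for p, _ in filtered])  # all filtered points separated
```

On the full six-point set the perceptron rule never converges; after the filter removes the two mixed points, the remaining four are separated in a handful of epochs.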
 
PraVedNiK:
Is there any reason to switch to a multilayer network?... Actually, everything has a graphical meaning: ....

I would never have thought that anyone would question the use of multilayer networks to improve classification performance. =)

I recommend reading what Yann LeCun writes about this - http://yann.lecun.com/exdb/mnist/index.html. True, the subject area there is a bit different - character recognition. In any case, the single-layer networks showed the worst results - 8.4% error. However! One of the multilayer ones (two-layer, with 300 neurons in the hidden layer) achieved a very good result - 1.6% error. That is, adding even one layer makes the network much "more powerful".

I really don't think that shrinking the training sample is a good option. It is much better to achieve greater class separability - i.e. to transform the input data so that there are no conflicts (for example, by increasing the time interval of quotes visible to the network). As I recall, the fxclub book "Trading - your way to financial freedom" recommends feeding the network more than one pair of quotes.

And single-layer networks have one more drawback: a person who wants to build and train such a network will never even need to find out what BackProp is, among many other things. I.e. by sticking to networks of ancient architectures we reduce the probability that effective networks of new architectures will be created in the near future, which is very, very bad, because we have to help the networks somehow. =)
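A textbook way to see why even one hidden layer makes a network much "more powerful" is XOR: no single line separates its two classes, but two hidden neurons (two lines) do. A minimal sketch with hand-picked weights - not trained, purely to show the geometry:

```python
# XOR: no single line separates {(0,1),(1,0)} from {(0,0),(1,1)}, so a
# single-layer perceptron fails, but two hidden neurons (two lines) suffice.
# The weights below are hand-picked for illustration, not learned.

def step(x):
    return 1 if x > 0 else 0

def hidden(x1, x2):
    h1 = step(x1 + x2 - 0.5)       # line 1: fires on OR(x1, x2)
    h2 = step(-x1 - x2 + 1.5)      # line 2: fires on NAND(x1, x2)
    return h1, h2

def xor_net(x1, x2):
    h1, h2 = hidden(x1, x2)
    return step(h1 + h2 - 1.5)     # output: AND of the two hidden lines

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))   # reproduces the XOR truth table
```

The output layer simply ANDs the two dividing lines together, which a single-layer network cannot express - exactly the "more lines" argument above.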

 

In general, as someone who has been applying neural networks in financial markets for a long time, I can say one thing - the main things are not described there. Admittedly, I do not program neural networks - I deal exclusively with their APPLICATION, which is a separate and very "delicate" topic, and it is precisely this application that the article leaves out, even though it is one of the main, fundamental topics of "applying neural networks in financial markets". A lot depends on it...

But this is my personal opinion.....

 
LeoV:
In general, as someone who has been applying neural networks in financial markets for a long time, I can say one thing - the main things are not described there. Admittedly, I do not program neural networks - I deal exclusively with their APPLICATION, which is a separate and very "delicate" topic, and it is precisely this application that the article leaves out, even though it is one of the main, fundamental topics of "applying neural networks in financial markets". A lot depends on it...

But this is my personal opinion...



Yes.

As someone who is only slightly involved in neural networks (a mere 12 years), I can tell a person who has been using them for a long time that the application of neural networks to any task is inseparable from their design (programming). The main thing is two postulates: the input data (a separate song in itself) and, most important of all, the training algorithm. Networks can do anything - the main thing is to train them correctly.
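As a toy illustration of that "most important" part - the training algorithm - here is plain gradient descent on a single sigmoid neuron. The 1-D data are invented, and this is only a sketch of the idea, not anyone's actual training code:

```python
import math

# Sketch of a training algorithm: gradient descent on one sigmoid neuron,
# minimising cross-entropy loss. Toy 1-D data, invented for illustration.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, epochs=2000, lr=0.5):
    """Stochastic gradient descent for a single neuron with weight w, bias b."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in samples:                  # y is 0 or 1
            p = sigmoid(w * x + b)
            grad = p - y                      # dLoss/dz for cross-entropy
            w -= lr * grad * x
            b -= lr * grad
    return w, b

# Invented data: negative inputs belong to class 0, positive to class 1.
data = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]
w, b = train(data)
preds = [1 if sigmoid(w * x + b) > 0.5 else 0 for x, _ in data]
print(preds)  # matches the labels after training
```

The same `p - y` gradient, pushed backwards through each layer via the chain rule, is exactly what BackProp does for multilayer networks.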
 
juicy_emad:
PraVedNiK:
Is there any reason to switch to a multilayer network?... Actually, everything has a graphical meaning: ....

I would never have thought that anyone would question the use of multilayer networks to improve classification performance. =)

I recommend reading what Yann LeCun writes about this - http://yann.lecun.com/exdb/mnist/index.html. True, the subject area there is a bit different - character recognition. In any case, the single-layer networks showed the worst results - 8.4% error. However! One of the multilayer ones (two-layer, with 300 neurons in the hidden layer) achieved a very good result - 1.6% error. That is, adding even one layer makes the network much "more powerful".


Exactly: it is a completely different subject area, and therefore a different approach. The outlines of characters in standard fonts do not change, so it makes sense to train the network once on a single example - a couple of pages, say - so that it can then recognize the characters in the rest of the book with high accuracy.

Financial markets are a different area, where everything is constantly changing and in constant motion, and that is why complex multilayer networks fail here. An exaggerated analogy from character recognition: on one page of a book the symbol "A" should be read as "A", while on the next page the very same "A" should already be read as "B".

For this reason the same pattern, recognized in different sections of a financial instrument's history, can translate into different trading signals: in some sections its appearance calls for opening longs and closing shorts, in others the reverse - opening shorts and closing longs.
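The point about the same pattern flipping its meaning can be sketched numerically. The series and the deliberately trivial "model" below are invented for illustration only: a model fitted once on early history keeps failing after the regime change, while one refitted on a recent sliding window adapts.

```python
# Sketch of the non-stationarity argument: one recurring pattern whose
# correct signal flips halfway through the history. A fixed model fails
# after the flip; a model retrained on a sliding window adapts.
# Data and "model" are toys, invented for illustration.

def majority_label(history):
    """Trivial 'model': predict the label this pattern has carried most often."""
    return 1 if sum(history) * 2 >= len(history) else 0

# The pattern's correct signal: 1 for the first 50 bars, 0 afterwards.
labels = [1] * 50 + [0] * 50

window = 10
fixed_hits = rolling_hits = 0
for t in range(window, len(labels)):
    fixed_pred = majority_label(labels[:window])          # trained once, early
    rolling_pred = majority_label(labels[t - window:t])   # retrained each step
    fixed_hits += fixed_pred == labels[t]
    rolling_hits += rolling_pred == labels[t]

print(fixed_hits, rolling_hits)   # rolling model recovers soon after the flip
```

The fixed model scores only on the first regime; the rolling one loses a few bars right after the flip while its window still straddles both regimes, then tracks the new regime.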
 
Reshetov:
juicy_emad:
PraVedNiK:
Is there any reason to switch to a multilayer network?... Actually, everything has a graphical meaning: ....

I would never have thought that anyone would question the use of multilayer networks to improve classification performance. =)

I recommend reading what Yann LeCun writes about this - http://yann.lecun.com/exdb/mnist/index.html. True, the subject area there is a bit different - character recognition. In any case, the single-layer networks showed the worst results - 8.4% error. However! One of the multilayer ones (two-layer, with 300 neurons in the hidden layer) achieved a very good result - 1.6% error. That is, adding even one layer makes the network much "more powerful".

Exactly: it is a completely different subject area, and therefore a different approach. The outlines of characters in standard fonts do not change, so it makes sense to train the network once on a single example - a couple of pages, say - so that it can then accurately recognize the characters in the rest of the book.

And yet that network (for character recognition) is built for each specific font. Or do all printing machines print identically?
Is all paper equally white and of equal quality?
No, it is a variable task too; if everything were as you write, neural networks would not be needed at all - a simple comparison would suffice.
 
Sergey_Murzinov:
Reshetov:
juicy_emad:
PraVedNiK:
Is there any reason to switch to a multilayer network?... Actually, everything has a graphical meaning: ....

I would never have thought that anyone would question the use of multilayer networks to improve classification performance. =)

I recommend reading what Yann LeCun writes about this - http://yann.lecun.com/exdb/mnist/index.html. True, the subject area there is a bit different - character recognition. In any case, the single-layer networks showed the worst results - 8.4% error. However! One of the multilayer ones (two-layer, with 300 neurons in the hidden layer) achieved a very good result - 1.6% error. That is, adding even one layer makes the network much "more powerful".

Exactly: it is a completely different subject area, and therefore a different approach. The outlines of characters in standard fonts do not change, so it makes sense to train the network once on a single example - a couple of pages, say - so that it can then accurately recognize the characters in the rest of the book.

And yet that network (for character recognition) is built for each specific font. Or do all printing machines print identically?
Is all paper equally white and of equal quality?
No, it is a variable task too; if everything were as you write, neural networks would not be needed at all - a simple comparison would suffice.

1. It is not written - it is trained.
2. A book of a given edition is printed the same way on all machines. If it differs, that is a defect.
3. Within one edition the paper has the same format, e.g. "format 70x100 1/16. Offset printing. Printed sheets 37.4." The paper must also conform to the standard. And font sets do not differ greatly, so as not to spoil readers' eyesight.

In any case, pattern recognition tasks in fields where standards exist, e.g. printing, and in fields where there are none, e.g. financial markets, are completely different, and the error probabilities of the solutions differ as well.

An even simpler way to put it: if pattern recognition algorithms for financial markets erred as rarely as pattern recognition algorithms for printed texts, then ... (no need to continue - it is clear as it is).