Article: Price forecasting with neural networks - page 10

 
Sergey_Murzinov:

Yes.

As a person who is only slightly involved in neural networks (a mere 12 years), I can tell a person who has been involved with them for a long time that applying neural networks to any task is inseparable from their design (programming). Two postulates matter most: the initial data (a topic in itself) and, most importantly, the training algorithm. Networks can do everything - the main thing is to train them correctly.
I would add interpretation of the results to the training algorithm. It is easier to achieve a training result than to ensure a correct solution of the problem. If we consider price fluctuation to be a pseudo-stochastic time series, then the questions of application and interpretation become acute...
 
Reshetov:
juicy_emad:

I would never have thought that anyone would question the need for multi-layer networks to improve classification performance. =)

I recommend reading what Yann LeCun writes about it - http://yann.lecun.com/exdb/mnist/index.html. The subject area there is a bit different - character recognition - but in any case, single-layer networks showed the worst results: 8.4% error. However! One of the multilayer networks (two-layer, with 300 neurons in the hidden layer) achieved a very good result: 1.6% error. I.e. adding even one layer makes the network much "more powerful".


Exactly - because it's a different subject area, a different approach is needed. The outlines of characters in standard fonts do not change, so it makes sense to train the network once on a single example, say a couple of pages, so that it then recognizes the characters in the rest of the book with high accuracy.

As for financial markets, that is a different area, where everything is constantly changing and in constant motion. That is why complex multilayer networks fail here. An exaggerated analogy in character recognition would be a book in which the symbol "A" on one page should be interpreted as "A", while on the next page the same "A" should already be interpreted as "B".

For this reason the same pattern recognised in different sections of the financial instrument history data can be interpreted differently in trading signals, i.e. in some sections its identification is more appropriate for opening long positions and closing short ones, while in other sections it is vice versa: opening short positions and closing long ones.

The work I linked above used the MNIST database. It contains images of handwritten characters, not printed ones.

Of course, I understand that everything in the financial markets is in constant dynamics, but conflicting patterns (the same input mapped to two different classes) can be eliminated by increasing the amount of information at the network's input, or (as someone suggested above) such patterns can be excluded from the training sample. Of course, in the variant you suggested in the article, using a single-layer perceptron, there were a lot of conflicting patterns, since there were only 4 inputs.
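The exclusion of conflicting patterns mentioned above can be sketched directly. A minimal, hypothetical example (the tuple encoding of inputs and the "buy"/"sell" labels are illustrative, not from the article):

```python
from collections import defaultdict

def drop_conflicting_patterns(samples):
    """Remove inputs that appear with more than one class label.

    `samples` is a list of (input_tuple, label) pairs; any input that
    maps to conflicting labels is excluded from the training set entirely.
    """
    labels_per_input = defaultdict(set)
    for x, y in samples:
        labels_per_input[x].add(y)
    return [(x, y) for x, y in samples if len(labels_per_input[x]) == 1]

# Toy data: the pattern (1, 0) is labelled both "buy" and "sell",
# so every occurrence of it is removed.
raw = [((1, 0), "buy"), ((1, 0), "sell"), ((0, 1), "buy"), ((1, 1), "sell")]
clean = drop_conflicting_patterns(raw)
print(clean)  # [((0, 1), 'buy'), ((1, 1), 'sell')]
```

The alternative mentioned in the post - adding more inputs - works for the same reason: with more information per pattern, two previously identical inputs become distinguishable and stop conflicting.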

My point is that single-layer perceptrons are not capable of solving the XOR problem (see Minsky's book), and that makes them fundamentally limited.
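For reference, the XOR limitation can be shown concretely: no single-layer perceptron (one step unit on a weighted sum) can separate XOR, because the classes are not linearly separable, while a two-layer network already can. A small sketch with hand-picked weights (chosen for illustration, not learned):

```python
import numpy as np

def step(x):
    """Hard threshold activation: 1 where x > 0, else 0."""
    return (x > 0).astype(int)

def xor_net(x1, x2):
    """Two-layer perceptron computing XOR with fixed weights."""
    x = np.array([x1, x2], dtype=float)
    # Hidden layer: h[0] fires for "x1 OR x2", h[1] fires for "x1 AND x2"
    W1 = np.array([[1.0, 1.0],
                   [1.0, 1.0]])
    b1 = np.array([-0.5, -1.5])
    h = step(W1 @ x + b1)
    # Output layer: OR minus twice AND gives XOR
    w2 = np.array([1.0, -2.0])
    return int(step(w2 @ h - 0.5))

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", xor_net(a, b))  # 0, 1, 1, 0 respectively
```

A single `step(w @ x + b)` unit cannot reproduce this truth table for any choice of `w` and `b` - that is exactly the Minsky/Papert result cited above.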

 
rip:

I would also add interpretation of the results to the training algorithm. It is easier to achieve a training result than to ensure a correct solution of the problem. If we consider price fluctuation to be a pseudo-stochastic time series, then the questions of application and interpretation become acute...

Work begins precisely with the interpretation of the network's output(s) - that is where the problem statement comes in. So I completely agree with you.

 
I would like to raise the question of what exactly you use to create a training sample. This is the most important thing, after all.
 
slava1:
I would like to raise the question of what exactly you use to create a training sample. This is the most important thing.

The training sample is created from conventional indicators.

And which ones exactly is the most closely guarded part, as is the data preparation.

 
Why? No one is asking for an algorithm - just to share thoughts.
 
slava1:
Why? No one is asking for an algorithm - just to share thoughts.
It's not a public matter.
 
Then I wonder what we are even talking about here, if no one wants to talk about the most important thing.
 
slava1:
Then I wonder what we are talking about here, if no one wants to talk about the most important thing.

In this case it's like an arms race, no one believes anyone :)


The initial training data set can also be {H,L,O,C} ... What matters is the model - the idea behind the network and the system as a whole.
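As a rough sketch of turning {H,L,O,C} data into a training sample - the windowing, the return-based inputs, and the up/down target below are one possible choice for illustration, not the poster's method:

```python
def make_training_sample(bars, window=4):
    """Turn a list of OHLC bars into (input, target) pairs.

    Inputs are the last `window` close-to-close returns; the target is 1
    if the next bar closes higher, else 0. Returns are used instead of
    raw prices so the inputs are roughly stationary.
    """
    closes = [bar["C"] for bar in bars]
    returns = [(closes[i] - closes[i - 1]) / closes[i - 1]
               for i in range(1, len(closes))]
    sample = []
    for i in range(window, len(returns)):
        x = tuple(returns[i - window:i])   # input pattern
        y = 1 if returns[i] > 0 else 0     # direction of the next bar
        sample.append((x, y))
    return sample

# Toy series of closing prices (H, L, O omitted for brevity)
bars = [{"C": c} for c in [1.10, 1.12, 1.11, 1.13, 1.15, 1.14, 1.16]]
print(make_training_sample(bars))  # 2 patterns with targets 0 and 1
```

Whether raw returns, indicators, or something else goes into `x` is exactly the "most important thing" the thread is circling around; the structure of the sample stays the same either way.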

If the idea is correct, the aim is formulated correctly, and the error-estimation function is chosen correctly, the result is the network settling into some local minimum after N training epochs. From there, the art is to get the network out of that dead end with minimal losses and continue training.

And here all means are good: preprocessing the data, changing the architecture, changing the training algorithm - the main thing is to eventually arrive at the model you are developing.
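One crude way to "get the network out of a dead end" as described above is to perturb the weights whenever the loss stops improving, while keeping the best weights seen so far. A hedged sketch - the `train_step`/`loss` interface is hypothetical, standing in for whatever training algorithm is actually used:

```python
import random

def train_with_restarts(train_step, loss, weights, epochs=100,
                        patience=10, noise=0.1):
    """Training loop with a crude escape from local minima.

    `train_step(weights)` returns updated weights, `loss(weights)` scores
    them. If the loss has not improved for `patience` epochs, the weights
    are kicked with small random noise and training continues; the best
    weights found along the way are always kept.
    """
    best, best_loss, stale = list(weights), loss(weights), 0
    for _ in range(epochs):
        weights = train_step(weights)
        current = loss(weights)
        if current < best_loss:
            best, best_loss, stale = list(weights), current, 0
        else:
            stale += 1
        if stale >= patience:  # stuck: perturb and keep going
            weights = [w + random.uniform(-noise, noise) for w in weights]
            stale = 0
    return best, best_loss

# Toy usage: gradient descent on f(w) = (w - 3)^2, starting from w = 0
toy_step = lambda ws: [ws[0] - 0.2 * (ws[0] - 3.0)]
best, best_loss = train_with_restarts(toy_step, lambda ws: (ws[0] - 3.0) ** 2, [0.0])
print(best[0])  # converges close to 3.0
```

The same pattern covers the other "means" mentioned: swapping the architecture or the training algorithm just means swapping in a different `train_step`.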



As for what to feed in, I recommend implementing the idea behind one of Reshetov's networks - there are several of them on this forum - and then evaluating it.

The model, the result - well, everything is in your hands.

 
I have long known what to feed in. I wanted to discuss possible models, so to speak - it usually works better when we work together. I myself have been working on a robot for a year now. There are results, but they are not very stable.