Using neural networks in trading

 
TheXpert:
Ah, that's easy. As soon as it starts to learn, that's enough.


I will now demonstrate the new network, with all the tests.
 
TheXpert:
Ah, that's easy. As soon as it starts to learn, that's enough.

Reshetov's perceptron also learns something, but that is obviously not enough...

In my opinion, to talk about sufficiency one must somehow learn to analyze the training results on the training sample (OV) as a function of the training period (the number of input examples); OOS alone is no help here. I have been stuck at this point for a long time; I feel the truth is somewhere nearby, but I cannot grasp it.

 
Figar0:

I have been stuck at this point for a long time; I feel the truth is somewhere nearby, but I cannot grasp it.

Well, I don't know; to me that's the elementary part.
 
Figar0:

Reshetov's perceptron also learns something, but that is obviously not enough...

In my opinion, to talk about sufficiency one must somehow learn to analyze the training results on the training sample (OV) as a function of the training period (the number of input examples); OOS alone is no help here. I have been stuck at this point for a long time; I feel the truth is somewhere nearby, but I cannot grasp it.


I seem to have grasped it: I optimise on a large sample until the drawdown falls below the net profit, then I reduce the sample and add one last refining neuron. I could be wrong. I'll post an example.
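
A minimal sketch of the acceptance test described above, assuming the optimisation produces an equity curve; the "refining neuron" step is not spelled out in the post and is left as a comment, and all function names here are hypothetical.

```python
import numpy as np

def max_drawdown(equity: np.ndarray) -> float:
    """Largest peak-to-trough drop of the equity curve."""
    running_peak = np.maximum.accumulate(equity)
    return float(np.max(running_peak - equity))

def passes_criterion(equity: np.ndarray) -> bool:
    """Accept an optimisation run only if the drawdown stays below the net profit."""
    net_profit = float(equity[-1] - equity[0])
    return max_drawdown(equity) < net_profit

# Stage 1: optimise on a large sample until passes_criterion() holds.
# Stage 2: shrink the sample and add one last "refining" neuron (not shown).
rng = np.random.default_rng(1)
equity = np.cumsum(rng.normal(loc=0.5, size=500))   # a noisy but rising curve
print(passes_criterion(equity))                     # such a curve usually passes
```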
 

I have used several approaches in my research:

1) Give the network a bogus input, something completely out of the blue, such as the daily change in the sun's weather, and train it to trade on that input for, say, a month. Here the net in its pure form should demonstrate its ability to memorise and curve-fit. Then I gave it normal inputs and tried to analyze the difference between the training results.

2) I tried to analyze the training result as a function of the size of the training sample. For almost all networks and configurations the result improves up to a certain point; then stagnation sets in, and increasing the number of input examples further may even worsen the results.

From these results I am trying to draw conclusions about network sufficiency and the training period, and about whether there is any connection between them (both experiments are sketched below). That's why I came to this thread; maybe someone will suggest something.
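
A sketch of both experiments, assuming a generic regression net (scikit-learn's MLPRegressor stands in for whatever network is actually used); the data is synthetic, so every series and parameter below is an assumption for illustration only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 2000
r = np.zeros(n)
for t in range(1, n):                       # autocorrelated "returns": there
    r[t] = 0.8 * r[t - 1] + rng.normal()    # is genuine structure to learn
X_real = np.column_stack([r[i:i + n - 5] for i in range(5)])  # 5 lagged values
y = r[5:]                                   # target: the next value
X_noise = rng.normal(size=X_real.shape)     # experiment 1: a bogus input

def train_error(X, y, m):
    """Train on the first m examples and return the in-sample RMS error."""
    net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
    net.fit(X[:m], y[:m])
    return float(np.sqrt(np.mean((net.predict(X[:m]) - y[:m]) ** 2)))

# Experiment 1: a net that fits the bogus input almost as well as the real
# one is mostly memorising, not generalising.
print("noise:", train_error(X_noise, y, 500), "real:", train_error(X_real, y, 500))

# Experiment 2: training error versus sample size; look for the point where
# a larger sample stops improving (or starts worsening) the result.
for m in (100, 250, 500, 1000, 1500):
    print(m, train_error(X_real, y, m))
```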

 
There is the cumulative root-mean-square error. It can be compared across different networks and used to determine whether a net learns anything at all.
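
A minimal sketch, assuming "cumulative RMS error" means the root-mean-square error of the network's outputs against the targets over the whole training set; computed on the same data, it is directly comparable across networks.

```python
import numpy as np

def rms_error(outputs: np.ndarray, targets: np.ndarray) -> float:
    """Root-mean-square error of network outputs against targets."""
    return float(np.sqrt(np.mean((np.asarray(outputs) - np.asarray(targets)) ** 2)))

# A net whose RMS error never drops below that of a trivial constant predictor
# (always output the mean target) has not learned anything at all.
```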
 
TheXpert:
There is the cumulative root-mean-square error. It can be compared across different networks and used to determine whether a net learns anything at all.

Error of what?
 
grell:
Error of what?
The output, of course.
 
TheXpert:
The output, of course.


And if the output is not a forecast, how do you measure the error then?
 
grell:

I seem to have grasped it: I optimise on a large sample until the drawdown falls below the net profit, then I reduce the sample and add one last refining neuron. I could be wrong. I'll post an example.

That would be nice; for one, I would love to see how the Expert Advisor performs on exactly the training period that you consider successful.