Neural networks. Questions from the experts.

 
LeoV wrote >>

Until when do we train it? To the minimum error? It has to be understood that this will be 100% overtraining. Not to the minimum error? Then to which error? What will the profit be at that point? And why exactly that error? Will the profit increase or decrease if we slightly decrease the error? And what if we increase the error?

Something like that.....))))

Up to the minimum error. To avoid "overtraining" (a word which does not reflect the meaning of the phenomenon at all), the number of neurons in the network must be as small as possible. After training there are procedures such as analysing the influence of the individual inputs of the network and removing those with weak influence, and a procedure for reducing the number of neurons. To put it figuratively... so that no empty spots untouched by training are left in this electronic brain.
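(A rough Python sketch of the "as few neurons as possible" idea: pick the smallest hidden-layer size whose held-out error stays close to the best one found. The toy data, the candidate sizes and the 5% tolerance are assumptions for illustration, not anyone's actual procedure.)

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                      # toy inputs (assumption)
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)   # toy target (assumption)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

errors = {}
for n in (2, 4, 8, 16, 32):                        # candidate neuron counts
    net = MLPRegressor(hidden_layer_sizes=(n,), max_iter=2000, random_state=0)
    net.fit(X_tr, y_tr)
    errors[n] = mean_squared_error(y_val, net.predict(X_val))

best = min(errors.values())
smallest = min(n for n, e in errors.items() if e <= 1.05 * best)
print(errors, "-> smallest adequate size:", smallest)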

 
LeoV >>:
But these are not answers, you have to understand))). These are just musings "on the topic", really. Fine, let's take NSh (not the trader) or Solution, it does not matter, build a network (no matter which) and start training it. Until when do we train it? To the minimum error? It has to be understood that this will be 100% overtraining. Not to the minimum error? Then to which error? What will the profit be at that point? And why exactly that error? Will the profit increase or decrease if we slightly decrease the error? And what if we increase the error?

How is that not an answer? It is an answer.

joo wrote >>

Let's say you are interested in the TS giving as much profit as possible and as often as possible, that is, you are trying to increase the percentage of profitable trades and, of course, the MO (mathematical expectation).

From a network trained on this principle you can expect that there will be profit on OOS as well. You need to apply a fitness function that focuses the network on the patterns which contribute to these goals. That is, the network concentrates on the specific patterns that lead to a particular outcome.

If you use the root-mean-square error instead, the patterns are "averaged" rather than emphasised.

You do need to train to the minimum error, but the minimum of that fitness function. Overtraining shows up when the root-mean-square error is used for something other than approximation. For approximation, the smaller the RMS error, the better.

Of course, no one is likely to give specific answers to your questions, even if they wanted to. I only tried to show that the choice of the fitness function, which will determine the answers to your questions, is almost a more important task than the selection of inputs for the network. And as a rule, people limit themselves to agonising and time-consuming enumeration of input data......

And Integer got in a little ahead of me while I was writing. I agree with him.
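(A rough Python sketch of the contrast joo is drawing: the same forecasts scored once with an MSE-style criterion, which averages over every pattern equally, and once with a profit-oriented fitness function. The sign-trading rule and the toy data are assumptions for illustration only.)

import numpy as np

rng = np.random.default_rng(1)
returns = rng.normal(0.0, 0.01, size=200)               # realised bar returns (toy data)
forecast = returns + rng.normal(0.0, 0.01, size=200)     # the network's forecast (toy data)

# MSE-style criterion: every pattern's error is averaged with equal weight.
mse = np.mean((forecast - returns) ** 2)

# Profit-oriented fitness: trade on the sign of the forecast, then reward
# total profit together with the share of profitable trades.
pnl = np.sign(forecast) * returns
fitness = pnl.sum() * (pnl > 0).mean()

print("MSE:", mse, " profit-style fitness:", fitness)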

 
Integer wrote >>

Up to the minimum error. To avoid "overtraining" (a word which does not reflect the meaning of the phenomenon at all), the number of neurons in the network must be as small as possible. After training there are procedures such as analysing the influence of the individual inputs of the network and removing those with weak influence, and a procedure for reducing the number of neurons. To put it figuratively... so that no empty spots untouched by training are left in this electronic brain.

And what do you mean by "overtraining"?

 
joo wrote >>

I agree with you, just as I agree with Integer. But you wrote yourself -

>> your questions are unlikely to be answered.
))))
 
LeoV wrote >>

And what do you mean by "overtraining"?

In the context of applying and training neural networks I do not understand it at all; the word does not reflect the meaning of the phenomenon. As they write about neural networks, for example here (and not only) - http://www.exponenta.ru/soft/others/mvs/stud3/3.asp:

Too few examples can cause "overtraining" of the network, when it performs well on the training-sample examples but poorly on test examples drawn from the same statistical distribution.

I understand it as training the network on fewer examples than it can accommodate. It memorises them by rote and gets confused when a situation does not exactly resemble the memorised experience. "By rote" means knowing something by heart without understanding it or being able to apply the information.
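(A rough Python sketch of this "rote learning" effect: a network with far more capacity than training examples fits the training points almost perfectly but does much worse on held-out points from the same distribution. The sizes and noise level are assumptions for illustration.)

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(2)

def sample(n):
    x = rng.uniform(-3, 3, size=(n, 1))
    return x, np.sin(x[:, 0]) + 0.2 * rng.normal(size=n)

X_tr, y_tr = sample(15)    # far fewer examples than the network can "accommodate"
X_te, y_te = sample(300)   # test examples from the same distribution

net = MLPRegressor(hidden_layer_sizes=(200, 200), max_iter=5000, random_state=0)
net.fit(X_tr, y_tr)

print("train MSE:", mean_squared_error(y_tr, net.predict(X_tr)))
print("test  MSE:", mean_squared_error(y_te, net.predict(X_te)))
# Typically the training error is near zero while the test error is much larger.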

 
LeoV >>:

I agree with you, just as I agree with Integer. But you wrote yourself -

))))

Well, still, I meant that specific numbers are unlikely. :)

 
Integer wrote >>

In the context of applying and training neural networks I do not understand it at all; the word does not reflect the meaning of the phenomenon. As they write about neural networks, for example here (and not only) - http://www.exponenta.ru/soft/others/mvs/stud3/3.asp:

I understand it as training the network on fewer examples than it can accommodate. It memorises them by rote and gets confused when a situation does not exactly resemble the memorised experience. "By rote" means knowing something by heart without understanding it or being able to apply the information.

The term "overlearning", in my view, applies more to the application of neural networks in the financial markets. We know that the market changes over time, patterns change, and in the future the market will not be exactly the same as in the past. So when a network learns, it learns the market too well and is no longer able to work adequately in the future - in the market that has changed. This is "over-learning". Reducing the number of neurons is of course one method of avoiding 'retraining'. But it does not work alone.

 
LeoV >>:

But these are not answers, you have to understand))). These are just musings "on the topic", really. Fine, let's take NSh (not the trader) or Solution, it does not matter (for "academic purposes"), build a network (no matter which) and start training it. Until when do we train it? To the minimum error? It has to be understood that this will be 100% overtraining. Not to the minimum error? Then to which error? What will the profit be at that point? And why exactly that error? Will the profit increase or decrease if we slightly decrease the error? And what if we increase the error?

Something like that.....))))

The network is trained to a minimum error on the test sample, adjusting the weights on the training sample.
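(A rough Python sketch of what StatBars describes: the weights are adjusted only on the training sample, while training is stopped at the point where the error on the test sample is lowest. The toy data and the 500-epoch budget are assumptions for illustration.)

import copy
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 6))
y = X[:, 0] * X[:, 1] + 0.1 * rng.normal(size=400)      # toy target (assumption)
X_tr, y_tr, X_te, y_te = X[:300], y[:300], X[300:], y[300:]

net = MLPRegressor(hidden_layer_sizes=(20,), random_state=0)
best_err, best_net = float("inf"), None
for epoch in range(500):
    net.partial_fit(X_tr, y_tr)                         # adjust weights on the training sample only
    err = mean_squared_error(y_te, net.predict(X_te))   # monitor the error on the test sample
    if err < best_err:                                  # keep the weights at its minimum
        best_err, best_net = err, copy.deepcopy(net)

print("lowest test-sample MSE:", best_err)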

 
StatBars wrote >>

The network is trained to a minimum error on the test sample, adjusting the weights on the training sample.

It's understandable. But does a smaller error mean a bigger profit? Or what is the relationship between them?

 
Integer >>:

Up to the minimum error. To avoid "overtraining" (a word which does not reflect the meaning of the phenomenon at all), the number of neurons in the network must be as small as possible. After training there are procedures such as analysing the influence of the individual inputs of the network and removing those with weak influence, and a procedure for reducing the number of neurons. To put it figuratively... so that no empty spots untouched by training are left in this electronic brain.

The number of neurons does not always play the decisive role, although selecting the number of neurons (in most cases the minimum possible without loss of accuracy) does lead to a reduction in error.

Analysing the influence of the inputs and removing unnecessary ones often has a greater effect than selecting the number of neurons in a layer.
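(A rough Python sketch of that kind of input-influence analysis: permutation importance scores each input by how much shuffling it hurts the fitted network, and inputs with a near-zero score become candidates for removal. The toy data and the 0.01 cut-off are assumptions for illustration.)

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 5))
y = 2 * X[:, 0] - X[:, 2] + 0.1 * rng.normal(size=500)   # only inputs 0 and 2 matter here

net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X, y)
result = permutation_importance(net, X, y, n_repeats=10, random_state=0)

for i, score in enumerate(result.importances_mean):
    verdict = "keep" if score > 0.01 else "candidate for removal"
    print(f"input {i}: importance {score:.4f} -> {verdict}")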