Machine learning in trading: theory, models, practice and algo-trading - page 13
Thanks for the clarification, now I understand you perfectly. Could genetic algorithms help here instead of RF? I have quite a few interesting ideas for implementing a target function and would like to try them.
Is this possible in R?
A little about myself: I am not a programmer; R is my first language, and I have been learning it for a month and a half.
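To show the idea is feasible in R, here is a minimal genetic-algorithm sketch in base R (no packages needed). Everything here is illustrative, not code from this thread: the toy fitness function stands in for whatever trading-based target you invent, and CRAN packages such as GA or rgenoud offer more complete implementations.

```r
# Toy fitness: maximum at x = c(1, 1, 1). In practice this function
# could simulate trading and return, say, a Sharpe ratio.
fitness <- function(x) -sum((x - 1)^2)

run_ga <- function(n_pop = 40, n_gen = 100, n_dim = 3,
                   mut_sd = 0.3, seed = 1) {
  set.seed(seed)
  # random initial population of real-valued candidate solutions
  pop <- matrix(runif(n_pop * n_dim, -5, 5), nrow = n_pop)
  for (gen in seq_len(n_gen)) {
    fit   <- apply(pop, 1, fitness)
    ord   <- order(fit, decreasing = TRUE)
    elite <- pop[ord[1:(n_pop / 2)], , drop = FALSE]  # selection: keep best half
    # crossover: average two random elite parents, then mutate
    kids <- t(replicate(n_pop - nrow(elite), {
      p <- elite[sample(nrow(elite), 2), , drop = FALSE]
      colMeans(p) + rnorm(n_dim, 0, mut_sd)
    }))
    pop <- rbind(elite, kids)
  }
  fit <- apply(pop, 1, fitness)
  pop[which.max(fit), ]  # best individual found
}

best <- run_ga()  # should end up close to c(1, 1, 1)
```

The key point for this discussion: the fitness function is a black box here, so it does not have to be differentiable, unlike the error function in backpropagation.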
Here, roughly, the idea is to train the neural network while feeding it information about the results of the trades. You could try to write your own function and pass it as a custom fitness function to some R package, but first you need to find out which R packages allow this. You might even have to write the neural network yourself, in C++, as a separate package (R itself is slow).
The thing is, this is a very unusual case. Usually fitness functions are not customizable, with one exception that I know of. The most I have done is write my own function for the caret package, whose maximum value is used to select a set of training parameters during cross-validation. But the machine itself is still trained in the standard way, so my solution is not quite what is needed here. You see?
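The caret mechanism described above is its summaryFunction: a function receiving a data frame with `obs` and `pred` columns and returning a named numeric vector, which train() can then maximise when picking tuning parameters. A minimal sketch, with a toy directional-accuracy "profit" metric (the name profitSummary and the metric itself are illustrative, not the author's actual function):

```r
# Custom metric with caret's expected summaryFunction signature.
profitSummary <- function(data, lev = NULL, model = NULL) {
  # toy "profit": fraction of correct directional calls
  c(Profit = mean(data$obs == data$pred))
}

# Wiring it into caret (assumes the caret package is installed;
# myData and the formula are placeholders):
# library(caret)
# ctrl <- trainControl(method = "cv", number = 5,
#                      summaryFunction = profitSummary)
# fit  <- train(Class ~ ., data = myData, method = "rf",
#               trControl = ctrl, metric = "Profit", maximize = TRUE)
```

As the post says, this only steers the choice of tuning parameters; the model's internal training objective stays standard.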
Googled it... yes, it looks like you have to write it yourself, and this standard target is such a limited approach, it's annoying...
We'll have to keep looking:
http://stackoverflow.com/questions/25510960/how-to-implement-own-error-function-while-using-neuralnet-package-in-r
https://stackoverflow.com/questions/36425505/own-error-function-including-weights-for-neuralnet-in-r
It looks like the neuralnet package lets you supply your own error function, as long as it is differentiable. I'll have to think about it... What do you think?
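For reference, the relevant argument in neuralnet() is err.fct: besides the built-in "sse" and "ce" you can pass your own function of the network output and the target. The package differentiates the function body symbolically, so it must be something R's deriv()/D() can handle. A sketch under those assumptions:

```r
# Custom error function with the shape neuralnet expects:
# x is the network output, y the target. This one has the same
# form as the built-in "sse".
custom_err <- function(x, y) 1/2 * (y - x)^2

# neuralnet needs the body to be symbolically differentiable;
# base R can indeed do it (result simplifies to -(y - x)):
D(body(custom_err), "x")

# Wiring it in (assumes the neuralnet package is installed;
# formula and data are placeholders):
# library(neuralnet)
# net <- neuralnet(target ~ f1 + f2, data = trainData, hidden = 5,
#                  err.fct = custom_err, linear.output = TRUE)
```

The differentiability requirement is exactly the limitation discussed below: a whole-trade statistic like the Sharpe ratio cannot be written this way, since it is not a per-example differentiable function.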
The second link says that you would need to rewrite the code of the nnet package.
At the first link you can write your own error function, but don't we still need to know the previous weights to implement the concept you suggest, or did I miss something? To be honest, I have only a very vague idea of how a neural network works.
Backpropagation usually works as follows: a training example is fed in, the network's output is computed for it and checked against the desired result, and the error is calculated. The task is then to reduce this error. Since all the calculations in a neural network are just a sequence of additions and multiplications of the input data with coefficients, you can compute exactly how the coefficients should change to reduce the error. In principle you could find coefficients that reduce the error to zero in a single step, but nobody does that, because it would reduce the error for one concrete example while almost certainly increasing it for all the others.
This gradual error reduction is done with all the training examples one by one, passing over them again and again. That means you cannot use something like a Sharpe ratio as the required trading result: the error can only be calculated for each training example separately. You can use your own function to calculate the error, but it will still be evaluated separately for each example, not for all examples at once. I cannot think of a way to split the evaluation of the whole trading run across the individual examples.
I agree that trading on zigzag or bar close prices is not optimal trading. It would be much better to open and close trades taking into account the spread, the drawdown, and the time a trade stays open. We could build an Expert Advisor that uses moving averages or other indicators, optimize it for maximum profitability, and use its deals in the training data as the required result. But I will try something like that only after I achieve stable results at least on a zigzag.
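The per-example update loop described above can be sketched in a few lines of base R for a single linear neuron (this toy example is mine, not from the thread): each example produces its own error, and the weights take a small step against that example's gradient.

```r
# Stochastic (per-example) gradient descent on one linear neuron.
set.seed(42)
x      <- runif(200)
y_true <- 3 * x + 1 + rnorm(200, 0, 0.05)  # target: w = 3, b = 1

w <- 0; b <- 0; lr <- 0.1
for (epoch in 1:50) {
  for (i in seq_along(x)) {
    pred <- w * x[i] + b
    err  <- pred - y_true[i]     # error of this one example only
    w <- w - lr * err * x[i]     # gradient step for the weight
    b <- b - lr * err            # ... and for the bias
  }
}
# after training, w is close to 3 and b close to 1
```

Note that the loss here (squared error of one example) decomposes per example; an integrated statistic over the whole pass, like a Sharpe ratio over all trades, does not, which is exactly the difficulty raised above.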
Yes, a neural network works the way you describe: the error is calculated for each example, and after passing through the entire set we get the value of the fitness function (root mean square error, median error, or mean absolute error). Based on this value, the weights are updated according to the gradient calculated by the backpropagation algorithm.
What we are discussing here is replacing that fitness function with our own, based on simulating trades from the machine's signals. For each example processed by the network you can open a virtual trade (if the output signal crosses a predefined threshold), and at the end of all the examples compute some integrated statistic, for example the Sharpe ratio. The weights would then be updated from that value.
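A base-R sketch of that whole-pass fitness (the function name and threshold are illustrative): run every example's signal through a threshold, collect the virtual trade results, and score the full list with a Sharpe-like ratio. Since such a fitness only exists after the whole pass, it suits derivative-free optimizers like genetic algorithms better than backpropagation.

```r
# Integrated trading fitness: signal is the model output per example,
# future_ret the subsequent price change for that example.
sharpe_fitness <- function(signal, future_ret, threshold = 0.5) {
  pos <- ifelse(signal >  threshold,  1,        # long
         ifelse(signal < -threshold, -1, 0))    # short / flat
  trades <- (pos * future_ret)[pos != 0]        # virtual trade results
  if (length(trades) < 2 || sd(trades) == 0) return(-Inf)
  mean(trades) / sd(trades)                     # simple Sharpe-style ratio
}

# Example: two longs, one short, one skipped low-confidence signal.
sharpe_fitness(signal     = c(1.0, -1.0, 0.1, 0.8),
               future_ret = c(0.02, -0.01, 0.05, 0.01))
```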
Thank you for the detailed clarification, Dr.Trader!
You know, probably the best and most correct approach would be to teach it the reversals themselves, even just zigzag reversals, that is, to give it three states:
1) reversal up
2) reversal down
3) no reversal
But it is quite difficult to learn to catch reversals, plus there is the skew in the number of observations: the "no reversal" class will be tens or maybe hundreds of times larger.
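A base-R sketch of building such a three-class target and seeing the imbalance. A real zigzag uses a minimum swing size; here, as a simplification of my own, a bar is labeled a reversal only if it is the extreme of a window of k bars on each side:

```r
# Label each bar as turn_up / turn_down / no_turn from local extrema.
label_reversals <- function(price, k = 5) {
  n   <- length(price)
  lab <- rep("no_turn", n)
  for (i in (k + 1):(n - k)) {
    w <- price[(i - k):(i + k)]
    if (price[i] == min(w)) lab[i] <- "turn_up"    # local bottom
    if (price[i] == max(w)) lab[i] <- "turn_down"  # local top
  }
  factor(lab, levels = c("turn_up", "turn_down", "no_turn"))
}

set.seed(1)
price <- cumsum(rnorm(1000))        # random-walk stand-in for prices
table(label_reversals(price))       # "no_turn" dominates heavily
```

The class counts make the skew concrete; common remedies are class weights or resampling of the minority classes before training.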
And what predictors do you use and what are the results?
I have just started using spectral analysis. The first tests were much better than with the indicators: when I ran it in rattle, the training and testing error was 6%, but when I started translating the code into R the error rose to 30%, if I am not mistaken. San Sanych says that it was overfitting, so there is still a lot I do not understand.
There is also a way to find out, through spectral analysis, which periods dominate in the market; you can then substitute these periods into indicators, and the result will be adaptive indicators that are not fitted to history.
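Base R can already do the period-finding step with spectrum() from the stats package: the frequency at the periodogram's peak inverts to the dominant period in bars. A sketch on synthetic data (the 32-bar cycle is made up for the demonstration):

```r
# Recover the dominant cycle period from a noisy series.
set.seed(7)
n      <- 512
period <- 32
x <- sin(2 * pi * (1:n) / period) + rnorm(n, 0, 0.3)  # noisy 32-bar cycle

sp <- spectrum(x, plot = FALSE)                 # periodogram estimate
dominant_period <- 1 / sp$freq[which.max(sp$spec)]
dominant_period  # close to 32; could set an indicator's period adaptively
```

On real prices there are usually several peaks and they drift over time, so the estimate would be recomputed on a rolling window.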
Well, first of all, I'm not suggesting it; I'm translating your intuition into a form that can be coded. You suggested setting the machine the task of keeping the trading curve within reasonable limits. ) That is the way to do it.
Second, the first link shows exactly the way to do it, with the one limitation that is also mentioned in the documentation. Besides, we don't need the weights; they were needed specifically by the person who asked that question. You can't simply access the weights inside the error function anyway. What do you need the weights for? Why are you talking about them at all?
You can use basically any error function that can be differentiated.