Machine learning in trading: theory, models, practice and algo-trading
Hold on. I will also run your data through a check for dependencies.
One question before I start. Does your data include every bar in a row, or were bars thinned out before sampling?
R1, R2, R3 follow one another, and the bars inside each are also consecutive - it is a time series on H1.
I see. Well, now I roughly understand how to work with your data.
I also want to look into Dr.Trader's data.
I also have H1; the sample includes data from the last 4 bars: open[1]-high[1]-close[1]-...-open[2]-high[2]-close[2]-... There is data from the MqlRates and MqlDateTime structures, plus a couple of indicators and closing prices from higher timeframes, all normalized to the interval [0...1]. The target of 1 or 0 means the price rises or falls on the next bar. But all this is obviously not enough.
The problem is that the required number of bars is unknown. It takes only 3 bars for the neural network to overfit to a 0% training error (and therefore about 50% on the forward test). Or, if you train with a control set and stop in time, the forward-test error will be a couple of percent lower. But you can also take hundreds of bars and achieve approximately the same result, even though each new bar in the sample brings a huge amount of garbage and correspondingly lowers the quality of the model.
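For concreteness, here is a minimal sketch of how such a sample could be assembled, in Python/pandas rather than MQL; the column names, the per-window min-max normalization to [0...1], and the next-bar-close target are my reading of the description above, not the actual code:

import pandas as pd

def make_dataset(bars: pd.DataFrame, depth: int = 4) -> pd.DataFrame:
    # bars: columns open, high, low, close on H1; one row per bar
    rows = []
    for i in range(depth, len(bars)):
        window = bars.iloc[i - depth:i]                     # the last `depth` completed bars
        lo, hi = window.values.min(), window.values.max()
        feats = ((window - lo) / (hi - lo)).values.ravel()  # normalize to [0...1]
        up = int(bars["close"].iloc[i] > bars["close"].iloc[i - 1])  # next bar up/down
        rows.append(list(feats) + [up])
    cols = [f"f{j}" for j in range(depth * bars.shape[1])] + ["target"]
    return pd.DataFrame(rows, columns=cols)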
A little addition to the previous post. No, there are no deltas. I'll have to give it a try.
Well, you have to do feature engineering. That does not necessarily mean using standard indicators. What is needed is imagination and an understanding of the process. Just giving the black box raw bar data may really not be enough. The depth from which the information is taken is also important. I have seen in my research, for example, that the input depth should be symmetrical to the depth of the output: to predict 3 hours ahead, give data from 3 hours back and later.
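A toy illustration of that symmetry, assuming close prices on H1 (the function and its names are mine): for a horizon of `horizon` bars ahead, take exactly `horizon` lags of history.

import pandas as pd

def symmetric_lags(close: pd.Series, horizon: int):
    # features: the last `horizon` closes; target: direction `horizon` bars ahead
    X = pd.DataFrame({f"lag_{k}": close.shift(k) for k in range(horizon)})
    y = (close.shift(-horizon) > close).astype(int)
    valid = X.notna().all(axis=1) & close.shift(-horizon).notna()
    return X[valid], y[valid]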
Here is an example of a predictor-development scheme that is partly reflected in my research as well. This kind of data already gives the machine a lot of information about what has happened:
http://blog.kaggle.com/2016/02/12/winton-stock-market-challenge-winners-interview-3rd-place-mendrika-ramarlina/comment-page-1/
That's a must.
You know what works well? The difference between the last known price and a moving average. The windows of the averages should vary.
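A sketch of that family of predictors (the window lengths are my guess, not something stated above):

import pandas as pd

def ma_distance(close: pd.Series, windows=(5, 20, 50, 100)) -> pd.DataFrame:
    # last known close minus a simple moving average, one column per window
    feats = {f"close_minus_sma{w}": close - close.rolling(w).mean() for w in windows}
    return pd.DataFrame(feats).dropna()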
I look at the target variable as a buy/sell classifier, without any specifics about how long to keep the trade open or what price level to wait for. The Expert Advisor opens a trade at the beginning of the bar and waits for the next bar to make the next decision.
Its logical meaning is somewhat harder to pin down :)
I wanted to train the neural network to find patterns and shapes on the chart (head and shoulders, etc.). It had to learn to find the figures by itself, without my participation; I used the direction of the zigzag as the target variable. The meaning of the target variable, according to my plan, should have been "pattern found, the trend is going to go up, I must buy" (with result = 1). At the same time, result = 0 would mean that the trend will now go in the opposite direction and I should sell. I tried to take the signal strength into account and not trade when the result is close to 0.5.
Thinking out loud, I had not considered it before: it seems my model was wrong, I should have taught it 3 classes: buy/sell/close_all_deals_and_do_not_trade. And either use 3 outputs from one network, or train 3 separate networks. Not one network with one output.
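One way to realize the single-network variant is a 3-class classifier with a softmax output and an abstention threshold. A minimal sketch with placeholder data (the labels, the threshold, and the network size are all my assumptions):

import numpy as np
from sklearn.neural_network import MLPClassifier

BUY, SELL, STAY_OUT = 0, 1, 2                       # hypothetical class labels
rng = np.random.default_rng(0)
X = rng.random((500, 12))                           # placeholder features
y = rng.integers(0, 3, 500)                         # placeholder 3-class target

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
clf.fit(X[:400], y[:400])
proba = clf.predict_proba(X[400:])                  # one probability per class
decisions = proba.argmax(axis=1)
decisions[proba.max(axis=1) < 0.6] = STAY_OUT       # abstain when not confident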
Then I realized through experimentation that I could take only 5 bars instead of 200 and nothing would change, the result stayed the same. I do not think the model learns to find chart patterns in such a configuration; rather, it finds some regularities in time. So I gradually decided to train the model not on the zigzag but simply on the close price of the next bar. In that case I do not need to filter the output of the neural network, there are fewer problems, and the result is about the same, which is not bad. The logical meaning is also simpler here: the target variable is 0/1, the price will fall/rise on the next bar.
I have been studying the random forest for the last few days, along with the examples from this forum. If you take the same 5 bars, the forest will not learn abstract things the way the neural network does, but will derive quite concrete rules. I think it is much more promising with such a dataset, and I will learn to use it. The meaning of the target variable is the same: 0/1, the price will go up or down on the next bar.
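For illustration, a sketch of that forest experiment with placeholder data; export_text prints the concrete if/else rules a single tree has derived (everything here except the general idea is my assumption):

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import export_text

rng = np.random.default_rng(1)
X = rng.random((1000, 20))                  # stand-in for 5 bars of OHLC features
y = rng.integers(0, 2, 1000)                # stand-in 0/1 next-bar direction

forest = RandomForestClassifier(n_estimators=100, random_state=1).fit(X, y)
print(export_text(forest.estimators_[0], max_depth=2))   # readable if/else rules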
I do not know as much about the impact of predictors as I would like. Time should definitely be used: hour/minute/day of the week, probably one of those (the hour?). But you cannot use predictors whose small set of values does not repeat within the sample. For example, if the training sample contains data for one year, you cannot use a "month" predictor. The model can simply divide all the data into 12 chunks by month and train different logic for each chunk, and the logic of January 2015 will certainly not fit the logic of January 2016 a year later. But if the training data covers 5 years, then "month" may already work. Also, I am not sure that using indicators is justified: Expert Advisors built on the standard indicators lose money, and it would be strange if a trained model extracted something useful from that data. I think the forest itself also does some calculations and may create its own internal indicators during training. Price should also be used, although I do not really trust the open and close prices; I prefer the high and low.
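On the hour predictor: one common trick (my suggestion, not something from this thread) is to encode it cyclically with sin/cos, so the model does not treat 23:00 and 00:00 as distant values:

import numpy as np
import pandas as pd

def encode_hour(timestamps: pd.Series) -> pd.DataFrame:
    # map hour 0..23 onto a circle so 23 and 0 end up adjacent
    angle = 2 * np.pi * timestamps.dt.hour / 24
    return pd.DataFrame({"hour_sin": np.sin(angle), "hour_cos": np.cos(angle)})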
All these predictors reflect a kind of current market condition, and the model's purpose is to recognize those conditions from the predictors and learn where the price goes in such cases. A model should use as little input data as possible, following Occam's razor; then there is a chance that it has described some real dependence rather than merely fitted the examples.
In many ways our thoughts overlap.
But in the end I am stuck on the noise predictors. The presence of noise predictors makes the model overfit, and then all the other reasoning becomes meaningless.
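One possible way to screen for such noise predictors (my suggestion, not what was tried in the thread) is permutation importance on a holdout set: if shuffling a predictor does not hurt holdout accuracy, it is a candidate for removal. A self-contained sketch with synthetic data where only the first feature carries signal:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(2)
X = rng.random((1000, 10))
y = (X[:, 0] > 0.5).astype(int)                  # only feature 0 carries signal
X_tr, X_te, y_tr, y_te = X[:700], X[700:], y[:700], y[700:]

model = RandomForestClassifier(n_estimators=100, random_state=2).fit(X_tr, y_tr)
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=2)
noise = np.where(imp.importances_mean <= 0)[0]   # shuffling these does not hurt
print("suspected noise predictors:", noise)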