Machine learning in trading: theory, models, practice and algo-trading - page 1780
The reports are on data that did not take part in the training.
In that case the result is more than good.
High accuracy is not 70%.
I agree; I decided for myself on nothing lower than 95%.
A strategy... So after ML you still need a whole strategy? I thought the point of ML was to end up telling you whether to trade (long/short) or not.
Well, the target isn't mine; it's just interesting to me as a chance to see how effective the predictors are on another target.
I agree; I decided for myself on nothing lower than 95%.
This would be a very good result for my main target, as it works on trends.
Send it over and we'll have a look... But I definitely won't look at it today, sorry; my head is in another topic right now.
All right, I'll send it to you later.
Sure, you can send it; I just said I won't be able to look at it right away.
High accuracy is not 70%.
What are you talking about? 70% is a very high entry accuracy. If you close trades without sitting out drawdowns, a much higher result is unlikely to be achievable, and 95% is something out of the realm of fantasy. Out of curiosity, I counted my trades while growing my deposits. In my first attempt there were 42 trades, 6 of them losing; in the second, 67 trades, 10 of them losing. Even with such modest (in your opinion) entry accuracy, the accounts grew tenfold. I should add that I did not count setups where you take +1 pip and risk -200 pips; that way, of course, you can "achieve" 95% entry accuracy.
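To make the arithmetic behind this point explicit: entry accuracy alone says nothing without the payoff ratio. A quick Python sketch using the numbers from the posts above (the 2:1 payoff in the second call is an assumed illustration, not something stated in the thread):

```python
# Expected value per trade for a given win rate and average win/loss size.
def expectancy(win_rate, avg_win, avg_loss):
    return win_rate * avg_win - (1 - win_rate) * avg_loss

# 95% "accuracy" with +1 pip wins and -200 pip losses is deeply negative:
print(expectancy(0.95, 1, 200))        # roughly -9.05 pips per trade

# ~85% accuracy (57 wins out of 67 trades) with an assumed 2:1 payoff
# is comfortably positive, about +1.55 per unit risked:
print(round(expectancy(57 / 67, 2, 1), 3))
```

This is why the +1/-200 setups were excluded from the count: they inflate accuracy while destroying expectancy.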
To begin with, taking 1 pip while giving up 200 is unacceptable to me from the outset, mostly because no broker will tolerate that mode for long. And a run of wins like your "67 trades, 10 of them losing" is something I personally have never achieved. All trades were made when the system predicted an accuracy above 95%.
I'll say right away: your ML may well predict a 100% probability for a market entry, but what will actually happen is, I think, not a task for ML but for AI, which would weigh factors of the verbal environment alongside the mathematical statistics.
Okay. There are two files in the archive: a sample for training and a sample for testing the trained model; we need predictors for both of them.
One peculiarity: each new row is a new bar. The readings are taken at the moment the bar opens, but the readings themselves describe the previous bar, so if a predictor uses the open price of the zero (current) bar, take that indicator from the next bar.
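The convention described above (at the open of each new bar, only the previous bar's data is complete) is the standard way to avoid look-ahead bias when building predictors. A minimal sketch of that alignment in Python with pandas (pandas and the toy numbers are assumptions, not taken from the thread):

```python
import pandas as pd

# Toy OHLC series; each row is one bar (hypothetical data).
bars = pd.DataFrame({
    "open":  [1.10, 1.12, 1.11, 1.13],
    "close": [1.12, 1.11, 1.13, 1.14],
})

# An indicator computed from a completed bar (needs both open and close).
bars["body"] = bars["close"] - bars["open"]

# At the open of bar t only bar t-1 is finished, so shift the indicator
# down one row before using it as a predictor for bar t.
bars["body_feature"] = bars["body"].shift(1)
print(bars)
```

The first row of `body_feature` is NaN, because at the very first bar's open no completed bar exists yet; every later row sees only the previous bar's indicator, matching the file convention described above.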
Well, you could also attach your target; otherwise how do we compare?
And why did you remove the date?
400,000 rows in the training set? Are you serious? You trained the model on 400,000 rows?
I didn't delete the date; I just didn't save it so the file would take less space.
Yes, I trained on minute bars over 2 years.
UPD=====
Sorry, but my old laptop just gives up on me... When I merely try to manipulate the data, even with only 100 features the matrix is 100 x 400,000, and holding such a matrix in RAM while also training the model at the same time, the laptop simply dies trying...
Couldn't you save the predictors to a file first, and then simply load them for training?
My training sample is currently just under a gigabyte; CatBoost copes with it easily, but I wouldn't risk building the genetic tree in R on it right now...
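That save-first, load-later workflow can be sketched in Python with numpy (an assumption; the thread itself mentions R and CatBoost): compute the predictors chunk by chunk, persist them in a disk-backed array, then reopen it read-only for training so the full 100 x 400,000 matrix never has to sit in RAM at once. The demo uses a small shape; the file name and sizes are illustrative.

```python
import os
import tempfile
import numpy as np

# At full size (400,000 rows x 100 features) a float32 matrix is
# 400_000 * 100 * 4 bytes, about 160 MB; float64 doubles that.
full_mb = 400_000 * 100 * 4 / 1e6
print(f"full matrix: ~{full_mb:.0f} MB")

# Demo with a small shape; swap in the real one when writing to disk.
n_rows, n_feats = 10_000, 100
path = os.path.join(tempfile.mkdtemp(), "predictors.dat")

# 1) Compute predictors chunk by chunk and write them straight to disk.
mm = np.memmap(path, dtype="float32", mode="w+", shape=(n_rows, n_feats))
rng = np.random.default_rng(0)
for start in range(0, n_rows, 2_000):
    stop = min(start + 2_000, n_rows)
    # stand-in for real feature computation on this chunk of bars
    mm[start:stop] = rng.random((stop - start, n_feats), dtype=np.float32)
mm.flush()
del mm  # close the writer

# 2) Later, reopen read-only and hand slices to the trainer; pages are
# loaded lazily, so RAM holds only the slices actually touched.
data = np.memmap(path, dtype="float32", mode="r", shape=(n_rows, n_feats))
print(data.shape, float(data[:1000].mean()))
```

CatBoost can also stream training data from a file on disk, which pairs naturally with this approach.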
Make a dataset of about 50,000 rows; there's no need to use the one-minute timeframe to create these huge datasets, 5 minutes or even an hour is enough. And don't distort the data: provide it in the format it should be, date time OHLCV, and add the target you trained your model on so the errors can be compared afterwards.
50,000 is too few observations for continuous data like that; it would be only about 300 ZZ (ZigZag) segments. My main predictors are tuned to minute bars; there are predictors from higher timeframes, but they may not be enough.
Are you using volume, or is it just convenient to have volume there?
Don't you need the ZZ parameters to tune the predictors?
I don't understand the point about data distortion: do you need the data shifted so that all of a bar's data is already known on the zero bar? If so, doesn't that mean peeking into the zero bar?