Machine learning in trading: theory, models, practice and algo-trading - page 255
1. Why do you normalize manually? There is scale() for that.
2. Why are both -1 and 1 considered good for the correlation? If 1 is good, then -1 should be very bad; if I understand the idea correctly, -1 is an inverse correlation.
3. Have you tried monitoring the error of a trained model in a sliding window and, when it no longer suits you, retraining the model to see what happens?
4. And the bigger question of why everything works so badly: the market is not stationary, so some other concept of feature formation needs to be worked out. Maybe switch completely to a paradigm of logical rules; I think you need to move away from raw numbers almost entirely, or else study spectral analysis )))))
1) I still couldn't get on with scale(); it scales and centers things the wrong way for my purposes. Ideally it would be better to use the caret package for preprocessing, which scales/centers everything nicely, but pulling in another package would be too cumbersome for such a simple example (a sketch of both options is given after this list of answers).
2) A correlation near zero means there is no relationship at all, and that is the worst case. The profit on the test then says nothing about the potential profit on new data.
-1 is when high accuracy in training consistently gives bad results on new data, while a low result in training means a better result on new data :) This can happen if the model fits the data too easily and overfits: with a low training result it simply has not had time to memorize the data and somehow, miraculously, trades at a profit. For example, accuracy on the training data may vary from 0.9 to 1, where 0.9 counts as "low", while accuracy on new data varies from 0.5 to 0.6, where 0.6 counts as "high". That is, the model with the worse training result is not overfitted, has better generalizing logic, and therefore does better on new data.
Although all of this sounds nice in theory, in practice I have never seen a stable negative correlation. It is more comfortable and easier to aim for +1 (a small sketch of this correlation check is given after this list of answers).
3) By the time I realize that the model no longer suits me, enough time will have passed for it to lose so much that even a normal model will not make it back. New models could be traded on a demo account first, but by the time a model proves profitable and I move it to a real account, it may already be obsolete. I haven't tried it. I would rather verify beforehand that the whole training algorithm works and produces consistently profitable models, and only then put something I trust on the real account.
If the training algorithm is good, it is better to update the old working model with new data than to create models from scratch every time (a sketch of such sliding-window monitoring is given after this list of answers).
4) Feature engineering is a good thing. I, for example, use indicators from mt5 instead of the bare price.
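For point 1, a minimal sketch of both options, assuming a plain numeric matrix of predictors (the matrix x here is random placeholder data, not anything from this thread):

    # base R: scale() centers each column to mean 0 and divides it by its standard deviation
    x <- matrix(rnorm(400), ncol = 4)
    x_scaled <- scale(x, center = TRUE, scale = TRUE)

    # caret alternative: fit the transform on the training data, then reuse it on new data
    library(caret)
    pp   <- preProcess(as.data.frame(x), method = c("center", "scale"))
    x_pp <- predict(pp, as.data.frame(x))

The practical difference is that a preProcess object can be saved and applied to new bars with the same centering/scaling parameters that were estimated on the training set.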
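For point 2, a rough sketch of the correlation check being discussed; the runs data frame and its numbers are placeholders in the ranges mentioned above (training accuracy 0.9-1, new-data accuracy 0.5-0.6), not real results:

    # one row per trained model variant: accuracy on the training set and on new data
    runs <- data.frame(acc_train = c(0.90, 0.93, 0.96, 0.98, 1.00),
                       acc_new   = c(0.60, 0.58, 0.55, 0.53, 0.50))

    cor(runs$acc_train, runs$acc_new)
    # near +1 : training accuracy is a useful proxy for accuracy on new data
    # near  0 : training accuracy says nothing about new data (the worst case)
    # near -1 : a better training fit consistently means worse generalization (the case above)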
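For point 3, a rough sketch of what sliding-window monitoring could look like; the simulated results log, the window size, and the 0.55 threshold are all arbitrary illustrations:

    # illustration only: simulate a log of predictions vs. actual next-candle colors
    set.seed(1)
    results <- data.frame(pred   = sample(0:1, 500, replace = TRUE),
                          target = sample(0:1, 500, replace = TRUE))

    window <- 100                              # number of most recent bars to check
    recent <- tail(results, window)
    acc    <- mean(recent$pred == recent$target)

    if (acc < 0.55) {                          # threshold chosen arbitrarily
      message("accuracy dropped to ", acc, ", refit the model on recent data")
      # here the existing model would be updated on the latest history
      # rather than rebuilt from scratch, as suggested above
    }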
How accurately does your good model predict the color of the next candle on new data, in real conditions? You trade on the daily chart, right?
I use H1; the target is the color of the next candle. The prediction accuracy is only 55%-60%, but that is enough. Even in a trend the price does not always move up; it constantly jerks up and down on consecutive bars, and those jerks noticeably hurt the accuracy. The main thing is that the model itself does not twitch: once it has entered a trade, it should stay in it until the end of the trend.
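A minimal sketch of how such a target could be formed, assuming a data frame h1 of H1 bars with Open and Close columns (the names are assumptions, not the author's actual code):

    # 1 = bullish candle, 0 = bearish candle
    candle_color <- ifelse(h1$Close > h1$Open, 1, 0)
    h1$target <- c(candle_color[-1], NA)   # target = color of the NEXT bar
    h1 <- h1[!is.na(h1$target), ]          # the last bar has no known target yet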
That feeling when you start to think you understand the market...
very interesting video.... about feature selection, algorithms, and even a little bit about markets
https://www.youtube.com/watch?v=R3CMqrrIWOk
It really is interesting. It is a pity there is very little about markets, just something like "If I start to predict the price of oil, I'll probably be killed" :) (the quote is inexact, I am writing it from memory).
The first half hour of the video covers what Alex wrote about in this thread; R code was even attached. I can't find it now, I would have to flip through dozens of earlier pages, and there was also a link to his article on hbr about it.
Unfortunately, this method of selecting predictors did not help me with forex; all my predictors turned out to be too uninformative. I think the algorithm is suited to more stationary data. Or else more new predictors are needed.
Gentlemen, could someone please give me an example of a neural network that takes arrays such as m1[1000,1000], m2[1000,1000], etc. as input? I apologize in advance if this is a stupid question.
I haven't worked with neural networks yet and want to practice. I don't really understand how the parameters are set. I would be very grateful.
Perhaps someone has looked into the predictor-corrector method.
on the input with arrays, e.g. m1[1000,1000], m2[1000,1000], etc.
You want to feed two arrays, each with 1000 training examples and 1000 inputs, into the network? That won't work; you will have to merge them into one. Or do you mean something else?
Each array[][] is a set of information of a single kind, that is, a separate array[][] for each input. I want to feed in many arrays; for now I have 4 ready and plan to create more. Each array describes the state of the price, each from a different angle, something like that.
Each array contains 1000 rows and 1000 columns, so overall I have a three-dimensional structure: the K-th dimension indexes these two-dimensional arrays (merging them for a network could look something like the sketch below).
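A minimal sketch of what "merging into one" could look like in R, assuming each of the four matrices has 1000 rows (training examples) and 1000 columns (inputs); the random data, the target y, and the use of the nnet package are illustrative assumptions only:

    # four matrices, each describing the price from a different angle
    m1 <- matrix(runif(1000 * 1000), nrow = 1000)
    m2 <- matrix(runif(1000 * 1000), nrow = 1000)
    m3 <- matrix(runif(1000 * 1000), nrow = 1000)
    m4 <- matrix(runif(1000 * 1000), nrow = 1000)

    # an ordinary feed-forward network expects one 2D matrix,
    # so the per-example feature vectors are concatenated column-wise
    X <- cbind(m1, m2, m3, m4)   # 1000 examples x 4000 inputs
    dim(X)

    # with a 0/1 target of length 1000 the network could then be trained, e.g.:
    # library(nnet)
    # y   <- sample(0:1, 1000, replace = TRUE)          # placeholder target
    # fit <- nnet(X, y, size = 10, MaxNWts = 100000)

So the K-th dimension disappears at the input layer: each training example becomes one long row of 4000 numbers rather than four separate 1000-element inputs.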