Machine learning in trading: theory, models, practice and algo-trading - page 2627
I was comparing several ways of assessing feature importance. As a benchmark I took the most resource-intensive one: retraining the model with the features removed one at a time.
The fast methods do not agree with the benchmark, and they do not agree with each other either. fselector is even faster; I suspect it won't agree with anything either.
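A minimal sketch of that benchmark, assuming scikit-learn, a pandas feature DataFrame and a held-out validation set (all the names here are illustrative, not anyone's actual setup):

```python
# Benchmark importance: retrain the model with each feature removed
# and measure how much the validation score drops.
# A sketch only; X_train/X_val, y_train/y_val and the model are assumptions.
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

def loo_importance(X_train, y_train, X_val, y_val):
    base = RandomForestClassifier(n_estimators=200, random_state=0)
    base.fit(X_train, y_train)
    base_score = accuracy_score(y_val, base.predict(X_val))

    importance = {}
    for col in X_train.columns:
        keep = [c for c in X_train.columns if c != col]
        m = RandomForestClassifier(n_estimators=200, random_state=0)
        m.fit(X_train[keep], y_train)
        score = accuracy_score(y_val, m.predict(X_val[keep]))
        importance[col] = base_score - score  # big drop = important feature
    return importance
```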
Feature importance in a moving window (indicators and prices):
at one moment an indicator may have 10% importance, and at another 0.05%, such is the truth of life)
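To see that drift, you can recompute importance window by window. A sketch, assuming a DataFrame with a 'target' column and using a tree model's built-in importances for speed (the window and step sizes are arbitrary):

```python
# Recompute feature importance in each sliding window to watch it drift.
# Sketch: df is a pandas DataFrame of features plus a 'target' column (assumption).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

def rolling_importance(df, window=500, step=100):
    features = [c for c in df.columns if c != "target"]
    rows = []
    for start in range(0, len(df) - window, step):
        chunk = df.iloc[start:start + window]
        m = RandomForestClassifier(n_estimators=100, random_state=0)
        m.fit(chunk[features], chunk["target"])
        rows.append(pd.Series(m.feature_importances_, index=features))
    # One row per window: the same indicator can sit at 10% in one
    # window and near zero in the next.
    return pd.DataFrame(rows)
```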
If you think cross-validation solves everything, it's time to be disappointed...
The data in the sliding window is used for each model.
Cross-validation is used to compare the training results of several models trained on different pieces of data.
Models on non-sliding-window data can also be trained on different chunks of that data and get cross-validation as well.
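In code that amounts to scoring one model specification on several time-ordered chunks, e.g. with scikit-learn's TimeSeriesSplit (a sketch; the model, X and y are placeholders):

```python
# Cross-validation on time-ordered chunks: one model specification,
# trained and scored on several different pieces of the history.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import TimeSeriesSplit, cross_val_score

model = RandomForestClassifier(n_estimators=100, random_state=0)
cv = TimeSeriesSplit(n_splits=5)              # 5 chunks, no look-ahead
scores = cross_val_score(model, X, y, cv=cv)  # X, y are assumed to exist
print(scores.mean(), scores.std())            # spread = stability across chunks
```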
It's not clear: what does cross-validation have to do with it?
The idea here is that a sliding window of a single fixed width does not solve the problem. The good idea is to increase the number of runs per dimension, changing the width of the window at each step. There's the curse of dimensionality again)))
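A sketch of that grid: for every anchor point, try several window widths, which multiplies the number of training runs (the widths and step here are arbitrary assumptions):

```python
# Vary the window width as well as its position: every extra width
# multiplies the number of training runs (the 'curse' mentioned above).
def window_grid(n_bars, widths=(250, 500, 1000), step=100):
    runs = []
    for width in widths:                      # widths are an assumption
        for start in range(0, n_bars - width, step):
            runs.append((start, width))
    return runs

print(len(window_grid(10_000)))  # ~len(widths) times a fixed-width sweep
```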
Cool...
What is the purpose of the importance score? So that, by removing the unimportant features, you can train the model faster in the future without losing quality. That is just tuning data and a model that already work. And neither you nor I (as I assume) have anything to tune yet.
So I just train the model. The model itself will use the important features and ignore the unimportant ones.
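If you do want the pruning step described above, a minimal sketch with scikit-learn's SelectFromModel (the threshold choice is an assumption):

```python
# Prune unimportant features so later retraining is faster at similar quality.
# Sketch using scikit-learn's SelectFromModel; the threshold is an assumption.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

selector = SelectFromModel(
    RandomForestClassifier(n_estimators=200, random_state=0),
    threshold="median",                 # keep features above median importance
)
X_small = selector.fit_transform(X, y)  # X, y are assumed to exist
```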
Not awake yet?))
I disagree.
Cross-validation is the ability to throw out a model that just happens to be successful on one piece of history. Testing it on several chunks of history may show that it won't work on them.
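One way to encode that filter (a sketch; the threshold is an arbitrary assumption) is to reject any model whose worst chunk is too weak, even if its average looks fine:

```python
# Reject a model that is only lucky on one piece of history:
# require a minimum score on EVERY chunk, not just a good average.
import numpy as np

def passes_cv(scores, min_score=0.55):  # the threshold is an assumption
    return np.asarray(scores).min() >= min_score

print(passes_cv([0.70, 0.68, 0.52]))  # False: fails on the third chunk
```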
It's just that cross-validation shows that the features and the model drift.
You see this drift through another method; I see it through cross-validation.
Damn it all, the sun is out; time to put on swimming trunks and go out to the garden.
Testing on small data shows that the fast methods don't work well.
And what if I want to build a neural network that produces quality output?
As for cross-validation (walking forward), you still haven't explained why it's bad. My experiments show it's a working method for weeding out bad models/ideas.
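For reference, a minimal walk-forward loop (a sketch; the window sizes and model are assumptions, and X, y are taken to be numpy arrays): train on one window, test on the slice right after it, then slide forward.

```python
# Walk-forward: train on a window, test on the slice right after it,
# then slide both forward through the history.
# Sketch: X, y are numpy arrays; sizes and model are assumptions.
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

def walk_forward(X, y, train_size=1000, test_size=200):
    scores = []
    for start in range(0, len(X) - train_size - test_size + 1, test_size):
        tr = slice(start, start + train_size)
        te = slice(start + train_size, start + train_size + test_size)
        m = RandomForestClassifier(n_estimators=100, random_state=0)
        m.fit(X[tr], y[tr])
        scores.append(accuracy_score(y[te], m.predict(X[te])))
    return scores  # an idea survives only if it holds up on every step
```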