Machine learning in trading: theory, models, practice and algo-trading - page 1263
Rolling regression, which beats the same ARIMA
You can't learn everything, and all ML methods are roughly equal. You can find something suitable in practically any of them, and then try the others. But if, say, both Bayesian methods and NNs give no results, trying the rest is just a waste of time. All of that can be done later, if needed.
Well, together they give very good results; it's a question of implementation.) Sampling training examples via MCMC and then training an NN on them is the best solution so far.
If you can pick an asset or a group of assets for this, then regression via MCMC may be useful. What's interesting there is variational inference and Theano.
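As a rough illustration of what "sampling examples via MCMC" means (a minimal sketch, not the poster's actual setup), here is a random-walk Metropolis-Hastings sampler in pure Python drawing points from an unnormalized target density; the mixture target and all parameter values are illustrative assumptions:

```python
import math
import random

random.seed(42)

def target(x):
    # Unnormalized density: a symmetric two-mode mixture, standing in for
    # whatever distribution of "good" training examples one wants to sample.
    return math.exp(-0.5 * (x - 2.0) ** 2) + math.exp(-0.5 * (x + 2.0) ** 2)

def metropolis(n_samples, step=1.0, burn_in=500):
    """Random-walk Metropolis-Hastings: propose x' = x + N(0, step),
    accept with probability min(1, target(x') / target(x))."""
    x = 0.0
    samples = []
    for i in range(n_samples + burn_in):
        proposal = x + random.gauss(0.0, step)
        if random.random() < target(proposal) / target(x):
            x = proposal
        if i >= burn_in:
            samples.append(x)
    return samples

samples = metropolis(5000)
mean = sum(samples) / len(samples)
# The target is symmetric around 0, so the sample mean should drift near 0.
```

In practice one would use a library sampler (e.g. PyMC, which the Theano remark above hints at) rather than hand-rolling this, but the accept/reject loop is the whole idea.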
I keep meaning to use variational methods to tune the system, but I haven't found the right approaches yet.
Looking for the same thing :)
Well, that's not ML, so it doesn't go "together".) And for Monte Carlo you don't need any libs.)
Well, I'm still trying to figure out how to put it all together. Through trivial enumeration of options I get results, but it's hard to understand why exactly they come out good or not in one case or another.
I'll have to visualize it with suitable libs and see.
Well, we're all in the same boat. Only I rarely change options; mostly I'm on the couch, either reading (a tablet is a good thing) or thinking about what to do.) Before doing anything, it's worth running it all through in your head first, and only then...
Comparisons show that there is not much difference... the forest is a classic. In ALGLIB it is natively available in MT5. I want to update to the new version, but I'm having trouble with it.
I could, of course, hook up a DLL, but then how do you distribute that to people? If I'm not mistaken, the only difference is training speed. Otherwise it should still overfit the same way. At least the description hasn't changed, and no limits on depth, errors, etc. have been added.
And the forest is one of the fastest learning methods, especially compared to an NN.
Yes, but the forest's classification is quite peculiar. An NN or Bayes is closer to fuzzy logic and to generalizing the data.
The learning speed is good; what's bad is the response time in use and the time to load the structure, because the forest files are large. I've had files up to 300 MB.
There is something wrong with the serialization: the forest trains and saves faster than it loads back from the file.
If it says the forest now generates files that are orders of magnitude smaller, that's a very big improvement.
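Why forest files balloon like this is easy to see from the node count: size grows with trees × nodes per tree. A minimal sketch (a toy nested-dict tree and Python's `pickle`, not ALGLIB's actual serialization format):

```python
import pickle
import random

random.seed(0)

def make_tree(depth):
    """Build a random binary decision tree as nested dicts:
    each internal node stores a feature index and a split threshold."""
    if depth == 0:
        return {"leaf": random.choice([0, 1])}
    return {
        "feature": random.randrange(10),
        "threshold": random.random(),
        "left": make_tree(depth - 1),
        "right": make_tree(depth - 1),
    }

# A "forest" of 100 fully grown depth-12 trees: ~8,000 nodes each,
# so roughly 800k nodes in total.
forest = [make_tree(12) for _ in range(100)]
blob = pickle.dumps(forest)
size_mb = len(blob) / 1e6
# Hundreds of thousands of nodes serialize to tens of megabytes,
# which is why saved forest files reach hundreds of MB at larger depths.
```

Deeper trees double the node count per extra level, so depth limits (or pruning) are what actually keep the files small.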
An NN, on the contrary, takes longer to train, but its response is instantaneous. There is no difference in classification quality. You can use either, but forests work out of the box, while an NN needs tuning.
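The "out of the box" appeal is that a bagged ensemble has essentially nothing to tune: no learning rate, no architecture, no normalization. A deliberately simplified stand-in for a real random forest (bagged decision stumps on toy data; all names and data here are illustrative, nothing like ALGLIB's implementation):

```python
import random

random.seed(1)

def make_data(n):
    # Toy binary task: two features, label = 1 when x0 + x1 > 1.
    rows = []
    for _ in range(n):
        x = (random.random(), random.random())
        rows.append((x, 1 if x[0] + x[1] > 1 else 0))
    return rows

def train_stump(rows):
    """Pick the single-feature threshold split with the best accuracy."""
    best = None
    for feat in (0, 1):
        for (x, _) in rows:
            thr = x[feat]
            acc = sum((xi[feat] > thr) == (yi == 1)
                      for xi, yi in rows) / len(rows)
            if best is None or acc > best[0]:
                best = (acc, feat, thr)
    _, feat, thr = best
    return feat, thr

def train_forest(rows, n_trees=25):
    """Bagging: each stump is trained on a bootstrap resample."""
    return [train_stump([random.choice(rows) for _ in rows])
            for _ in range(n_trees)]

def predict(forest, x):
    votes = sum(1 if x[feat] > thr else 0 for feat, thr in forest)
    return 1 if votes * 2 > len(forest) else 0

train, test = make_data(100), make_data(100)
forest = train_forest(train)
acc = sum(predict(forest, x) == y for x, y in test) / len(test)
# No hyperparameter search was needed to get a usable classifier.
```

Real forests split on many features to full depth, which is also exactly where the large saved files discussed above come from.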