Machine learning in trading: theory, models, practice and algo-trading - page 2560
I think I understand what I need: the ability to set a custom fitness function. But HMMFit() does not support this, because it implements Baum-Welch with the log-likelihood (LLH) hard-coded into it. You can only set some Baum-Welch parameters.
You need another package where you can set a user-defined fitness function.
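For context, the criterion that Baum-Welch maximises inside HMMFit() is exactly that log-likelihood, computed by the forward algorithm. A minimal pure-NumPy sketch for a discrete HMM (the parameters here are toy values, not tied to any package):

```python
import numpy as np

def forward_llh(obs, start, trans, emit):
    """Log-likelihood of an observation sequence under a discrete HMM,
    via the scaled forward algorithm (this is the LLH Baum-Welch maximises)."""
    alpha = start * emit[:, obs[0]]      # initial forward variables
    llh = np.log(alpha.sum())            # accumulate log of scaling factors
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]  # propagate and apply emission
        llh += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return llh
```

Replacing this criterion with a custom fitness function is exactly the part that a package with a hard-coded Baum-Welch does not let you do.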
The funny thing is that I have not seen any ML packages where you can use your own fitness function...
You either pass X, Y (data, target) or just X (data).
But it is always possible to get into the "guts" of the ML algorithm, tweak things there, and see what happens in terms of the fitness function.
I train neural networks this way, I also trained Random Forest this way, and now I want to do the same with HMM.
In LightGBM you can set your own objective, but most often there is no such option.
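As an illustration of what a custom objective looks like (a sketch assuming LightGBM's scikit-learn wrapper; the pseudo-Huber loss is just an example choice): the callable must return the gradient and hessian of the loss with respect to the raw predictions.

```python
import numpy as np

def pseudo_huber_objective(y_true, y_pred, delta=1.0):
    """Gradient and hessian of the pseudo-Huber loss
    L(r) = delta^2 * (sqrt(1 + (r/delta)^2) - 1), where r = y_pred - y_true."""
    r = y_pred - y_true
    scale = np.sqrt(1.0 + (r / delta) ** 2)
    grad = r / scale              # first derivative w.r.t. prediction
    hess = 1.0 / scale ** 3       # second derivative w.r.t. prediction
    return grad, hess

# Usage sketch (assumes the scikit-learn API of LightGBM):
#   model = lightgbm.LGBMRegressor(objective=pseudo_huber_objective)
#   model.fit(X, y)
```

The same gradient/hessian pair is all the booster needs; the loss itself never has to be evaluated during training.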
Do you want me to tell you again what metrics I use and by what criteria I select models?
After all, this is the most important thing in ML, the fundamental question :-)
Probably we should return to simple, generally accepted definitions.
Regarding the definition of stationarity: it is clearly an abstraction here, because either it is a single point without fluctuations, in which case the measurement window does not matter, or it is still fluctuations, measured with a minimal window or with a range of windows.
Regularity, on the other hand, can generate stationarity precisely because it is the state of a single point, not of a measurement window over several of them.
Accordingly, stationarity directly affects predictability, and hence learnability, if that stationarity carries information about the target.
As I wrote earlier, right now I use an approach of selecting predictors by estimating their stationarity with a given measurement window.
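As I understand this approach, a rough sketch of such window-based selection might look like the following (the score definition, the window size, and the threshold are my assumptions for illustration, not a reference implementation):

```python
import numpy as np

def stationarity_score(x, window=50):
    """Lower score = mean and variance drift less across measurement windows."""
    n = len(x) // window
    chunks = np.asarray(x[: n * window]).reshape(n, window)
    means = chunks.mean(axis=1)   # per-window means
    stds = chunks.std(axis=1)     # per-window standard deviations
    # dispersion of the window statistics, normalised by the global scale
    return (means.std() + stds.std()) / (np.std(x) + 1e-12)

def select_predictors(X, names, window=50, threshold=0.5):
    """Keep only the predictors whose window statistics barely drift."""
    return [nm for nm, col in zip(names, X.T)
            if stationarity_score(col, window) < threshold]
```

A trending predictor gets a high score (its window means drift), while stationary noise stays low, which matches the idea of filtering by stationarity within a chosen measurement window.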
In LightGBM you can set your own objective, but most often there is no such option.
xgboost can too, but it is very difficult to write your own function there: you have to derive the formulas (the gradient and the hessian) yourself.
http://biostat-r.blogspot.com/2016/08/xgboost.html - see the 6th paragraph.
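For example, for the log-cosh loss the formulas you would have to derive are d/dr log cosh(r) = tanh(r) and the second derivative 1 - tanh²(r). A sketch in the shape xgboost's native API expects, where the callback receives the predictions and the training matrix (a regression setup is an assumption here):

```python
import numpy as np

def logcosh_objective(preds, dtrain):
    """Custom objective for xgboost's native API: return grad and hess
    of log(cosh(r)) w.r.t. the predictions, r = preds - labels."""
    y = dtrain.get_label()
    r = preds - y
    grad = np.tanh(r)                 # d/dr log(cosh(r))
    hess = 1.0 - np.tanh(r) ** 2      # second derivative: sech^2(r)
    return grad, hess

# Usage sketch:
#   booster = xgboost.train(params, dtrain, obj=logcosh_objective)
```

This is the "deriving the formulas" step the link describes: once the gradient and hessian are written down, the rest is plugging the callable in.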
I don't understand anything at all.
Stationarity should be in the noise after the model is built; it is not needed anywhere else.
Do you want to understand?
Are you sure you're not confusing the fitness function with custom metrics?
I don't think so - the example is in Python.
Stationarity should be in the noise after the model is built; it is not required anywhere else.
Right, that is exactly the connection between the predictor and the target that I am talking about.
For now, I am not aware of a method for building a model that estimates "stationarity" over different sample intervals, whether through splitting or some other mechanism for combining predictors. All models fit to sections of the sample, estimating only the overall quantitative improvement, but you need to estimate that improvement across intervals; then the model may be more robust.
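One hedged sketch of what "estimating the improvement across intervals" could mean: score the model on consecutive sample intervals and penalise the dispersion of the per-interval scores. The names, the MSE choice, and the interval count are illustrative assumptions.

```python
import numpy as np

def interval_scores(y_true, y_pred, n_intervals=5):
    """Score (here: MSE) on each consecutive interval of the sample."""
    idx = np.array_split(np.arange(len(y_true)), n_intervals)
    return [float(np.mean((y_true[i] - y_pred[i]) ** 2)) for i in idx]

def robustness(y_true, y_pred, n_intervals=5):
    """A robust model improves evenly: low dispersion of interval scores
    relative to their mean (0 = perfectly even)."""
    s = interval_scores(y_true, y_pred, n_intervals)
    return float(np.std(s) / (np.mean(s) + 1e-12))
```

A model whose errors are concentrated in one interval gets a worse (higher) robustness value than one with the same average error spread evenly, which is the across-interval estimate described above.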