Machine learning in trading: theory, models, practice and algo-trading - page 2575
In the article, the Kalman filter is tested on generated data. I'm not sure it will be better than a sliding-window version of LS on real data.
No, no, it is on real data, the honest Y_.
Here are the mu and gamma on the Y_ data,
and the backtest with the Y_ data.
But that's not the point. Inside estimate_mu_gamma... and so on.
Regression and rolling regression are split into a train part and a test part, so there is a model that predicts new data (a new spread). But for the Kalman there is no such split. I don't understand how it works inside, or how to build a spread on new data using the Kalman. The code is so unclear that my eyes are bleeding.
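For what it's worth, a Kalman filter doesn't need a train/test split in the regression sense: the predict/update recursion produces a one-step-ahead estimate for every new observation, so the "spread on new data" is simply the stream of prediction errors. Here is a minimal sketch of a scalar local-level Kalman filter (the function name, noise variances, and initial state are my own assumptions, not the article's code):

```python
# Minimal local-level (random walk + observation noise) Kalman filter.
# q = process noise variance, r = observation noise variance (assumed values).
def kalman_spread(y, q=1e-4, r=1e-2, x0=0.0, p0=1.0):
    x, p = x0, p0          # state estimate and its variance
    spread = []
    for obs in y:
        # predict: the state is a random walk, so the mean is unchanged,
        # only the uncertainty grows
        p = p + q
        # the "spread" on new data is the one-step-ahead prediction error
        spread.append(obs - x)
        # update the state with the new observation
        k = p / (p + r)    # Kalman gain
        x = x + k * (obs - x)
        p = (1 - k) * p
    return spread

prices = [1.0, 1.1, 0.9, 1.2, 1.0, 1.05]
print(kalman_spread(prices))
```

Each element of the output is computed before the corresponding observation is used to update the state, i.e. it is genuinely out-of-sample at every step.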
I don't understand a thing about this Kalman.
In any case, all three strategies need to be cracked. It's probably easier to crack the second one before the Kalman: it has the same principle, adaptivity in time, but it's simpler.
No, Andrei, the second one (the rolling LS) works very badly.
And that's with very good pairs picked... If we take reality, God forbid even the Kalman shows something.
So this picture is a comparison on simulated data. On the real data, at the end and on the first half, the Kalman is even slightly worse.
Roughly speaking, some a priori assumptions are made for the Kalman, and if they hold in reality, the Kalman will be much better, and vice versa.
I don't think so. He was just simulating the data for fun.
Here is the training of the models on the real Y_ data,
then getting the spreads,
then the backtest.
The Kalman wasn't trained on synthetic data before the real backtest.
The a priori assumptions are, firstly, the linear model hard-coded in the package (described at the beginning of the Kalman section), and secondly, the initialization parameters of that model, which are taken, generally speaking, out of thin air.
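The effect of "out of thin air" initialization is easy to demonstrate: with a diffuse (large) initial variance the filter quickly forgets a bad initial state, so the initialization mostly distorts the early part of the sample. A toy sketch (the filter, the noise variances, and the constant series are my own illustration, not the package's model):

```python
# Local-level Kalman filter returning the filtered state estimates.
def filt(y, x0, p0, q=1e-4, r=1e-2):
    x, p = x0, p0
    out = []
    for obs in y:
        p += q                # predict step: uncertainty grows
        k = p / (p + r)       # Kalman gain
        x += k * (obs - x)    # update toward the observation
        p *= (1 - k)
        out.append(x)
    return out

y = [1.0] * 50                     # constant "true" level
a = filt(y, x0=0.0, p0=1.0)        # initialization "out of thin air"
b = filt(y, x0=1.0, p0=1.0)        # initialization at the true level
# early estimates differ, late estimates have converged:
print(abs(a[0] - b[0]), abs(a[-1] - b[-1]))
```

With a large p0 the gap at the start is visible but the two runs become indistinguishable well before the end of the sample, which is why the arbitrary initialization is survivable on long histories.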
No, Andrei, the second rolling LS is very bad.
Not really. If you look at the previous graphs, you can see that the actual "rolling" only kicks in after passing about a third of the sample. On real data, if there is enough history, this problem won't arise.
But still, the Kalman is probably better in the end. I just think it's better to start from the basics.
Yeah... especially if you come from a humanities background.
It's not a random forest on the iris dataset. )
I don't understand a thing about this Kalman ((
The moving average (a.k.a. the Kalman here) is computed on the obtained spread, smoothing out the "noise", of course.
https://datascienceplus.com/kalman-filter-modelling-time-series-shocks-with-kfas-in-r/
The Kalman is not a moving average! https://datascienceplus.com/kalman-filter-modelling-time-series-shocks-with-kfas-in-r/
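The two views can actually be reconciled: for the local-level model, the steady-state Kalman filter is exactly an exponential moving average with a fixed smoothing coefficient, but before convergence its gain adapts to the current uncertainty, which is what distinguishes it from a plain MA. A sketch showing the gain settling to a constant (my own illustration, assumed noise variances):

```python
# Local-level Kalman filter that also records the gain at each step.
def kalman_level(y, q, r, x0=0.0, p0=1.0):
    x, p, gains = x0, p0, []
    for obs in y:
        p += q
        k = p / (p + r)       # adaptive gain; an EMA would keep this fixed
        gains.append(k)
        x += k * (obs - x)
        p *= (1 - k)
    return x, gains

_, gains = kalman_level([0.5] * 100, q=1e-4, r=1e-2)
# the gain starts high (uncertain prior) and settles to a constant,
# at which point the filter behaves exactly like an EMA:
print(round(gains[0], 3), round(gains[-1], 3))
```

So "it's just a moving average" is true only asymptotically; the adaptive early behaviour (and the ability to use richer state models) is the part an EMA cannot reproduce.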
We already went through this with Rena and the tractor, with examples of their one-bar-ahead predictions )))) I'm laughing.
Sometimes it runs ahead, sometimes it lags behind. 50/50 overall.