Machine learning in trading: theory, models, practice and algo-trading - page 3341
After all, you weren't banned from Google, were you? You can look up how statistical inference differs from causal inference, right?
You, of course, are the greatest guru and can afford to hand out points, but I can't, so once again I am writing specifically, in the expectation that you will also respond specifically.
What other section of the book do you disagree with, or rather what else did you not understand from the book?
I have never written anywhere that I disagree.
I am against new labels for well-known concepts.
It's all there already, and wrapping known things in fog is very unhelpful.
The author did not bother to state the limits of applicability of linear regression. Minus a point.
I did not see in the text that:
1. linear regression is applicable to stationary random processes;
2. the residuals from fitting a linear regression must be normally distributed.
If this is not the case in his examples, and the converse is not stated, then all his reasoning is worth a pittance.
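The two assumptions listed above can be checked directly. Here is a minimal sketch on synthetic data (numpy and scipy assumed available; the data and thresholds are illustrative, not taken from the book):

```python
# Sketch of the residual diagnostics the post says the book omits.
# Synthetic, stationary example: y depends linearly on x with Gaussian noise.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=500)

# Ordinary least squares fit.
slope, intercept, r, p, se = stats.linregress(x, y)
residuals = y - (slope * x + intercept)

# 1) Normality of residuals: Shapiro-Wilk test (H0: residuals are normal).
sw_stat, sw_p = stats.shapiro(residuals)
print(f"Shapiro-Wilk p-value: {sw_p:.3f}")  # large p => no evidence against normality

# 2) A crude stationarity check: compare the two halves of the residuals.
first, second = residuals[:250], residuals[250:]
print(f"mean shift between halves: {abs(first.mean() - second.mean()):.3f}")
print(f"variance ratio of halves:  {first.var() / second.var():.3f}")
```

On data that violates the assumptions (trending prices, fat-tailed noise) the same two checks fail, which is exactly the complaint being made here.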
All the reasoning in the book about cause and effect is the usual "spurious correlation" reasoning.
Meta learners are not an ensemble of models; minus a point.
According to the text of the book, "meta learners" are the result/outcome of fitting/predicting conventional models. Had the author not once again attached new labels to the most ordinary concepts, I would have had the opportunity to express my thoughts more accurately.
So I will clarify.
The ensemble of models is an old and well-established idea: the inputs are the outputs of lower-level models, the output is a signal. There are many methods of combining the results of lower-level models, i.e. of combining "meta learners". The author considers three variants of combining the fitting results; the second and third variants combine the results of a gradient boosting model. In the third variant, the outputs of the first level are combined according to the
All the meaning, all the novelty, is supposed to be in this unintelligible text:
Well, that's exactly the question: there's an association on the face of it...
How do you know whether it's just an association, or whether AB actually causes C?
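The association-vs-causation point has a classic illustration: two series with no causal link at all will often show strong correlation in levels simply because both trend. A synthetic sketch (numpy assumed; nothing here is from the book):

```python
# Association is not causation: two independent random walks.
import numpy as np

rng = np.random.default_rng(42)

# Two series generated with no causal link whatsoever.
a = np.cumsum(rng.normal(size=2000))
b = np.cumsum(rng.normal(size=2000))

# Correlation of the raw levels is often large purely by accident.
corr_levels = np.corrcoef(a, b)[0, 1]
# On differences ("returns") the spurious association disappears.
corr_diffs = np.corrcoef(np.diff(a), np.diff(b))[0, 1]

print(f"correlation of levels:      {corr_levels:+.2f}")
print(f"correlation of differences: {corr_diffs:+.2f}")
```

This is also why the stationarity complaint above matters: on non-stationary series, correlation alone cannot distinguish association from causation.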
It's not clear whether these lines are known all at once or appear one letter at a time, and what causes these letters to appear. If it is just a sequence of patterns, the task does not look very formalised. Why was that string length chosen, and so on? Maybe the data is not represented in the right form at all.
Read the book, maybe you'll find the answer.
Sanych, kozul (causal inference) is a complex topic; not everyone can grasp it at a glance. If you do not understand something, it does not mean that something is written wrongly there.
Don't suffer over it if you don't want to. Otherwise it turns out like in the parable about pearls before swine.
"Kozul" is a marketing move, and the whole book is nothing more than an advertisement dressing up the most ordinary propositions of mathematical statistics as unusual novelty. But mathematical statistics is a genuinely difficult subject.
Here is the result of hundreds of pages of text:
As far as I understand programming, the code given is NOT working code: functions of unknown origin, results not assigned to anything, function arguments appearing out of nowhere.
Maxim is hopelessly incapable of having a substantive discussion.
Is there anyone on the forum who understands the copy of the code from the book that I posted?
I myself apply various approaches to combining the results of several models; some others I know but do not apply. But I have not seen anything similar, maybe because the code is incomprehensible.
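For reference, the old ensemble idea mentioned earlier, feeding the outputs of lower-level models into a combining model, can be sketched with scikit-learn stacking (a generic example, not the book's code and not its "meta learner" construction):

```python
# Classic stacking: level-0 models feed a level-1 combiner.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic classification data standing in for trading signals.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Level-0 models; their out-of-fold predictions become level-1 inputs.
base = [
    ("gb", GradientBoostingClassifier(random_state=0)),
    ("lr", LogisticRegression(max_iter=1000)),
]
# Level-1 combiner trained on the level-0 outputs.
stack = StackingClassifier(estimators=base, final_estimator=LogisticRegression())
stack.fit(X_tr, y_tr)
print(f"holdout accuracy: {stack.score(X_te, y_te):.3f}")
```

The point of the earlier post is precisely that this mechanism (outputs of lower models in, one signal out) is decades old, whatever label is attached to it.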
A marvellous section in the appendix to the book, speaking of the utter uselessness of all this kozul:
"
Why Prediction Metrics are Dangerous For Causal Models".
and the conclusion of the section:
In other words, predictive performance on a random dataset does not translate to our preference for how good a model is for causal inference.
That is, for the author the most important thing is the causal inference itself, and any attempt to actually use it spoils the beauty of the whole construction.
Cheat!