Machine learning in trading: theory, models, practice and algo-trading - page 3507
I won't get into arguments; I just recommend reading at least the wiki article on greedy algorithms. Trees are always built by greedy algorithms.
That's exactly what I'm writing about: the greedy choice principle is not optimal, because each greedy choice worsens the options available at subsequent steps.
Applicability conditions
There is no general criterion for evaluating the applicability of a greedy algorithm to a particular problem, but the problems solved by greedy algorithms are characterised by two features: firstly, the Greedy Choice Principle applies to them, and secondly, they have the Optimality property for subproblems.
The Greedy Choice Principle
The Greedy Choice Principle is said to apply to an optimisation problem if a sequence of locally optimal choices yields a globally optimal solution. In a typical case, the proof of optimality follows this scheme:
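Since the quoted passage is about when a sequence of locally optimal choices does or does not yield a global optimum, a small counterexample may help. This coin-change sketch (the denominations and amount are a toy example of mine, not from the thread) shows a case where the greedy choice principle fails:

```python
# Toy counterexample: greedy coin change vs. exact dynamic programming.
# With denominations {1, 3, 4} the greedy choice principle does NOT hold:
# for amount 6, greedy takes 4 + 1 + 1 (3 coins), but the optimum is 3 + 3 (2 coins).

def greedy_coins(amount, denoms):
    """Always take the largest coin that still fits (locally optimal choice)."""
    coins = []
    for d in sorted(denoms, reverse=True):
        while amount >= d:
            amount -= d
            coins.append(d)
    return coins

def optimal_coins(amount, denoms):
    """Exact minimum-coin solution by dynamic programming."""
    best = {0: []}  # best[a] = shortest coin list summing to a
    for a in range(1, amount + 1):
        candidates = [best[a - d] + [d] for d in denoms if a - d in best]
        if candidates:
            best[a] = min(candidates, key=len)
    return best.get(amount)

print(greedy_coins(6, [1, 3, 4]))   # [4, 1, 1] -> 3 coins
print(optimal_coins(6, [1, 3, 4]))  # [3, 3]   -> 2 coins
```

The greedy first choice (take the 4) looks best locally but forces two extra coins later, which is the same structural problem being discussed for tree splits.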
You need to read all the definitions carefully, learn what the methods of data analysis are, and make up your own mind. Quantisation is not one of them.
You thought you understood my work, but it turned out that you only invented it for yourself. Sometimes you need to make more effort to understand how things work - it's not like calling functions from the library.
😀😀😀😀 it's easier to communicate with chicks
Why would I want to have a pointless conversation with someone who misrepresents facts, who doesn't want to understand others, who thinks he knows...
No, I regret that I have wasted a lot of time flaunting my explanations here - there are no listeners here.
chicken pole wheel = 10
what is it? should it be 2 + 8 = 10 or 5 + 5 = 10 or something?
well, yeah, maybe it should be, but I'm not sure.
So what's it supposed to be? And why don't you write in a language that everyone can understand? How can anyone understand you?
I'm writing normally, you just don't want to understand and I'm just talking for nothing.
Okay.
Looks like you're the only one who didn't understand the algorithm...
Of course, a greedy algorithm does not guarantee a global optimum, which is why people have long been looking for non-greedy tree-building methods. But so far this is only academic research, not working packages. If you claim to be using non-greedy methods, you need concrete arguments to back that up.
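For readers unfamiliar with what "trees are built greedily" means concretely: a CART-style tree picks, at each node, the single split with the best immediate impurity gain, with no look-ahead to later levels. A minimal sketch (the data and feature are made up for illustration):

```python
# Minimal sketch of greedy split selection as used by CART-style trees:
# at each node, take the threshold with the best immediate Gini gain.

def gini(labels):
    """Gini impurity of a list of 0/1 labels."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2.0 * p * (1.0 - p)

def best_greedy_split(xs, ys):
    """Return (threshold, gain) of the locally best split on one feature."""
    parent = gini(ys)
    best = (None, 0.0)
    for t in sorted(set(xs)):
        left  = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        # weighted impurity of the children after the split
        child = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        gain = parent - child
        if gain > best[1]:
            best = (t, gain)
    return best

xs = [1, 2, 3, 4, 5, 6]
ys = [0, 0, 0, 1, 1, 1]
print(best_greedy_split(xs, ys))  # (3, 0.5): threshold 3 separates the classes
```

The split chosen here happens to be globally good, but the procedure itself only ever scores the immediate gain, which is the "greedy" part being criticised above.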
The point is not to compare my method in the end (it's not perfect), but to show experimentally that with each iteration the probability of choosing an efficient split (one that keeps the same bias vector on new data) decreases, and that this is, in fact, the reason for unstable models.
In doing so, I detailed the situation iteration by iteration for each predictor and showed that there are ranges in which a predictor yields quantum splits with a high probability of stability on new data. These ranges differ across predictors, hence the conclusion that it is important to be deliberate in selecting predictors for the split.
The question is how to influence the probability of selecting an efficient split (in my case a double split, a quantum cutoff). If that probability can be increased, there will be fewer erroneous splits.
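One way to read this idea is as a stability filter over candidate ranges: keep a range only if its effect direction is reproduced under resampling. A hypothetical sketch, where the bootstrap scheme and the "same direction" stability criterion are my own assumptions, not the author's exact procedure:

```python
# Hypothetical sketch: estimate how often a candidate range split keeps the
# same "direction" (which side has the higher target rate) under resampling.
# The resampling scheme and stability criterion are assumptions for
# illustration, not the method described in the thread.
import random

def split_direction(xs, ys, lo, hi):
    """+1 if the mean target inside [lo, hi] exceeds the outside mean, else -1."""
    inside  = [y for x, y in zip(xs, ys) if lo <= x <= hi]
    outside = [y for x, y in zip(xs, ys) if not (lo <= x <= hi)]
    if not inside or not outside:
        return 0
    return 1 if sum(inside) / len(inside) > sum(outside) / len(outside) else -1

def stability(xs, ys, lo, hi, n_resamples=200, seed=0):
    """Fraction of bootstrap resamples on which the direction is unchanged."""
    rng = random.Random(seed)
    base = split_direction(xs, ys, lo, hi)
    hits = 0
    for _ in range(n_resamples):
        idx = [rng.randrange(len(xs)) for _ in range(len(xs))]
        resampled = split_direction([xs[i] for i in idx],
                                    [ys[i] for i in idx], lo, hi)
        if resampled == base:
            hits += 1
    return hits / n_resamples

# A range with a genuine effect should score near 1.0; a noisy one near 0.5.
xs = list(range(100))
ys = [1 if x < 30 else 0 for x in xs]
print(stability(xs, ys, 0, 29))
```

A filter like this does not make the underlying split search non-greedy, but it raises the probability that a selected split survives on new data, which seems to be the stated goal.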