Machine learning in trading: theory, models, practice and algo-trading - page 3507

 
Aleksey Nikolayev #:

I won't get into arguments, I just recommend reading at least the wiki about greedy algorithms. Trees are always built by greedy algorithms.

That's what I'm writing about - that the principle of greedy choice is not optimal, because it worsens the choice at the next steps.

Applicability conditions

There is no general criterion for evaluating the applicability of a greedy algorithm to a particular problem, but the problems solved by greedy algorithms are characterised by two features: first, the Greedy Choice Principle applies to them, and second, they have the Optimality property for subproblems.

The Greedy Choice Principle

The Greedy Choice Principle is said to apply to an optimisation problem if a sequence of locally optimal choices yields a globally optimal solution. In a typical case, the proof of optimality follows this scheme:

  1. It is proved that the greedy choice at the first step does not close the path to the optimal solution: for every solution there is another one, consistent with the greedy choice and not worse than the first one.
  2. It is shown that the subproblem arising after the greedy choice at the first step is similar to the initial one.
  3. The reasoning is completed by induction.
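The proof scheme above can be illustrated with the classic activity-selection problem, where the greedy choice principle provably holds (a textbook example, not taken from this thread): picking the activity that finishes earliest never closes the path to an optimal solution, and the remaining activities form the same kind of subproblem.

```python
def select_activities(intervals):
    """Greedy activity selection: repeatedly pick the activity that
    finishes earliest among those compatible with the previous pick.
    For this problem the greedy choice principle holds, so the result
    is globally optimal (a maximum set of non-overlapping activities)."""
    chosen = []
    last_finish = float("-inf")
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:  # compatible with the previous greedy choice
            chosen.append((start, finish))
            last_finish = finish
    return chosen

acts = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
print(select_activities(acts))  # [(1, 4), (5, 7), (8, 11)]
```

Here the induction step is visible directly: after the first pick, the loop faces the same problem restricted to activities starting at or after `last_finish`.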
 
Maxim Dmitrievsky #:
You need to read carefully all the definitions, what are the methods of data analysis and make up your mind. Quantisation is not one of them.

Since you yourself can't figure out what you are doing, I don't want to wade into the mess any further.

You thought you understood my work, but it turned out that you only invented it for yourself. Sometimes you need to make more effort to understand how things work - it's not like calling functions from the library.

 
Aleksey Vyazmikin #:

You thought you understood my work, but it turned out that you only made it up for yourself. Sometimes it takes more effort to understand how things work - it's not like calling functions from the library.

😀😀😀😀 it's easier to communicate with chicks.
I understand your "labour" better than you because you don't even write the code yourself. At least understand the definitions if you want to communicate about something.

Until there are proper definitions, the conversation will go no further.
 
Maxim Dmitrievsky #:
😀😀😀😀 it's easier to communicate with chicks
I understand your "labour" better than you because you don't even write the code yourself. At least understand the definitions if you want to talk about something.

Until there are proper definitions, the conversation will go no further.

Why would I want to have a pointless conversation with someone who misrepresents facts, who doesn't want to understand others, who thinks he knows...

No, I regret that I have wasted a lot of time laying out my explanations here - there are no listeners here.

 
Aleksey Vyazmikin #:

Why would I want to have a pointless conversation with someone who misrepresents facts, who doesn't want to understand others, who thinks he knows...

No, I'm sorry I wasted a lot of time laying out my explanations here - there are no listeners.

That goes for you more than anyone else. Again you substitute one concept for another.
 

chicken pole wheel = 10

what is it? Should it be 2 + 8 = 10, or 5 + 5 = 10, or something else?

well, yeah, maybe it should be, but I'm not sure.

So what's it supposed to be? And why don't you write in a language that everyone can understand? How can anyone understand you?

I'm writing normally, you just don't want to understand and I'm just talking for nothing.


Okay.

 
mytarmailS #:

chicken pole wheel = 10

what is it? Should it be 2 + 8 = 10, or 5 + 5 = 10, or something else?

well, yeah, maybe it should, but I'm not sure.

So what's it supposed to be? And why don't you write in a language that everyone can understand? How can anyone understand you?

I'm writing normally, you just don't want to understand and I'm just talking for nothing.


Okay.

Looks like you're the only one who didn't understand the algorithm...

 
Aleksey Vyazmikin #:

You seem to be alone in not understanding the algorithm...

Okay.
 
Aleksey Vyazmikin #:

That's what I'm writing about - that the principle of greedy choice is not optimal, because it worsens the choice at subsequent steps.

Conditions of applicability

There is no general criterion for evaluating the applicability of a greedy algorithm to a particular problem, but the problems solved by greedy algorithms are characterised by two features: first, the Greedy Choice Principle applies to them, and second, they have the Optimality property for subproblems.

The Greedy Choice Principle

The Greedy Choice Principle is said to apply to an optimisation problem if a sequence of locally optimal choices yields a globally optimal solution. In a typical case, the proof of optimality follows this scheme:

  1. It is proved that the greedy choice at the first step does not close the path to the optimal solution: for every solution there is another one, consistent with the greedy choice and not worse than the first one.
  2. It is shown that the subproblem arising after the greedy choice at the first step is similar to the initial one.
  3. The reasoning is completed by induction.
Of course, a greedy algorithm does not guarantee a global optimum, so non-greedy ones for trees have long been sought. But so far it's only academic research, not working packages. You need concrete arguments in favour of using non-greedy ones.
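For trees specifically, the standard illustration of greedy splitting missing the global optimum is XOR-like data (again a textbook example, not from this thread): no single-feature split reduces impurity at the root, so a greedy one-step splitter scores every candidate at zero gain, even though a depth-2 tree would separate the classes perfectly.

```python
# XOR data: y = x0 XOR x1. A greedy splitter evaluating one split at a
# time sees zero Gini gain on both features at the root, although a
# two-level tree classifies this data without error.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 1, 1, 0]

def gini(labels):
    """Gini impurity of a binary label list."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def split_gain(feature):
    """Impurity reduction from splitting the root on one feature."""
    left = [yi for xi, yi in zip(X, y) if xi[feature] == 0]
    right = [yi for xi, yi in zip(X, y) if xi[feature] == 1]
    weighted = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
    return gini(y) - weighted

for f in (0, 1):
    print(f"feature {f}: greedy gain = {split_gain(f)}")  # 0.0 for both
```

A lookahead (non-greedy) builder that scored pairs of splits would find the perfect depth-2 tree here, which is exactly the kind of case the academic work on non-greedy tree construction targets.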
 
Aleksey Nikolayev #:
Of course, a greedy algorithm does not guarantee a global optimum, so non-greedy ones for trees have long been sought. But so far it is only academic research, not working packages. You need concrete arguments in favour of using non-greedy ones.

The point in the end is not to compare my method (it's not perfect), but to show in experiments that with each iteration the probability of choosing an efficient split (one that keeps the same bias vector on new data) decreases, and this, in fact, is the reason for unstable models.

In doing so, I detailed the situation iteration by iteration for each predictor, and showed that there are ranges in which a predictor yields quantum splits with a high probability of remaining stable on new data. This differs across predictors - hence the conclusion that it is important to be consistent in selecting predictors for the split.

The question is how one can influence the probability of selecting an efficient split (in my case a double split - a quantum cutoff). If that probability can be increased, there will be fewer erroneous splits.
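The stability question can be sketched with a toy Monte-Carlo experiment (entirely hypothetical data and thresholds, not the method under discussion): estimate how often the threshold that looks best on one sample is also the best one on an independent sample from the same process.

```python
import random

random.seed(0)
CANDIDATES = [0.2, 0.4, 0.6, 0.8]  # hypothetical split thresholds

def sample(n=200):
    """Synthetic feature/label sample: class 1 above 0.6, with 30% label noise."""
    xs = [random.random() for _ in range(n)]
    ys = [(x > 0.6) ^ (random.random() < 0.3) for x in xs]
    return xs, ys

def best_threshold(xs, ys):
    """Greedy choice: the candidate threshold with the best accuracy on this sample."""
    def accuracy(t):
        return sum((x > t) == y for x, y in zip(xs, ys)) / len(xs)
    return max(CANDIDATES, key=accuracy)

# How often does the best split on one sample stay best on an independent one?
trials = 500
stable = sum(best_threshold(*sample()) == best_threshold(*sample())
             for _ in range(trials))
print(f"split stable on new data in {stable / trials:.0%} of trials")
```

Raising the label-noise rate or shrinking `n` in this sketch lowers the agreement rate, which mirrors the claim above: the weaker the signal at a given iteration, the lower the probability that the greedily chosen split is an efficient one.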