Machine learning in trading: theory, models, practice and algo-trading - page 2644

 
secret grail, in response they will start explaining to the author what a fool he is)

I have never seen grails here, but there are plenty of vague hints about possession of the Great Mystery of the Grail.) I have already commented on this topic; the term "fool" here is practically a medical term, not an insult.)

 
mytarmailS #:

Let's publish some research, some ideas...

I am thinking about the possibility of combining my idea with the idea of the PRIM algorithm. I don't have much to brag about.

 
mytarmailS #:

Let's publish some research, some ideas...

There's already an article about clustering :) it's useless, though.

I have some thoughts on how to make a sensible one, but I haven't done it yet. And I lost all the source code.

The pros I have already voiced: it is stable on new data. The downside is that the labels are mediocre. But it is possible to squeeze out some compromise.
 
secret grail, in response they will start explaining to the author what a fool he is).
Ahahahaha... It's true.
 
Aleksey Nikolayev #:

I am thinking about the possibility of combining my idea with the idea of the PRIM algorithm. I don't have much to brag about.

I still have not figured out what is new about PRIM compared to the others.
 
Maxim Dmitrievsky #:
There's already an article about clustering :) it's useless, though.

I have some thoughts on how to make a sensible one, but I haven't done it yet. And I lost all the source code.

The pros I have already voiced: it is stable on new data. The downside is that the labels are mediocre. But it is possible to squeeze out some compromise.
If you find clusters in which one class label strongly dominates the other, then such a cluster retains its statistics on new data, unlike any supervised training.
Try to find such a cluster and you'll be pleasantly surprised.
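A minimal sketch of the kind of check described above, assuming scikit-learn; the data, the number of clusters, and the 0.8 purity threshold are all synthetic placeholders, not anyone's actual system:

```python
# Hypothetical sketch: cluster the predictors, then check whether any
# cluster is strongly dominated by one class label (synthetic data).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))                              # placeholder features
y = (X[:, 0] + 0.1 * rng.normal(size=1000) > 0).astype(int)  # placeholder labels

labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(X)

for c in range(10):
    mask = labels == c
    share = y[mask].mean()              # share of class 1 in the cluster
    dominance = max(share, 1 - share)   # how strongly one class dominates
    if dominance > 0.8:                 # arbitrary purity threshold
        print(f"cluster {c}: size={mask.sum()}, dominance={dominance:.2f}")
```

The point of the check is that a cluster found without labels can still turn out nearly pure in one class, and that purity tends to carry over to new data better than a fitted decision boundary does.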
 
mytarmailS #:
I still have not figured out what is new about PRIM compared to the others.

I found it suitable as a basis for an algorithm for selecting a working region in the predictor space. Roughly speaking, I build initial approximations of the box regions based on my idea and then try to refine them.

Also, I optimise only for profit, which leads to trying to improve the system by artificially increasing the number of false negatives.

There is no rigorous theory here, and one is hardly possible.
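For readers unfamiliar with PRIM (Patient Rule Induction Method), the box idea can be sketched roughly as follows; this is a toy illustration of PRIM-style "peeling" on synthetic data, with made-up parameters, not the actual algorithm being discussed:

```python
# Hypothetical sketch of PRIM-style peeling: shrink a box (hyperrectangle)
# over the predictors so the mean target inside it keeps rising.
import numpy as np

def prim_peel(X, y, alpha=0.05, min_support=0.1):
    n, d = X.shape
    lo, hi = X.min(axis=0).astype(float), X.max(axis=0).astype(float)
    inside = np.ones(n, dtype=bool)
    while inside.mean() > min_support:
        base = y[inside].mean()
        best = None
        for j in range(d):                      # try peeling each face of the box
            for side, q in (("lo", alpha), ("hi", 1 - alpha)):
                cut = np.quantile(X[inside, j], q)
                keep = X[:, j] >= cut if side == "lo" else X[:, j] <= cut
                trial = inside & keep
                if trial.any() and y[trial].mean() > (best[0] if best else base):
                    best = (y[trial].mean(), j, side, cut, trial)
        if best is None:                        # no peel improves the mean: stop
            break
        _, j, side, cut, inside = best
        if side == "lo":
            lo[j] = cut
        else:
            hi[j] = cut
    return lo, hi, inside

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(2000, 3))
y = ((X[:, 0] > 0.3) & (X[:, 1] < 0.0)).astype(float)  # target lives in a box
lo, hi, inside = prim_peel(X, y)
print("box:", np.round(lo, 2), np.round(hi, 2), "mean inside:", y[inside].mean())
```

Each iteration removes a small quantile slice from one side of one predictor, keeping whichever peel raises the mean target most; the refinement Aleksey mentions would then adjust the resulting box boundaries.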

 
Aleksey Nikolayev #:

I found it suitable as a basis for an algorithm for selecting a working region in the predictor space. Roughly speaking, I build initial approximations of the box regions based on my idea and then try to refine them.

Also, I optimise only for profit, which leads to trying to improve the system by artificially increasing the number of false negatives.

There is no rigorous theory here, and one is hardly possible.

I don't understand...
If you just train a Random Forest and select the best rules from it by the required criterion, what is the difference?
A rule is already a particular case of some situation, and those boxes are already accounted for by the rule.
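The "pick the best rules from a forest" idea can be sketched like this, assuming scikit-learn; a single shallow tree stands in for one forest member, and the data and depth are synthetic placeholders:

```python
# Hypothetical sketch: fit a small tree, enumerate its root-to-leaf rules
# (each rule is a box over the predictors), and rank them by class purity.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(1000, 3))
y = ((X[:, 0] > 0.2) & (X[:, 2] < 0.5)).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
t = tree.tree_

def leaf_rules(node=0, conds=()):
    if t.children_left[node] == -1:           # leaf: emit rule + class stats
        yield conds, t.value[node][0]
        return
    f, thr = t.feature[node], t.threshold[node]
    yield from leaf_rules(t.children_left[node], conds + ((f, "<=", thr),))
    yield from leaf_rules(t.children_right[node], conds + ((f, ">", thr),))

# sort rules by purity (dominant-class share in the leaf)
rules = sorted(leaf_rules(), key=lambda r: -r[1].max() / r[1].sum())
for conds, counts in rules[:3]:               # the "best" (purest) rules
    rule = " and ".join(f"x{f} {op} {thr:.2f}" for f, op, thr in conds)
    print(rule, "-> purity", round(counts.max() / counts.sum(), 2))
```

Each printed conjunction of threshold conditions is exactly a box over the predictors, which is the sense in which a tree rule "already takes the cubes into account".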
 
mytarmailS #:
I don't understand...
If you just train a Random Forest and select the best rules from it by the required criterion, what is the difference?
A rule is already a particular case of some situation, and those boxes are already accounted for by the rule.

That may well be so. But it seems a more interpretable approach to comparing/selecting features and optimising metaparameters.

 
Aleksey Nikolayev #:

That may well be so. But it seems a more interpretable approach to comparing/selecting features and optimising metaparameters.

What happened with the associative rules?