Machine learning in trading: theory, models, practice and algo-trading - page 3178

 
Aleksey Vyazmikin #:

The problem with sectarians is their fear of having their religious tenets tested.

There are always many patterns - it is a question of choosing the right one.

At least I've tried.

 
Aleksey Vyazmikin #:

Can you elaborate - I don't get it.

A deliberately meaningless task can be obtained simply by shuffling the labels randomly (or by generating them randomly with probabilities equal to the class frequencies).

The idea is the same: collect a large sample of results from many deliberately meaningless tasks and compare the result of the real task against it. If the real result does not land in the tail of this sample, the method is rather poor.

 
Maxim Dmitrievsky #:

At least I tried.

Do you understand why CatBoost offers a choice of different methods for quantising predictor values?

Do you think the programmers simply left that option in for people who don't have enough RAM?

Or do the developers realise that the training result depends directly on these quantisation tables?

In the end, just go and change the quantisation table settings yourself and look at how much the result varies.

Then you will wonder why this happens, and maybe you will start to understand me better.


And any statements in the style of a preacher/prophet/holy fool are not informative. I interpret them as a desire to show off.
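The point about quantisation tables can be illustrated without CatBoost at all. Below is a minimal stdlib Python sketch (not CatBoost code; `n_bins` plays the role of CatBoost's `border_count`, and the two border-selection rules only roughly mirror its `Uniform` and `Median` choices of `feature_border_type`) showing that different border-selection methods put the very same value into different bins, which is why training results depend on these settings:

```python
import random
import statistics

random.seed(1)
# skewed predictor values, as price-derived features often are
values = [random.expovariate(1.0) for _ in range(1000)]

n_bins = 8  # analogue of CatBoost's border_count

# method 1: uniform borders between min and max
lo, hi = min(values), max(values)
uniform_borders = [lo + (hi - lo) * i / n_bins for i in range(1, n_bins)]

# method 2: quantile borders (roughly equal share of observations per bin)
quantile_borders = statistics.quantiles(values, n=n_bins)

def to_bin(x, borders):
    """Index of the quantisation bin that x falls into."""
    return sum(x > b for b in borders)

# the same value lands in very different bins under the two schemes
x = statistics.median(values)
print("uniform bin: ", to_bin(x, uniform_borders))
print("quantile bin:", to_bin(x, quantile_borders))
```

With right-skewed data, uniform borders push most observations into the lowest bins, while quantile borders spread them evenly, so a tree built on one representation can split in places the other cannot even express.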

 
Aleksey Nikolayev #:

You can get a meaningless task by simply shuffling the labels randomly (or randomly generating them with probabilities equal to the frequency of the classes).

The idea is the same: collect a large sample of results from many obviously meaningless tasks and compare the result of the real task against it. If the real result is not in the tail of this sample, the method is rather poor.

Perhaps it is better to shuffle, so as to preserve the proportion of zeros and ones exactly.
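The label-shuffling test described above can be sketched in a few lines of plain Python. This is a hypothetical toy (the "model" is just a one-threshold rule, not a real booster), but the mechanics are exactly as stated: score the real task, score many shuffled-label copies of it (shuffling preserves the proportion of zeros and ones exactly), and see where the real score falls in that null sample:

```python
import random

random.seed(0)

def score(xs, ys):
    """In-sample accuracy of the best one-threshold rule on (xs, ys)."""
    best = 0.0
    for t in xs:
        for sign in (1, -1):
            acc = sum((sign * (x - t) > 0) == y
                      for x, y in zip(xs, ys)) / len(ys)
            best = max(best, acc)
    return best

# toy "real" task: the label mostly follows the sign of x (20% label noise)
xs = [random.gauss(0.0, 1.0) for _ in range(100)]
ys = [(x > 0) != (random.random() < 0.2) for x in xs]

real = score(xs, ys)

# null distribution: the same task with labels shuffled
null = []
for _ in range(100):
    shuffled = ys[:]
    random.shuffle(shuffled)
    null.append(score(xs, shuffled))

# share of meaningless tasks that do at least as well as the real one
p = sum(s >= real for s in null) / len(null)
print(f"real={real:.3f}  null_mean={sum(null) / len(null):.3f}  p={p:.3f}")
```

If `p` is small, the real result sits in the tail of the null sample; if not, the method has found nothing that shuffled labels could not match.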

 
Aleksey Vyazmikin #:

Do you understand why CatBoost offers a choice of different methods for quantising predictor values?

Do you think the programmers simply left that option in for people who don't have enough RAM?

Or do the developers realise that the training result depends directly on these quantisation tables?

In the end, just go and change the quantisation table settings yourself and look at how much the result varies.

Then you will wonder why this happens, and maybe you will start to understand me better.


And any statements in the style of a preacher/prophet/holy fool are not informative. I interpret them as a desire to show off.

I suggest asking the developers in their Telegram channel, because I don't know what they do.
 
Maxim Dmitrievsky #:
I suggest asking the developers in their Telegram channel

Better not, in case they answer wrong)

 
Aleksey Nikolayev #:

Better not, in case they answer wrong)

😁😁
 
Maxim Dmitrievsky #:
I suggest asking the developers in their Telegram channel, for I don't know what they are doing

Ask them, since you don't understand.

Moreover, some boosting implementations re-quantise the predictors after each split, quantising the residual.

Well, I'm not the only one who uses this; competition participants also sometimes mention work in this direction.

Anyway, I won't force-feed you any more.

 
Aleksey Vyazmikin #:

Ask if you don't understand.

Moreover, some boosting implementations re-quantise the predictors after each split, quantising the residual.

Well, I'm not the only one who uses this; competition participants also sometimes mention work in this direction.

Anyway, I won't force-feed you any more.

And why should I ask, when the conversion from floats to ints is needed mainly for speed on very big data?

A possible bonus is a slight recalibration of the model, for better or worse, as luck would have it.

They will just give you the same answer. You're probably afraid to ask because it would devalue all your years of hard work :)

It's just rummaging through the algorithm's underwear.
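For what it's worth, the memory half of the "acceleration on very big data" claim is easy to see with nothing but the stdlib: a predictor quantised into 256 bins fits into one byte per value instead of eight. This is a generic sketch, not CatBoost's internal layout:

```python
import random
from array import array

random.seed(2)
n = 100_000
values = [random.random() for _ in range(n)]

# store raw values as float64 vs. as 8-bit bin indices (256 uniform bins)
raw = array('d', values)
bins = array('B', (min(int(v * 256), 255) for v in values))

print("float64 bytes:", raw.itemsize * len(raw))    # 800000
print("uint8 bytes:  ", bins.itemsize * len(bins))  # 100000
```

Split search over byte-sized bin indices is also cheaper than over raw floats, which is the speed part of the argument.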
 
Maxim Dmitrievsky #:
Behold, for if no pattern is given unto you in the original series, even the Hilbertian path shall not lead you to your cherished goal. Your endeavours shall turn to devilry, and you shall find an ignominious beating instead of paradise.

😁😁