Machine learning in trading: theory, models, practice and algo-trading - page 3503

 
Maxim Dmitrievsky #:

Similarity in dividing into groups and analysing those groups afterwards.

If you mean quantisation, then yes - that's how it used to be....

However, it's the same if you compare with CatBoost - the same thing happens there: quantisation and then estimation.

This is a superficial evaluation of the algorithm.
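To make the CatBoost comparison concrete, here is a minimal sketch (an assumed Python/CatBoost workflow with synthetic data, not code from this thread): CatBoost first quantises every numerical feature into at most border_count bins and only then fits trees on those binned values.

import numpy as np
from catboost import CatBoostClassifier

# Synthetic data purely for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# border_count caps the number of quantisation bins built per feature
# before any tree is grown on them.
model = CatBoostClassifier(iterations=200, depth=4, border_count=32, verbose=False)
model.fit(X, y)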

 
Aleksey Vyazmikin #:

If you mean quantisation, then yes - that's the way it used to be....

However, it's the same if you compare with CatBoost - the same thing happens there: quantisation and then estimation.

This is a superficial evaluation of the algorithm.

So are you evaluating the algorithm or analysing the data?

 
Aleksey Nikolayev #:
Radicals make a lot of noise, they are easy to hear, and so a lot gets attributed to them. Real progress is made in silence. The recent surge in AI is good evidence of this - the hype arrived just when real progress had stalled.

I don't know where you see radicals making noise - a radical is more like an outlier in a normal distribution. Yes, it makes noise by being irregular; most will remain outliers, and a small fraction will change history....

 
mytarmailS #:
Again, it's not clear to anyone.

1. Make a reproducible code example in the format: it was like this, I did this, it became like this.....

Those who want to understand will study it and ask questions.

The main code is about 3000 lines, so I don't see any point in publishing it.

I am sharing the results of my research - you may believe them or not - everyone is free to decide for himself.

 
Aleksey Vyazmikin #:

Those who want to understand will study it and ask questions.

The main code is about 3000 lines, so I don't see any point in publishing it.

I am sharing the results of my research - you may believe them or not - everyone is free to decide for himself.

))))
What is there to believe or not believe if nobody knows what you're talking about?

I asked you a question - and what did I get in response?
That's it, the conversation is over.
 
Maxim Dmitrievsky #:

are you evaluating the algorithm or analysing the data?

I am developing an algorithm to analyse the data.

Clustering is a good method; it has features in common with quantisation - dividing the sample into subgroups.

Could clustering be used instead of quantisation in my method? It could, but the result is unknown at the moment.
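For readers following the clustering-versus-quantisation point, a hedged sketch (illustrative only, not the method discussed in the thread) of two ways to split one feature into subgroups - equal-frequency binning versus k-means clustering:

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
x = rng.normal(size=1000)

# Quantisation: cut the feature into 8 equal-frequency bins.
edges = np.quantile(x, np.linspace(0, 1, 9)[1:-1])
quant_group = np.digitize(x, edges)

# Clustering: let k-means pick 8 subgroups from the same feature.
cluster_group = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(x.reshape(-1, 1))

# Both produce a subgroup label per observation; only the way the
# boundaries are chosen differs.
print(np.bincount(quant_group), np.bincount(cluster_group))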

 
mytarmailS #:
))))
What is there to believe or not believe if nobody knows what you're talking about?

I asked you a question - and what did I get in response?
That's it, the conversation is over.

Where do you see a question mark in that last post? There are no questions in it.

Maxim Dmitrievsky thinks he understands my algorithm - maybe he can explain it better?
 
Aleksey Vyazmikin #:

I am developing an algorithm to analyse the data.

Clustering is a good method; it has features in common with quantisation - dividing the sample into subgroups.

Could clustering be used instead of quantisation in my method? It could, but the result is unknown at the moment.

Quantisation is not an algorithm for analysing data and dividing it into subgroups, but an algorithm for reducing its size.

For example, there are quantised LLMs so that you can run them on your computer.
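As an illustration of quantisation in that size-reduction sense (a toy sketch, not how any particular LLM is quantised): mapping float32 values to int8 plus a scale cuts memory roughly fourfold at the cost of a small rounding error.

import numpy as np

w = np.random.default_rng(2).normal(size=10_000).astype(np.float32)

# Symmetric 8-bit quantisation: one shared scale, values rounded to int8.
scale = np.abs(w).max() / 127.0
w_q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
w_restored = w_q.astype(np.float32) * scale

print(w.nbytes, w_q.nbytes)              # 40000 vs 10000 bytes
print(np.abs(w - w_restored).max())      # worst-case rounding error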

 
Maxim Dmitrievsky #:

Quantisation is not an algorithm for analysing data and dividing it into subgroups, but an algorithm for reducing its size.

For example, there are quantised LLMs so that you can run them on your computer.

You can read my articles on this topic.

 
Aleksey Vyazmikin #:

Imho, we can speak of a dual classification problem. At the first stage - ordinary binary (possibly ternary) classification by a decision tree.

At the second stage - classification of the leaves obtained at the first stage according to their resistance to change over time, possibly via regression, since probabilities are mentioned.
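One possible reading of that two-stage description, sketched with scikit-learn (the data, thresholds and stability rule are illustrative assumptions, not the poster's implementation): fit a decision tree on an early period, then score each leaf by how stable its out-of-sample accuracy stays across later time blocks.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(3000, 6))
y = (X[:, 0] + rng.normal(scale=0.5, size=3000) > 0).astype(int)
t = np.arange(3000)                          # pseudo-time index

# Stage 1: ordinary binary classification with a decision tree on the early period.
train = t < 1500
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X[train], y[train])

# Stage 2: score each leaf by how stable its accuracy is across later time blocks.
leaves = tree.apply(X)
pred = tree.predict(X)
for leaf in np.unique(leaves[train]):
    accs = []
    for lo, hi in [(1500, 2250), (2250, 3000)]:   # two out-of-sample blocks
        m = (leaves == leaf) & (t >= lo) & (t < hi)
        if m.sum() > 20:
            accs.append((pred[m] == y[m]).mean())
    if len(accs) == 2:
        print(leaf, accs, "stable" if abs(accs[0] - accs[1]) < 0.05 else "unstable")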
