Machine learning in trading: theory, models, practice and algo-trading - page 3323

 

Still trying to add a drop of iodine to your absinthe, when you were drinking a different drink. Go on, try it - it tastes better.

Do you want me to find what I was writing about and point it out to you?

 
Maxim Dmitrievsky #:

Still trying to add a drop of iodine to your absinthe, when you were drinking a different drink. Go on, try it - it tastes better.

Do you want me to find what I was writing about and point it out to you? I like to play with my food for a long time before I eat it.

I don't know who you're addressing, but go ahead and look if you have nothing better to do. Just don't take it out of context, please; no one is going to scroll through hundreds of pages of posts looking for it. You had the opportunity to confirm or deny it in a live chat today, but you couldn't. Going to wave your fists after the fight? Go ahead.

 
Andrey Dik #:

I don't know who you're addressing, but go ahead and look if you have nothing better to do. Just don't take it out of context, please; no one is going to scroll through hundreds of pages of posts looking for it. You had the opportunity to confirm or deny it in a live chat today, but you couldn't. Going to wave your fists after the fight? Go ahead.

What makes you think you won?

The real story is that you attacked and got yourself beaten.

I didn't address you on any issues at all.
 
Maxim Dmitrievsky #:

What makes you think you won?

The real story is that you attacked and got yourself beaten.

I didn't address you on any issues at all.

And who exactly were you addressing when you said "optimisers don't understand"? Name them. Who do you consider "optimisers", and why don't you count yourself among them? We have already established that you do optimisation perfectly well, without realising it and without wanting to admit it.

Even now you wrote without addressing anyone in particular - so that wasn't aimed at me? Are you trolling, then? And yet you say you didn't address anyone and didn't ask anyone any questions...

 
Andrey Dik #:
And who exactly were you addressing when you said "optimisers don't understand"? Name them. Who do you consider "optimisers", and why don't you count yourself among them? We have already established that you do optimisation perfectly well, without realising it and without wanting to admit it.

Two approaches were described: research and p-hacking; optimisation belongs to the second one. Everyone who argued for the second one is an optimiser.

That is, when the results are chased to fit the FF.
 

For you, "everything is optimisation". You might as well say that everything is atoms, or everything is nothing.

Another person wrote that optimisation is research. That, too, is a substitution of concepts.

 
Maxim Dmitrievsky #:

Two approaches were described: research and p-hacking; optimisation belongs to the second one. Everyone who argued for the second one is an optimiser.

That is, when the results are chased to fit the FF.

Have you forgotten what stopping criteria you wrote about? Don't you recognise me again? Come on, enough, enough.

Both approaches use optimisation with some kind of stopping criterion.

 
Andrey Dik #:
Have you forgotten what stopping criteria you wrote about? Don't you recognise me again? Come on, enough, enough.

You're the one stretching everything here to make yourself look important.

I didn't write about any stopping criteria in the first place.

You have the nerve to attribute things to me that I didn't say.

Basically, you're just rambling and entertaining your audience.

And I have to guess what random processes are going on in your head and in which direction they are moving.
 
You just throw in random information and demand that everyone agree with it.
 
Andrey Dik #:

Yes, the question is always how to ensure the robustness of the model on new data. That is why I said that finding such a criterion is one of the most important and most difficult tasks.

What I mean is that the evaluation metrics we are used to in trading and machine learning are only one part of assessing the quality of the resulting model/tuning/approximation.

What matters is under what conditions we achieved the result and how much information was required to achieve it. We need to assess the stability of the observations over time and the contribution of each predictor.
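For illustration only, here is a minimal sketch of one way to look at predictor contribution and its stability over time, using scikit-learn's permutation importance over chronological folds; the model, split count and data layout are placeholder assumptions, not what was actually used in the thread:

```python
# Sketch: permutation importance per chronological fold, to see whether
# each predictor's contribution stays stable over time.
# Assumes numpy arrays X (features) and y (labels) sorted by time.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import TimeSeriesSplit

def importance_by_period(X, y, n_splits=4):
    """Return a (n_splits, n_features) array: one row of permutation
    importances per chronological validation fold."""
    rows = []
    for train_idx, test_idx in TimeSeriesSplit(n_splits=n_splits).split(X):
        model = GradientBoostingClassifier().fit(X[train_idx], y[train_idx])
        res = permutation_importance(model, X[test_idx], y[test_idx],
                                     n_repeats=10, random_state=0)
        rows.append(res.importances_mean)
    return np.vstack(rows)

# A predictor whose importance swings wildly from row to row is unstable
# in time and deserves extra scrutiny before being trusted.
```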

The problem with complex models with a large number of predictors and decision rules (whether trees or neurons) is that they build complex patterns which are unlikely to repeat in their entirety, hence the bias in the probability of assignment to one of the classes. Earlier I posted a picture of "what the trees are buzzing about", which showed that most leaves simply don't activate on new data.
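As a rough illustration of that observation (not a reconstruction of the original picture), one can count how many of the leaves used in training are ever reached by new data; this sketch assumes a scikit-learn tree and placeholder data:

```python
# Sketch: what fraction of the leaves reached by the training data are
# ever reached by new (out-of-sample) data.
from sklearn.tree import DecisionTreeClassifier

def leaf_coverage(tree, X_train, X_new):
    """Share of training-set leaves that activate at least once on X_new."""
    train_leaves = set(tree.apply(X_train))  # leaf ids hit in training
    new_leaves = set(tree.apply(X_new))      # leaf ids hit on new data
    return len(train_leaves & new_leaves) / len(train_leaves)

# Typical usage (placeholder data):
# tree = DecisionTreeClassifier(max_depth=None).fit(X_train, y_train)
# print(leaf_coverage(tree, X_train, X_new))  # deep trees tend to score low
```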

All this stems from the fact that we are dealing with a "function" (actually a sum of functions) that cannot be fully explored in order to approximate it. This means we should pay special attention only to those regions that are better understood/known. It is better to let the model "keep silent" on new data it is not familiar with than to act on isolated cases from the past.

So the question arises: how to make the model stay silent when it is unsure, and act with confidence when the probability of a favourable outcome is high.
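A minimal sketch of that idea with a plain probability threshold; the threshold value and the +1/-1/0 signal coding are illustrative assumptions:

```python
# Sketch: abstaining prediction - emit a class only when the predicted
# probability is confident enough, otherwise stay silent (0 = no trade).
import numpy as np

def predict_or_abstain(model, X, threshold=0.7):
    """Return +1/-1 for confident predictions and 0 when the model's
    maximum class probability is below `threshold`."""
    proba = model.predict_proba(X)            # shape (n_samples, 2)
    confident = proba.max(axis=1) >= threshold
    signal = np.where(proba[:, 1] >= 0.5, 1, -1)
    return np.where(confident, signal, 0)
```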

We need methods for correcting ready-made models. These can be implemented either by influencing the model after training, or by combining models of two classes, one of the boosting type and the other of the nearest-neighbours type.
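One possible reading of the two-model variant, sketched here as a boosting classifier whose predictions are only acted on when a nearest-neighbours check confirms the new point lies in a well-populated region of the training data; the gating rule and percentile are assumptions, not the author's method:

```python
# Sketch: a boosting model gated by a nearest-neighbours familiarity check.
# The prediction is used only where new points fall into regions that were
# well represented in training; elsewhere the model "keeps silent" (0).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.neighbors import NearestNeighbors

class GatedBooster:
    def __init__(self, n_neighbors=5, dist_percentile=95):
        self.booster = GradientBoostingClassifier()
        self.nn = NearestNeighbors(n_neighbors=n_neighbors)
        self.dist_percentile = dist_percentile
        self.max_dist = None

    def fit(self, X, y):
        self.booster.fit(X, y)
        self.nn.fit(X)
        # Assumption: the "familiar region" boundary is a percentile of the
        # in-sample mean neighbour distance.
        d, _ = self.nn.kneighbors(X)
        self.max_dist = np.percentile(d.mean(axis=1), self.dist_percentile)
        return self

    def predict(self, X):
        d, _ = self.nn.kneighbors(X)
        familiar = d.mean(axis=1) <= self.max_dist
        signal = np.where(self.booster.predict_proba(X)[:, 1] >= 0.5, 1, -1)
        return np.where(familiar, signal, 0)  # 0 = model stays silent
```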