Machine learning in trading: theory, models, practice and algo-trading - page 2838

 
Andrey Dik #:

Instead of lashing out at people, you can try to understand. If you understand, good. If you don't understand, walk on by.

My attacks were about the fact that you can't compare different AOs (optimisation algorithms) under the same conditions and decide which one is good and which one is bad...

Each AO has its own kind of optimisation surface that it suits.

The CHOICE of AO depends on the optimisation surface, not on subjective preferences.


If the surface is smooth and has a single minimum, and a person applies a genetic algorithm, particle swarm, simulated annealing, or any other global optimisation algorithm to it, he is a fool who does not understand what he is doing, because gradient descent will solve the problem a hundred times faster.

If the surface is complex, noisy, and has many minima, and a person uses gradient descent, he is again a fool, because the algorithm will get stuck in the nearest local minimum.
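A toy sketch of this point in Python (the functions, step sizes, and restart strategy are my own illustrative choices, not anything from this thread):

```python
import numpy as np

rng = np.random.default_rng(0)

def smooth(x):        # smooth surface with a single minimum at x = 0
    return x**2

def multimodal(x):    # Rastrigin-style surface with many local minima
    return 10 + x**2 - 10*np.cos(2*np.pi*x)

def grad_descent(f, x, lr, steps=5000, h=1e-5):
    """Plain gradient descent with a numerical derivative."""
    for _ in range(steps):
        g = (f(x + h) - f(x - h)) / (2*h)
        x -= lr * g
    return x

# On the smooth surface gradient descent nails the minimum quickly.
print("smooth:    ", grad_descent(smooth, 4.0, lr=0.1))        # ~0.0

# On the multimodal surface it settles into the nearest local minimum.
print("multimodal:", grad_descent(multimodal, 4.0, lr=0.001))  # stuck near 4

# A crude global strategy (random restarts) escapes the trap, at extra cost.
starts = rng.uniform(-5, 5, size=50)
best = min((grad_descent(multimodal, s, lr=0.001) for s in starts),
           key=multimodal)
print("restarts:  ", best)                                     # ~0.0
```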


So if a person decides to compare, say, gradient descent with a genetic algorithm head to head, is he, to continue the analogy, a fool?

It's a misunderstanding of elementary things.

 
mytarmailS #:

My attacks were about the fact that you can't compare different AOs under the same conditions and decide which one is good and which one is bad...

Each AO has its own kind of optimisation surface that it suits.

The CHOICE of AO depends on the optimisation surface, not on subjective preferences.

I compare algorithms on three completely different test functions, so the specific advantages of each algorithm are visible in separate tests: you can see where each is strong and where it is weak, and therefore choose the best one for the researcher's specific task. There is no subjectivity in these tests; on the contrary, they are as objective as possible.

Most algorithms specialised for neural networks have some form of smoothing, or moments, in their logic; they are designed to exploit smooth derivatives of the target function. The tests will show where they are strong and where they are not.
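For context, a minimal sketch of what "moments" means here: ADAM keeps exponentially smoothed estimates of the gradient and of its square. This is the textbook update written out in Python, not code from this thread:

```python
import numpy as np

def adam_step(g, m, v, t, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
    """One ADAM update: m and v are the exponentially smoothed first and
    second moments of the gradient -- the 'smoothing' referred to above."""
    m = b1*m + (1 - b1)*g        # smoothed gradient direction
    v = b2*v + (1 - b2)*g**2     # smoothed gradient magnitude
    m_hat = m / (1 - b1**t)      # bias correction for the first steps
    v_hat = v / (1 - b2**t)
    return lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Minimise f(x) = (x - 3)^2, whose derivative 2(x - 3) is smooth --
# exactly the kind of surface these moment-based methods assume.
x, m, v = 0.0, 0.0, 0.0
for t in range(1, 2001):
    step, m, v = adam_step(2*(x - 3), m, v, t)
    x -= step
print(x)   # -> close to 3.0
```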

 
Andrey Dik #:

I compare algorithms on three completely different test functions, so the specific advantages of each algorithm are visible in separate tests: you can see where each is strong and where it is weak, and therefore choose the best one for the researcher's specific task. There is no subjectivity in these tests; on the contrary, they are as objective as possible.

Most algorithms specialised for neural networks have some form of smoothing, or moments, in their logic; they are designed to exploit smooth derivatives of the target function. The tests will show where they are strong and where they are not.

You can't compare different types of AO under the same conditions because they solve different problems; that's my message.

 
mytarmailS #:

You can't compare different types of AO under the same conditions because they solve different problems; that's my message.

Apparently what I said last time wasn't understood... Once again: it is possible to compare, and that is exactly why different test problems are used, so that algorithms are compared in a way adequate to the specifics of the task. The tests show which tasks each AO is best suited to, so you can choose among them.

For example: if ADAM shows superiority on smooth functions, great! Then that is exactly how it should be used; otherwise it is better to pick another algorithm. But if ADAM bombs on all the tests, well, then we should choose something better, that's all. Nowadays most people just pick something specific because it is "fashionable", without knowing whether they have made the best choice or not.
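As a hedged illustration of "choose by test results, not fashion", here is a toy per-task benchmark; the two optimisers and the test functions are deliberately simplistic stand-ins of my own:

```python
import numpy as np

rng = np.random.default_rng(1)

def gd(f, x=2.0, lr=0.001, steps=5000, h=1e-5):
    """Local method: gradient descent with a numerical derivative."""
    for _ in range(steps):
        x -= lr * (f(x + h) - f(x - h)) / (2*h)
    return f(x)

def random_search(f, trials=5000):
    """Global method: blind uniform sampling over the search box."""
    return min(f(x) for x in rng.uniform(-5, 5, size=trials))

tests = {
    "smooth":     lambda x: (x - 1)**2,
    "multimodal": lambda x: 10 + x**2 - 10*np.cos(2*np.pi*x),
}
for name, f in tests.items():
    winner = min([gd, random_search], key=lambda opt: opt(f))
    print(f"{name}: best AO is {winner.__name__}")
# Expected outcome: gd wins the smooth task, random_search the multimodal one.
```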

 
Perfect class labelling is already perfectly balanced by definition; it's a tautology, "the oil is oily". There is nothing left to improve there.

And selecting models by custom metrics can sometimes be useful, I suppose. But by and large it's all window dressing.
 
Maxim Dmitrievsky #:
Perfect class labelling is already perfectly balanced by definition; it's a tautology, "the oil is oily". There is nothing left to improve there.

And selecting models by custom metrics can sometimes be useful, I suppose. But by and large it's all window dressing.

Yeah. But it's good enough for a colleague to understand why derived problems are needed.
Ideally, one would have the complete set of all combinations of model parameters (a full enumeration) and classify those sets by their stability on the OOS. That's the theory; in practice it is not a feasible task.
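A minimal sketch of that idea on a deliberately tiny grid (toy data and a made-up scoring function; a real version would backtest a strategy per parameter set):

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for a backtest score of one parameter set on one segment.
def score(params, segment):
    a, b = params
    return -((a - segment.mean())**2 + (b - segment.std())**2)

data = rng.normal(size=1000)
train, oos = data[:700], data[700:]

# "Complete enumeration": every combination on a coarse 21 x 21 grid.
# The point above is exactly that this explodes for real model spaces.
grid = itertools.product(np.linspace(-1, 1, 21), np.linspace(0.5, 1.5, 21))

# Rank parameter sets by stability: smallest train-vs-OOS score gap first.
ranked = sorted(grid, key=lambda p: abs(score(p, train) - score(p, oos)))
print("most stable set:", ranked[0])
```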

 
Obviously one shouldn't directly compare algorithms meant for different things. It's just interesting to see how they converge, and whether there are newer ones. I've heard about all sorts of custom NN architectures based on other learning principles, but I haven't seen them.
 
Maxim Dmitrievsky #:
And selecting models by custom metrics can sometimes be useful, I suppose. But by and large it's all window dressing.

My intuition tells me that this will soon become commonplace for ML in trading.

Not that it will be a guarantee of profit, but not using it will be considered a guarantee of failure).

 

San Sanych is right about the problem of applying optimisation results obtained on history, given the non-stationarity of the market. The trouble is that such optimisation is the only thing we have. For example, his own approaches to feature selection are also optimisation on history, just more elaborate).

Or take cross-validation of some kind, for example: that, too, is optimisation on history.
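A small sketch of why that is so: even walk-forward cross-validation only ever scores candidate rules on past data (synthetic series and a toy "model" below, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
prices = rng.normal(size=1200).cumsum()   # synthetic stand-in price series

def walk_forward_score(series, n_folds=5, train_len=150, test_len=50):
    """Average out-of-sample score across rolling train/test windows.
    Whatever rule maximises this is still selected on historical data."""
    scores = []
    for k in range(n_folds):
        lo = k * test_len
        train = series[lo : lo + train_len]
        test = series[lo + train_len : lo + train_len + test_len]
        direction = np.sign(train[-1] - train[0])   # toy trend-following rule
        scores.append(direction * (test[-1] - test[0]))
    return float(np.mean(scores))

print(walk_forward_score(prices))
```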

 
Aleksey Nikolayev #:

My intuition tells me that this will soon become commonplace for ML in trading.

Not that it will be a guarantee of profit, but not using it will be considered a guarantee of failure).

Please, take the MT5 optimiser and use it. It became a commonplace method long ago 😀

However, TSes mostly earn on other principles (found differently). So optimisation will not grow in importance; I wouldn't even devote that much time to such nuances.

A simple analogy: optimising a random TS in MT5 by different criteria does not lead to success. But optimising an initially good TS leads to success under any criterion.

All these peaks and plateaus have nothing to do with the presence or absence of regularities. They may coincide by chance on the train and test sets, or they may not. That is not the subject of Andrey's research.