Machine learning in trading: theory, models, practice and algo-trading - page 1512

 
Kesha Rutov:

In fact, sometimes I envy Max Denisenko and his dependent position. Sometimes you get so fed up at work that you develop nervous tics; you come home at 3 a.m. with no time left for your wife or children, you just go to bed and pass out, then get up again at 8, and so on in a circle...

All right then, you shouldn't be having kids, and you shouldn't have a woman yet either. First you'll have to grow into a man. Once you grow up, you won't feel the need to write nonsense anymore. Though you're more likely to end up in jail or an asylum for trying to change things by any means, because in your case it's that obvious. And yes, don't make idols for yourself just yet, or Sanych will get tired of hiccupping.

 
What would you call training a net simultaneously on several symbols, i.e. when the same net parameters (weights) are used on several symbols?
 
Andrey Dik:
What would you call training a net simultaneously on several symbols, i.e. when the same net parameters (weights) are used on several symbols?

I would call it "drawdown learning", for gravitas. Wait for an article on "drawdown learning" by Pereverenko or Denisenko, with advanced OOP (inheritance depth > 5), 90% accuracy and a one-to-one ratio of profit to drawdown on the test; or, as in the good old days, without any test at all, everything on the training sample and with a martingale, a pure exponential)))

 
Kesha Rutov:

I would call it "drawdown learning", for gravitas. Wait for an article on "drawdown learning" by Pereverenko or Denisenko, with advanced OOP (inheritance depth > 5), 90% accuracy and a one-to-one ratio of profit to drawdown on the test; or, as in the good old days, without any test at all, everything on the training sample and with a martingale, a pure exponential)))

And to the point?

 
Andrey Dik:

And to the point?

To the point, it usually goes like this: the input is a "bundle" of somehow pre-processed time series, the output is a vector of future properties for each series. But this requires synchronized series, and you can't get those from the broker (dealing center); you have to build them yourself. A slight desynchronization and you get a tester grail, while on the real account it's a drain.
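The synchronization requirement described above can be sketched as follows; a minimal, self-contained Python illustration with made-up symbol names and toy bar data (not from any real feed), keeping only the timestamps present in every symbol's series:

```python
# Hedged sketch: strictly aligning bar series from several symbols by
# timestamp before building the input "bundle". Data are illustrative.

def align_series(series_by_symbol):
    """Keep only timestamps present in every symbol's series,
    so the resulting bundle is strictly synchronized."""
    common = set.intersection(*(set(s) for s in series_by_symbol.values()))
    return {
        sym: [price for ts, price in sorted(s.items()) if ts in common]
        for sym, s in series_by_symbol.items()
    }

eurusd = {1: 1.10, 2: 1.11, 3: 1.12}   # timestamp -> close
gbpusd = {1: 1.30, 3: 1.31, 4: 1.32}   # bar 2 is missing here

aligned = align_series({"EURUSD": eurusd, "GBPUSD": gbpusd})
# Only timestamps 1 and 3 survive for both symbols.
```

Dropping non-common bars (rather than forward-filling them) is the conservative choice: it avoids exactly the slight desynchronization that produces tester-only grails.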

 
Andrey Dik:
What would you call training a net simultaneously on several symbols, i.e. when the same net parameters (weights) are used on several symbols?

Transfer learning, maybe

 
Maxim Dmitrievsky:

Transfer learning, maybe

An article on "drawdown learning" urgently needs to be written.

Transfer learning is when selected neurons/layers (usually the first 1-2 layers) trained on one dataset or algorithm are reused in another net as a spare part; it is used, for example, for style transfer on images.
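The "spare part" idea above can be sketched in a few lines; a minimal illustration, assuming a toy layer representation (plain dicts, invented for this example, not any particular framework's API): the first layers of an already trained net are copied in and frozen, and only the fresh layers remain trainable.

```python
# Hedged sketch of transfer learning as described above: reuse and
# freeze the first n_reuse layers of a trained source net, attach a
# fresh trainable head. Layer structure is a toy illustration.

def transfer_first_layers(source_layers, head_layers, n_reuse=2):
    """Build a new net that reuses (frozen) the first n_reuse layers
    of the source net, followed by fresh trainable layers."""
    reused = [{"weights": list(layer["weights"]), "frozen": True}
              for layer in source_layers[:n_reuse]]
    fresh = [{"weights": list(layer["weights"]), "frozen": False}
             for layer in head_layers]
    return reused + fresh

trained = [{"weights": [0.5, -0.2]},   # pretrained on dataset A
           {"weights": [0.1, 0.9]},
           {"weights": [0.3]}]
head = [{"weights": [0.0, 0.0]}]       # to be trained on dataset B

net = transfer_first_layers(trained, head, n_reuse=2)
# net[0], net[1]: pretrained and frozen; net[2]: fresh trainable head.
```

In real frameworks the same effect is achieved by copying layer weights and disabling their gradient updates; the mechanics here only show the shape of the idea.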

 
Kesha Rutov:

To the point, it usually goes like this: the input is a "bundle" of somehow pre-processed time series, the output is a vector of future properties for each series. But this requires synchronized series, and you can't get those from the broker (dealing center); you have to build them yourself. A slight desynchronization and you get a tester grail, while on the real account it's a drain.

Synchronization of the series is not a problem here, because there is no binding to specific points of the series; at least that's how I do it.

 
Maxim Dmitrievsky:

Transfer learning, maybe

The point of this exercise is to isolate stable patterns (or whatever you want to call them), stable in the sense that they work on different time series. My tentative experiments in this area show that this is possible in principle... and as a consequence robustness increases (the degree of overfitting decreases).
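The joint-training idea in the quote and reply can be sketched minimally; a toy illustration with an invented one-weight linear model and made-up symbol data, only showing the mechanics of one set of weights updated against several symbols at once:

```python
# Hedged sketch: one shared set of weights trained jointly on several
# symbols, with the gradient averaged across all of them. Only patterns
# common to every symbol can pull the shared weight in one direction.
# Toy model (y = w*x) and toy data; not a working strategy.

def joint_sgd_step(w, batches, lr=0.1):
    """One gradient step on a one-weight linear model y = w*x,
    squared-error gradient averaged over all symbols' samples."""
    grad, n = 0.0, 0
    for samples in batches.values():        # one batch per symbol
        for x, y in samples:
            grad += 2 * (w * x - y) * x     # d/dw of (w*x - y)^2
            n += 1
    return w - lr * grad / n

batches = {
    "EURUSD": [(1.0, 2.0), (2.0, 4.0)],     # both symbols roughly
    "GBPUSD": [(1.0, 2.1), (3.0, 6.0)],     # follow y = 2x
}
w = 0.0
for _ in range(200):
    w = joint_sgd_step(w, batches)
# w converges near 2: the pattern shared by both symbols survives.
```

A symbol-specific quirk present in only one series contributes a gradient the other series do not reinforce, which is exactly the fitting-reduction effect described above.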

 
Kesha Rutov:

An article on "drawdown learning" urgently needs to be written.

Transfer learning is when selected neurons/layers (usually the first 1-2 layers) trained on one dataset or algorithm are reused in another net as a spare part; it is used, for example, for style transfer on images.

no snotty ice
