Machine learning in trading: theory, models, practice and algo-trading - page 3297

 
Andrey Dik #:
I can't help but share some stunning news (spot-on for me): an even stronger algorithm than SSG has been found.

This is a good thing indeed.

 
Training is, of course, a broader concept than optimisation. And it uses its own evaluation criteria.

The topic is called: MOE.
 
Maxim Dmitrievsky #:
You're confusing entities. You are trying to fit optimisation to approximation, or vice versa.

Approximation and optimisation are different approaches to solving machine learning problems.

If I understood correctly, in algo-trading approximation is the creation of the TS itself: you want a martingale - created; you want a scalper - created; you want patterns - created, and so on. You can have MO methods create it.

And optimisation is the tuning/study of the already created TS.

Since, unlike a human, MO also takes part in creating TSs through the number cruncher, we can combine approximation and optimisation. Did I get that right?
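To make the distinction concrete, here is a minimal Python sketch under assumed names (the prices series is synthetic; sma, crossover_pnl and all parameter values are hypothetical illustrations, not anyone's actual TS): approximation builds a description of the data itself, while optimisation only tunes the parameters of a rule that already exists.

```python
# Illustrative only: synthetic quotes, hypothetical rule names.
import numpy as np

rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0, 1, 1000))     # synthetic "quotes"

# --- Approximation: create a model (description) of the data itself ---
t = np.linspace(0.0, 1.0, len(prices))
coeffs = np.polyfit(t, prices, deg=5)                # fit a polynomial to the series
fitted = np.polyval(coeffs, t)
print("approximation RMSE:", float(np.sqrt(np.mean((fitted - prices) ** 2))))

# --- Optimisation: tune parameters of an already created TS ---
def sma(x, n):
    """Simple moving average, 'valid' part only."""
    return np.convolve(x, np.ones(n) / n, mode="valid")

def crossover_pnl(prices, fast, slow):
    """P&L of a fixed MA-crossover rule; only its parameters are tuned."""
    slow_ma = sma(prices, slow)
    fast_ma = sma(prices, fast)[-len(slow_ma):]      # align the two averages
    signal = np.sign(fast_ma - slow_ma)[:-1]         # position held on the next bar
    returns = np.diff(prices[-len(slow_ma):])
    return float(np.sum(signal * returns))

best = max(
    ((fast, slow, crossover_pnl(prices, fast, slow))
     for fast in range(5, 25, 5) for slow in range(30, 80, 10)),
    key=lambda r: r[2],
)
print("best (fast, slow, pnl):", best)
```

The first part "creates" a description of the series; the second only searches the parameter grid of a rule that is already fixed - roughly the two activities contrasted above.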

 
fxsaber #:

If I understood correctly, in algo-trading approximation is the creation of the TS itself: you want a martingale - created; you want a scalper - created; you want patterns - created, and so on. You can have MO methods create it.

And optimisation is the tuning/study of the already created TS.

Since, unlike a human, MO also takes part in creating TSs through the number cruncher, we can combine approximation and optimisation. Did I get that right?

Exactly so
 
Approximation by a high-degree polynomial leads to overfitting. The error variance decreases, but the bias on new data increases. It's the same as adding a lot of features. And that's just the basics.
You can't tune an overfitted model - it doesn't generalise well. You can't do causal inference either, because there are no comparisons between test and control samples to work with. The model is wrong everywhere on the test sample, so it is impossible to derive a correction. It's easier to throw the model away.
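A small numerical sketch of that point on synthetic data (the sine dependence and noise level are assumptions, not market data): as the polynomial degree grows, the in-sample error keeps falling while the error on fresh data starts to rise.

```python
# Toy bias/variance example: a high polynomial degree fits the noise.
import numpy as np

rng = np.random.default_rng(1)
true_f = lambda x: np.sin(3 * x)                     # assumed underlying dependence
x_train = np.sort(rng.uniform(-1, 1, 40))
x_test = np.sort(rng.uniform(-1, 1, 40))
y_train = true_f(x_train) + rng.normal(0, 0.2, 40)
y_test = true_f(x_test) + rng.normal(0, 0.2, 40)

for deg in (1, 3, 6, 12):
    c = np.polyfit(x_train, y_train, deg)
    mse_in = np.mean((np.polyval(c, x_train) - y_train) ** 2)
    mse_out = np.mean((np.polyval(c, x_test) - y_test) ** 2)
    print(f"degree {deg:2d}: train MSE {mse_in:.3f}, test MSE {mse_out:.3f}")
```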
 
Maxim Dmitrievsky #:
Exactly

Interestingly, in terms of the amount of data (quotes) at its disposal, the human brain (as a neural network) compares to MO the way an infusorian compares to a human.

However, such primitive humans have proven that they can create pretty good working TSs. It turns out that creating a working TS doesn't require such a huge amount of data.

It is a mystery to me how, for example, people arrived at working scalper models. It was done almost entirely without number crunchers.


The scenario for this must have been something like this:

  1. I often notice some kind of flat (just from staring at the screen for a few days).
  2. I'll try to make money on it with a primitive TS.
  3. It loses, but not by much. I should refine the TS a bit. I looked at the trading history - it looks like something can be improved.
  4. It started making a small profit. Repeat point 3.
No number cruncher. I just noticed point 1 and started acting on it. The probability of this approach working seems close to zero, but somehow it works. Some kind of trial-and-error method that works.


Apparently, at some subconscious level the human brain is still able to find "patterns" in an extremely small amount of data. You can't call it luck. It's a mystery.

 
fxsaber #:

Interestingly, in terms of the amount of data (quotes) at its disposal, the human brain (as a neural network) compares to MO the way an infusorian compares to a human.

However, such primitive humans have proven that they can create pretty good working TSs. It turns out that creating a working TS doesn't require such a huge amount of data.

It is a mystery to me how, for example, people arrived at working scalper models. It was done almost entirely without number crunchers.


The scenario for this was apparently something like this:

  1. I often notice some kind of flat (just from staring at the screen for a few days).
  2. I'll try to make money on it with a primitive TS.
  3. It loses, but not by much. I should refine the TS a bit. I looked at the trading history - it looks like something can be improved.
  4. It started making a small profit. Repeat point 3.
No number cruncher. I just noticed point 1 and started acting on it. The probability of this approach working seems close to zero, but somehow it works. Some kind of trial-and-error method that works.
One-shot learning: when a large NN (the brain), pre-trained on unrelated data, is fine-tuned on just a few new examples. If the model has already learnt the laws of the world, it cracks a new task with a cursory glance.

This is how large language models, in particular, are adapted to new tasks. But if you force the model to learn the new examples for too long, it will start to forget its previous experience and become biased towards the new data.
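As a hedged sketch of that idea (PyTorch, a toy network instead of a real pre-trained model; all sizes and names are assumptions): the "pre-trained" body is frozen and only a small head is adapted on a handful of new examples, and training is kept short precisely so the old experience is not overwritten.

```python
# Few-shot adaptation sketch: freeze the pre-trained body, tune a small head.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for a large pre-trained network (the "brain"); weights are random here.
backbone = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU())
head = nn.Linear(64, 2)                       # small head for the new 2-class task

for p in backbone.parameters():               # keep the "prior knowledge" fixed
    p.requires_grad_(False)

x_few = torch.randn(5, 10)                    # only a handful of new examples
y_few = torch.randint(0, 2, (5,))

opt = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(20):                        # few steps: training long on 5 examples
    opt.zero_grad()                           # would just memorise them
    loss = loss_fn(head(backbone(x_few)), y_few)
    loss.backward()
    opt.step()

print("few-shot loss:", float(loss))
```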
 
fxsaber #:

Interestingly, in terms of the amount of data (quotes) at its disposal, the human brain (as a neural network) compares to MO the way an infusorian compares to a human.

150 billion neurons, each with not one output but many. AI will not grow to that level any time soon, if ever.
In terms of intelligence, an NN gets compared to a cockroach: run, bite, run away.

 
Maxim Dmitrievsky #:
One-shot learning: when a large NN (the brain), pre-trained on unrelated data, is fine-tuned on just a few new examples. If the model has already learnt the laws of the world, it cracks a new task with a cursory glance.

Here you yourself have shown that a brain pre-trained on unrelated data solves specific problems it did not know before. And yet you say that extra "knowledge" is not needed.

 
fxsaber #:

Apparently, at some subconscious level the human brain is still able to find "patterns" in an extremely small amount of data. You can't call it luck. It's a mystery.

In fact, a trader simultaneously processes much more information, and in more ways, than MO models do when applied to trading. Besides, the brain is armed with all sorts of knowledge that is not related to trading but helps to solve trading tasks.
