Machine learning in trading: theory, models, practice and algo-trading - page 3049

 

Read the article.

What good is this article to us?

It's completely dubious.

And the reason is the following.

Offhand, I can't recall any publications on AI (ML) that actually predict the future. They train a model on handwritten letters, then try to recognise those handwritten letters. But the model is FUNDAMENTALLY not trained to predict which letter will be written next.

That's the problem we have.

We are trying to use ML (the same kind of work we did with candlestick combinations) to find some patterns in the predictors that will give a correct prediction. But there is no guarantee that the patterns found will give correct predictions in the future. "Right" patterns will produce errors and "wrong" patterns will predict correctly. The reason lies in the classification algorithms themselves, which output the PROBABILITY of a class, not the class itself. We use the most primitive threshold of 0.5 to split into classes. And if during training the probability of a "correct" pattern = 0.5000001, why do we take that probability as the class label?
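A toy sketch of the 0.5-threshold point (pure NumPy, all numbers hypothetical): a probability of 0.5000001 and a probability of 0.9 land in the same class, even though only the second one says anything. One common workaround is to require a confidence margin around 0.5 before acting:

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical class probabilities from some classifier
proba = rng.uniform(0.0, 1.0, size=1000)

# naive rule: everything above 0.5 is class 1
naive_signals = proba > 0.5

# hedged alternative: act only when the model is actually confident
margin = 0.10  # hypothetical confidence margin around 0.5
confident = np.abs(proba - 0.5) > margin
filtered_signals = naive_signals & confident

print(f"naive signals:    {naive_signals.sum()}")
print(f"filtered signals: {filtered_signals.sum()}")  # fewer, each backed by p far from 0.5
```

The margin value is a free parameter here; the point is only that a bare 0.5 cut treats 0.5000001 as a full-strength class-1 signal.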

To get away from this, we start overcomplicating things, but the values of the predictors among which we look for patterns are either NOT stationary or pseudo-stationary, and have only a remote relation to the price!

 
Aleksey Nikolayev #:

We have the influence of the environment on the agent, but is there an influence of the agent on the environment? Probably, it is possible to introduce this second influence artificially, but does it make sense?

It all depends on the task at hand.

If we forecast a ready-made target, as the overwhelming majority does, then we have no influence on the environment, and there is no need for RL.

But if the task is, for example, position management, stops, take-profits...

Asset management is the environment that we (the agent) manage.


The agent decides:

- whether to place an order or not,

- at what price,

- when to cancel the order or move it,

- what to do when the floating loss on the current position exceeds n pips,

- and what to do after five losses in a row.


You see, this is a completely different level of task, with many states (outputs), not a primitive up/down classification.


To make it even simpler: we can't manage the market, but we can manage position and risk, because those actually are under our control!
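A minimal sketch of what such a state/action space might look like (the names, thresholds and the toy policy are all my own assumptions, not a tested trading system):

```python
from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    """Hypothetical action space for a position-management agent."""
    HOLD = auto()
    OPEN_LONG = auto()
    OPEN_SHORT = auto()
    MOVE_STOP = auto()
    CLOSE = auto()

@dataclass
class State:
    position: int           # -1 short, 0 flat, +1 long
    unrealized_pips: float  # current floating P/L in pips
    losses_in_a_row: int    # streak of losing trades

def policy(state: State, max_loss_pips: float = 50.0, max_streak: int = 5) -> Action:
    """Toy rule-based policy answering the questions above."""
    if state.position != 0 and state.unrealized_pips < -max_loss_pips:
        return Action.CLOSE   # loss exceeded n pips -> cut it
    if state.losses_in_a_row >= max_streak:
        return Action.HOLD    # five losses in a row -> stand aside
    if state.position == 0:
        return Action.OPEN_LONG  # placeholder entry rule
    return Action.HOLD
```

An RL agent would learn `policy` from rewards instead of hard-coding it, but the state and action sets already show how much richer this is than up/down classification.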


Aleksey Nikolayev #:

Smoothness of some analogue of profit can probably be achieved somehow (for example, with something like kernel smoothing). But I very much doubt monotonicity.

I don't really understand what is meant by "smoothness" and why "smoothness" is necessary....

Maybe we can use multi-criteria optimisation to find the best solution for this.

 
mytarmailS #:

it all depends on the task at hand... [...]

The agent could also do averaging.
 
Valeriy Yastremskiy #:
The agent could also do averaging.
https://www.mql5.com/ru/code/22915
 
СанСаныч Фоменко #:

The basis of the financial result of trading is the price movement - a non-stationary random process.

Are we trying, by various tricks, to turn a non-stationary random process into a smooth and monotonic one? Maybe we are striding too wide? Especially considering that a classification error below 20%(!) outside the training set is extremely difficult to achieve. Maybe we should start by working on reducing the classification error?

I was talking about the properties of the loss function that is minimised when training the model. More precisely, about its ideal form.

 
mytarmailS #:

I don't really understand what goes into smoothness and why smoothness is necessary.

These are the basics of optimisation. Smoothness makes it possible to optimise with a gradient; otherwise only brute-force algorithms remain.
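A quick numerical illustration of why that matters (both losses are hypothetical toys): on a smooth loss a finite-difference gradient points downhill and gradient descent converges, while on a step-shaped loss the gradient is zero almost everywhere, so gradient methods have nothing to follow and only search remains:

```python
def smooth_loss(w):
    """Differentiable loss, e.g. a squared error."""
    return (w - 2.0) ** 2

def step_loss(w):
    """Non-smooth loss, like counting losing trades: flat almost everywhere."""
    return float(w < 2.0)

def num_grad(f, w, eps=1e-6):
    """Central finite-difference gradient."""
    return (f(w + eps) - f(w - eps)) / (2 * eps)

# gradient descent works on the smooth loss
w = 0.0
for _ in range(200):
    w -= 0.1 * num_grad(smooth_loss, w)
print(round(w, 3))  # converges near the minimum at 2.0

# on the step loss the gradient is 0 almost everywhere,
# so gradient descent goes nowhere -- only brute-force search remains
print(num_grad(step_loss, 0.0))
```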

 
Aleksey Nikolayev #:

It's the basics of optimisation

Yes, I know that, but the recent discussion here was about the smoothness of the equity curve itself, and I didn't follow it closely, which is why I'm asking what you mean.

 

Off-topic.

one of the best deals ever.

gold

risk/reward 1 in 24

 
mytarmailS #:

Offtopic...

one of the best deals ever.

gold

risk/reward 1 in 24

"Water body" is a great one)
 

Here's what I think: we need to identify the patterns that different instruments have. That is, a pattern is either a rule over a set of predictors, or a quantum segment of a predictor's range. Then we generate a random series and look for these patterns in it. We can generate 100 series (more than one, to reduce the probability of error). If a pattern does not occur in them, or occurs very rarely, we consider it plausible; otherwise we conclude that the rule was found by chance, i.e. it does not describe a long-term regularity.

Thus we get a set of rules+predictors that can detect a regularity in our time series.

Will someone try to do this in R or Python?
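A rough sketch in Python of the test described above (the pattern rule, the random-walk generator and all parameters are placeholders for whatever rules the model actually produced):

```python
import numpy as np

rng = np.random.default_rng(42)

def pattern_hits(prices: np.ndarray) -> int:
    """Hypothetical pattern: four consecutive up-closes.
    Stands in for whatever rule / quantum segment the model found."""
    r = np.sign(np.diff(prices))
    hits = 0
    for i in range(len(r) - 3):
        if r[i] > 0 and r[i + 1] > 0 and r[i + 2] > 0 and r[i + 3] > 0:
            hits += 1
    return hits

# stand-in for a real instrument's price series
real = np.cumsum(rng.normal(0.0002, 0.01, 2000))
real_count = pattern_hits(real)

# frequency of the pattern on 100 randomly generated walks
null_counts = [pattern_hits(np.cumsum(rng.normal(0.0, 0.01, 2000)))
               for _ in range(100)]

# fraction of random series where the pattern is at least as frequent;
# keep the rule only if this is small, i.e. the pattern is rare on random charts
p_value = float(np.mean([c >= real_count for c in null_counts]))
print(real_count, p_value)
```

With real data, `real` would be the instrument's prices and the random generator could be replaced by shuffled or bootstrapped returns, so the surrogate series share the instrument's marginal distribution.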
