Machine learning in trading: theory, models, practice and algo-trading - page 2062

 
Maxim Dmitrievsky:

Try making a time filter, for example for the Asian/Pacific session, to see if it works in the flat

No, it doesn't work, it doesn't work ((

Anyway, the way I see our problem: the settings of our TS (trading system) are not adequate to the new price movements ...


1) We need to develop a module that extracts objective characteristics of the market (the "OX module").

2) For each state of the "OX module", rules of behavior adequate to that state should be worked out.

From this the following data can be formed:

X - the "OX module" state

Y (target) - the adequate behavior for that state

3) Train the model to produce the adequate behavior for each state.


Looks like classic RL?
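
A minimal sketch of how such an (X, Y) dataset could be assembled and a model fitted, assuming purely hypothetical state features and an illustrative labeling rule (the feature names, the 0.001 threshold and the forecast horizon are my assumptions, not the actual "OX module"):

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

def make_state_features(close: pd.Series) -> pd.DataFrame:
    """Hypothetical 'OX module': a few objective characteristics of the market."""
    ret = close.pct_change()
    return pd.DataFrame({
        "vol_20": ret.rolling(20).std(),      # short-term volatility
        "trend_50": close.pct_change(50),     # medium-term drift
        "range_20": (close.rolling(20).max() - close.rolling(20).min()) / close,  # relative range
    })

def make_behavior_labels(close: pd.Series, horizon: int = 10) -> pd.Series:
    """Illustrative target Y: which behavior would have been adequate over the
    next `horizon` bars (1 = buy, -1 = sell, 0 = stay flat)."""
    fwd = close.shift(-horizon) / close - 1.0
    return np.sign(fwd).where(fwd.abs() > 0.001, 0.0)

# close = ...  # a pd.Series of prices
# data = pd.concat([make_state_features(close),
#                   make_behavior_labels(close).rename("y")], axis=1).dropna()
# model = RandomForestClassifier(n_estimators=200).fit(data.drop(columns="y"), data["y"])
```

Here "adequate behavior" is defined by looking ahead over a fixed horizon, which is only one possible labeling choice among many.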

 
mytarmailS:

No, it doesn't work, it doesn't work ((

Anyway, the way I see our problem: the settings of our TS (trading system) are not adequate to the new price movements ...


1) We need to develop a module that extracts objective characteristics of the market (the "OX module").

2) For each state of the "OX module", rules of behavior adequate to that state should be worked out.

From this the following data can be formed:

X - the "OX module" state

Y (target) - the adequate behavior for that state

3) Train the model to produce the adequate behavior for each state.


Looks like classic RL?

RL doesn't work in a random environment. You have to look for sets and find patterns, such as seasonal ones.
 
Maxim Dmitrievsky:
RL doesn't work in a random environment. You have to look for sets and find patterns, such as seasonal ones.

Volatility fluctuations within the day interfere with the search for intraday patterns. You have to get rid of them somehow. Possible ways:

1) Rescaling the increments to account for the time-of-day volatility.

2) Switching to a new intraday time in which the variance grows evenly (see the sketch after this list).

3) Using a zigzag. The sizes of its legs do not depend on volatility fluctuations. The positions of its tops in time do, of course, depend on volatility (tops are more frequent where it is high), but after switching to the uniform time these clusters disappear.
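
A sketch of what option 2 could look like, assuming 1-minute bars in a pandas Series with a DatetimeIndex (the rescaling to 1440 "new minutes" per day is an arbitrary illustrative choice):

```python
import numpy as np
import pandas as pd

def uniform_variance_clock(close: pd.Series) -> pd.Series:
    """Map each bar's minute of the day to a new 'time' in which the variance
    of the increments grows, on average, linearly over the day."""
    inc = close.diff()
    minute = close.index.hour * 60 + close.index.minute   # minute of day, 0..1439
    D = inc.pow(2).groupby(minute).mean()                  # avg squared increment per minute
    new_clock = D.cumsum() / D.sum() * 1440.0              # cumulative variance -> new minutes
    # new intraday time stamp for every bar
    return pd.Series(new_clock.reindex(minute).to_numpy(), index=close.index)
```

Bars can then be re-aggregated on this new clock so that each "new minute" carries roughly the same variance.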

 
Aleksey Nikolayev:

1) Rescaling the increments to account for the time-of-day volatility.

How do you see it?

 
mytarmailS:

How do you see it?

We compute Di, the average squared increment for the i-th minute of the day. Then we divide every increment by its corresponding di = sqrt(Di). We sum the rescaled increments into a new series and look for deviations from a random walk in it. The price is distorted, but the time scale does not change.
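
A minimal Python sketch of this procedure, assuming 1-minute bars with a DatetimeIndex (the function name is mine, not from the thread):

```python
import numpy as np
import pandas as pd

def rescale_by_minute_vol(close: pd.Series) -> pd.Series:
    """Divide each increment by d_i = sqrt(D_i), where D_i is the average
    squared increment for the i-th minute of the day, then cumulate the
    rescaled increments back into a price-like series."""
    inc = close.diff()
    minute = close.index.hour * 60 + close.index.minute
    D = inc.pow(2).groupby(minute).mean()       # D_i for each minute of the day
    d = np.sqrt(D).reindex(minute).to_numpy()   # d_i matched to every bar
    rescaled = inc / d                          # volatility-normalized increments
    return rescaled.fillna(0.0).cumsum()        # distorted price, unchanged time axis
```

The resulting series keeps the original time stamps, so any deviation from a random walk (for example, non-zero autocorrelation of the rescaled increments) can be attributed to structure rather than to the daily volatility pattern.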

 
Aleksey Nikolayev:

We compute Di, the average squared increment for the i-th minute of the day. Then we divide every increment by its corresponding di = sqrt(Di). We sum the rescaled increments into a new series and look for deviations from a random walk in it. The price is distorted, but the time scale does not change.

Show the code and the result on the charts, because it is not very clear

 
Aleksey Nikolayev:

We compute Di, the average squared increment for the i-th minute of the day. Then we divide every increment by its corresponding di = sqrt(Di). We sum the rescaled increments into a new series and look for deviations from a random walk in it. The price is distorted, but the time scale does not change.


But won't the result change depending on the number of samples used to calculate the mean?

 
mytarmailS:

Show the code and the result on the graphs, because it is not very clear


I understand that you compute the average for specific minutes of the day, but that average will be different depending on the window: a week, a month, a year.

 
Evgeniy Chumakov:


But won't the result change depending on the number of samples used to calculate the mean?

Of course it will. We calculate it over whatever interval we are interested in, as long as it is not too small (two months or more).

 
mytarmailS:

Show the code and the result on the graphs, because it's not very clear

It's not difficult, I'm sure you can do it yourself if you want. The only thing: it's better to take close[i]-open[i] rather than close[i]-close[i-1] as the increments, to cope with gaps and missing quotes.
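
For completeness, a sketch of that choice of increments, assuming OHLC bars in a pandas DataFrame (the variable and function names are illustrative):

```python
import pandas as pd

def bar_increments(bars: pd.DataFrame) -> pd.Series:
    """Within-bar increments close[i] - open[i]: an overnight gap or a dropped
    bar never ends up inside a single increment, unlike close[i] - close[i-1]."""
    return bars["close"] - bars["open"]

# gap-sensitive alternative, for comparison:
# inc = bars["close"].diff()
```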