Machine learning in trading: theory, models, practice and algo-trading - page 2778

 
Valeriy Yastremskiy #:

I don't get it. Is it possible to isolate strong bounces by subtracting the right part from the left part relative to the centre, and to divide them into groups? Why longer correlations? These are short processes; the more obvious, stronger ones, perhaps?

In general, it may work with binding to time or to events. But the window size would have to be trained somehow.

And formalisation of events is not solved. Only time, apparently.

What do you see on this piece of graph, what pattern?


 
Valeriy Yastremskiy #:

The methods of averaging the parameters of a series are generally understandable, and the logic of their creators is also clear (not always and not completely, of course))). Indicators are built on this basis. The reason for the lag is clear. What is not clear is whether the lag can be removed.

I don't know; there was an idea here (not mine))) to generate rules from prices and indicators and check the result by trading signals.

But this is not a meaningful search for / selection of features.

Maybe build features from the prices of neighbouring currencies and their averages.

In general, I don't understand the selection algorithm yet.

Obviously, these are extremum levels, tick speeds, trend, the width of the trend corridor, the frequency of price returns to the corridor boundaries, on different time scales…

For the hundredth time I write: by the degree of information connection between the feature (predictor) and the target variable.
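A minimal sketch of the selection principle named above: rank candidate predictors by an estimate of their mutual information with the target. The histogram estimator, feature names and synthetic data here are illustrative assumptions, not the poster's actual code.

```python
import numpy as np

def mutual_info(x, y, bins=10):
    """Histogram estimate of I(X;Y) in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
n = 2000
target = rng.normal(size=n)
features = {
    "strong": target + 0.3 * rng.normal(size=n),  # tight link to target
    "weak":   target + 2.0 * rng.normal(size=n),  # loose link
    "noise":  rng.normal(size=n),                 # no link at all
}
scores = {name: mutual_info(f, target) for name, f in features.items()}
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)  # features ordered by information connection, best first
```

Packages such as scikit-learn provide more careful estimators of the same quantity (`mutual_info_regression`), but the ranking step is the same: keep the features whose information connection to the target is strongest.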

 
СанСаныч Фоменко #:

I didn't look at fractals, I looked at the chart.

He responded to my message with his chart, but it was just drunken rambling on the topic. The conversation was originally about fractals.

That's how the game of broken telephone works, through a chain of clowns.

 
Maxim Dmitrievsky #:

what do you see on this piece of graph, what pattern?

If you take the usual sliding window, you won't find any stable dependencies there

But if you draw it like this: time from the red point becomes reversible, and the increments moving forward from the point correlate with each other with increasing lag. The further from the point, the greater the lag.

The correlation will be negative, but if we mirror the graphs at the point, it becomes positive.

It turns out that in this case we should take that point as a reference and build the prediction window from it. This is what was meant by a stuttering window.

Technically it can be done in different ways.
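One way to illustrate the idea above: around a symmetric bounce, the increments walking forward out of the reference point are negatively correlated with the increments counted back into it, and mirroring one side flips the sign to positive. The synthetic V-shaped path and noise level below are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(1)
k = 300
# Path falling into the reference point (the "red point")...
into = np.cumsum(-0.5 + 0.2 * rng.normal(size=k))
# ...and a noisy retracement out of it: roughly the way in, mirrored in time.
out = into[::-1] + 0.05 * rng.normal(size=k)

back = np.diff(into)[::-1]   # i-th increment counted back from the point
fwd = np.diff(out)           # i-th increment walking forward from the point

corr = np.corrcoef(back, fwd)[0, 1]            # strongly negative
corr_mirror = np.corrcoef(-back, fwd)[0, 1]    # mirrored: strongly positive
print(round(corr, 2), round(corr_mirror, 2))
```

With an ordinary sliding window these two halves are just averaged together; anchoring the window at the reversal point is what exposes the dependence.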


 
СанСаныч Фоменко #:

For the hundredth time: by the degree of information connection between the feature (predictor) and the target variable.

This is selection by the best result. I asked about the primary selection algorithm. Why did you decide that the remaining possible features are less informative than those initially selected?

If it's over-selection, choosing by the result of the information measure while the initial selection is naive, that's one approach. A normal one: if the selected features include the resultant ones, the idea works.

Is there an algorithm for the initial selection of predictors? I would like to understand the logic: how to tell, initially, how a feature is related to the target.

What I/we have ))) so far is the same over-selection: pile things up, try them, understand what to cut, move on))))

 
Maxim Dmitrievsky #:

If you take the usual sliding window, you will not find stable dependencies there

But if you draw it like this: time from the red point becomes reversible, and increments in the future from the point correlate with each other with increasing lag. The further from the point, the greater the lag.

The correlation will be negative, but if we mirror the graphs from the point, it will be positive.

It turns out that in this case we should take that point as a reference and build the prediction window from it. This is what was meant by the stuttering window.

Technically it can be done in different ways.


Hehe, you want to watch fractality in the mirror)))))) Somehow you still have to find the reference points and the edges))))))

Nice idea, and there is even a sense of ripples on the water, of action and reaction))))

 
Maxim Dmitrievsky #:

He responded to my message with his chart, just drunken rambling on the topic. Initially we were talking about fractals.

That's how the game of broken telephone works, through a chain of clowns.

How are you all so good at (not) hearing each other? The reply was about Ulad's chart, I think, but I didn't understand the part about fractals either))))) Still, that's no reason at all...))))

 
Valeriy Yastremskiy #:

How are you all so good at (not) hearing each other? The reply was about Ulad's chart, I think, but I didn't understand the part about fractals either))))) Still, that's no reason at all...))))

Ulad replied to my post about fractals with a chart, even though no one asked him about it.
There is a chronology of messages.


Then I answered Sanych about how to improve the autocorrelogram through a non-standard sliding window.
 
Maxim Dmitrievsky #:
Ulad replied to my post about fractals with a chart, even though no one asked him about it.
There's a chronology of messages.

https://www.mql5.com/ru/forum/86386/page2774#comment_42491865

Then I answered Sanych about how to improve the autocorrelogram via a non-standard sliding window.

Never mind)))))

Ah, I understood it entirely as a reply to Sanych)))))

 
Valeriy Yastremskiy #:

This is selection by the best result. I was asking about the initial selection algorithm. Why did they decide that the remaining possible features are less informative than those initially selected?

If it's over-selection, choosing by the result of the information measure while the initial selection is naive, that's one approach. A normal one: if the selected features include the resultant ones, the idea works.

Is there an algorithm for the initial selection of predictors? I would like to understand the logic: how to tell, initially, how a feature is related to the target.

What I/we have ))) so far is the same over-selection: pile things up, try them, understand what to cut, move on))))

There is a measure of information connection, and there are packages that calculate such a measure. I already named it; I'm too lazy to look it up again.

We move the window and get a time series with its own statistical characteristics. If we select features with a strong information connection and small fluctuation of that connection, we will find a feature with a fairly constant predictive ability in the future. We hope so. Something for non-stationary markets.
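The stability filter described above can be sketched as follows: score each candidate feature against the target inside a moving window, then prefer features whose score is both high on average and stable across windows. Absolute rolling correlation stands in here for the information measure; the synthetic data and window length are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, win = 2000, 250
target = rng.normal(size=n)
stable = target + 0.5 * rng.normal(size=n)        # steady link to target
unstable = np.where(np.arange(n) < n // 2,
                    target, rng.normal(size=n))   # link that decays halfway

def rolling_scores(feature, target, win):
    """|correlation| with the target in each non-overlapping window."""
    out = []
    for s in range(0, len(target) - win + 1, win):
        f, t = feature[s:s + win], target[s:s + win]
        out.append(abs(np.corrcoef(f, t)[0, 1]))
    return np.array(out)

sc_stable = rolling_scores(stable, target, win)
sc_unstable = rolling_scores(unstable, target, win)
print("stable:  ", sc_stable.mean().round(2), "+/-", sc_stable.std().round(2))
print("unstable:", sc_unstable.mean().round(2), "+/-", sc_unstable.std().round(2))
```

The "unstable" feature scores perfectly in the first half and near zero afterwards, so a selection rule that looks only at the average score would keep it; penalising the fluctuation of the score across windows is what filters it out.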
