Market condition - flat or trend? Which dominates?

 

But I have another idea - to see what percentage of these "trends" are lost in the real-time analysis.

If at least half of the trends are detected correctly from the start - it's a mega-grail =)

 
Zigzag with channels... What are the channels??? What do they mean?

In my opinion, this is the price level at which the ZigZag ray can change direction... These channels are built on the current bar....

 

I think the number of ZZ segments must be about the same for the compared channel widths in order for the comparison to be correct. Has this condition been met? Also, for "narrow" channels, the history will give a set of "points" (i.e. values of the ratio in question for different segments), and one can look at their distribution. The point is to try to understand whether the results for "wide" channels really do not fit into this distribution.

P.S. Finally, this topic is out of the flat :)
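A minimal Python sketch (my illustration, not something from the thread) of the check lna01 proposes: collect the ratio values ("points") produced with a narrow channel, look at their distribution, and ask whether the values obtained with a wide channel fit into it. The arrays are synthetic placeholders for real measurements, and the central-90% band and the two-sample KS test are just two possible ways to formalise "fits the distribution".

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# Placeholders: replace with the ratio values actually measured per ZZ segment.
narrow_ratios = rng.normal(loc=1.0, scale=0.3, size=200)
wide_ratios = rng.normal(loc=1.1, scale=0.3, size=40)

# 1) What fraction of wide-channel values falls inside the central 90%
#    of the narrow-channel distribution?
lo, hi = np.percentile(narrow_ratios, [5, 95])
inside = np.mean((wide_ratios >= lo) & (wide_ratios <= hi))
print(f"central 90% band of narrow ratios: [{lo:.3f}, {hi:.3f}]")
print(f"fraction of wide-channel values inside it: {inside:.2%}")

# 2) Two-sample Kolmogorov-Smirnov test: could both samples come from
#    the same distribution?
stat, pvalue = ks_2samp(narrow_ratios, wide_ratios)
print(f"KS statistic = {stat:.3f}, p-value = {pvalue:.3f}")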

 
lna01:

I think the number of ZZ segments must be about the same for the compared channel widths in order for the comparison to be correct. Has this condition been met?

I don't understand why we need to compare different stretches of history (and they will turn out different) for different channels?


lna01 wrote:

P.S. Finally, this topic is out of the flat :)

It was a resistance breakout. Now we have to wait for a pullback to the same level and play for a bounce =)

 
komposter:
lna01:

I think the number of ZZ segments must be about the same for the compared channel widths in order for the comparison to be correct. Has this condition been met?

I don't understand why we need to compare different stretches of history (and they will turn out different) for different channels?


But it will be equivalent in the number of hypothetical transactions. That is, the points will be equivalent in terms of statistical reliability. It is just that, from the point of view of behaviour in reality (i.e. in the future), the dynamics of the characteristic in question is what is interesting. And how close these dynamics will be for different channel widths.
 
lna01:
But it will be equivalent in the number of hypothetical transactions. That is, the points will be equivalent in terms of statistical reliability. It is just that, from the point of view of behaviour in reality (i.e. in the future), the dynamics of the characteristic in question is what is interesting. And how close these dynamics will be for different channel widths.

100 trades a year is not the same as 100 trades a month. I don't think it is correct to compare strategies based on the number of ZZ segments.

Moreover, a wide channel can capture a significant chunk of history, which may be dominated by a different market condition.

 
komposter:

Moreover, a wide channel can capture a significant chunk of history, which may be dominated by a different market condition.

Right. But suppose we have started trading: then this chunk becomes history too. Wouldn't the indicator turn out to be different on that chunk?

The assumption of fractality gives us a chance to judge market behaviour on longer timescales by scaling up the regularities found on shorter ones. The only question is to what extent this is applicable to the market :)

 
kharko:
Zigzag with channels... What are the channels??? What do they mean?

In my opinion, this is the price level at which the ZigZag ray can change direction... These channels are built on the current bar....

These are the levels at which the current (last) ZigZag knee is fixed and no longer redrawn (in that direction). In general, it's just a visualization option.
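A tiny illustration of the fixation idea, under my own assumption that the channel width plays the role of the ZigZag reversal threshold (the thread does not spell out the exact indicator): an upward knee is considered fixed once price retreats from its extremum by more than the width, and symmetrically for a downward knee. The function name and the convention are hypothetical.

def knee_fixation_level(last_extremum: float, direction: str, channel_width: float) -> float:
    # Level whose breach fixes the current ZigZag knee (assumed convention).
    if direction == "up":       # last extremum is a high
        return last_extremum - channel_width
    if direction == "down":     # last extremum is a low
        return last_extremum + channel_width
    raise ValueError("direction must be 'up' or 'down'")

# Example: an up-knee with its high at 1.3050 and a 30-pip channel is fixed
# once price drops below 1.3020; a pending order could sit at that level.
print(knee_fixation_level(1.3050, "up", 0.0030))   # -> 1.3020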
 
komposter:

I have another idea - to see what percentage of these "trends" are lost in the real-time analysis.

What is good about the ZigZag channel is that the levels are fixed, and you can manage pending orders, changing them if necessary. This seems more reliable to me, although it does not rule out some slippage and losses. In addition, the spread will take something away. We should try the two options I mentioned and at least run them through history.

If at least half of the trends are detected correctly from the start - it's a mega-grail =)


There's no need to rush to conclusions. First we need to check what exactly we are counting.
 

to SK



SK. wrote:
grasn:

to Neutron. IMPORTANT!


https://www.mql5.com/ru/forum/50458 post "grasn 11.01.07 16:16".


Seryoga, I am telling you about this with some delay (I just re-read the thread from the place I marked and realized that I failed to keep my promise). I am answering your question about "how it all works" - it is simple. Say we have a formula for calculating the lifetime of a linear regression channel from some of its parameters, and this formula has a statistical advantage (very important). From the current reference point (it is fixed), we iterate back through the past (historical) reference points and build a linear regression on each sample. As Vladislav wrote, we get a fan of channels "going out" into the future. Now for each such channel we calculate its probable lifetime. What we obtain is no longer a fan, and not a straight channel either (reversal zones appear), within which the price will develop. And if we also take the price position into account in the formula, we can calculate the zone of price movement more accurately. Just remember, when collecting statistics, that a channel breaks when the price leaves its borders, going either up or down, and this pins down the price position quite precisely; i.e. once the calculated channel length is reached, the price leaves it either up or down, and this can be exploited. Seryoga, well, I never promised to tell you about the trick itself (waves, diffusions ...) :o)).


I apologise for the intrusion. Is there anything I could look at? In general I'm interested in this topic, but having joined from the middle of the conversation, I haven't understood everything. If you don't mind, perhaps you could start a new thread on the subject.


It was a long time ago. It all started from that link, at about page 4 - Vladislav shared his approaches, which "fit" very well with my search for stable levels around which the price "centers" (addition: something like a flat). I wrote about it recently in this thread.


I'm not against such a topic; it's just that it may not interest many people now. And most importantly - why? This is not a joke: what I described works and really does have a significant statistical advantage. However, as always, there are subtleties: the dependencies I found work only for some classes of channels. And the model as a whole "inherits" from the statistical methods used both sizeable drawdowns and difficulties with calculating stops - I think I wrote about that in this thread as well.


If the topic really is interesting - collect the statistics while my colleagues measure the market with their own yardsticks. I have already described the essence of the collection method:


"For this purpose it is enough to start an iterative process of searching for these very trends. For example, at each reference point in some historical range it is necessary to enumerate linear regressions (at a fixed current reference point) and leave for each reference point only that LR which has the maximal duration in the future. Similarly, one can refute those who argue that the trend prevails. You need a 50/50 ratio - just as easy to arrange :o) But there is another subtlety, the point is that a linear regression or channel can be built, literally, on any data, but formally (from a mathematical point of view) the built channel will not be a trend. I came up with the following as a pseudo-trend criterion: if the channel "lived", i.e. the price did not go out of its borders one more initial length, then the trend was rather a trend. This is due to speculative calculations of the probability of occurrence of a trend of a certain length in a "random" time series. If you set the historical range for finding channels on the clock, from 300 counts and above, then trends will be everywhere."


Only you need to include data for the whole enumeration, i.e. for all the channels obtained; otherwise you won't be able to pick the "stable" channels out of the whole mass. There is also a subtlety with the criterion itself. The point is that the price leaving the channel does not destroy the channel at all - on the next count the price may return into the channel and "sit" there for some time. Widening the channel borders will also help. So this simple criterion will introduce a lot of noise and distort the real picture, and you won't find any dependencies. A different criterion is needed here, but the trend formula remains linear regression.
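To make the procedure concrete, here is a rough Python sketch of the enumeration just described (my illustration, not grasn's code): for every historical reference point, linear-regression channels of different lookbacks are built with that point fixed as the "current" one, the lifetime of each channel into the future is measured, the pseudo-trend flag ("lived one more initial length") is recorded, and all channels are kept, not only the best one per point. The half-width convention (maximum absolute residual), the 10-bar lookback step and all names are my assumptions.

import numpy as np

def lr_channel(segment):
    # Least-squares line through the sample; half-width taken as the
    # maximum absolute residual (an assumed convention).
    x = np.arange(len(segment))
    slope, intercept = np.polyfit(x, segment, 1)
    width = np.max(np.abs(segment - (intercept + slope * x)))
    return slope, intercept, width

def channel_lifetime(prices, start, now, slope, intercept, width):
    # Number of bars after `now` during which price stays inside the
    # channel extended along the regression line.
    life = 0
    for t in range(now + 1, len(prices)):
        mid = intercept + slope * (t - start)
        if abs(prices[t] - mid) > width:
            break
        life += 1
    return life

def enumerate_channels(prices, min_len=50, max_len=300, step=10):
    # For every "current" point, enumerate channels of different lookbacks,
    # record lifetime and the pseudo-trend flag, and keep them all.
    records = []
    for now in range(max_len, len(prices) - 1):
        for length in range(min_len, max_len + 1, step):
            start = now - length + 1
            slope, intercept, width = lr_channel(prices[start:now + 1])
            life = channel_lifetime(prices, start, now, slope, intercept, width)
            records.append({
                "now": now, "length": length, "slope": slope, "width": width,
                "lifetime": life,
                # pseudo-trend criterion from the post: the channel survived
                # for at least one more initial length
                "pseudo_trend": life >= length,
            })
    return records

# Usage on synthetic data (replace with real close prices):
rng = np.random.default_rng(1)
prices = np.cumsum(rng.normal(size=1500)) + 100.0
stats = enumerate_channels(prices)
print(len(stats), "channels,", sum(r["pseudo_trend"] for r in stats), "pseudo-trends")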



Once you've collected the data, choose a system from the Data Mining category to search for dependencies, classify the "channel lifetime" parameter, and run a method such as Column Importance (Column Importance, along with methods like associations, clustering and others, is present in one form or another in all such systems) to see which of the investigated channel parameters affect the lifetime most - I assure you, they can be counted on the fingers of one hand. I think you can easily figure out which parameters to use and what to do next. And don't forget old Hurst.
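As a rough stand-in for the Column Importance step (again plain Python rather than a dedicated Data Mining package; the feature names and the hook-up to the records above are hypothetical), one could feed the collected channel parameters into a random forest and read off its feature importances, and estimate "old Hurst" with the classic rescaled-range fit.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

def column_importance(X, y, names):
    # Crude analogue of "Column Importance": impurity-based importances of a
    # random forest predicting channel lifetime from channel parameters.
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X, y)
    return sorted(zip(names, model.feature_importances_), key=lambda p: p[1], reverse=True)

def hurst_rs(series, min_window=8):
    # Classic rescaled-range (R/S) estimate of the Hurst exponent; applied to
    # increments/returns: ~0.5 uncorrelated, >0.5 persistent (trending).
    series = np.asarray(series, dtype=float)
    sizes, rs_means = [], []
    n = min_window
    while n <= len(series) // 2:
        rs_vals = []
        for i in range(len(series) // n):
            seg = series[i * n:(i + 1) * n]
            dev = np.cumsum(seg - seg.mean())
            r, s = dev.max() - dev.min(), seg.std(ddof=1)
            if s > 0:
                rs_vals.append(r / s)
        if rs_vals:
            sizes.append(n)
            rs_means.append(np.mean(rs_vals))
        n *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_means), 1)
    return slope

# Example: Hurst of i.i.d. Gaussian increments (expect a value near 0.5;
# small samples bias the R/S estimate slightly upward).
print("Hurst estimate:", round(hurst_rs(np.random.default_rng(2).normal(size=4000)), 2))

# Hypothetical hook-up to the enumeration sketch above:
# names = ["length", "slope", "width"]
# X = np.array([[rec[k] for k in names] for rec in stats])
# y = np.array([rec["lifetime"] for rec in stats])
# print(column_importance(X, y, names))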