I made one of these things once ...

 

Well, it's hard not to notice the dips at the 00 levels and in the middle. But judging their statistical significance by eye ... I don't think I can. :-)

One would have to calculate the mean and the variance. Those fluctuations may turn out to be fractions of a percent or, on the contrary, tens of percent. Then the significance would be clear as well.

PS

What is perhaps surprising is the smoothness of the top edge of the histogram. If this were purely statistical, that edge would look like an uneven comb. As it is, the rises and falls are quite regular.
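
A rough way to put numbers on that (a minimal sketch, assuming the Cnt[] counter array filled by Candid's script and an assumed NLvls = 100 Delta bins; neither the array size nor the 2-sigma threshold comes from the thread):

      // crude significance check: flag Delta bins more than 2 sigma away from the mean count
      int    NLvls = 100;                        // assumed number of Delta bins
      double mean = 0, variance = 0;
      for (int d = 0; d < NLvls; d++) mean += Cnt[d];
      mean /= NLvls;
      for (d = 0; d < NLvls; d++) variance += (Cnt[d] - mean) * (Cnt[d] - mean);
      variance /= (NLvls - 1);
      double sigma = MathSqrt(variance);
      for (d = 0; d < NLvls; d++)
         if (MathAbs(Cnt[d] - mean) > 2.0 * sigma)
            Print("Delta=", d, "  Cnt=", Cnt[d], "  deviation=",
                  DoubleToStr((Cnt[d] - mean) / sigma, 1), " sigma");

If the counts were purely Poisson, sigma should also come out close to MathSqrt(mean), which gives another quick sanity check.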

 
Deleted my previous reply, giving a more informative version :)

Prival:

From your statistics, if I understand correctly, the 50 level also drops out, but that is not much different from what I was suggesting.

Yes, it turns out that we miss both 50 and 00.

It is easy to replace the condition in my indicator with yours and call that variant a count of Buy-type entries. Similarly, it is easy to count Sell-type entries. Here are all the variants; the Sell type is currently the uncommented one:

      // level 1: "round" level RLvl1 shifted by Delta points
      TLvl1 = NormalizeDouble(RLvl1+Delta*0.0001,Digits);
//      if (High[pos] >= TLvl1 && Low[pos] <= TLvl1) Cnt[Delta]++;   // any touch of the level within the bar
//      if (High[pos] > TLvl1 && Open[pos] < TLvl1) Cnt[Delta]++;    // Buy type: crossed upward from the open
      if (Low[pos] < TLvl1 && Open[pos] > TLvl1) Cnt[Delta]++;       // Sell type: crossed downward from the open
      // level 2: the same three variants for RLvl2
      TLvl2 = NormalizeDouble(RLvl2+Delta*0.0001,Digits);
//      if (High[pos] >= TLvl2 && Low[pos] <= TLvl2) Cnt[Delta]++;
//      if (High[pos] > TLvl2 && Open[pos] < TLvl2) Cnt[Delta]++;
      if (Low[pos] < TLvl2 && Open[pos] > TLvl2) Cnt[Delta]++;

And here are the results, Buy on the left, Sell on the right; the picture is about the same, only the statistics are smaller.


 
Yurixx:

Well, it's hard not to notice the dips at the 00 levels and in the middle. But judging their statistical significance by eye ... I don't think I can. :-)

One would have to calculate the mean and the variance. Those fluctuations may turn out to be fractions of a percent or, on the contrary, tens of percent. Then the significance would be clear as well.

PS

What is perhaps surprising is the smoothness of the top edge of the histogram. If this were purely statistical, that edge would look like an uneven comb. As it is, the rises and falls are quite regular.

I think they're statistically significant. Which doesn't mean there are significant practical implications, though :)


P.S. There is no need to judge by eye; the indicator writes the data to a file.

 
As a matter of fact, whether a sensible strategy can make use of these histograms is a big question.
 
HideYourRichess:
As a matter of fact, whether a sensible strategy can make use of these histograms is a big question.
I'm afraid it will come down to context and filters again :)
 

This is unfortunate.

On the other hand, if you look at H-volatility, it is not strictly 2 either; it also has a certain waviness to it.

And another thing. If there is such waviness in the data, it may show up, for example, in the zigzag segments. If the effect is significant, one could perhaps try to improve the prediction of the direction of the next segment. However, I doubt it.
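
(For reference, and as my reading of the definition used in these threads rather than something stated in this post: for a zigzag/kagi construction with threshold H, H-volatility is the average segment length measured in units of H, Hvol = <|segment|>/H, and for a martingale price process it equals exactly 2. "Not strictly 2" therefore means the average segment deviates from 2H.)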

 
HideYourRichess:

This is unfortunate.

On the other hand, if you look at H-volatility, it is not strictly 2.

Waviness in time or in the zigzag parameter?

 
Candid:

So, here is a simple script: it counts the number of crossings of the "round" levels plus Delta points. I ran it on EURUSD, GBPUSD and USDCSD starting from 10:55 on 16.06.2004. The result is unexpected and interesting.

I will accept comments both on the script text and the question :)


P.S. The script is inaccurate for large Delta, but that should not negate the unexpectedness of the results.

IMHO that is not quite right. Suppose limit pending orders are always placed at the nearest "round" levels, the orders are placed when a bar forms, and all open positions are closed at the opening price of the next bar. Then if High or Low touches a limit order, it triggers once. But the results of such triggering are then counted incorrectly, i.e. by the number of triggers rather than by their quality.


That is, the task should be properly set as applied to trading, since no broker will pay for the number of crossings of any levels:


1. If High has touched the nearest "round" level, add to the sum the difference between that level and the opening price of the next bar.

2. If Low has touched the nearest "round" level, subtract from the sum the difference between that level and the opening price of the next bar.


We obtain results without taking the spread into account. If the results are close to zero even without the spread, there is obviously nothing to catch here.
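
A minimal sketch of that accounting (my interpretation, not Reshetov's code; the 0.0100 level step, taking the nearest level above the open for the High test and the nearest one below for the Low test, and wrapping it as a script are all assumptions):

      int start()
        {
         double LevelStep = 0.0100;                 // assumed "round" level step (full figures)
         double Sum = 0.0;
         for (int pos = Bars - 2; pos >= 1; pos--)  // pos-1 is the next bar in series indexing
           {
            // nearest "round" levels above and below the bar's open
            double UpLvl = NormalizeDouble(MathCeil (Open[pos] / LevelStep) * LevelStep, Digits);
            double DnLvl = NormalizeDouble(MathFloor(Open[pos] / LevelStep) * LevelStep, Digits);
            if (High[pos] >= UpLvl) Sum += UpLvl - Open[pos-1];   // rule 1: High touched the level above
            if (Low [pos] <= DnLvl) Sum -= DnLvl - Open[pos-1];   // rule 2: Low touched the level below
           }
         Print("Result without spread, points: ", DoubleToStr(Sum / Point, 0));
         return(0);
        }

Replacing Open[pos-1] with the open N bars later would turn this into the close-by-time variant that comes up further down the thread.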

 
Reshetov:

IMHO that is not quite right. Suppose limit pending orders are always placed at the nearest "round" levels, the orders are placed when a bar forms, and all open positions are closed at the opening price of the next bar. Then if High or Low touches a limit order, it triggers once. But the results of such triggering are then counted incorrectly, i.e. by the number of triggers rather than by their quality.

That is, the task should be properly set as applied to trading, since no broker will pay for the number of crossings of any levels:

1. If High has touched the nearest "round" level, add to the sum the difference between that level and the opening price of the next bar.

2. If Low has touched the nearest "round" level, subtract from the sum the difference between that level and the opening price of the next bar.

We obtain results without taking the spread into account. If the results are close to zero even without the spread, there is obviously nothing to catch here.


No, it is correct :), you just slightly missed the point of the thread. What you wrote may be the next step in the evolution of the script (or better, of the indicator; its code is more accurate and does not lose some of the crossings).

This script was designed precisely to measure the strength of the levels, on the assumption that price should linger at strong levels. The next step is to simulate trading with varying degrees of detail, and different strategies are possible.

Your variant almost matches the one suggested by Prival. But closing at the next bar is not very good: in general the algorithm should work on the minimum timeframe, i.e. on minutes. That is why we should close by time, e.g. after one hour, as he suggested.

Say, to calculate the more complex characteristics described by Prival we would have to implement more elaborate position accounting; that may be the next step of the evolution.

 
Moved the text to the next page - since it kind of opens up a new twist on the theme, it makes more sense to put it closer to the top.