NS + indicators. Experiment. - page 4

 
Would it be helpful to give another link explaining why Close is harder to predict - the thread Interesting properties of High, Low?
 
klot:
I recently experimented with ZZ (ZigZag) in NeuroShell Day Trader. I fed the normalized differences between the price and several fixed ZZ extremums into a PNN (probabilistic neural network classifier). I also tried ratios of differences (i.e. harmonic models, if you like). The NS does find regularities on a limited time interval. I won't say it's a grail, but the system is in profit on data it hasn't seen.


And how exactly did you normalise the difference? The difference between the price and the last extremum, or something else? And what classifier did you use? Kohonen maps?

I haven't gotten to ZZ yet. So far I have experimented with Kohonen maps and normalised the data by a moving average. Here, too, some potential is visible, though weak. I want to connect the grid outputs to an "amplifier" =)). Besides that, I tried to classify candlesticks: I coded them using different methods and loaded them into the Kohonen map. In principle that was not bad either; at least similar candlesticks fall into the same classes. But I still have not got the hang of normalization. I tried converting to the 0..+1 range, the -1..+1 range, and sigmoid and hyperbolic-tangent transforms. I also tried using the data "as is". Somehow I did not see any advantage of one method over another.
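The four normalization variants mentioned above can be sketched in a few lines. This is a minimal Python illustration, not the poster's NeuroShell setup; the function names are my own:

```python
import numpy as np

def norm_01(x):
    """Min-max scaling into the [0, 1] range."""
    return (x - x.min()) / (x.max() - x.min())

def norm_pm1(x):
    """Min-max scaling into the [-1, 1] range."""
    return 2.0 * norm_01(x) - 1.0

def norm_sigmoid(x):
    """Standardize, then squash through a sigmoid into (0, 1)."""
    z = (x - x.mean()) / x.std()
    return 1.0 / (1.0 + np.exp(-z))

def norm_tanh(x):
    """Standardize, then squash through tanh into (-1, 1)."""
    z = (x - x.mean()) / x.std()
    return np.tanh(z)

prices = np.array([1.2510, 1.2534, 1.2498, 1.2556, 1.2542])
for f in (norm_01, norm_pm1, norm_sigmoid, norm_tanh):
    print(f.__name__, np.round(f(prices), 3))
```

All four map the same data into a bounded range, which is consistent with the observation that no variant shows an obvious advantage: they differ mainly in how they treat the tails.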

 
Rosh, do you have any idea why, for a random process, the (H+L)/2 series is predictable (in this case, persistent)?
 
I think it is because the range that bounds the price is much easier to predict than the closing price itself. Also, I think you have to estimate the probability of a move continuing or reversing, rather than the absolute price level and when it will be reached.
 
Neutron:
Rosh, do you understand why (H+L)/2 is predictable (in this case, persistent) for a random process?

I thought it was simple: H and L are like a confidence interval for a random variable. If the mean and the standard deviation have not changed, this confidence interval stays in place (it is nearly a constant). Close, on the other hand, is a single realization of that random variable running between H and L, which is why it is harder to predict.
 

I did not have this understanding and still do not. Prival, I do not accept your hypothesis that it is some kind of confidence interval (do you remember the 200-point spike on the cable made by a single tick?). No neural network can predict that, but with Fibs, I think, it is quite possible...

P.S. It is also unclear why such long spikes like to go down only.

 
alexx:
klot:
I recently experimented with ZZ (ZigZag) in NeuroShell Day Trader. I fed the normalized differences between the price and several fixed ZZ extremums into a PNN (probabilistic neural network classifier). I also tried ratios of differences (i.e. harmonic models, if you like). The NS does find regularities on a limited time interval. I won't say it's a grail, but the system is in profit on data it hasn't seen.


And how exactly did you normalise the difference? The difference between the price and the last extremum, or something else? And what classifier did you use? Kohonen maps?

I haven't gotten to ZZ yet. So far I have experimented with Kohonen maps and normalised the data by a moving average. Here, too, some potential is visible, though weak. I want to connect the grid outputs to an "amplifier" =)). Besides that, I tried to classify candlesticks: I coded them using different methods and loaded them into the Kohonen map. In principle that was not bad either; at least similar candlesticks fall into the same classes. But I still have not got the hang of normalization. I tried converting to the 0..+1 range, the -1..+1 range, and sigmoid and hyperbolic-tangent transforms. I also tried using the data "as is". Somehow I did not see any advantage of one method over another.


I do all my experiments in NSDT. I take the differences between the price and the last extremums of ZZ, and also between the last and the penultimate extremum, and so on. And also the ratios of these differences: (X-A)/(A-B), (B-A)/(B-C), (B-C)/(C-D), (X-A)/(D-A); in general I am trying to build Gartley harmonic models. I feed everything into a probabilistic network (there are several varieties in NSh). I normalized the values by means of NSh, namely with this formula:

(x - MA(x, n)) / (3 * StdDev(x, n)) - lately I always use this formula. And then, off you go: training, cross-validation and out-of-sample (OOS) testing.
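For reference, klot's formula can be sketched in Python with an explicit sliding window (the thread's own implementation is in MQL4; this is only an illustration, and the function name is mine):

```python
import numpy as np

def normalize(x, n):
    """klot's formula: (x - MA(x, n)) / (3 * StdDev(x, n)),
    computed over a sliding window of the last n values."""
    x = np.asarray(x, dtype=float)
    out = np.full(len(x), np.nan)  # not enough history for the first n-1 points
    for i in range(n - 1, len(x)):
        window = x[i - n + 1 : i + 1]
        # Note: a constant window (zero StdDev) would divide by zero;
        # real price data rarely produces one.
        out[i] = (x[i] - window.mean()) / (3.0 * window.std())
    return out

# Demo on a short synthetic random walk
prices = np.cumsum(np.random.default_rng(0).normal(size=200))
print(np.round(normalize(prices, 20)[-5:], 3))
```

The factor of 3 in the denominator keeps typical values roughly inside [-1, +1] (by the three-sigma rule), which suits bounded network inputs.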

 

Here's an example of normalisation that I use almost everywhere.

You can substitute whatever you want for Close...

Files:
normalise.mq4  3 kb
 
Mathemat:

I did not have this understanding and still do not. Prival, I do not accept your hypothesis that it is some kind of confidence interval (do you remember the 200-point spike on the cable made by a single tick?). No neural network can predict that, but with Fibs, I think, it is quite possible...

P.S. And another thing I do not understand: why do such long spikes only like to go down?


I will try to explain in more detail. H and L are nothing but a confidence interval: the random variable did not go beyond these limits for, say, a day (one daily bar). Now just assume that this random variable follows some distribution law. Then H and L are roughly the mean ± 3 standard deviations, i.e. with probability 0.997 the random variable lies within these limits. H and L are easier to predict in this situation because they are almost constants, while the random variable itself (Close) remains as random as it was.

Simply plot (you can generate it) the probability density of the variable and mark the points mean ± 3·SD. Generate 1000 values and estimate these points from the sample: they are almost constant, while the last number in the generated series (Close) stays random. You can repeat this 100 times and check.
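Prival's "generate 1000 values, repeat 100 times" check can be run directly. A minimal Python sketch, assuming the ticks inside each bar are i.i.d. standard normal (a simplifying assumption, not the poster's model):

```python
import numpy as np

rng = np.random.default_rng(0)
highs, lows, closes = [], [], []
for _ in range(100):                      # 100 independent "bars"
    ticks = rng.normal(0.0, 1.0, 1000)    # 1000 ticks per bar, mean 0, SD 1
    highs.append(ticks.max())             # High of the bar
    lows.append(ticks.min())              # Low of the bar
    closes.append(ticks[-1])              # Close = last tick of the bar

# The extremes cluster a bit beyond +-3 standard deviations and vary
# little from bar to bar, while Close scatters with the full SD of a tick.
print("High:  mean %.2f, SD %.2f" % (np.mean(highs), np.std(highs)))
print("Low:   mean %.2f, SD %.2f" % (np.mean(lows), np.std(lows)))
print("Close: mean %.2f, SD %.2f" % (np.mean(closes), np.std(closes)))
```

The spread of the per-bar High and Low across bars comes out much smaller than the spread of Close, which is exactly Prival's point: the envelope is nearly a constant, the last realization is not.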

For the spikes, maybe the following method will help: take 10 measurements (ticks) and compute an average, discarding first the two most extreme ones, on the + side and on the - side. This estimate of the unknown quantity is more accurate because it is robust to such anomalous outliers (apparent measurement errors).
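This is a trimmed mean. A minimal Python sketch under the same assumptions (drop one tick on each side, average the rest; the function name and sample data are mine):

```python
import numpy as np

def trimmed_mean(ticks):
    """Discard the single highest and single lowest tick, average the rest."""
    s = np.sort(np.asarray(ticks, dtype=float))
    return s[1:-1].mean()

# 10 ticks with two anomalous spikes, one up and one down
ticks = [1.2501, 1.2503, 1.2502, 1.2504, 1.2500,
         1.2700, 1.2503, 1.2502, 1.2501, 1.2300]

print("plain mean:  ", round(np.mean(ticks), 5))
print("trimmed mean:", round(trimmed_mean(ticks), 5))
```

The plain mean is pulled by the two spikes, while the trimmed estimate stays at the level of the "honest" ticks, which is the robustness Prival describes.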

 

In the thread Interesting properties of High, Low we discuss the "anomalous" predictability of the (H+L)/2 series. The paradox is imaginary!

Look: under the condition that H-L (a first approximation of the instrument's volatility) is much smaller than (H+L)/2 (a first approximation of the instrument's absolute price), the series (H+L)/2 is equivalent to smoothing the time series with a sliding window of length 2. Come to think of it, it really is almost an averaging. On the other hand, a moving average ALWAYS has a positive autocorrelation coefficient between adjacent increments of the series (this can be proved directly). Hence, for a time series obtained by integrating random increments, and consequently having an autocorrelation of increments tending to zero, the autocorrelation of increments computed for its (H+L)/2 series will always be non-zero and positive! Unfortunately, this fact does not allow us to predict the series, since the (H+L)/2 series invariably carries a phase lag, which puts everything back in its place.
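The claim can be checked numerically. A minimal Python sketch, assuming a Gaussian random walk as the price model and using the window-2 average of Close as a stand-in for (H+L)/2 (my simplification of the argument above):

```python
import numpy as np

rng = np.random.default_rng(1)
close = np.cumsum(rng.normal(0.0, 1.0, 20000))  # random walk: integrated noise
mid = (close[1:] + close[:-1]) / 2.0            # sliding window of length 2

def lag1_autocorr(series):
    """Lag-1 autocorrelation of the series' increments."""
    d = np.diff(series)
    return np.corrcoef(d[:-1], d[1:])[0, 1]

# Increments of the raw walk are independent: autocorrelation close to 0.
# Increments of the smoothed series share every other term: close to +0.5.
print("close increments:", round(lag1_autocorr(close), 3))
print("mid increments:  ", round(lag1_autocorr(mid), 3))
```

Adjacent increments of the smoothed series are (d[i] + d[i+1])/2 and (d[i+1] + d[i+2])/2: they share the term d[i+1], which gives the theoretical correlation of 0.5 and illustrates why the positive autocorrelation appears without making the series any more predictable.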

Like this.