FR H-Volatility

 

I don't know how to catch, especially at an early stage, the emergence of an arbitrage opportunity.

The network's input should be the N penultimate segments of the ZZ, and its output the last segment of the ZZ. Then, with a suitable network configuration, a suitable choice of activation function and successful training, if only the last N segments are fed to the input, the network will recover these N and the next (i.e. upcoming) one as well. There is nothing to fiddle with as far as its direction goes - that is clear as it is, but the size ...

Maybe something will come out of it.
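Roughly, the idea might be sketched like this (only an illustration of mine; the particular net, library and parameters below are assumptions, not anything fixed in the thread):

# Sketch only: map the N penultimate ZZ segment sizes to the one that follows them.
# Direction alternates anyway, so only sizes are modelled here (an assumption).
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_dataset(segments, N):
    """Input: N consecutive segments; target: the segment that follows them."""
    X, y = [], []
    for i in range(len(segments) - N):
        X.append(segments[i:i + N])
        y.append(segments[i + N])
    return np.array(X), np.array(y)

segments = (np.random.rand(500) * 50 + 1).tolist()   # placeholder instead of real ZZ sizes
N = 6
X, y = make_dataset(segments, N)

net = MLPRegressor(hidden_layer_sizes=(12,), activation='tanh',
                   max_iter=2000, random_state=0)
net.fit(X, y)                                         # train on the history

# Feed the *last* N segments: the prediction plays the role of the upcoming segment.
upcoming = net.predict([segments[-N:]])[0]
print("predicted size of the next segment:", upcoming)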

 
Yurixx:

I don't know how to catch, especially at an early stage, the emergence of an arbitrage opportunity.

The network's input should be the N penultimate segments of the ZZ, and its output the last segment of the ZZ. Then, with a suitable network configuration, a suitable choice of activation function and successful training, if only the last N segments are fed to the input, the network will recover these N and the next (i.e. upcoming) one as well. There is nothing to fiddle with as far as its direction goes - that is clear as it is, but the size ...

Maybe something will come out of it.


And in what form is it better to feed the zigzags, absolute or relative? And will normalization be needed? I think this calls for a Kohonen layer and a Grossberg star. Although I may be wrong.
 
Vinin:
...I think this calls for a Kohonen layer and a Grossberg star.

And the Medal of Honor :-)
Vinin, what's a Grossberg star?

Yurixx, suggest an NS block diagram for this case at your discretion. I would like to think about it.

 
Neutron:
Vinin:
...I think this calls for a Kohonen layer and a Grossberg star.

And the Medal of Honor :-)
Vinin, what's a Grossberg star?

Yurixx, suggest an NS block diagram for this case at your discretion. I would like to think about it.


It's roughly covered here: http://ann.hotmail.ru/vs03.htm

But I'll repeat it (although these are no longer my own words), just a quote:

During training of a counterpropagation network, input vectors are associated with corresponding output vectors. These vectors can be binary or continuous. After training, the network generates output signals that correspond to the input signals. The generality of the network makes it possible to obtain the correct output when the input vector is incomplete or distorted.
In learning mode, the input signal is fed to the network and the weights are corrected so that the network produces the desired output signal.
The Kohonen layer functions according to the "winner takes all" rule. For a given input vector, only one neuron of this layer produces a logical one, all others produce zeros. The output of each Kohonen neuron is just the sum of weighted elements of input signals.
The output of each Grossberg layer neuron is likewise a weighted sum of the outputs of the Kohonen layer neurons. But since only one Kohonen neuron is non-zero, each Grossberg neuron effectively outputs the value of the weight connecting it to that single active Kohonen neuron.
At the preprocessing stage, the input vectors are normalized.
At the learning stage, the Kohonen layer classifies input vectors into groups of similar ones. This is done by adjusting the weights of the Kohonen layer so that similar input vectors activate the same neuron of the layer. Which neuron will be activated by a particular input signal is hard to predict in advance, since the Kohonen layer learns without a teacher.
The task of the Grossberg layer is then to produce the desired outputs. Training of the Grossberg layer is learning with a teacher. The outputs of the neurons are computed as in normal operation. Then each weight is corrected only if it is connected to a Kohonen neuron with a non-zero output. The amount of correction is proportional to the difference between the weight and the desired output of the Grossberg neuron.
In the network operation mode the input signal is presented and the output signal is generated.
In the full counterpropagation network model, it is possible to produce output signals from input signals and vice versa. These two actions correspond to forward and backward propagation of signals.
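For what it's worth, the forward pass and the two training rules from this quote fit in a few lines (a sketch of mine only; the learning rates, initialization and names are arbitrary assumptions):

import numpy as np

class CounterPropSketch:
    """Bare-bones illustration of the Kohonen + Grossberg scheme quoted above."""

    def __init__(self, n_in, n_kohonen, n_out, lr_k=0.1, lr_g=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.Wk = rng.normal(size=(n_kohonen, n_in))   # Kohonen weights
        self.Wg = np.zeros((n_out, n_kohonen))         # Grossberg weights
        self.lr_k, self.lr_g = lr_k, lr_g

    @staticmethod
    def _norm(x):
        # preprocessing: normalize the input vector
        return x / (np.linalg.norm(x) + 1e-12)

    def winner(self, x):
        # "winner takes all": the Kohonen neuron with the largest weighted sum fires
        return int(np.argmax(self.Wk @ self._norm(x)))

    def forward(self, x):
        k = self.winner(x)
        # each Grossberg neuron outputs the weight linking it to the single
        # active Kohonen neuron (all other Kohonen outputs are zero)
        return self.Wg[:, k]

    def train_step(self, x, target):
        k = self.winner(x)
        # Kohonen layer: without a teacher, pull the winner's weights towards the input
        self.Wk[k] += self.lr_k * (self._norm(x) - self.Wk[k])
        # Grossberg layer: with a teacher, only the weights tied to the winner are
        # corrected, proportionally to (desired output - current weight)
        self.Wg[:, k] += self.lr_g * (np.asarray(target) - self.Wg[:, k])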

 
Vinin:
And in what form is it better to feed the zigzags, absolute or relative? And will normalization be needed? I think this calls for a Kohonen layer and a Grossberg star. Although I may be wrong.

Neutron:

Yurixx, suggest an NS block diagram for this case at your discretion. I would like to think about it.


I can't offer a block diagram. The history of this idea is as follows.

At first I really did think that the NS should consist of 2 layers - Kohonen and Grossberg. The only problem was that for kagi each segment can be of any size, from 1 upwards ... Suppose I want to input N segments of the ZZ and limit the segment size to the range 1 to 50. Then the maximum number of neurons in the Kohonen layer (before clustering) is 50^N. That's a lot. That's why I was thinking about renko. At H=10 the size of the analogous ZZ segment varies from 1 to 5. That's only 5^N neurons - already acceptable for small values of N. And all segments larger than 5H can simply be clipped at 5H.

Next, the Kohonen layer recognizes the pattern and excites the corresponding neuron. The last segment of the ZZ (not included in these N) is fed to the Grossberg layer. The Grossberg layer contains, say, 100 neurons, each of which corresponds to a size of the last ZZ segment, from 1 to 100. So one neuron of the Grossberg layer is excited. During training, the weight of the connection from the excited Kohonen neuron to the excited Grossberg neuron is increased by 1. So it is not really a counterpropagation network. But that was my "plan" :-))
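In code, that "plan" amounts to nothing more than a table of counters, something like this (a sketch of mine; the quantization helper and the names are assumed for illustration):

from collections import defaultdict

def quantize(segment, H=10, cap=5):
    """Express a segment length in units of H, clipped at cap*H as suggested above."""
    return min(max(int(round(segment / H)), 1), cap)

def build_table(segments, N=6, H=10):
    """segments: consecutive kagi ZZ segment lengths in points."""
    table = defaultdict(lambda: defaultdict(int))
    for i in range(len(segments) - N):
        pattern = tuple(quantize(s, H) for s in segments[i:i + N])  # the "Kohonen" pattern
        nxt = segments[i + N]                                       # the last ZZ segment
        table[pattern][nxt] += 1                                    # connection weight += 1
    return table

# After "training", table[pattern] is simply the empirical distribution of the segment
# that followed this pattern in history - which is why no NS is needed to obtain it.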

And then I realized that after training, when a ZZ pattern is fed in, the Grossberg layer would simply show me at its output the distribution for the future ZZ segment. That's basically what I was aiming for. However, there are 2 "buts" here.

1. I can build such a distribution much faster and without any NS.

2. The almost two-year history of minute bars contains about 630,000 bars. The kagi ZZ with parameter H=10 has around 17,400 segments in that history. And the number of neurons in the Kohonen layer with N=6 will be 15625, i.e. there will be on average 1.1 experimental values per pattern. What kind of distribution is that? :-)

Thus, the clustering achieved by switching to the renko partitioning is disastrously insufficient. We either need to cluster the input patterns with the Kohonen layer, or (more likely) move on to more constructive ideas.
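The arithmetic behind the second "but" is easy to reproduce:

patterns = 5 ** 6                 # renko-quantized patterns of N = 6 segments
observations = 17_400             # kagi ZZ segments with H = 10 in ~630,000 minute bars
print(patterns)                   # 15625
print(observations / patterns)    # ~1.11 observations per pattern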

PS

Don't judge me harshly for being naive. My experience with networks is 1.5 books read and no implementation.

 

I suggest we start with the simplest case. Let's divide the ZZ into elementary constructions consisting of one vertex. Let's normalize the sides by the length of the first side and keep one digit after the decimal point. In this case we have 17,400 constructions, divided (for step H=10) into 50/(2H)*10 = 25 groups (approximately) on the basis of the "side ratio". I.e. in each group we have several hundred patterns - that's already statistics.

Now we just need to feed this into the NS and find out how the FR of the predicted movement length (the green vector minus H) depends on the length of the left side. Except that, colleagues, an NS is not really needed for this problem. Or am I missing something?
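One possible reading of this in code (only a sketch of mine; the 0.1 rounding step and the grouping by the left side's length in units of H follow the description above, everything else is assumed):

import numpy as np
from collections import defaultdict

def side_ratio_groups(segments, H=10):
    """segments: consecutive ZZ segment lengths in points (direction alternates)."""
    groups = defaultdict(list)
    for left, right in zip(segments[:-1], segments[1:]):
        ratio = round(right / left, 1)       # one digit after the decimal point
        left_in_H = int(round(left / H))     # group constructions by the left side's length
        groups[left_in_H].append(ratio)
    return groups

def group_frequencies(groups, bins=20):
    # fraction of constructions falling in each ratio bin, for every left-side group
    return {g: np.histogram(v, bins=bins)[0] / len(v) for g, v in groups.items()}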

P.S. The figure on the right shows the FR of the ratio of ZZ sides at a single vertex. These are constructions with H = 5, 10, 15 and 20 pips for EURUSD (ticks). It looks like the normalization idea is sound, and it allows a noticeable reduction in the dimensionality of the input data.

 
Neutron:

Now we just need to feed this into the NS and find out how the FR of the predicted movement length (the green vector minus H) depends on the length of the left side. Except that, colleagues, an NS is not really needed for this problem. Or am I missing something?

P.S. The figure on the right shows the FR of the ratio of ZZ sides at a single vertex. These are constructions with H = 5, 10, 15 and 20 pips for EURUSD (ticks). It looks like the normalization idea is sound, and it allows a noticeable reduction in the dimensionality of the input data.


I don't think an NS is needed for that either. And the normalization option looks valid - it hadn't occurred to me.
 

to Neutron

I can't figure out what you've built. The FR (distribution function) should look a little different: https://ru.wikipedia.org/wiki/%D0%A0%D0%B0%D1%81%D0%BF%D1%80%D0%B5%D0%B4%D0%B5%D0%BB%D0%B5%D0%BD%D0%B8%D0%B5_%D0%B2%D0%B5%D1%80%D0%BE%D1%8F%D1%82%D0%BD%D0%BE%D1%81%D1%82%D0%B5%D0%B9

Maybe it is the probability density? If so, could you repeat, in a bit more detail, what is shown in the right-hand graph (what is on the X and Y axes)?

 

to Prival

Suppose we performed a series of n calculations (of the length of the right side of the ZigZag expressed in units of the length of the left side) and obtained a set of values x1, ..., xi, ..., xn. This is the so-called sample. We plot the values xi along the abscissa axis. Partition the x-axis into equal intervals dx and count the number nk of calculations whose x values lie in the interval xk ± dx/2 (here xk is the coordinate of the centre of the k-th interval). On each interval construct a rectangle of height nk and width dx. The diagram thus obtained is called a histogram. It shows how densely the calculation results are distributed along the x-axis.

If the number of calculations is large, the interval width can be made small (while still having many samples in each interval). Then, instead of a histogram, we get a graph on which the ordinate is a value proportional to the fraction nk/n of samples falling in each interval. This graph is called a distribution curve or distribution function; the function itself is called a probability density function.

P.S. The probability density may be normalized, in which case its integral over the entire range is identically equal to 1.
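As a small illustration of this construction with made-up numbers (not the thread's data):

import numpy as np

x = np.random.exponential(scale=1.5, size=10_000)    # stand-in for the n calculated values
dx = 0.1
bins = np.arange(0.0, x.max() + dx, dx)

nk, edges = np.histogram(x, bins=bins)               # heights of the histogram rectangles
density = nk / (len(x) * dx)                         # fraction per interval, per unit length

print(density.sum() * dx)                            # ~1.0: the normalized density integrates to 1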

 

to Neutron

Thank you, now I understand. Only what you have constructed is the probability density, not the FR (distribution function), because the distribution function has the following properties: 1. It is non-decreasing. 2. As x tends to infinity, it tends to 1 (naturally, if everything is normalized). The probability density and the distribution function are related by an integral (the distribution function is the integral of the density).

"This graph is called a distribution curve or distribution function; the function itself is called a probability density function" is not quite correct.
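To make the distinction concrete: the distribution function is obtained from the density by integration (cumulative summation of the histogram), and then it really is non-decreasing and tends to 1. A throwaway illustration of mine:

import numpy as np

x = np.random.exponential(scale=1.5, size=10_000)     # the same kind of made-up sample
nk, edges = np.histogram(x, bins=100)
density = nk / (len(x) * np.diff(edges))               # probability density estimate

F = np.cumsum(density * np.diff(edges))                # distribution function = integral of the density
print(np.all(np.diff(F) >= 0))                         # property 1: non-decreasing -> True
print(F[-1])                                           # property 2: reaches 1 at the right edge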