Market etiquette or good manners in a minefield

 
The interest is not in the NS as such - it's really all the same, maybe just a different activation function or some such... (It's a pity JOONE turned out to be glitchy when tested - it's packed with all sorts of features...) The interest is in the quality of the input signal, the learning process, and the output.
 
Neutron wrote >>

Come on, give us an idea - we'll discuss it!

So what's promising to feed into it - a moving average?

Oh, I've already put in all sorts of things... :) All right, I'll tell you the whole story.

I've already said once that I like genetics, because for backpropagation (ORO) the question of what network behaviour should count as optimal is very much an open one.

The idea was to query the network on each new bar and react accordingly. So I wrote something, gathered various averages, stochastics and whatever else God sent for the inputs, and started it up. And then I see the grid working wonders - it draws grails (not immediately, of course, but it learns its way to a grail)...

The mistake was that few indicators can be used with shift 0 - they need the whole bar to be drawn. In effect I was feeding the grid data from a bar that didn't exist yet, and thus looking into the future.
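The cure is simple; a minimal sketch (the indicator and period here are purely illustrative, not from my files): read every input from the last completed bar, shift i+1, because the value at shift i is computed from that bar's full OHLC, which isn't known yet while the bar is forming.

double SafeInput(int i)                 // i = bar for which the decision is made
{
   // shift i would use the still-unfinished bar (looking into the future);
   // shift i+1 is the last fully closed bar, so its value is final
   return(iMA(NULL, 0, 13, 0, MODE_SMA, PRICE_CLOSE, i + 1));
}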

Since then I've been struggling with the input data. But I did establish for myself that the learning works :)

 
YDzh wrote >>

The interest is not in the NS as such - it's really all the same, maybe just a different activation function or some such.

Moreover, nothing depends on the activation function (its specific type), and nothing depends on the learning method (as long as it is implemented correctly)!

It all depends on what you are going to predict. That's the whole point.

There is no point in discussing indicators built on smoothing the time series in one form or another (MACD, for example) - it can be shown rigorously that the lag such smoothing introduces is an insurmountable obstacle to predicting a time series whose first differences are antipersistent (and price series are exactly that).
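A rough way to check the antipersistence claim on the quotes (a sketch of mine, not from any attached file): estimate the lag-1 autocorrelation of the first differences of the opening prices. Antipersistence shows up as a noticeably negative coefficient.

double Lag1AC(int n)                                   // n = number of increments to use
{
   double m = 0, c0 = 0, c1 = 0;
   for(int i = 1; i <= n; i++) m += Open[i] - Open[i+1];
   m /= n;                                             // mean increment
   for(i = 1; i <= n; i++)
   {
      double a = Open[i] - Open[i+1] - m;              // centered increment at bar i
      c0 += a * a;                                     // variance term
      if(i < n) c1 += a * (Open[i+1] - Open[i+2] - m); // covariance with the previous increment
   }
   if(c0 == 0) return(0);
   return(c1 / c0);                                    // negative => adjacent increments tend to flip sign
}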

As for "looking into the future" when training the NS - we've all been through that...

 
Neutron wrote >>

It all depends on what you are going to predict. That's the whole point.

There is no point in discussing indicators built on smoothing the time series in one form or another (MACD, for example) - it can be shown rigorously that the lag such smoothing introduces is an insurmountable obstacle to predicting a time series whose first differences are antipersistent (and price series are exactly that).

As for "looking into the future" when training the NS - we've all been through that...

Well, it depends on how you look at it. A bar is already an average of sorts: the smallest bar is one minute, so everything up to the current minute has already been averaged to some extent.

 
Neutron wrote >>

It all depends on what you are going to predict. That's the whole point.

There is no point in discussing indicators built on smoothing the time series in one form or another (MACD, for example) - it can be shown rigorously that the lag such smoothing introduces is an insurmountable obstacle to predicting a time series whose first differences are antipersistent (and price series are exactly that).

As for "looking into the future" when training the NS - we've all been through that...

Well, if so, then there is no point in touching indicators at all. They are all built on aggregating the data in one form or another. Apart from time, volume and price there is no primary data. So you have to go down to the tick level... But there's "a lot of noise" there. A paradox...

 
YDzh wrote >>

Well, if so, then there is no point in touching indicators at all. They are all built on aggregating the data in one form or another. Apart from time, volume and price there is no primary data. So you have to go down to the tick level... But there's "a lot of noise" there. A paradox...

It's true! Just don't fuss - those are separate things.

As for bars, I use only opening prices - no averaging there. And I'm going to do as the wise Prival advises - switch to ticks. True, I'll have to fuss with the saving mode and data collection. But if it's worth it, why not?
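Concretely, the inputs can be built the same way as the training target in the code further below - relative open-to-open increments, with no smoothing at all. A sketch (the function name, the bias input and the count of 17 are my reading of that code, not taken from the attached files):

double D1[17];                          // network inputs (global, as in the EA)

void FillInputs(int i)                  // i = current bar
{
   D1[0] = 1.0;                                        // bias input (an assumption)
   for(int k = 1; k < 17; k++)                         // 16 lagged relative increments
      D1[k] = (Open[i+k] - Open[i+k+1]) / Open[i+k+1]; // open-to-open, no averaging
}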

 

OK, I think that's it, up to the point where the weights get corrected!


for(int i = cikl; i >= 0; i--)
{
   out  = OUT2(i);                                     // get the network's output signal
   test = (Close[i] - Close[i+1]) / Close[i+1];        // get the (n+1)-th sample: the relative price increment

   d_2_out = test - out;                               // error at the network's output
   d_2_in  = d_2_out * (1 - out*out);                  // error at the output neuron's input (tanh derivative)

   Correction2[0]       += d_2_in * D2[0];             // accumulate micro-corrections for each weight
   SquareCorrection2[0] += Correction2[0] * Correction2[0]; // entering the output neuron, and accumulate
   Correction2[1]       += d_2_in * D2[1];             // the squares of those corrections
   SquareCorrection2[1] += Correction2[1] * Correction2[1];
   Correction2[2]       += d_2_in * D2[2];
   SquareCorrection2[2] += Correction2[2] * Correction2[2];

   d_11_in = d_2_in * (1 - D2[1]*D2[1]);               // error at the hidden-layer neurons' inputs
   d_12_in = d_2_in * (1 - D2[2]*D2[2]);               // (note: textbook backprop would also multiply by the
                                                       // corresponding output-layer weight here)
   for(int k = 0; k < 17; k++)
   {                                                   // accumulate micro-corrections for the inputs
      Correction11[k]       += d_11_in * D1[k];        // of the first hidden neuron
      SquareCorrection11[k] += Correction11[k] * Correction11[k];
   }

   for(k = 0; k < 17; k++)
   {                                                   // accumulate micro-corrections for the inputs
      Correction12[k]       += d_12_in * D1[k];        // of the second hidden neuron
      SquareCorrection12[k] += Correction12[k] * Correction12[k];
   }
}
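The weight correction itself would then come at the end of a pass. A hedged sketch of that step (the weight arrays W2[], W11[], W12[] and the step size eta are my placeholders, not names from the post - normalizing by the root of the accumulated squares is how I read the Correction/SquareCorrection pairs above):

double eta = 0.1;                       // step size (placeholder value)

for(int j = 0; j < 3; j++)              // output neuron's weights
   if(SquareCorrection2[j] > 0)
      W2[j] += eta * Correction2[j] / MathSqrt(SquareCorrection2[j]);

for(int k = 0; k < 17; k++)             // hidden neurons' weights
{
   if(SquareCorrection11[k] > 0) W11[k] += eta * Correction11[k] / MathSqrt(SquareCorrection11[k]);
   if(SquareCorrection12[k] > 0) W12[k] += eta * Correction12[k] / MathSqrt(SquareCorrection12[k]);
}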
 

I'm posting the codes just in case:

NeuroNet_1 is an empty grid-training EA

NeroLite_ma is a two-layer perceptron, actually easily expandable to N layers :)
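For reference, a rough reconstruction of the forward pass implied by the training loop above (OUT2(), D1[] and D2[] appear there; the weight arrays W11[], W12[], W2[], the bias convention D2[0] = 1, the tanh helper and the input routine are my assumptions - see the FillInputs sketch earlier):

double W11[17], W12[17], W2[3], D2[3];  // weights and hidden-layer outputs (names assumed)

double th(double x)                     // tanh via MathExp (MQL4 has no built-in tanh)
{
   double e = MathExp(2.0 * x);
   return((e - 1.0) / (e + 1.0));
}

double OUT2(int i)
{
   FillInputs(i);                       // 17 inputs from bar i (sketched earlier)
   double s1 = 0, s2 = 0, s = 0;
   for(int k = 0; k < 17; k++)
   {
      s1 += W11[k] * D1[k];             // weighted sum, first hidden neuron
      s2 += W12[k] * D1[k];             // weighted sum, second hidden neuron
   }
   D2[0] = 1.0;                         // bias for the output neuron
   D2[1] = th(s1);                      // matches the (1 - D2[]*D2[]) derivatives
   D2[2] = th(s2);                      // used in the training loop above
   for(int j = 0; j < 3; j++) s += W2[j] * D2[j];
   return(th(s));                       // output in (-1, 1), like the target increment
}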

Neutron wrote >>

It's true! Just don't fuss - those are separate things.

As for bars, I use only opening prices - no averaging there. And I'm going to do as the wise Prival advises - switch to ticks. True, I'll have to fuss with the saving mode and data collection. But if it's worth it, why not?

I'm not fussing :) The usefulness of ticks is acknowledged in the literature... It smells of chaos theory. As to whether it's worth it... Is it worth it? And where does Prival advise this?

And what results do you get on opening prices? I've been playing with regression analysis for no particular reason, and it turned out that the high and low are much easier to predict than the opening price... Which is not so strange...

 
YDzh wrote >>

And where does Prival advise this?

... >> Everywhere! Prival is wise!