Zigzag indicator and neural nets - page 9

 

 
Mathemat:
Piligrimm wrote: Now my system retrains every 5 minutes and recalculates the forecasts every minute on the arrival of a new bar. If I had an order of magnitude more RAM and processing power, retraining would be done at every step along with the calculation, and the accuracy of the forecasts would improve significantly.

Retraining every 5 minutes and recalculating the forecasts every minute: isn't that too often? And your desire to increase the frequency of retraining (and calculation) even further to improve prediction accuracy (on every tick, or what?) seems strange to me. I doubt that a genuinely working system would benefit from retraining at a frequency that coincides with the frequency of the incoming data.

P.S. And pf > 25 is not just a dream, it is simply out of the question... Although with a 5:1 ratio of profitable trades to losing ones and TP/SL = 5 it is quite feasible.


The right to doubt is yours. I am only expressing my vision of the market and of how to implement my strategy. If you work in a relatively calm market, then retraining every 5 minutes is enough. For example, I trained the system during one day and then reconnected to the market without retraining, making forecasts with the old settings. Although the error on the test sample ranges from 14% to 28%, the system makes satisfactory forecasts, though there is no guarantee that the forecast for any particular period of interest will be correct.

By striving to retrain before each calculation, I am trying to ensure the stability and accuracy of the system under all conditions, including news releases. While this may seem redundant, my experience of market research suggests it is a prerequisite for an efficient, unsinkable system that always stays one step ahead in all conditions, which is what I intend to implement.
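The schedule described above (retrain on a 5-minute timer, forecast on every new M1 bar) is a walk-forward loop. Here is a minimal Python sketch of just the scheduling; the "model" is a toy moving-average stand-in, and all names are mine, not the actual Matlab code:

```python
# Walk-forward scheme: forecast on every new bar, retrain every
# RETRAIN_EVERY bars.  The "model" here is a trivial moving-average
# stand-in for the neural network; only the scheduling is the point.

RETRAIN_EVERY = 5  # bars between retrainings (5 minutes on M1 data)

def train(history):
    """Toy 'training': remember the mean of the recent window."""
    window = history[-20:]
    return sum(window) / len(window)

def forecast(model, last_bar):
    """Toy forecast: pull the last bar halfway toward the trained mean."""
    return 0.5 * last_bar + 0.5 * model

def run(bars):
    model = train(bars[:1])
    forecasts, retrains = [], 0
    for t in range(1, len(bars)):
        if t % RETRAIN_EVERY == 0:      # timer fires: retrain
            model = train(bars[:t])
            retrains += 1
        forecasts.append(forecast(model, bars[t]))  # on every new bar
    return forecasts, retrains
```

Tightening RETRAIN_EVERY toward 1 reproduces the retrain-at-every-step regime described above as the ideal given more RAM and speed.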

 
Loknar:
Piligrimm:
The entire program is written in Matlab. The part that calculates the forecasts is compiled in Matlab and runs as an executable launched from the indicator, which collects input data on the arrival of a new bar every minute. The part that performs network training and threshold-coefficient optimization works directly in Matlab and runs on a timer every 5 minutes, because the compiled exe-file with network training does not work; I cannot understand the reason, since compilation completes without errors.


Thank you, I will look into building networks in Matlab and linking them to MT4. If you have something of your own design, please send it to loknar@list.ru . I would be very grateful.

To give you an example, a simple but efficient network, I use one of these:

 
Piligrimm:

To give you an example, a simple but effective network, I use one of these:



Thank you for the information

If you need anything for Matlab (I'm downloading 7.5 with a bunch of add-ons) or any associated software for neural networks, I'm ready to cooperate.

 

For anyone interested, I can share the "formula for happiness"!

GP1[iq+1] = 0.3*((-0.610885 *GP1[iq-1]*GP1[iq-1]*GP1[iq-2]-0.0795671 *GP1[iq]*GP1[iq-1]*GP1[iq-1]*GP1[iq-1]*GP1[iq-2]+1.19161 *GP1[iq-1]*GP1[iq-1]-0.422269 
                   *GP1[iq])/(GP1[iq-1]*GP1[iq-1]-0.505662 *GP1[iq]*GP1[iq-1]*GP1[iq-1]-0.415455 *GP1[iq-2]*GP1[iq-2]))+0.7*((-0.610885 *GP1[iq-2]*GP1[iq-2]*GP1[iq-3]
                   -0.0795671*GP1[iq-1]*GP1[iq-2]*GP1[iq-2]*GP1[iq-2]*GP1[iq-3]+1.19161 *GP1[iq-2]*GP1[iq-2]-0.422269 *GP1[iq-1])/(GP1[iq-2]*GP1[iq-2]-0.505662 *GP1[iq-1]
                   *GP1[iq-2]*GP1[iq-2]-0.415455 *GP1[iq-3]*GP1[iq-3]));
 
GP1 is either the break points of the ZigZag, or any other sequence you want to forecast, for example an MA, or simply currency prices;
I have not tested those variants, but I think it will work.
The calculations use variables formed in reverse order relative to the standard MT4 indexing; if you want to apply the formula with
the direct MT4 indexing, replace iq-... with iq+... .
The forecast is not 100%, but it is better than nothing, and it can be used in indicators.

For individual adjustment to your task you can also play with the coefficients 0.3*( and 0.7*( ; their sum should equal one.
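To make the indexing concrete, the formula above can be transcribed directly. This Python sketch uses the same reverse indexing (iq is the newest known point, iq-1 the one before it), with the coefficients copied verbatim from the post; the constant-series check is my own sanity test, not from the thread:

```python
def forecast_next(gp1, iq):
    """Predict GP1[iq+1] from the four known points GP1[iq-3..iq],
    using the polynomial posted above (coefficients copied verbatim)."""
    g = gp1  # shorthand
    term1 = ((-0.610885*g[iq-1]*g[iq-1]*g[iq-2]
              - 0.0795671*g[iq]*g[iq-1]*g[iq-1]*g[iq-1]*g[iq-2]
              + 1.19161*g[iq-1]*g[iq-1] - 0.422269*g[iq])
             / (g[iq-1]*g[iq-1] - 0.505662*g[iq]*g[iq-1]*g[iq-1]
                - 0.415455*g[iq-2]*g[iq-2]))
    term2 = ((-0.610885*g[iq-2]*g[iq-2]*g[iq-3]
              - 0.0795671*g[iq-1]*g[iq-2]*g[iq-2]*g[iq-2]*g[iq-3]
              + 1.19161*g[iq-2]*g[iq-2] - 0.422269*g[iq-1])
             / (g[iq-2]*g[iq-2] - 0.505662*g[iq-1]*g[iq-2]*g[iq-2]
                - 0.415455*g[iq-3]*g[iq-3]))
    return 0.3*term1 + 0.7*term2

# Sanity check: on a constant (already normalised) series the forecast
# should stay near that constant.
print(forecast_next([1.0, 1.0, 1.0, 1.0], iq=3))  # prints a value close to 1.0
```

For the direct MT4 indexing mentioned above, the iq-1, iq-2, iq-3 offsets would become iq+1, iq+2, iq+3.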

 
Piligrimm:

For anyone interested, I can share the "formula for happiness"!

GD[iq] 
Some wild polynomial of random numbers (if GD[iq] is a quote). Perhaps it would make sense to recalculate the constant coefficients (-0.610885, etc.) according to some law. Could you please explain the theory by which this terrible formula was obtained? :-) Or is it the proverbial intelligence of the neural network?
 
Piligrimm:

For anyone interested, I can share the "formula for happiness"!

For individual adjustment to your task you can also play with coefficients: 0.3*( and 0.7*(, in total it should be one.


So, what is iq? If we are talking about a zigzag, is it simply the sequence of its indices? That is, would iq-1 be the previous break point of the zigzag?
 
Loknar:
Piligrimm:

For anyone interested, I can share the "formula for happiness"!

For individual adjustment to your task you can also play with coefficients: 0.3*( and 0.7*(, the sum should be one.


So what is iq ? If we are talking about a zigzag, is it just a sequence of its indices ? I.e. iq-1 is the previous break point of the zigzag ?

Yes, exactly so: iq-1 is the previous break point. I developed this polynomial for my indicator, whose charts are shown above. I have not tested it elsewhere, but I hope it may be useful to someone.

As for the algorithm used to build this polynomial, it is based on finding laws that connect different arguments, in this case arguments that lag relative to the trend being predicted.

The figure shows how this polynomial works for me: the blue line is the trend through the inflection points, the pink line is the same trend passed through the polynomial. The input data is normalised, hence the scale on the chart.
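The thread does not say how the inputs were normalised. A common choice for this kind of preprocessing (purely my assumption, not Piligrimm's documented method) is min-max scaling of the input window, with the forecast mapped back to price units afterwards:

```python
def normalise(xs):
    """Min-max scale a window to [0, 1]; return the scaled data plus the
    (lo, span) parameters needed to map a forecast back to price units."""
    lo, hi = min(xs), max(xs)
    span = hi - lo or 1.0          # guard against a flat window
    return [(x - lo) / span for x in xs], (lo, span)

def denormalise(y, params):
    """Map a value in [0, 1] back to the original price scale."""
    lo, span = params
    return lo + y * span

# Example: scale a window of quotes, then recover one of them.
prices = [1.2034, 1.2051, 1.2040, 1.2066]
scaled, p = normalise(prices)
# denormalise(scaled[i], p) round-trips back to prices[i]
```

This would explain why the pink and blue curves in the figure sit on a unit-like scale rather than at raw quote levels.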

 
Piligrimm wrote: For anyone interested, I can share the "formula for happiness"!
GP1[iq+1] = 
There you are, you reindeer...
 
Prival:
Piligrimm:

For anyone interested, I can share the "formula for happiness"!

GD[iq] 
Some wild polynomial of random numbers (if GD[iq] is a quote). Perhaps it would make sense to recalculate the constant coefficients (-0.610885, etc.) according to some law. Could you please explain the theory by which this terrible formula was obtained? :-) Or is it the proverbial intelligence of the neural network?

The polynomial I showed earlier is not so wild; for comparison, here is a really wild polynomial, which I use in my calculations.

It is written in Matlab; I removed the last two lines to keep it from going into circulation.

GR(i)=0.25*(0.4*(0.55*(0.6*(0.09*(-0.00192393 +GM(i+3)*(-0.1725) +GM(i+6)*(1.17444))+0.28*(-0.00130286 +(-0.000123992 +GM(i+5)*(-0.821849) ...
+GM(i+6)*(1.82199))*(0.302188) +(-0.00145804 +GM(i+4)*(-0.153087) +GM(i+6)*(1.15453))*(0.699112))+0.09*(-0.000577229 +GM(i+3)*(-0.162435)...
+GM(i+6)*(1.16299))+0.09*((0.832328 *GM(i+4)*GM(i+6)-0.119317 *GM(i+6)*GM(i+5)-0.100951 *GM(i+5)-0.0192996 *GM(i+2))/(GM(i+4)-0.361992...
*GM(i+5)-0.0452508 *GM(i+6)))+0.09*((1.00001 *GM(i+6)*GM(i+6)*GM(i+6)*GM(i+6)-1.03818 *GM(i+6)*GM(i+6))/(GM(i+6)*GM(i+6)-1.03817...
*GM(i+6)))+0.09*((1.07271 *GM(i+6)-0.512733 *GM(i+6)+0.684408 *GM(i+4)-0.485238 *GM(i+4)*GM(i+4))/(1-0.240858 *GM(i+5)*GM(i+6))+0.09...
*((1.00137*GM(i+6)*GM(i+6)-0.000473002 *GM(i+4)*GM(i+6)-0.998682 *GM(i+6)*GM(i+6)+6. ...
*GM(i+6)))+0.09*(0.730651 *GM(i+4)*GM(i+4)*GM(i+6)/(GM(i+4)*GM(i+4)-0.269349 *GM(i+5)*GM(i+5)))+0.09*((0.717833 *GM(i+6)*GM(i+4)*GM(i+6)...
-0.11191*GM(i+4)*GM(i+4)*GM(i+4))/(GM(i+6)*GM(i+4)-0.471068 *GM(i+6)*GM(i+5)+0.209781 *GM(i+6)*GM(i+6)-0.132089 *GM(i+3)*GM(i+6)-0.000702832 ...
*GM(i+5))))+0.4*(0.2*(0.6*(-0.00130286 +(-0.000123992 +GM(i+5)*(-0.821849) +GM(i+6)*(1.82199))*(0.302188) +(-0.00145804 +GM(i+4)...
*(-0.153087) +GM(i+6)*(1.15453))*(0.699112))+0.4*((0.717833 *GM(i+6)*GM(i+4)*GM(i+6)-0.11191 *GM(i+4)*GM(i+4))/(GM(i+6)*GM(i+4)...
-0.471068 *GM(i+6)*GM(i+5)+0.209781 *GM(i+6)*GM(i+6)-0.132089 *GM(i+3)*GM(i+6)-0.000702832 *GM(i+5))))+0.25*(-0.000577229 +GM(i+3)*(-0.162435)...
+GM(i+6)*(1.16299))+0.35*((1.00001 *GM(i+6)*GM(i+6)*GM(i+6)-1.03818 *GM(i+6)*GM(i+6))/(GM(i+6)*GM(i+6)*GM(i+6)-1.03817 *GM(i+6))...
+0.2*((1.07271 *GM(i+6)-0.512733 *GM(i+6)+0.684408 *GM(i+4)-0.485238 *GM(i+4)*GM(i+4))/(1-0.240858 *GM(i+5)*GM(i+6)))))+0.45*(0.4*((1.73835 ...
*GM(i+5)*GM(i+5)*GM(i+5)*GM(i+5)*GM(i+5)*GM(i+5)-0.0334794 *GM(i+3)*GM(i+4)*GM(i+5)*GM(i+5)*GM(i+5)*GM(i+5)-0.919558 *GM(i+4)...
*GM(i+5)*GM(i+5)*GM(i+5)*GM(i+5)-0.376192 *GM(i+5)*GM(i+5)*GM(i+5)*GM(i+5)-0.345737)/(GM(i+5)*GM(i+5)*GM(i+5)*GM(i+5)*GM(i+5)-0.0355159...
*GM(i+3)-0.901092 *GM(i+4)))+0.6*((-2.01988 *GM(i+3)*GM(i+3)*GM(i+4)*GM(i+6)+2.90062 *GM(i+3)*GM(i+4)+5.31466 *GM(i+3)*GM(i+3)...
*GM(i+5)-3.01304 *GM(i+3)*GM(i+3)-4.34954 *GM(i+3)*GM(i+5))/(GM(i+3)*GM(i+4)-2.16719))))+0.4*(0.33*((1.00914 *GM(i+4)*GM(i+5)...
*GM(i+5)+0.977507 *GM(i+4)*GM(i+4)*GM(i+5)-1.9751 *GM(i+4)*GM(i+3)*GM(i+5))/(GM(i+4)*GM(i+5)-0.988447*GM(i+3)*GM(i+3))+0.67*((2.51015 ...
*GM(i+6)-0.979174 *GM(i+5)*GM(i+6)-0.642762)/(1-0.111777 *GM(i+5)*GM(i+5)*GM(i+4))))+0.4*(0.9*(0.3*((1.00914 *GM(i+4)*GM(i+5)*GM(i+5)...
+0.977507 *GM(i+4)*GM(i+4)*GM(i+5)-1.9751 *GM(i+4)*GM(i+3)*GM(i+5))/(GM(i+4)*GM(i+5)-0.988447*GM(i+3)*GM(i+3))+0.7*((0.0988538 *GM(i+4)...
*GM(i+6)-0.0240242 *GM(i+4)*GM(i+5)*GM(i+5)+0.0291295 *GM(i+4)*GM(i+4)+0.904081 *GM(i+4)-0.951504 *GM(i+3))/(GM(i+4)-0.943467...
*GM(i+3))))+0.1*((2.01304 *GM(i+5)*GM(i+5)*GM(i+5)-2.02312 *GM(i+4)*GM(i+5)*GM(i+5)+0.0156151 *GM(i+5)*GM(i+5)*GM(i+5)...
/(GM(i+5)*GM(i+5)*GM(i+5)-1.01005 *GM(i+4)*GM(i+5)-1.14951e-005 *GM(i+5)*GM(i+5)+0.0155924 *GM(i+5)*GM(i+5)-7.72653e-007 *GM(i+5)-7. ...
*GM(i+5)*GM(i+5))))+1.8*(0.3*((-0.610885 *GM(i+4)*GM(i+4)*GM(i+5)-0.0795671 *GM(i+3)*GM(i+4)*GM(i+4)*GM(i+5)+1.19161 *GM(i+4)...
*GM(i+4)-0.422269 *GM(i+3))/(GM(i+4)*GM(i+4)-0.505662 *GM(i+3)*GM(i+4)*GM(i+4)-0.415455 *GM(i+5)*GM(i+5))+0.7*((-0.610885 *GM(i+5)*GM(i+5)...
*GM(i+6)-0.0795671*GM(i+4)*GM(i+5)*GM(i+5)*GM(i+6)+1.19161 *GM(i+5)*GM(i+5)-0.422269 *GM(i+4))/(GM(i+5)*GM(i+5)-0.505662 *GM(i+4)...
*GM(i+5)*GM(i+5)-0.415455 *GM(i+6)*GM(i+6))))+0.3*((0.325815 *GM(i+5)*GM(i+5)*GM(i+5)-0.322486 *GM(i+4)*GM(i+4)+0.00437944 *GM(i+5))...