How to form the input values for the NN correctly. - page 29

 
sergeev wrote >>

Paraphrasing: this amounts to a claim that a linearly separable classification can be obtained for the market.

Having read in some clever books that a linear network cannot implement "exclusive or", I took that to mean linear networks are unusable for the market (by the logic that the market is far more complex than a simple "exclusive or" :).

But maybe that's wrong? Maybe Yuri is right, and instead of ploughing through a pile of books on non-linear networks we can just cut everything up with planes?

A linearly separable classification can be obtained for ANY problem.

BUT you then have to find the patterns yourself and form the inputs from them, whereas a non-linear perceptron can find patterns on its own.

As I wrote above: if you get positive results with a linear perceptron, you no longer need it, because the rules have already been found.
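A minimal sketch of the "exclusive or" point: a single linear threshold cannot compute XOR on the raw inputs (x1, x2), but once you hand it the hand-found pattern x1*x2 as an extra input, one plane suffices. The weights below are illustrative, not from the thread.

```cpp
#include <cassert>

// XOR on (x1, x2) alone is not linearly separable. Adding the
// hand-crafted feature x3 = x1*x2 makes it separable: exactly the
// "find the patterns yourself and form the inputs from them" case.
int linearXor(int x1, int x2) {
    double x3 = x1 * x2;                          // pattern found by hand
    double f  = 1.0 * x1 + 1.0 * x2 - 2.0 * x3 - 0.5;
    return f > 0.0 ? 1 : 0;                       // one separating plane
}
```

Note that the "pattern" here had to be supplied by the modeller; the linear unit only draws the plane.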

 
Well, then the linear/non-linear question is closed.
 

Network template. Runs in a separate thread (CWinThread).

This version is not yet finished as intended: there is no thread-interruption control.

The file-operation logic is poorly implemented.

Many of the code comments are incorrect.

15.07.08

Files:
better1.rar  50 kb
 
Reading from the worker thread now goes through a timer (so as not to pull data unnecessarily).
Introduced pointers for reading information from the worker thread (to reduce copying).
Added plotting of the root-mean-square error (to watch how the network "travels" between local minima).
Added a button to stop calculations and save the current weights to a file.
Added proper comments.
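The RMS-error plot can be thought of as tracking a value like this once per training pass (a hypothetical helper; the actual function inside better1.rar is not shown in the thread):

```cpp
#include <cmath>
#include <vector>
#include <cassert>

// Root-mean-square error of the network outputs against the targets.
// Plotting this per epoch shows whether the net is descending or
// stuck circling a local minimum.
double rmsError(const std::vector<double>& out,
                const std::vector<double>& target) {
    double sum = 0.0;
    for (size_t i = 0; i < out.size(); ++i) {
        double d = out[i] - target[i];
        sum += d * d;                 // accumulate squared errors
    }
    return std::sqrt(sum / out.size());
}
```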

16.08.08
Files:
better1_1.rar  53 kb
_hilo_1.mq4  4 kb
_target_2.mq4  2 kb
 
sergeev wrote >>
Reading from the worker thread now goes through a timer (so as not to pull data unnecessarily).
Introduced pointers for reading information from the worker thread (to reduce copying).
Added plotting of the root-mean-square error (to watch how the network "travels" between local minima).
Added a button to stop calculations and save the current weights to a file.
Added proper comments.

16.08.08

Enviable productivity from Alexey Sergeev!

 
Sart wrote >>

Enviable productivity from Alexey Sergeev!

+1 :)

 
1. Made a more or less proper exchange between MetaTrader and VC++ via file headers.
- CreatePattern creates a file of input and output vectors, and also writes the number of patterns and the dimensionality of the inputs/outputs into the header.
- VC++ then reads it and builds the same array internally. After training, it creates a file with the same name but with the extension .wgh, recording the network weights, the thresholds, and, in the header, the network model (number of layers and their dimensions).
- The NeuroInd.mq4 indicator (and NeuroIndP) reads the weights file, builds the analogous model internally, and uses the same algorithm as CreatePattern to walk the bars and feed inputs to the network. The indicator plots the output vector. NeuroIndP reads the same file but shows entry points.

The CreatePattern script and NeuroInd are "related" through the same input-vector algorithms, with the difference that NeuroInd may use a different input-vector dimension (depending on how we decide to train the net). We should try to universalize this algorithm and move it into a separate file (e.g. <InputPatternAlg.mqh>), so that it is the only thing that needs changing in this complex.
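The exchange format from point 1 could be sketched like this. The struct fields and their order are assumptions for illustration; the actual layout written by CreatePattern is not given in the thread.

```cpp
#include <cstdio>
#include <vector>
#include <cassert>

// Hypothetical header layout for the pattern file: the thread only says
// it stores the number of patterns and the input/output dimensionality.
struct PatternHeader {
    int nPatterns;   // number of input/output vector pairs
    int nInputs;     // input vector dimensionality
    int nOutputs;    // output vector dimensionality
};

// Write the header followed by the flat pattern data.
bool writePatterns(const char* path, const PatternHeader& h,
                   const std::vector<double>& data) {
    FILE* f = std::fopen(path, "wb");
    if (!f) return false;
    std::fwrite(&h, sizeof(h), 1, f);
    std::fwrite(data.data(), sizeof(double), data.size(), f);
    std::fclose(f);
    return true;
}

// Read the header first, then size the array from it -- the same order
// the VC++ side would use to rebuild its internal array.
bool readPatterns(const char* path, PatternHeader& h,
                  std::vector<double>& data) {
    FILE* f = std::fopen(path, "rb");
    if (!f) return false;
    if (std::fread(&h, sizeof(h), 1, f) != 1) { std::fclose(f); return false; }
    data.resize((size_t)h.nPatterns * (h.nInputs + h.nOutputs));
    size_t got = std::fread(data.data(), sizeof(double), data.size(), f);
    std::fclose(f);
    return got == data.size();
}
```

The .wgh weights file described above would follow the same pattern, with a model header (layer count and sizes) instead of a pattern count.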
-----------
2. I've got rid of the intermediate CLayers class in the VC++ class structure (I think it was a mistake :). This saves a little memory, and the network-calculation algorithm now matches the calculation algorithm in the indicator.
 

The more I work with the network, the more I realise that what matters is not so much the inputs as the input-output pair. The articles from StatBars on the first page are very instructive in this respect. I also noticed that if the output is continuous rather than binary, the approximation converges faster and probably better (fewer inconsistent and repetitive in-out pairs).

The inputs are basically fine. A square of dashes (e.g. 5 periods of 5 values per period) gives unique, non-repeating inputs.

As for the outputs...

I tested the (Up-Dn)/(Up+Dn) ratio as the output. It converges quickly.

The only drawback is that the ratio gives no idea of the absolute magnitude of Up and Dn, which would be desirable :) It makes no difference whether it is 10/20 or 50/100.

If we instead output plain Up-Dn (to know the magnitude and direction of the price deviation) and compress it with an arctangent, the values saturate.

(I should say right away that I use the arctangent rather than linear compression because I don't want to hunt for maxima and bind to them.)

We can divide (Up-Dn) by a coefficient before compressing; saturation then moves out to large values, which are rare, so there is little repeatability and inconsistency.
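The two output variants compared above, side by side. The divisor k is the saturation-reducing coefficient; its value here is illustrative, not from the thread.

```cpp
#include <cmath>
#include <cassert>

// (Up-Dn)/(Up+Dn): converges quickly but loses magnitude --
// 10/20 and 50/100 produce exactly the same target.
double ratioTarget(double up, double dn) {
    return (up - dn) / (up + dn);
}

// Plain Up-Dn squashed by an arctangent into (-1, 1). Dividing by k
// first pushes saturation out to large, rare values, so magnitude
// information survives for the common cases.
double atanTarget(double up, double dn, double k) {
    const double PI = 3.14159265358979323846;
    return std::atan((up - dn) / k) * 2.0 / PI;
}
```

With the ratio, 10/20 and 50/100 are indistinguishable; the scaled arctangent keeps them apart while still bounding the target.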

Another option is to try separate networks, one for the change in Up and one for Dn.

I wonder if anyone has any other outputs?

 
sergeev wrote >>

The more I work with the network, the more I realise that what matters is not so much the inputs as the input-output pair. The articles from StatBars on the first page are very instructive in this respect. I also noticed that if the output is continuous rather than binary, the approximation converges faster and probably better (fewer inconsistent and repetitive in-out pairs).

The inputs are basically fine. A square of dashes (e.g. 5 periods of 5 values per period) gives unique, non-repeating inputs.

As for the outputs...

I tested the (Up-Dn)/(Up+Dn) ratio as the output. It converges quickly.

The only drawback is that the ratio gives no idea of the absolute magnitude of Up and Dn, which would be desirable :) It makes no difference whether it is 10/20 or 50/100.

If we instead output plain Up-Dn (to know the magnitude and direction of the price deviation) and compress it with an arctangent, the values saturate.

(I should say right away that I use the arctangent rather than linear compression because I don't want to hunt for maxima and bind to them.)

We can divide (Up-Dn) by a coefficient before compressing; saturation then moves out to large values, which are rare, so there is little repeatability and inconsistency.

Another option is to try separate networks, one for the change in Up and one for Dn.

I wonder if anyone has any other outputs?

Do you feed some kind of input vector at each bar and require an output at each bar?

 
This version of the MPS is a little better, but it's still not what's needed: a Short is followed by a Long, and vice versa.
Files:
mps.zip  7 kb