Finding a set of indicators to feed into the neural network inputs. Discussion. A tool for evaluating the results.

 
joo wrote >>

Why can't they be seen? The differences are visible. There is no oversaturation of neurons when the right search range is chosen. "You just don't know how to cook them."(c) :)

For different complexity of tasks there will be different optimality of tools, as you have correctly noticed (screwdrivers).

The subject of range selection is very debatable. And GA takes very long, much longer than ORO (backpropagation) - another drawback.

It would be interesting to know your technique for selecting the weight range, or what you are guided by.

joo wrote >>

There will be different optimum tools for different complexity, as you correctly point out (screwdrivers).

GA is not for NSs. Neural networks have their own training (optimization) methods. If only because you cannot use CV (cross-validation) the way you can with ORO.

 
rip >> :

Correct, the genetic algorithm does not use the error function to adjust the weights.

As far as I understand, you could mark up the m5 by the maximum profit that can be on the history and use this mark-up as a fitness function.

What exactly does the function you use to evaluate an individual look like?

>> like this :)

 public double evaluate(IChromosome a_subject) {
        this.NNWeights = new double[a_subject.size()]; // create the weight vector for the neural network
        for (int i = 0; i < this.NNWeights.length; i++)
        {
            DoubleGene g = (DoubleGene) a_subject.getGene(i);
            this.NNWeights[i] = g.doubleValue(); // fill the weight vector
        }
        net.SetWeights(this.NNWeights); // load the weights into the network
        Trade trade = new Trade();
        for (int i = 0; i < this.csv.CSVInputs.length; i++)
        {
            trade.TradeCurrentSignal(net.ComputeForOneNeuronOutLayer(this.csv.CSVInputs[i]), this.csv.CSVPrice[i]);
        }

        return 1000000 + trade.Profit; // the objective function must be > 0
    }
 
rip >> :

It is a question of the test-sample error. That is, you take the next month after the training sample, mark it up according to your algorithm, feed the inputs to the trained network, and compare the results. It is precisely the graph of these errors that we are interested in.


You can also get the error graph on the training sample and estimate how your network is learning (or how the generations are developing in the genetic algorithm).

When training, I train for as long as the target function keeps increasing, or until I get bored (run out of time). I guard against overtraining indirectly: the training sample is much larger than the number of weights in the neural network. Then I export to .mq4 (there may be an error here... I am testing... so far everything works correctly) and watch in the MT4 strategy tester what comes out.
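The rule of thumb above (training sample much larger than the number of weights) is easy to make concrete. A minimal Java sketch for counting the trainable parameters of a fully connected network; the layer sizes and class name are illustrative, not taken from iliarr's code:

```java
public class ParamCount {
    // weights + biases for a fully connected feed-forward net
    static int paramCount(int[] layers) {
        int n = 0;
        for (int l = 1; l < layers.length; l++)
            n += layers[l - 1] * layers[l] // weights into layer l
               + layers[l];                // biases of layer l
        return n;
    }

    public static void main(String[] args) {
        // e.g. 20 inputs, one hidden layer of 10, one output
        System.out.println(paramCount(new int[]{20, 10, 1})); // → 221
    }
}
```

For a 20-10-1 network this gives 221 parameters, so by the rule above the training sample should contain well over 221 patterns.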

"Comparing the results" is the part of your idea I do not understand. What is the point of deliberately retraining the network and comparing its result with the result of a network trained on previous periods?

 
rip >> :

Show the result when you compare all the sets you feed to the inputs :) I think they will all be highly correlated. All the indicators listed use the same input data for their calculations.

I am not comparing all the sets... Going through even 200 sets of inputs (in fact there are more) takes a long time, and it makes little sense anyway, since you might still miss something.

 
iliarr >> :

When training, I train for as long as the target function keeps increasing, or until I get bored (run out of time). I guard against overtraining indirectly: the training sample is much larger than the number of weights in the neural network. Then I export to .mq4 (there may be an error here... I am testing) and watch in the MT4 strategy tester what comes out.

"Comparing the results" is the part of your idea I do not understand. What is the point of deliberately retraining the network and comparing its result with the result of a network trained on previous periods?


Why retrain?! There is a sample on which you train the network. Now simulate its operation: feed the network a sample it is not familiar with, and compare the result obtained from the network with the result expected for the test sample.

 
rip >> :

Why retrain?! There is a sample on which you train the network. Now simulate its operation: feed the network a sample it is not familiar with, and compare the result obtained from the network with the result expected for the test sample.

What you are suggesting will measure the predictive ability of the trained network, not the quality of the training... and the predictive ability of a network depends not only on the quality of the training, but also on the structure of the network, on how you interpret its output, and on the information you feed into it.

 
iliarr >> :

But in Forex the result of what you suggest measures the predictive ability of the trained network, not the quality of the training... while the predictive ability of a network depends not only on the quality of the training, but also on the structure of the network, on how you interpret its output and on the information you feed into the network.

Ahem... What does this have to do with predictive ability? You have a network with an assumed interpretation of its responses.

Judging by your code, the Trade() class simulates the trading process in one way or another: opening a position, holding a position, closing a position.

Based on that, you decide how suitable a given individual is for you. So you have already laid down some rule for interpreting the outputs.

 
iliarr >> :

I can't compare all the sets... Going through even 200 sets of inputs (in fact there are more) takes a long time, and it makes little sense anyway, since you might still miss something.

Hm... The idea of a training sample: (Next price predictor using Neural Network)


ntr - number of training patterns

lb - lastBar


// Fill the input arrays with data; in this example nout=1
for(i=ntr-1; i>=0; i--)
{
   outTarget[i]=(Open[lb+ntr-1-i]/Open[lb+ntr-i]-1.0);
   int fd2=0;
   int fd1=1;
   for(j=nin-1; j>=0; j--)
   {
      int fd=fd1+fd2; // Fibonacci lookbacks: 1,2,3,5,8,13,21,34,55,89,144...
      fd2=fd1;
      fd1=fd;
      inpTrain[i*nin+j]=Open[lb+ntr-i]/Open[lb+ntr-i+fd]-1.0;
   }
}
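The loop above builds each input vector from relative price changes taken at Fibonacci-spaced lookbacks (1, 2, 3, 5, 8, ... bars back). A self-contained Java sketch of the same idea, assuming an array indexed like MQL4's Open[] (larger index = older bar); the class and array names are illustrative:

```java
public class FibInputs {
    // Build nin inputs for one pattern: relative price changes at
    // Fibonacci lookbacks 1, 2, 3, 5, 8, ... bars back from 'bar'.
    static double[] buildInputs(double[] open, int bar, int nin) {
        double[] in = new double[nin];
        int fd1 = 1, fd2 = 0;
        for (int j = nin - 1; j >= 0; j--) {
            int fd = fd1 + fd2; // next Fibonacci lookback
            fd2 = fd1;
            fd1 = fd;
            in[j] = open[bar] / open[bar + fd] - 1.0; // older bar has larger index
        }
        return in;
    }

    public static void main(String[] args) {
        double[] open = new double[300];
        for (int i = 0; i < open.length; i++) open[i] = 100.0 + 0.1 * i; // synthetic series
        double[] in = buildInputs(open, 0, 5);
        System.out.println(in.length); // → 5
    }
}
```

With nin=5 the lookbacks come out as 1, 2, 3, 5 and 8 bars, matching the Fibonacci comment in the original code.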

 
rip >> :

Hm... The idea of a training sample: (Next price predictor using Neural Network)


ntr - number of training patterns

lb - lastBar


// Fill the input arrays with data; in this example nout=1
for(i=ntr-1; i>=0; i--)
{
   outTarget[i]=(Open[lb+ntr-1-i]/Open[lb+ntr-i]-1.0);
   int fd2=0;
   int fd1=1;
   for(j=nin-1; j>=0; j--)
   {
      int fd=fd1+fd2; // Fibonacci lookbacks: 1,2,3,5,8,13,21,34,55,89,144...
      fd2=fd1;
      fd1=fd;
      inpTrain[i*nin+j]=Open[lb+ntr-i]/Open[lb+ntr-i+fd]-1.0;
   }
}

Thank you. I'll have a look.

 

IlyaA wrote >>

The public needs to see a graphical dependence of the learning error on time (number of epochs).

->

iliarr wrote >>
We must be talking about different things... I am not doing supervised training (a learning error exists only in that kind of training)... I am training towards the maximum of the target function, and I do not know what the maximum possible value of the target function is.

->

rip wrote >>

We are talking about the test-sample error. That is, you take the next month after the training sample, mark it up according to your algorithm, feed the inputs to the trained network, and compare the results. That is the error graph we are interested in.

You can also get the error graph on the training sample and estimate how your network is learning (or how the generations are developing in the genetic algorithm).

rip and IlyaA do not seem to understand that iliarr is training without a teacher (unsupervised). What learning error can we talk about when the target function is profit? Or do you both think that, having trained the network on history, you will run it on a test history and compare the profit obtained? The profit will be different - less or more, but different: the test history is different. Please do not confuse this with approximation, where the quality criterion is the standard deviation between the original function and the one obtained.
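The distinction can be made concrete: a supervised criterion measures deviation from known targets, while a profit fitness only scores the trading outcome, so two networks with very different output values can earn identical profit. A toy Java illustration (the signal and price series are made up, and the one-bar trading rule is an assumption, not iliarr's Trade class):

```java
public class Criteria {
    // Supervised view: mean squared error against known targets.
    static double mse(double[] pred, double[] target) {
        double s = 0;
        for (int i = 0; i < pred.length; i++) {
            double d = pred[i] - target[i];
            s += d * d;
        }
        return s / pred.length;
    }

    // Fitness view: hold in the signal's direction for one bar,
    // collecting sign(signal) times the next price change.
    static double profit(double[] signal, double[] price) {
        double p = 0;
        for (int i = 0; i + 1 < signal.length; i++)
            p += Math.signum(signal[i]) * (price[i + 1] - price[i]);
        return p;
    }

    public static void main(String[] args) {
        double[] price  = {1.0, 1.2, 1.1, 1.4};
        double[] strong = {0.9, -0.9, 0.9, 0};
        double[] weak   = {0.1, -0.1, 0.1, 0};
        // Same trade directions, same profit - very different outputs.
        System.out.printf("%.2f%n", profit(strong, price)); // prints 0.60
        System.out.printf("%.2f%n", profit(weak, price));   // prints 0.60
    }
}
```

Both signal vectors trade in the same direction on every bar, so their profit is identical even though their raw outputs, and hence any supervised error against a target, differ.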


StatBars wrote >>

When a neuron becomes oversaturated, ORO practically stops "training" it, while GA can easily oversaturate a neuron and keep increasing its weights further.
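The oversaturation point can be checked numerically: for a sigmoid-family activation, the gradient that ORO relies on vanishes once the weighted input is large, while GA feels no such pressure and can keep pushing the weights up. A small Java illustration using tanh (the choice of activation is an assumption):

```java
public class Saturation {
    // Derivative of tanh: 1 - tanh(x)^2. This factor scales every
    // backprop (ORO) weight update for a tanh neuron.
    static double tanhDeriv(double x) {
        double t = Math.tanh(x);
        return 1.0 - t * t;
    }

    public static void main(String[] args) {
        System.out.println(tanhDeriv(0.0));  // 1.0: full gradient
        System.out.println(tanhDeriv(3.0));  // ~0.0099: almost no gradient
        System.out.println(tanhDeriv(10.0)); // ~8e-9: the neuron is saturated
    }
}
```

Once the weighted input reaches ~10, ORO's updates are scaled by roughly 1e-8 and the neuron effectively stops learning, while a GA mutation can still move that weight anywhere in its search range.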

->

StatBars wrote >>

The subject of range selection is very debatable. And GA takes very long, much longer than ORO - another drawback.

->

StatBars wrote >>

GA is not for NSs. Neural networks have their own training (optimization) methods. If only because you cannot use CV (cross-validation) the way you can with ORO.

I do not understand why such a categorical attitude.

Here are some links that came up in a search and say the opposite:

http://alglib.sources.ru/dataanalysis/neuralnetworks.php

http://network-journal.mpei.ac.ru/cgi-bin/main.pl?l=ru&n=13&pa=10&ar=3

http://masters.donntu.edu.ua/2004/fvti/solomka/library/article2.htm

http://www.neuropro.ru/memo314.shtml


I did not and do not claim that GA is the only correct optimization solution for all problems. It is simply more versatile than any other solution, often faster, and easily adapted to almost any task. Another, legitimate, question is what minimum number of fitness-function runs is needed to find the optimal solution. That is what we should be talking about, not the speed of the optimization algorithms themselves. And here GA will outrun most other algorithms.
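Counting fitness-function calls, as suggested above, is straightforward to instrument. A minimal real-coded GA sketch over a one-dimensional test function (the population size, mutation scheme, and test function are all illustrative choices, not a benchmark):

```java
import java.util.Arrays;
import java.util.Random;

public class MiniGA {
    static int fitnessCalls = 0;

    // Test function to maximize: single peak of 1.0 at x = 2.0.
    static double fitness(double x) {
        fitnessCalls++;
        return 1.0 / (1.0 + (x - 2.0) * (x - 2.0));
    }

    // Tiny GA: keep the top half as elites, refill with Gaussian mutants.
    static double run(long seed, int pop, int gens) {
        Random rnd = new Random(seed);
        double[] xs = new double[pop];
        for (int i = 0; i < pop; i++) xs[i] = -10 + 20 * rnd.nextDouble();
        for (int gen = 0; gen < gens; gen++) {
            double[][] scored = new double[pop][];
            for (int i = 0; i < pop; i++) scored[i] = new double[]{fitness(xs[i]), xs[i]};
            Arrays.sort(scored, (a, b) -> Double.compare(b[0], a[0])); // best first
            for (int i = 0; i < pop / 2; i++) xs[i] = scored[i][1];    // keep elites
            for (int i = pop / 2; i < pop; i++)                        // mutate copies
                xs[i] = xs[i - pop / 2] + 0.5 * rnd.nextGaussian();
        }
        return xs[0]; // best individual found
    }

    public static void main(String[] args) {
        double best = run(42, 20, 50);
        System.out.printf("best x ~ %.2f, fitness calls: %d%n", best, fitnessCalls);
    }
}
```

With a population of 20 and 50 generations this costs exactly 1000 fitness evaluations; comparing optimizers by that count, rather than by wall-clock time, is the comparison joo proposes.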

I can open a separate thread, if anyone is interested, where I can post interesting test functions for optimization algorithms and the results of various algorithms. I think it will be very useful not only for those who deal with NN, but for anyone who wants to get a result of their efforts optimal in many ways.

StatBars wrote >>

It would be interesting to hear your technique for selecting the weight range, or what you are guided by.

In Maple or Mathcad I look at the range of weights in which the activation function actually works. If the working domain of some function is [-1;1], what is the point of "searching" in a range of variables like (-100;100)?
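The point about search ranges can be illustrated directly: with inputs on the order of 1, a tanh neuron's output is already indistinguishable from ±1 once the weight passes a few units, so almost all of a (-100;100) search range is flat, saturated space. A sketch in Java (the single-input neuron is a simplification):

```java
public class WeightRange {
    // Output of a single tanh neuron with its one input fixed at 1.0.
    static double neuronOut(double w) {
        return Math.tanh(w * 1.0);
    }

    public static void main(String[] args) {
        // Past |w| of a few units the output no longer changes:
        // a GA searching (-100;100) mostly samples identical individuals.
        for (double w : new double[]{0.5, 2.0, 10.0, 100.0})
            System.out.printf("w=%6.1f  out=%.6f%n", w, neuronOut(w));
    }
}
```

Outputs for w=10 and w=100 agree to six decimal places, which is exactly why joo restricts the search to the range where the activation function still responds.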