Using Neural Networks in Trading - page 12

 
registred >> :

... What, no one has ever solved the problem of neural networks in forex?

Why no one?

Those who have solved it keep quiet :)

1.

2.

And the rest have to work with protein neurons :(

 
goldtrader >> :

Why no one?

Those who have solved it keep quiet :)

1.

2.

And the rest have to work with protein neurons :(

So what you are discussing here turns out to be useless, then... I personally have not used neural networks in forex; I have worked with them on other problems. I would like to try them for trading, so to speak, but I don't have the time yet, which is why I can't say anything specifically about forex. Training a network is a very complicated business. A network can end up over-trained or under-trained, i.e. it is often very hard to obtain good generalisation: you have to run a lot of experiments, increasing the number of neurons in the hidden layer and the sample size, watching how the network gets trapped in a shallow local minimum and trying to pull it out of there. And after all that it may turn out that nothing comes of it. In short, there really are a lot of difficulties with them.

 
registred >> :

So what you are discussing here turns out to be useless, then...

In my post above, under numbers 1 and 2, there are links that you didn't follow, judging by your response.

The neural network advisors are trading there.

NS is not the easiest tool to make profits in the financial markets, but it works well in the right hands.

 
goldtrader >> :

In my post above, under numbers 1 and 2, there are links that you didn't follow, judging by your response.

That's where the neural network advisors are trading.

NS is not the easiest tool to make profits in the financial markets, but it works well in skilful hands.

I have been there; I often hang out on euroflood. I have my own prediction systems, and they are not based on neural networks. Neural networks are really just an interesting topic for me. I know roughly what I would require from a network; it's just that, as I said, I don't have time for all that programming yet. Besides, I'm happy with my own system so far. The difficulty is in setting up the whole neural network; as I said, it takes too long to train. I would use something other than gradient-based optimisation.

 
registred wrote >>

I would use something other than gradient-based optimisation.

Which one, if it is not a secret?

 
registred wrote >> Training a network is a very complicated business. A network can end up over-trained or under-trained, i.e. it is often very hard to obtain good generalisation: you have to run a lot of experiments, increasing the number of neurons in the hidden layer and the sample size, watching how the network gets trapped in a shallow local minimum and trying to pull it out of there. And after all that it may turn out that nothing comes of it. In short, there really are a lot of difficulties with them.

This is practically the fundamental problem of neural networks. I tried to raise it here, but as it turns out, not many people are interested; people are more interested in the architecture and sophistication of neural networks, although that question was solved long ago and, as it turns out, there is no point in chasing it. Increasing the number of neurons in the hidden layer requires a larger sample, and a larger sample leads to undertraining, because the longer history contains too many rules that the network cannot grasp and learn. As a result the network gets stuck in some local minimum from which it cannot escape, and ends up either overtrained or undertrained, more likely overtrained. So increasing the number of neurons ultimately hurts the network's performance in the future.

 
LeoV wrote >>

This is practically the fundamental problem of neural networks. I tried to raise it here, but as it turns out, not many people are interested; people are more interested in the architecture and sophistication of neural networks, although that question was solved long ago and, as it turns out, there is no point in chasing it. Increasing the number of neurons in the hidden layer requires a larger sample, and a larger sample leads to undertraining, because the longer history contains too many rules that the network cannot grasp and learn. As a result the network gets stuck in some local minimum from which it cannot escape, and ends up either overtrained or undertrained, more likely overtrained. So increasing the number of neurons ultimately hurts the network's performance in the future.

As an experienced practitioner, have you arrived at any limits? In your opinion, what are the optimal training set size, network structure and number of inputs?

 
StatBars >> :

Which one, if it is not a secret?

If we are talking about neural networks, then kernel-approximation networks are better; they learn quickly.
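
For what it's worth, here is a minimal Python sketch of what such a network could look like, assuming that by "kernel approximation" an RBF-style network is meant: hidden units are Gaussian kernels around fixed centres, and only the linear output weights are fitted, in a single least-squares solve, which is why training is fast. The data, centre placement and kernel width below are arbitrary illustrations, not anyone's actual setup.

import numpy as np

def rbf_features(x, centres, width):
    # Gaussian kernel activations, shape (len(x), len(centres)).
    d2 = (x[:, None] - centres[None, :]) ** 2
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_rbf(x, y, n_centres=20, width=0.5, ridge=1e-6):
    # Spread centres over the input range and solve for the output
    # weights in closed form (ridge-regularised least squares).
    centres = np.linspace(x.min(), x.max(), n_centres)
    phi = rbf_features(x, centres, width)
    w = np.linalg.solve(phi.T @ phi + ridge * np.eye(n_centres), phi.T @ y)
    return centres, width, w

def predict_rbf(x, centres, width, w):
    return rbf_features(x, centres, width) @ w

# Toy usage: approximate a noisy one-dimensional dependence y = F(x).
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 200)
y = np.sin(x) + 0.1 * rng.normal(size=x.size)
centres, width, w = fit_rbf(x, y)
print("in-sample MSE:", np.mean((predict_rbf(x, centres, width, w) - y) ** 2))

Because the only trainable parameters enter linearly, there is no gradient descent and no risk of getting stuck in a local minimum of the output weights, which is presumably what "learn quickly" refers to.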

 
LeoV >> :

This is practically the fundamental problem of neural networks. I tried to raise it here, but as it turns out, not many people are interested; people are more interested in the architecture and sophistication of neural networks, although that question was solved long ago and, as it turns out, there is no point in chasing it. Increasing the number of neurons in the hidden layer requires a larger sample, and a larger sample leads to undertraining, because the longer history contains too many rules that the network cannot grasp and learn. As a result the network gets stuck in some local minimum from which it cannot escape, and ends up either overtrained or undertrained, more likely overtrained. So increasing the number of neurons ultimately hurts the network's performance in the future.

The network almost always finds a local minimum, and it is usually deep enough and the minimum necessary to solve the given task. As for the hidden layer, everything depends on the dimensionality of the input parameters, which essentially represents the complexity of the problem being solved. That is, there may not be enough neurons in the hidden layer, or there may not be enough examples for a given input dimensionality. In short, you have to run tests, gradually increasing the number of neurons in the hidden layer starting from one, until the required generalisation error is reached.
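
To make that procedure concrete, here is a minimal Python sketch (scikit-learn's MLPRegressor on synthetic data): the hidden layer is grown one neuron at a time and the loop stops once the error on a held-out sample falls below a chosen target. The inputs, sample sizes and target error are made up purely for illustration.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(600, 3))          # e.g. three indicator readings
y = np.sin(3 * X[:, 0]) + X[:, 1] * X[:, 2] + 0.05 * rng.normal(size=600)

X_train, y_train = X[:400], y[:400]            # training sample
X_val, y_val = X[400:], y[400:]                # held-out sample for generalisation error

target_error = 0.02                            # required generalisation error (MSE), arbitrary
for n_hidden in range(1, 31):                  # start from 1 neuron and grow
    net = MLPRegressor(hidden_layer_sizes=(n_hidden,), max_iter=2000, random_state=0)
    net.fit(X_train, y_train)
    val_mse = np.mean((net.predict(X_val) - y_val) ** 2)
    print(f"hidden neurons: {n_hidden:2d}   validation MSE: {val_mse:.4f}")
    if val_mse <= target_error:                # good enough generalisation reached
        break

The key point is that the stopping criterion is measured on data the network never trained on; the training error alone would keep improving as neurons are added, which is exactly the overtraining trap discussed above.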

 

Suppose we have some single-parameter dependence y(x) = F(x), where the general form of F is unknown to us, and it is this dependence that generates the price series, or rather, the dependence of the expected price increment on the reading of some indicator of ours. In this situation we can assume, for example, that the dependence is linear and, knowing several previous values of the price increments y[i] and indicator readings x[i], easily solve the problem of finding the optimal (in the sense of least deviation) linear approximation of the unknown law F by a first-degree polynomial y(x) = a*x + b. The coefficients a and b are then found by the least-squares method and are equal to: a = (N*Sum(x[i]*y[i]) - Sum(x[i])*Sum(y[i])) / (N*Sum(x[i]^2) - (Sum(x[i]))^2) and b = (Sum(y[i]) - a*Sum(x[i])) / N, where N is the number of points used.
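
As a quick illustration of this least-squares fit, here is a short Python sketch on made-up indicator readings and price increments (the numbers are arbitrary):

import numpy as np

x = np.array([0.20, -0.10, 0.40, 0.05, -0.30, 0.25])   # indicator readings x[i]
y = np.array([0.80, -0.20, 1.10, 0.30, -0.90, 0.70])   # price increments y[i]

N = len(x)
a = (N * np.sum(x * y) - np.sum(x) * np.sum(y)) / (N * np.sum(x ** 2) - np.sum(x) ** 2)
b = (np.sum(y) - a * np.sum(x)) / N
print(f"a = {a:.4f}, b = {b:.4f}")

# Cross-check with numpy's built-in first-degree polynomial fit (returns [a, b]).
print(np.polyfit(x, y, 1))

The explicit formulas and numpy's built-in polyfit give the same a and b.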

We can go further and approximate the unknown dependence (law) by a second-degree polynomial y(x) = a2*x^2 + a1*x + a0, or even an n-th degree one! But all this is for a function of one variable, i.e. in our case one indicator... If we want to use two indicators, obtaining an analytic solution for approximating the input data by a plane (a function of two variables) is already harder, and as the degree of the polynomial grows we can no longer find an analytic expression for the n-th order surface closest to F(x1,x2). This problem, however, is easily solved by a NS with two inputs x1, x2, one hidden layer and enough neurons in it. Then we increase the number of inputs to 10-20 and get a hyper-surface of arbitrary order in a 10-20-dimensional feature space: it's a dream!
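
A small Python sketch of that contrast, using a synthetic F(x1, x2) in place of real indicator data: a best-fit plane (the first-degree solution in two variables) against a NS with two inputs and a single hidden layer (scikit-learn's MLPRegressor). The function and parameters are assumptions chosen only to illustrate the point.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, size=(1000, 2))                 # two indicator readings x1, x2
y = np.sin(X[:, 0]) * np.cos(X[:, 1])                  # the "unknown" law F(x1, x2)

# Best-fit plane y = c0 + c1*x1 + c2*x2 by least squares.
A = np.column_stack([np.ones(len(X)), X])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
plane_mse = np.mean((A @ coeffs - y) ** 2)

# NS with two inputs and a single hidden layer of 30 neurons.
net = MLPRegressor(hidden_layer_sizes=(30,), max_iter=3000, random_state=0)
net.fit(X, y)
net_mse = np.mean((net.predict(X) - y) ** 2)

print(f"plane MSE: {plane_mse:.4f}   one-hidden-layer NS MSE: {net_mse:.4f}")

The plane cannot follow the curvature of F(x1, x2) at all, while the single hidden layer approximates it to a much smaller error, which is the approximation property the paragraph above is describing.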

In fact, our perception of the world around us is built on the same principle: we unconsciously construct hyper-surfaces in our heads that reflect reality, our experience, in an optimal way. Each point on such an imaginary surface is a decision for one or another life situation, not always exact, but almost always close to optimal...

Right, I'm getting a bit carried away. In short, it is difficult, if not impossible, to come up with anything better than a neural network for price analysis, unless you have insider information.