Unfortunately, I didn't understand anything from the reply. Could you write something specific about the problem here on the forum? Otherwise, what's the point of exchanging e-mails?
Question for mathematicians:
Is the idea of applying a multivariate normal distribution to the parameters being optimised equivalent to the principle behind neural networks?
Please explain it clearly.
After I found out about ANNs and their application in forex, I wanted to study the topic (ANNs, that is; I have known about forex for a long time), so I did. I have a few questions about using ANNs for Forex that I have not yet found answers to:
1) In one of the materials I read, it was written that when training an ANN it is possible to "overtrain the system": an "overtrained" ANN gives correct results only in the situations (patterns) it was trained on, and in all other cases its results are wrong, i.e. the ANN degenerates into a trivial lookup table and loses its ability to generalize. My questions: is such a situation possible for an ANN working with FOREX, does the chance of it arising depend on the training method (GA, stochastic, backpropagation of error) or on the network type (I am going to use a feed-forward multilayer model), and how can it be avoided?
2) Suppose I choose a trivial scheme: train the network on history (a), then work after training (b). (a) I take a moment in history T relative to the current moment T=0 and feed the training system the close prices X(T+1), X(T+2), X(T+3), ..., X(T+N) (where N = const and X is the price of the instrument as a function of T); I compare the forecast X'(T) that the system made for that moment with the real value X(T), and if X(T) != X'(T) I teach the system that situation. Then I decrease T by one and repeat the whole cycle while T > 0 (the larger T, the more "ancient" the moment; one "step" of T can be, for example, one day). (b) Once the system is trained, I simply wait for the next "step" (in our case, one day); if the previous forecast was wrong, I teach the system, then I calculate a new forecast and open a deal based on it, and so on.
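The walk-forward scheme in (a) and (b) can be sketched roughly as follows. This is a minimal illustration, not a real trading system: the "network" is a plain weighted sum, and all names (`predict`, `train_step`, the synthetic price series) are hypothetical stand-ins for whatever ANN library is actually used.

```python
# Sketch of the walk-forward training scheme described above.
# All names are hypothetical; the "network" is a trivial linear model.
import random

random.seed(0)

def predict(weights, window):
    # Trivial linear "network": weighted sum of the last N closes.
    return sum(w * x for w, x in zip(weights, window))

def train_step(weights, window, target, lr=1e-5):
    # One gradient step on the squared error (target - prediction)^2.
    err = target - predict(weights, window)
    return [w + lr * err * x for w, x in zip(weights, window)]

N = 5                                                 # input window length
prices = [random.gauss(100, 1) for _ in range(200)]   # stand-in close prices
weights = [0.0] * N

# (a) walk forward through history: at each T feed X(T+1)..X(T+N),
#     compare the forecast with the real X(T), and learn from the error.
for t in range(len(prices) - N - 1, -1, -1):
    window = prices[t + 1 : t + 1 + N]
    weights = train_step(weights, window, prices[t])

# (b) in live use: wait one "step", retrain on the last outcome if the
#     forecast was wrong, then forecast the next value from the latest window.
forecast = predict(weights, prices[0:N])
```

The loop runs T from the oldest moment down to T=0, exactly as described; a real implementation would replace the linear model with the chosen ANN and its training routine.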
The Expert Advisors based on ANNs that I have seen on this resource are guided by the probability that the forecast is correct (correct me if I am wrong): if this probability exceeds a certain constant B set by the user, a deal is opened. How is that probability evaluated in general, for example in the way such an EA works?
Personally, I do not see how an EA can avoid opening deals every 24 hours, for example (unless the forecast profit is lower than the symbol's spread). What input data can an Expert Advisor use to enter the market NOT strictly periodically?
3) In Ceasar's EA I saw a forgetting constant. I do not understand why it is needed, or how to implement forgetting that depends on the learning method. Isn't the ability to "forget" a natural property of an ANN?
PPS I need the opinion of professionals on the topic of ANNs; if you are too lazy to write, please just throw me a link to a resource (or resources) answering each of the items in this thread separately.
PPPS I have not read the source code; I have only studied the instructions for using it.
Please explain the question.
I think the question means, "Is it worth bothering with neural networks?"
It does not depend on the training method; it can depend on the network type, but that is unlikely.
How to avoid it: the training sample should be hundreds or thousands of times larger than the number of weight parameters in the network; then the probability of overtraining will be lower.
The point is simple: a neural network is just a function of a set of inputs and a set of weight parameters.
Learning means selecting a set of parameters so as to get the desired response at the function's output.
There are a lot of weight parameters, hundreds or thousands, hence the overtraining of networks in most cases.
Training a neural network is actually optimising a function with a huge number of parameters (hundreds or thousands).
I don't know what to do to avoid overtraining in this case;
the only solution is to take a training sample of 1-100 million samples.
But even then there is no guarantee...
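The "a network is just a function of inputs and weights, and learning is optimising the weights" view above can be shown with the smallest possible example: one weight, one training sample, plain gradient descent on the squared error. This is a toy illustration only, not how a real network is trained.

```python
# Toy illustration: a "network" is just a function f(x, w) of inputs and
# weights, and learning is optimising w so the output matches the target.
def f(x, w):
    return w * x          # the simplest possible "network": one weight

x, target = 2.0, 6.0      # one training example: we want f(2, w) == 6
w, lr = 0.0, 0.1

for _ in range(100):      # gradient descent on the loss (f(x, w) - target)^2
    grad = 2 * (f(x, w) - target) * x
    w -= lr * grad

# w converges to 3.0, so f(x, w) matches the target
```

A real network is the same picture with hundreds or thousands of weights instead of one, which is exactly why the optimisation overfits so easily.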
Another thing is network architecture: classifying networks work better than interpolating ones.