Using artificial intelligence at MTS - page 20

 
usdeur:
solandr wrote:
usdeur:
To continue, write to my e-mail.

Unfortunately, I didn't understand anything in that reply. Could you write something specific about the problem right here on the forum? Otherwise, what is the point of exchanging e-mails?
And where is the e-mail address itself?
 

Question for mathematicians:

Is applying a multivariate normal distribution to the parameters being optimised equivalent in principle to a neural network?

Please explain it clearly.

 
Hello Gentlemen Developers!
After I learned about ANNs and their application to forex, I decided to study the topic (ANNs, that is; I have known forex for a long time), and so I did. I now have a few questions about using ANNs for forex that I have not yet found answers to:
1) In one of the materials I read, it was written that when training an ANN it is possible to "overtrain" the system: an "overtrained" ANN gives correct results only for the situations (patterns) it was trained on, and in all other cases its results are wrong, i.e. the ANN degenerates into a trivial lookup table and loses its ability to generalise. My question: is such a situation possible for an ANN working with forex, and does the chance of it arising depend on the training method (GA, stochastic, error back-propagation) or on the network type (I am going to use a feedforward multilayer model)? How can such a situation be avoided?
2) Suppose I choose a trivial scheme for training the network on history (a) and for working after training (b). (a) I take a moment in history T, counted back from the current moment T=0, and feed the training system the close prices X(T+1), X(T+2), X(T+3), ... X(T+N) (where N = const and X is the instrument's price as a function of T); then I compare the forecast X'(T) that the system made for that moment before training with the real value X(T), and if X(T) != X'(T) I train the system on that situation; then I decrease T by one and repeat the whole cycle until T = 0 (the larger T, the more "ancient" the moment; one "step" of T can be, for example, one day). (b) Once the system is trained, I simply wait for the next "step" (in our case, one day); if the previous forecast turned out wrong, I train the system on it, then compute a new forecast and open a deal based on it, and so on.
The Expert Advisors based on ANNs that I have seen on this resource are guided by the probability that the forecast is correct (correct me if I am wrong), and a deal is opened only if that probability exceeds some constant B set by a human. How is this probability estimated at all, for example in the way those EAs work?
Personally, I do not see how an EA can avoid opening a deal every 24 hours, say (unless the forecast profit is smaller than the symbol's spread). What input data could an Expert Advisor use so that it enters the market NOT strictly periodically?
3) In Ceasar's EA I saw a forgetting constant. I do not understand why it is needed, or how to implement forgetting that depends on the training method. Isn't the ability to "forget" a natural property of an ANN?
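I have not seen Ceasar's source either, but a "forgetting constant" usually means an exponential down-weighting of old data so the model tracks a non-stationary market; it is not automatic in an ANN, whose weights only drift slowly under gradient training. A hedged sketch of the idea on a simple running estimate, where lam is the hypothetical forgetting constant:

```python
def forgetting_mean(samples, lam):
    """Exponentially forgetting average.

    lam in (0, 1) decays old information by a factor of lam at each step,
    so recent samples dominate the estimate. lam close to 1 means the model
    almost never forgets; small lam means it forgets quickly.
    """
    est = samples[0]
    for x in samples[1:]:
        est = lam * est + (1.0 - lam) * x   # old estimate decays, new sample enters
    return est
```

With `lam = 0.5`, a single new observation already shifts the estimate halfway toward the latest sample, which is why such a constant matters on drifting markets.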

PS I would like the opinion of professionals on the ANN topic; if you are too lazy to write, please just throw me a link to a resource that answers each of the items above separately.
PPS I have not read the Expert Advisors' source code, only studied the instructions for their use.
 
Aleksey24:

Question for mathematicians:

Is applying a multivariate normal distribution to the parameters being optimised equivalent in principle to a neural network?

Please explain it clearly.

What a strange question to ask.
Please explain the question.
 
Mak:
Aleksey24:

Question for mathematicians:

Is applying a multivariate normal distribution to the parameters being optimised equivalent in principle to a neural network?

Please explain it clearly.

What a strange question to ask.
Please explain the question.



I think the question means, "Is it worth bothering with neural networks?"
 
I will add more to my question (2). Is this program structure viable? I am not talking about the input data itself, but about my approach to training the ANN, i.e. when to call the training function.
 
1. It is possible; moreover, this will be the situation in most cases.
It does not depend on the training method; it can depend on the network type, but that is unlikely.
How to avoid it: the training sample should be hundreds or thousands of times larger than the number of weight parameters in the network;
then the probability of overtraining will be lower.

The point is simple: a neural network is just a function of a set of inputs and a set of weight parameters.
Training means selecting a set of parameters so that the function produces the desired response at its output.
There are a lot of weight parameters - hundreds or thousands - hence the overtraining of networks in most cases.
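Mak's point about the number of weight parameters is easy to check. For a fully connected feedforward net the count grows quickly with layer sizes; the sketch below, using a made-up 10-20-10-1 architecture, assumes one bias per neuron:

```python
def mlp_weight_count(layers):
    """Trainable parameters (weights + biases) in a fully connected
    feedforward network with the given layer sizes.

    Each layer of n_out neurons fed by n_in inputs contributes
    n_in * n_out weights plus n_out biases, i.e. (n_in + 1) * n_out.
    """
    return sum((n_in + 1) * n_out for n_in, n_out in zip(layers, layers[1:]))


# Even a modest 10-20-10-1 network already has hundreds of parameters:
# (10+1)*20 + (20+1)*10 + (10+1)*1 = 441
n_params = mlp_weight_count([10, 20, 10, 1])
```

Under Mak's "hundreds of times more data than weights" rule this small net would need tens of thousands of training samples; under the "ten times" rule mentioned later in the thread, a few thousand.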
 
IMHO, it's not worth bothering with networks :)

Training a neural network is actually optimising a function with a huge number of parameters (hundreds or thousands).
I don't know what can be done to avoid overtraining in this case;
the only solution I see is to take a training sample of 1-100 million samples.
But even then there is no guarantee...
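To make the "training is optimisation" point concrete, here is the same idea on a one-parameter "network": gradient descent on mean squared error. A real ANN does exactly this, only over thousands of weights at once; the learning rate and data below are illustrative.

```python
def fit_weight(xs, ys, lr=0.1, epochs=100):
    """Fit y ~ w * x by gradient descent on mean squared error.

    This is all 'training' is: iteratively adjusting a parameter to
    minimise a loss function over the sample.
    """
    w = 0.0
    n = len(xs)
    for _ in range(epochs):
        # d/dw of mean((w*x - y)^2) over the sample
        grad = sum(2.0 * (w * x - y) * x for x, y in zip(xs, ys)) / n
        w -= lr * grad
    return w
```

On data generated by y = 2x this converges to w = 2; with thousands of interacting weights the same procedure has many more ways to latch onto noise, which is Mak's overtraining argument.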
 
Mak, you are clearly exaggerating. ANN theory says a training sample ten times larger than the number of weights is enough, not hundreds or thousands of times as you claim. And the criterion for overtraining (curve fitting) is known: the global minimum of the error on the testing section.

Another matter is network architecture. Classification networks are better than interpolation ones.
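The overtraining criterion mentioned above (the global minimum of the error on the testing section) reduces to a one-liner: train for many epochs, record the out-of-sample error after each, and keep the weights from the epoch where that error was lowest. A sketch, assuming the per-epoch test errors have already been collected:

```python
def best_epoch(test_errors):
    """Index of the epoch with the global minimum of out-of-sample error.

    Training past this epoch keeps reducing the training error but starts
    fitting noise, so the weights saved at this epoch are the ones to keep
    (the usual early-stopping rule).
    """
    return min(range(len(test_errors)), key=lambda i: test_errors[i])
```

In practice one would checkpoint the network weights every epoch and restore the checkpoint from `best_epoch(...)` once the test error has clearly turned upward.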