Using artificial intelligence at MTS - page 21

 
Mak:
IMHO, it's not worth fussing over networks :)

Training a NN is really just optimizing a function with a huge number of parameters (hundreds or thousands).
I don't know what can be done to avoid overtraining in this case;
the only solution is to take a training sample of 1-100 million samples.
But there is no guarantee...
So you understand "overtraining" as "undertraining"? They seem to me to be antonyms =) Please clarify.
 
Classifying networks are better than interpolating ones.
I haven't read about that. Please explain the difference between them and, if you can, give an example of how each one works; generic terms are fine =)
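For illustration, here is a minimal sketch of the usual reading of that distinction: a classifying network outputs a discrete label (say, buy or sell), while an interpolating (regression) network outputs a continuous value (say, a price forecast). The toy weights and sizes below are assumptions for the example, not anything from a real EA:

```
# Toy contrast between a classifying and an interpolating network output.
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(5)                       # one feature vector (5 inputs)

# Classifying net: scores per class, then pick a discrete label.
W, b = rng.standard_normal((2, 5)), np.zeros(2)  # 2 classes: 0 = sell, 1 = buy
label = int(np.argmax(W @ x + b))                # discrete decision

# Interpolating (regression) net: a single continuous output.
w, b0 = rng.standard_normal(5), 0.0
forecast = float(w @ x + b0)                     # continuous value

print(label, forecast)
```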
 
Mak:
Aleksey24:

A question for mathematicians:

Is the idea of applying a multivariate normal distribution to the parameters being optimized equivalent to the principle of neural networks?

Please explain it clearly.

That's a strange question to ask.
Please explain the question.


EXPLAIN:

I now think that one should trade not with specific fitted parameters, but with a spectrum of each parameter in the system.
The easiest way is to run several identical EAs, but with different parameter sets - in different ranges of the parameter spectrum.
Each of these Expert Advisors should be allocated a certain % of the deposit, and together they should add up to the % of the deposit that would be used when trading with only one Expert Advisor (without the spectrum).
Then, with moving averages for example, three Expert Advisors would open three positions: at the beginning of a move, in the middle, and at the end.

I cannot yet decide how to use this idea in one EA for testing.

I asked Posh about this problem but still no answer.

Is the problem of a multivariate normal (Gaussian) distribution the same as the problem of neural networks of the aX+bY+...=Z type (for trading), or am I getting mixed up in my head?
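A minimal sketch of the "spectrum" idea as described above, assuming a toy moving-average rule; the periods, risk numbers, and function names are illustrative, not from any actual EA:

```
# Sketch: several copies of one strategy, each with a different parameter,
# each trading an equal slice of the risk budget of a single "no-spectrum" EA.
import numpy as np

def ma_signal(prices, period):
    """+1 if the last price is above its moving average, -1 otherwise (toy rule)."""
    if len(prices) < period:
        return 0
    return 1 if prices[-1] > np.mean(prices[-period:]) else -1

def spectrum_position(prices, periods=(10, 20, 50), total_risk=0.03):
    """Split one EA's % of deposit equally across the parameter spectrum."""
    slice_risk = total_risk / len(periods)
    return sum(slice_risk * ma_signal(prices, p) for p in periods)

prices = 100 + np.cumsum(np.random.randn(200))  # synthetic price series
print(spectrum_position(prices))                # net exposure in [-0.03, +0.03]
```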
 
Aleksey24:
Mak:
Aleksey24:

A question for mathematicians:

Is the idea of applying a multivariate normal distribution to the parameters being optimized equivalent to the principle of neural networks?

Please explain it clearly.

That's a strange question to ask.
Please explain the question.


EXPLAIN:

I now think that one should trade not with specific fitted parameters, but with a spectrum of each parameter in the system.
The easiest way is to run several identical EAs, but with different parameter sets - in different ranges of the parameter spectrum.
Each of these Expert Advisors should be allocated a certain % of the deposit, and together they should add up to the % of the deposit that would be used when trading with only one Expert Advisor (without the spectrum).
Then, with moving averages for example, three Expert Advisors would open three positions: at the beginning of a move, in the middle, and at the end.

I cannot yet decide how to use this idea in one EA for testing.

I asked Posh about this problem but still no answer.

Is the problem of a multivariate normal (Gaussian) distribution the same as the problem of neural networks of the aX+bY+...=Z type (for trading), or am I getting mixed up in my head?
You're talking about something complicated with these spectra! Here are the resources on ANNs that I used when studying the subject:
https://ru.wikipedia.org/wiki/%D0%9D%D0%B5%D0%B9%D1%80%D0%BE%D0%BD%D0%BD%D1%8B%D0%B5_%D1%81%D0%B5%D1%82%D0%B8 - the Wikipedia article; it covers the subject in broad strokes.
http://users.kpi.kharkov.ua/mahotilo/Docs/Diss/diss_ch1_intr.html - a paper on ANNs; in the middle of it there is a discussion of where they come from and what they are about, with diagrams and formulas.
http://www.robo-planet.ru/library.php?parent_id=324&PHPSESSID=82aafc8cb3c043ecbe043fa11dd50943 - a link to "Fundamentals of Artificial Neural Networks", a good site with a whole "tree" of material on ANNs, not just what I wrote about.
 
Thanks for the links, I'll look into them.
But about the "spectra", you're wrong.
I'm no professor, of course - but there is a rational point in it.
 
Folks, no one has answered me: does one need to design an algorithm for forgetting, or is forgetting, after all, a natural property of an ANN?
 
lucifuge:
Folks, no one has answered me: does one need to design an algorithm for forgetting, or is forgetting, after all, a natural property of an ANN?

If you limit training to a finite number of bars (or whatever units are used instead of bars), forgetting will happen naturally. The market changes, and what worked five years ago may not work now. But new conditions have already emerged, and if you don't train the network on them, they will pass it by.
It's up to each person to decide.
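A minimal sketch of that kind of built-in forgetting, assuming a sliding training window; the window length and the least-squares stand-in for a "model" are illustrative:

```
# Sketch: retrain only on the most recent WINDOW observations, so older
# market regimes drop out of the sample and are "forgotten" automatically.
import numpy as np

WINDOW = 500  # illustrative: keep only the last 500 bars

def rolling_retrain(X, y, fit_fn):
    """Fit the model on the most recent WINDOW rows only."""
    return fit_fn(X[-WINDOW:], y[-WINDOW:])

# Trivial least-squares "model" standing in for a network:
fit = lambda X, y: np.linalg.lstsq(X, y, rcond=None)[0]

X = np.random.randn(2000, 3)                          # 2000 bars, 3 features
y = X @ np.array([0.5, -0.2, 0.1]) + 0.01 * np.random.randn(2000)
print(rolling_retrain(X, y, fit))                     # weights from recent data only
```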
 
Mathemat:
Mak, you are clearly exaggerating. Instead of exceeding it by hundreds or thousands of times, as you say, ANN theory holds that a factor of 10 is enough. And the criterion for overtraining (fitting) is known: the global minimum of the error on the test set.

Network architecture is another matter. Classifying networks are better than interpolating ones.
Maybe - I'm a skeptic about NNs.
Well, yes: in statistics it is believed that one can draw some conclusions if the number of samples is 10 times the number of unknown parameters. But the errors in doing so are at the edge of what is reasonable.

But you must agree that a NN is essentially just a function of a vector of inputs and a set of weights.
This set of weights contains from hundreds (in the simplest cases) to tens or hundreds of thousands of parameters (weights).
Training a NN is nothing but optimizing this function over those hundreds to hundreds of thousands of parameters.
Everybody knows what happens in such cases.
That's why I'm a skeptic ...
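To make the point concrete, a minimal sketch of a one-hidden-layer network as "just a function of inputs and weights", with the weight count it implies; the 10-30-1 sizes are arbitrary assumptions:

```
# A NN is just f(x, weights); even a tiny 10-30-1 net has 361 free parameters.
import numpy as np

def mlp(x, W1, b1, W2, b2):
    """One hidden layer: f(x) = W2 * tanh(W1 @ x + b1) + b2."""
    return W2 @ np.tanh(W1 @ x + b1) + b2

n_in, n_hidden, n_out = 10, 30, 1
W1 = np.random.randn(n_hidden, n_in); b1 = np.zeros(n_hidden)
W2 = np.random.randn(n_out, n_hidden); b2 = np.zeros(n_out)

print(mlp(np.random.randn(n_in), W1, b1, W2, b2))  # just a number out
n_params = W1.size + b1.size + W2.size + b2.size
print(n_params)  # 361; the statistical "10x rule" would then ask for ~3610 samples
```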
 
lucifuge:
Mak:
IMHO, it's not worth fussing over networks :)

Training a NN is really just optimizing a function with a huge number of parameters (hundreds or thousands).
I don't know what can be done to avoid overtraining in this case;
the only solution is to take a training sample of 1-100 million samples.
But there is no guarantee...
So you understand "overtraining" as "undertraining"? They seem to me to be antonyms =) Please clarify.
By overtraining I mean what is called CurveFitting.
It occurs when there are a lot of optimization parameters and little data.
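A toy demonstration of exactly this effect, using a polynomial instead of a network so that it fits in a few lines: with as many free parameters as data points, the in-sample fit is perfect and the out-of-sample error blows up. All numbers are illustrative:

```
# Curve fitting in miniature: 10 coefficients fitted to 10 noisy points.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + 0.1 * rng.standard_normal(10)

coeffs = np.polyfit(x_train, y_train, 9)        # degree 9: exact interpolation

x_test = np.linspace(0, 1, 100)
y_test = np.sin(2 * np.pi * x_test)
print(np.abs(np.polyval(coeffs, x_train) - y_train).max())  # ~0 in-sample
print(np.abs(np.polyval(coeffs, x_test) - y_test).max())    # large out-of-sample
```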
 
Mak:
lucifuge:
Mak:
IMHO, it's not worth fussing over networks :)

Training a NN is really just optimizing a function with a huge number of parameters (hundreds or thousands).
I don't know what can be done to avoid overtraining in this case;
the only solution is to take a training sample of 1-100 million samples.
But there is no guarantee...
So you understand "overtraining" as "undertraining"? They seem to me to be antonyms =) Please clarify.
By overtraining I mean what is called CurveFitting.
It occurs when there are a lot of optimization parameters and little data.

But this raises the question of network size. What a network can store depends on its size and architecture. If you feed it more training samples than it can remember, you get the overlearning effect: the network stops recognizing even what it used to know.
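And a minimal sketch of that capacity point, again with a polynomial standing in for a fixed-size network: once the sample outgrows what the model can hold, even the in-sample error deteriorates. The target function and sizes are illustrative assumptions:

```
# Fixed capacity (4 coefficients) vs a growing sample of a wiggly target.
import numpy as np

def in_sample_error(n_points, degree=3):
    """Max in-sample error of a fixed-size model as the sample grows."""
    x = np.linspace(0, 1, n_points)
    y = np.sin(6 * np.pi * x)                 # too complex for degree 3
    c = np.polyfit(x, y, degree)
    return np.abs(np.polyval(c, x) - y).max()

for n in (4, 8, 32, 128):
    print(n, in_sample_error(n))  # error grows once the sample exceeds capacity
```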