Neural networkers, don't pass by :) need advice - page 5

 
Figar0:


.......

An interesting phrase. Why were "random" inputs used? Can you explain in a nutshell?

This phrase means the following: on a sufficiently large sample, any indicator shows results comparable to those you would get with random inputs picked off the top of your head. That includes the case where the neural net "does not understand" what its creator is trying to explain to it (usually that is the creator's problem, but it is a stumbling block all the same).

Experiments with training a neural network on data representing random increments are telling. The expected value (MO) of such a series is 0, so the better the network is trained on such random data, the closer its results are to 0. Thus a network perfectly trained on a random-walk series will output a perfectly flat line at 0.

And vice versa. If the results come out above 0, in the positive region, it means one thing: the network has found some regularities and is exploiting them, and this despite the weight constantly pulling the expected value into the negative region: the spread.
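For anyone who wants to reproduce this sanity check, here is a minimal sketch (my own illustration, not anyone's actual EA): fit a model to pure-noise increments and watch the predictions collapse toward the series' expected value of 0.

```python
import numpy as np

# Illustrative sanity check: fit a model to pure-noise increments.
# All sizes and names are invented for the example.
rng = np.random.default_rng(0)
increments = rng.normal(0.0, 1.0, 10_000)  # random-walk steps, E[x] = 0

LAGS = 5
# Row t holds increments[t .. t+LAGS-1]; the target is increments[t+LAGS].
X = np.column_stack([increments[i:i - LAGS] for i in range(LAGS)])
y = increments[LAGS:]

# Regularised least squares stands in for any trained network here.
lam = 1e-3
w = np.linalg.solve(X.T @ X + lam * np.eye(LAGS), X.T @ y)

pred = X @ w
print(f"mean prediction: {pred.mean():+.4f}")  # ~0, the expected value
print(f"prediction std:  {pred.std():.4f}")    # tiny: an almost flat line
```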

 
Figar0:


1) Hmm, how to improve a working NS... I've been struggling with that problem for years. There are improvements, but they are pennies and crumbs, and that's given that I know my grid inside out. The only qualitative leap came once I figured out how to improve the training system. That's why I advise you to think in that direction.

Whereas shuffling the inputs this way and that (the neural networkers' great secret) gives pennies, and tweaking the architecture gives crumbs...

2) P.S. Would you post a full OOS test, for example for just this past March? I'll try to see how it compares to mine.

3) P.S.2 (a follow-up) So, in your view it's not about the type of NS. Then what is it about? I agree in principle, but that is precisely the secret of a capable NS; even having one, I can't formulate it...

1) Yes, a lot depends on the training system. But there's probably not much that can be improved.

As for the inputs, hmm, this is perhaps one of the main ingredients in shifting the expected value into the plus. At the very least, the inputs here are backed by the theory that describes them.

2) It would be interesting to see. And please include pairs like GBPJPY.

3) I'm not sure the type of NS is irrelevant either, but Andrei claims that the NS plays no special role in this case. My version is a combination of factors: theory-justified inputs, theory-justified and theory-described links between inputs and outputs, and justified (though, who knows) outputs. I would like to hear Andrei's opinion on this.

 
Figar0:

Judging by the speed of preparing tests over such a long period and with so many retrainings, all of this is automated inside the DLL itself.

In the EA.

How many trainable parameters/weights are there inside the network itself, and what is the criterion for stopping training (number of epochs, or reaching an acceptable error on the test sample)?

35 neurons, 60 weights. There is no training in the classical sense -- I get the optimal result immediately by ANC.
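If "ANC" here denotes a direct least-squares solve of the output weights, which would square with the "echo" network mentioned further down, the idea can be sketched roughly as follows. The sizes and signals below are placeholders, not TheXpert's actual network:

```python
import numpy as np

# Sketch of a fixed-reservoir ("echo") network where the only fitting
# step is a single least-squares solve for the readout. Placeholder data.
rng = np.random.default_rng(1)
N_RES = 35                          # reservoir size, echoing "35 neurons"

w_in = rng.uniform(-0.5, 0.5, N_RES)
W_res = rng.uniform(-0.5, 0.5, (N_RES, N_RES))
W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))  # echo-state property

def run_reservoir(u):
    """Collect reservoir states for input series u; weights stay fixed."""
    x = np.zeros(N_RES)
    states = []
    for u_t in u:
        x = np.tanh(w_in * u_t + W_res @ x)
        states.append(x.copy())
    return np.array(states)

u = rng.normal(size=500)            # placeholder input series
teacher = np.roll(u, -1)            # placeholder target

X = run_reservoir(u)
# "No training in the classical sense": one OLS solve and we're done.
w_out, *_ = np.linalg.lstsq(X, teacher, rcond=None)
```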

An interesting phrase. Why were "random" inputs used? Can you explain in two words?

It is equivalent to the phrase "losing at the speed of the spread".
Figar0:

That is why I advise you to think in this direction.

Alas, there is nothing there to improve, for lack of one; but checking the network for adequacy is a valuable idea, and I don't have that yet. The probability is minuscule, but it exists.

P.S. Would you post the full OOS test, for example for just this past March? I'll try to see how it compares to mine.

Tomorrow, then.

(A follow-up) So, in your view it's not about the type of NS. Then what is it about? I agree with that in principle, but that is the secret of a capable NS; even having one, I can't formulate it...

Agreed :)
 
Figar0:

P.S. Would you post a full OOS test, for example for just this past March? I'll try to see how it compares to mine.

I've sent it by private message.
 
TheXpert:
I've sent it by private message.


Yes, thanks, I've had a look. It's a pity it's private; I don't know whether I can discuss it here now...

As a trial balloon, just a little and without much specifics: is there some error in the algorithm? On the 15M TF all trades open exactly as on the 1H TF. Although it may just be higher-TF data taking part in the calculation...

And the first things that come to mind as improvements, where perhaps the answer should be sought:

- We essentially get a reversal system (apart from a few trades), so we can "play around" with the EA's threshold on the neural network's response as a weak-signal filter. What is obviously good on the training period (a reversal system will really squeeze the maximum out of the NS's "power") requires a somewhat different approach to interpreting the signal on new data.

- A contradiction: the percentage of profitable trades (normal) versus the final result (which I want to improve). A couple of years ago I made an Expert Advisor based on k-nearest neighbours; the percentage of profitable trades was stably above 70-75%, yet the final result was not so good: the remaining 25% of trades turned out to be so fat that they swallowed all the profit of the 75% successful ones. I have some ideas here too, but to be honest I never really solved this problem, although I understand where its roots grow from. (The arithmetic is sketched just below.)
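To put numbers on that contradiction (the figures below are invented for illustration, not taken from that old EA): a 75% win rate is worthless once the average loss outweighs the average win badly enough.

```python
# Hypothetical numbers illustrating the point above.
win_rate = 0.75
avg_win = 20.0    # points per winning trade (invented)
avg_loss = 70.0   # points per losing trade (invented, the "fat" 25%)

expectancy = win_rate * avg_win - (1 - win_rate) * avg_loss
print(f"expectancy per trade: {expectancy:+.1f} points")  # -2.5: a net loser
```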

In general I understood everything about your system except the "echo" (I haven't fully got my head around it yet, but that will come with time) and one thing:

joo: theory-justified inputs


What theory could justify the inputs in the context of the applied problem we are solving? That would be worth a Nobel Prize) I too tried to bring some theoretical grounding to the NS inputs; it was with precisely this aim that I asked around in the thread https://www.mql5.com/ru/forum/114902 But I cannot say that I succeeded. More precisely, I did succeed, but the result is so unwieldy that it is hard to use in practice.

 
Figar0:

As a trial balloon, just a little and without much specifics: is there some error in the algorithm? On the 15M TF all trades open exactly as on the 1H TF. Although it is probably just higher-TF data taking part in the calculations...

That is a peculiarity of how it works.

What is obviously good on the training period (a reversal system will really squeeze the maximum out of the NS's "power") requires a somewhat different approach to interpreting the signal on new data.

Well, you can bolt on any system you like. Yes, right now it is almost a reversal system; there is a small gap between closing and opening, and you can play with it, but it is unlikely to change much. I'll try to explain why.

On the training period any adequate trading strategy will behave fine. On the forward period any of them will fail: a crude one will fail just as readily as a sophisticated one, since the trading simply runs on unknown data. And, to be completely transparent, the trading strategy is bolted on top and depends only on the tail. The neural net does not depend on the trading strategy in any way.
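For concreteness, the "gap between closing and opening" being discussed is essentially a dead zone on the network's response. A hedged sketch, with an invented threshold value rather than the EA's actual setting:

```python
# Illustrative dead-zone interpretation of a raw net response in [-1, 1].
def interpret(response: float, threshold: float = 0.1) -> str:
    if response > threshold:
        return "long"
    if response < -threshold:
        return "short"
    return "flat"   # weak signal: stand aside instead of reversing

for r in (-0.6, -0.05, 0.02, 0.4):
    print(f"{r:+.2f} -> {interpret(r)}")
```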

I understood everything about your system except the "echo" (I haven't fully worked out how it functions yet, but that will come with time) and one point:

Well, if that's really the case, welcome to private messages; we can talk more substantively there.

It's a pity the other neural-net folks can't be heard.

EURCHF


 
Figar0:


What theory could justify the inputs in the context of the applied problem we are solving? That would be worth a Nobel Prize) I too tried to bring some theoretical grounding to the NS inputs; it was with precisely this aim that I asked around in the thread https://www.mql5.com/ru/forum/114902 But I cannot say that I succeeded. More precisely, I did succeed, but the result is so unwieldy that it is hard to use in practice.

The theory of overflowing patterns and the second type of TS. No, of course it's not Nobel-worthy. There is no fundamental discovery and no mathematical derivation here; rather, it is a set of considerations that can be used to select and compile input data for analysis by a neural network or any other analysis tool.

A monstrous amount of theoretical and experimental work is still required to build up a clear idea of why it works.

 
TheXpert:
....

EURCHF

Andrey, could you attach some MM to the EA, a dumb martingale for example? Very curious to see. I know you'll say it's early. Yes, it's early, but I'm very curious.
 

I can tell you roughly.

The recovery factor is about 4.5; that is, over this period (10.2001 to today), with a maximum drawdown of 25%, on EURCHF one could earn 100 * (1.2^4.5 - 1) = ~130%.

To start a serious conversation, you need a recovery factor of at least 20.
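Spelling out that back-of-envelope formula (assuming, as the pairing with maximum drawdown suggests, that the 4.5 is a recovery factor; the 1.2 base is taken straight from the post):

```python
# Reproduce the post's arithmetic: compound one drawdown-sized gain
# per unit of recovery factor.
rf = 4.5      # recovery factor quoted in the post
base = 1.2    # compounding base used in the post's formula

total_return = 100 * (base ** rf - 1)
print(f"~{total_return:.0f}%")   # ~127%, i.e. the quoted "~130%"
```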

 

How do you deal with the problem of neural network overfitting (over-training)? How do you form the test sample?

This is an important question for me personally. Right now I'm reading articles on training-sample size and want to run some experiments with the way the test sample is formed, the sample I always use for early stopping of training.

Why I'm asking: I've looked at your OOS results and the results on the test sample. Apparently the system learns well and approximates the patterns on the test segments, but sometimes fails on the validation segments. Maybe it makes sense to form the test sample in a different way...
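One common way to form the test sample for exactly this purpose is a chronological walk-forward split rather than a random one. A minimal sketch, with all window sizes invented:

```python
import numpy as np

# Chronological walk-forward windows: train on the past, validate on the
# segment that immediately follows, then roll forward. Sizes are invented.
def walk_forward_splits(n, train_len, test_len):
    start = 0
    while start + train_len + test_len <= n:
        train = np.arange(start, start + train_len)
        test = np.arange(start + train_len, start + train_len + test_len)
        yield train, test
        start += test_len

for train, test in walk_forward_splits(1000, train_len=600, test_len=100):
    print(f"train {train[0]}..{train[-1]}  test {test[0]}..{test[-1]}")
```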