Neuromongers, don't pass by :) need advice - page 6

 

Already know which pairs to test next :)

 
TheXpert:

I already know which pairs I will test next :)

try not to test pairs, but DTs ...

;)

 
alexeymosc:

How do you deal with the problem of neural network overfitting? How do you form the test sample?

There isn't one. With a certain ratio of weights to training patterns, this problem stops occurring. I talked about the sampling earlier.

Obviously, the system learns well and approximates the patterns on the test segments, but it sometimes fails on the validation segments. Maybe it makes sense to form the test sample differently...
If only it were that simple... It might make sense... But how else?
 

At the moment I have the following options in mind, as they say:


- the test sample is always made up of the most recent data before the validation cut-off (taking "the time series recency effect" into account; this too is one of the researcher's a priori assumptions, but it's worth trying);


- the test sample is randomly mixed into the training sample;


- the test sample is not randomly mixed into the training sample, but is of the 000100010001 type, i.e. it covers the sample space evenly.
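For illustration, the three schemes above could be sketched like this (pure Python; the total of 1000 patterns and the test size of 200 are made-up numbers):

```python
import random

random.seed(0)
n = 1000                       # total number of patterns (hypothetical)
n_test = 200                   # desired test-sample size (hypothetical)
idx = list(range(n))

# 1) Most recent data before the validation cut-off
test_recent = idx[-n_test:]
train_recent = idx[:-n_test]

# 2) Test sample randomly mixed into the training sample
test_random = set(random.sample(idx, n_test))
train_random = [i for i in idx if i not in test_random]

# 3) Evenly spaced ("000100010001"): every k-th pattern goes to the test set
k = n // n_test                # every 5th pattern here
test_even = idx[k - 1::k]      # indices 4, 9, 14, ...
train_even = [i for i in idx if i not in set(test_even)]
```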



And for each case, try a different test-sample size. Options:


- equal in size to the validation segment;


- calculated from the sampling error, i.e. a margin of error of, say, 5% at a 95% confidence level.
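The second option can be computed with the standard sample-size formula for a proportion (Cochran's formula); here "confidence interval of 5%" is read as a 5% margin of error, which is my assumption:

```python
import math
from statistics import NormalDist

def sample_size(margin_of_error=0.05, confidence=0.95, p=0.5):
    """Minimum sample size for estimating a proportion (Cochran's formula).

    p = 0.5 is the worst case (largest variance), used when nothing
    is known about the true proportion in advance.
    """
    # Two-sided z-score for the given confidence level (1.96 for 95%)
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return math.ceil(z * z * p * (1 - p) / margin_of_error ** 2)

print(sample_size())  # 385 patterns for a 5% margin at 95% confidence
```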

 

So as I understand it, you don't use a test sample at all... You just train the network and go ahead, i.e. you go straight to testing. What if the network is trained on the same data (the training sample) and the quality of training is evaluated on a test sample? And only then — OOS.

IMHO, a test sample is necessary to control the training of the network.

 

I agree with alexeymosc. If you get into neural networks, you should be properly armed.

I think this is what it's called:

  • training set (data segment A; estimating the error on it is meaningless),
  • validation set (the error is estimated on a separate segment B; B is implicitly involved in training, since the minimum of the error on B determines when training stops),
  • and the test set, C (data the network has never seen at all).
 
joo:

Flowing Pattern Theory ....


You speak so confidently about some "flowing patterns". Meanwhile neither Yandex nor Google has ever heard of them (or maybe their search is broken too :)). And although I can probably guess what you mean, I'd like at least a little more detail if possible.

TheXpert:

I already know which pairs I will test next :)


Why pairs?) Try some indices, gold... I wonder how it works out there.

P.S. And in my opinion all these pictures so far show that a TS (trading system) with these settings won't work at the moment. But from 2001 to 2005 it does well) Some adjustments are needed.

 
Figar0:


1) You speak so confidently of some flowing patterns.

2) Meanwhile, neither Yandex nor Google has ever heard of them (or maybe their search is broken too :)). And although I can probably guess what it's about, I'd like at least a little more detail if possible.

.....

1) Well, how else could it be? You know it is a figment of my imagination.

2) It's not very well known, I guess. :) What more can I say? Only what I've said before; you can look through my posts on the forum. Maybe I'll gather all my posts together and write up some kind of "synthesis" of the essence of the theory — it would be useful for me personally too.

 

Maybe a trivial question, but still.

Can you please tell me whether this is the right way to train a NN or not?

Or is it wrong to train it repeatedly but each time with a different target for the indicator, and should it be done differently?

If anything, I use NeuroSolutions.

 
Summer:

Can you please tell me whether this is the way to train a NN or not?

I don't see any good reason to say no. Why not? Is the data new? Yes.

Training with the window method (i.e. essentially deriving a recurrence formula) is exactly how it's done.
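The window method mentioned above can be sketched roughly like this (the series and the window length of 3 are arbitrary): each input is the last few values, and the target is the next one, which is what makes it a recurrence-style formula.

```python
def make_windows(series, window):
    """Build (input, target) pairs by sliding a window over the series."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])   # inputs: values t-window .. t-1
        y.append(series[i + window])     # target: the value at t
    return X, y

series = [1, 2, 3, 4, 5, 6]
X, y = make_windows(series, window=3)
# X[0] == [1, 2, 3], y[0] == 4
```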