How to form the input values for the NN correctly - page 13

 

Folks, I apologise for "cutting in".

Has anyone worked in this direction?

The idea of "fishing out" useful information without a priori knowledge of it seems tempting. I wonder how this could realistically be used in our business?

 
Neutron wrote >>

Folks, I apologise for "cutting in".

Has anyone worked in this direction?

The idea of "fishing out" useful information without a priori knowledge of it seems tempting. I wonder how this could realistically be used in our business?

Feedback like this

 
It looks like correlation. That is, the function you are looking for is the correlation between the input and the output, or vice versa.
 

Mutual information is offered as the target function. That is, it is a variant of unsupervised learning.

Is this what the end result will be? Some kind of sliding vector, i.e. a multidimensional moving average?
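For anyone curious, here is a rough sketch of how the mutual information I(X;Y) between two series could be estimated numerically. This is plain numpy with a histogram estimator; the bin count, series length and test data are arbitrary assumptions, not anything from the thread:

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Crude histogram estimate of I(X;Y) in nats.
    The bin count is an assumption and biases the estimate."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                  # joint distribution p(x, y)
    px = pxy.sum(axis=1, keepdims=True)        # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)        # marginal p(y)
    nz = pxy > 0                               # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)
noise = rng.standard_normal(10_000)

mi_dep = mutual_information(x, x + 0.1 * noise)   # strongly dependent: large
mi_ind = mutual_information(x, noise)             # independent: near zero
print(mi_dep, mi_ind)
```

Maximising such an estimate over the network's weights is one way to read "mutual information as the target function", though a differentiable surrogate would be needed for gradient training.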

 
YuraZ wrote >>

feedback like this


No, it looks more like PCA.

 
lna01 wrote >>
No, mutual information is offered as the target function. So it is a variant of unsupervised learning.

Why not? If there is correlation, the target function tends to one during training. In principle, I don't see much difference from conventional optimization.

 
sergeev wrote >>

2 YuraZ. People are getting involved. That's good to see.

Your Expert Advisor in the thread above was my first introduction to the subject. Thank you very much for the code. I will paste a slightly corrected and tidied-up version of it here. It is perfect for beginners.


I have strictly compared the output of my version with the real one on a test pattern.

I tried your version, but I never managed to train it!

Training was faster in my variant.

 
Neutron wrote >>

The idea of "fishing out" useful information without a priori knowledge of it seems tempting. I wonder how this could realistically be used in our case?

What about the I(X, Y) function?



IMHO, PCA (principal component analysis, also known as the principal components method) and recirculation networks are what you need.
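In case it helps, a minimal PCA sketch: extracting the leading principal components from sliding windows of a price-like series. Plain numpy; the window length, component count and synthetic random-walk input are all assumptions for illustration:

```python
import numpy as np

def pca_components(series, window=10, n_components=3):
    """Return the top eigenvalues and eigenvectors of the covariance
    of overlapping windows taken from a 1-D series."""
    X = np.lib.stride_tricks.sliding_window_view(series, window).astype(float)
    X = X - X.mean(axis=0)                # center each input coordinate
    cov = X.T @ X / (len(X) - 1)          # sample covariance matrix
    vals, vecs = np.linalg.eigh(cov)      # eigendecomposition (ascending order)
    order = np.argsort(vals)[::-1]        # reorder by explained variance
    return vals[order][:n_components], vecs[:, order][:, :n_components]

rng = np.random.default_rng(1)
prices = np.cumsum(rng.standard_normal(500))   # synthetic random walk
eigvals, components = pca_components(prices)
print(eigvals)   # variances explained by the leading components
```

Projecting each window onto `components` would then give a compressed, decorrelated input vector for the network, which is essentially what a recirculation (autoassociative) network learns to do.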
 
FION wrote >>

Why not? If there is correlation, the target function tends to one during training. In principle, I don't see much difference from conventional optimization.

"No" referred to feedback :). As for correlation as a target function, it does seem to be related to mutual information. But since the formulas are different, the learning trajectory may be different as well. Generally speaking, it is difficult to count on reaching the global extremum for a nearly complex system. And the resulting local extremum may turn out to be different for different learning paths.

 
Neutron wrote >>

Folks, I apologise for "cutting in".

Has anyone worked in this direction?

The idea of "fishing out" useful information without a priori knowledge of it seems tempting. I wonder how this could realistically be used in our case?

May I ask where this excerpt is from? I once tried extracting a useful signal from noise, but that work remained unfinished.