"New Neural" is an Open Source neural network engine project for the MetaTrader 5 platform. - page 22

 
Who has dealt with LSTM?
Long short-term memory - Wikipedia, the free encyclopedia
  • en.wikipedia.org
Long short-term memory (LSTM) is a recurrent neural network (RNN) architecture (an artificial neural network) published in 1997 by Sepp Hochreiter and Jürgen Schmidhuber. Like most RNNs, an LSTM network is universal in the sense that given enough network units it can compute anything a conventional computer can compute, provided it has the...
 
gpwr:

You'd better give me the links right away. Otherwise I don't fully understand you. Or spell out the abbreviations :).

And imho any modeling network can be a classifier.
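To illustrate that point with a minimal sketch (MQL5-style; the function and its names are hypothetical, not from the project): a network that models a continuous output becomes a classifier as soon as you threshold that output, or take the argmax when there is one output per class.

// Hypothetical helper: pick the class whose output score is largest.
// "outputs" is assumed to hold one continuous network output per class.
int ArgMaxClass(const double &outputs[])
  {
   int best = 0;
   for(int i = 1; i < ArraySize(outputs); i++)
      if(outputs[i] > outputs[best])
         best = i;
   return best; // index of the winning class
  }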

 
TheXpert:
Who has dealt with LSTM?

I was just about to ask whether all neurons have a structure like:

in1*w1 + in2*w2 + ...

Now I see that not all of them do; we should take this into account and provide a variety of algorithms not only by activator type but also by neuron type.
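A minimal sketch of what that could look like (MQL5-style; every name here is my assumption, not code from the project): activators and neurons form separate hierarchies, so a summing neuron and a multiplying (gating) neuron can share the same activators.

// Activator hierarchy: linear by default, sigmoid as one variant.
class CActivator
  {
public:
   virtual double Fire(double x) { return x; } // linear pass-through
  };

class CSigmoid : public CActivator
  {
public:
   virtual double Fire(double x) { return 1.0/(1.0+MathExp(-x)); }
  };

// Neuron hierarchy: the classic summing neuron, out = f(sum(in[i]*w[i]))...
class CNeuronSum
  {
public:
   virtual double Out(const double &in[], const double &w[], CActivator &act)
     {
      double s = 0.0;
      for(int i = 0; i < ArraySize(in); i++)
         s += in[i]*w[i];
      return act.Fire(s);
     }
  };

// ...and a multiplying neuron, the kind an LSTM-like block needs:
// its output is the product of its inputs, so it can act as a gate
// that either passes a signal through or suppresses it.
class CNeuronProd : public CNeuronSum
  {
public:
   virtual double Out(const double &in[], const double &w[], CActivator &act)
     {
      double p = 1.0;
      for(int i = 0; i < ArraySize(in); i++)
         p *= in[i]; // weights are typically unused in a pure gate
      return act.Fire(p);
     }
  };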

 
Mischek:
You first try to formulate a general, or nearly general, opinion on the requirements for a specialist
Nope. This is not even the second level of the Twilight, but the third.
 
Mischek:
You first try to formulate a general, or nearly general, opinion on the requirements for a specialist

Personally, Vladimir (gpwr) suits me; maybe a couple more of our own people will join in, so no outside specialists are needed.

Another thing is that people are used to a project ticking along like clockwork, but this is Open Source; such projects can take much longer, because people work on them when they have time.

 
TheXpert:
Who has dealt with LSTM?
Why exactly are you interested in it?
 
Urain:
Why exactly are you interested in it?
I'm not familiar with the principle at all. And it's not the only one; I may ask more questions :)
 
TheXpert:

You'd better give me the links right away. Otherwise I don't fully understand you. Or spell out the abbreviations :).

And imho any modeling network can be a classifier.

SVM = Support Vector Machine

RBN = Radial Basis Network

Here are some links:

T. Serre, "Robust Object Recognition with Cortex-Like Mechanisms", IEEE Transactions on Pattern Analysis and Machine Intelligence.

Bell, A. J., Sejnowski, T. J. (1997).
The independent components of natural scenes are edge filters.
Vision Research, 37, 3327-3338.

Olshausen, B. A., Field, D. J. (1996).
Emergence of simple-cell receptive field properties by learning a sparse code for natural images.
Nature, 381(6583), 607-609.

http://www.gatsby.ucl.ac.uk/~ahrens/tnii/lewicki2002.pdf

 
TheXpert: LSTM
I'm not familiar with the principle at all. And it's not the only one; I may ask more questions :)

The wiki says that, in addition to the usual scheme

in1*w1 + in2*w2 + ...

the neuron also uses multiplication of inputs, as well as a feedback signal (apparently taken through a delay). It also warns that the basic backpropagation method often gets stuck when the error cycles through the feedback connections, so it is desirable to build hybrids of learning with GA. Activators are present only on the first layer; everything else is linear. The first neuron (or a committee, it is not very clear) transforms the inputs, while the others play the role of filters (letting the signal pass or not).

You can call it a neuron block, or a single neuron with a complex transfer function, depending on how you look at it; a network is built from such blocks.
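For reference, here is a heavily simplified sketch of such a block (MQL5-style; the names and the exact gate layout are my assumptions, not the project's code, and real LSTM variants differ in detail): sigmoid gates filter the candidate input, the delayed internal state, and the output, while everything after the gates is plain multiplication.

// Simplified LSTM-like block: one memory cell with input, forget and
// output gates. The "delay" in the feedback is modeled by keeping
// m_state between calls to Step().
class CLSTMBlock
  {
   double m_state; // delayed feedback signal (the cell's memory)
   double Sigmoid(double x) { return 1.0/(1.0+MathExp(-x)); }
public:
   CLSTMBlock() { m_state = 0.0; }
   // in                    - candidate input (already weighted and activated)
   // g_in, g_forget, g_out - raw gate pre-activations for this step
   double Step(double in, double g_in, double g_forget, double g_out)
     {
      // gates act as filters: they pass a signal through or suppress it
      m_state = m_state*Sigmoid(g_forget)  // keep or drop the old state
              + in*Sigmoid(g_in);          // admit the new input or not
      return m_state*Sigmoid(g_out);       // expose the state or not
     }
  };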

 
TheXpert:
I'm not familiar with the principle at all. And it's not the only one; I may ask more questions :)
You're not the only one like that.