"New Neural" is an Open Source neural network engine project for the MetaTrader 5 platform. - page 35

 
Throw out some sketches of the constructor first, and then, from the overall composition, it will be clear what should be named after what...
 
Mischek:
The idea is that it should resonate (be friendly) with the MetaQuotes logo.

MetaPerceptive ;) (Percipient Perception)

There is even an idea for the logo :) -- a robot sniffing a flower, playing on the ambiguity of the word Perceptive. The more unusual the robot, the better :)

________

Simply and tastefully: Neural. Or TheNeural :)

________

Neural Nets eXtension (NNX).

________

Neural Nets Xpert (NNXpert) :))))

________

Neural Nets Toolkit (NNToolkit)

________

Neural Nets Tools (NNTools)

 
TheXpert:

I'm all for Neural Nets eXtension (NNX).

especially the eXtension part of Neural Nets eXtension (NNX), because we are going to wrestle with it heartily

Isn't it better to transform it into Neural Universal eXtension (NUX)? Almost LINUX.

 
Yeah, we'll either have to hold a vote or throw the options over to MetaQuotes later.
 
gpwr:
If the question is addressed to me: in the literature, the networks I described are called hierarchical neural networks.

The cognitron is something similar, if you ask me.

Waiting for more :)

 
TheXpert:
Yes, we should either hold a vote or throw the options over to MetaQuotes later.

Why the rush? In principle, the name still has to reflect the product's ability to exchange data with other neural packages and, at the final stage, to generate a finished Expert Advisor.

The product is more than just NN. Along the way it may pick up other useful things as well.

 
Mischek:

The product is more than just NN.

Understood. But it is tied to NN. More precisely: it will be tied to NN.

I'm just afraid it will end up as something like "AWVREMGVTWNN" :) The main thing is to convey the essence; the nuances are not so important.

 

We need information about

- Conjugate gradient descent

- BFGS

 

- Method of conjugate gradients (wiki)

- BFGS (wiki)


Conjugate gradient method - Wikipedia
  • ru.wikipedia.org
The conjugate gradient method is a method for finding a local minimum of a function based on information about its values and its gradient. In the case of a quadratic function, the minimum is found in a finite number of steps. Terminology: vectors are called conjugate if ... Theorem (existence): there exists at least one system of conjugate directions for ...
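
Since the thread asks for material on both methods, here is a minimal numpy sketch of the linear conjugate gradient method for a quadratic form (the case the Wikipedia excerpt describes); the function and variable names are illustrative, not part of any existing package. For BFGS, the usual practice is to call an existing implementation, e.g. scipy.optimize.minimize(..., method='BFGS').

import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10):
    # Minimize f(x) = 0.5*x'Ax - b'x (A symmetric positive definite),
    # i.e. solve Ax = b, with the conjugate gradient method.
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else x0.copy()
    r = b - A @ x                       # residual = negative gradient at x
    p = r.copy()                        # first search direction
    rs_old = r @ r
    for _ in range(n):                  # at most n steps for a quadratic
        Ap = A @ p
        alpha = rs_old / (p @ Ap)       # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new ** 0.5 < tol:
            break
        p = r + (rs_new / rs_old) * p   # next direction, conjugate to the previous ones
        rs_old = rs_new
    return x

# Small self-check on a 2x2 symmetric positive definite system.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b), np.linalg.solve(A, b))  # both ~ [0.0909, 0.6364]

# BFGS (a quasi-Newton method that approximates the inverse Hessian from
# successive gradients) via an existing library:
from scipy.optimize import minimize, rosen
print(minimize(rosen, np.zeros(2), method='BFGS').x)    # -> roughly [1, 1]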
 

Lecture 3. HMAX model.

To better understand the details of how visual information is transformed biologically, let us look at the popular object recognition model HMAX ("Hierarchical Model and X"). The model was created in Tomaso Poggio's group at MIT in the late 1990s. A description and code for the model can be found here:

http://riesenhuberlab.neuro.georgetown.edu/hmax.html

With some slight modifications, HMAX does a much better job at face recognition than classical neural networks. This picture describes the model quite well:

The first layer of the model (S1) consists of filters for short straight segments at four different orientations (vertical, horizontal, 45 degrees and 135 degrees, shown in red, yellow, green and blue), each at 16 different sizes, so that every patch of the input image is "covered" by 4 x 16 filters. Each filter is a neuron whose output is the sum of the image pixels in some region of the image, each multiplied by the corresponding input weight of that neuron. These input weights are described by a Gabor function. Here is an example of these filters (weights):
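
To make the S1 stage concrete, here is a hedged numpy sketch of such a filter bank; the Gabor parameters (wavelength, sigma, aspect ratio) and the range of sizes are illustrative defaults, not the exact values from the original HMAX code.

import numpy as np

def gabor_kernel(size, theta, wavelength, sigma, gamma=0.3):
    # Input weights of one S1 neuron: a Gabor function at orientation theta.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(x_t ** 2 + (gamma * y_t) ** 2) / (2 * sigma ** 2)) \
        * np.cos(2 * np.pi * x_t / wavelength)
    g -= g.mean()                                  # zero mean
    return g / np.linalg.norm(g)                   # unit norm

# 4 orientations x 16 sizes, as described above: 64 filters per image patch.
orientations = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
sizes = range(7, 39, 2)                            # 16 odd sizes, 7x7 ... 37x37
s1_bank = [gabor_kernel(s, th, wavelength=0.8 * s, sigma=0.3 * s)
           for s in sizes for th in orientations]
print(len(s1_bank))                                # 64

# The S1 response at a given image position is simply the dot product of the
# filter weights with the corresponding image patch.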

The second layer of the model (C1) consists of complex neurons. Each complex neuron selects the maximum activation (output) among the S1 neurons that filter segments of the same orientation at different positions in the image and at two neighbouring sizes. This makes the complex neuron invariant to the position and size of the elementary segments, which is explained below:
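
A small sketch of the C1 idea in the same style: for one orientation, pool the S1 response maps from two neighbouring filter sizes with a local maximum over position and over the two scales. The pooling window and stride below are assumptions, not the published HMAX values.

import numpy as np

def c1_pool(s1_small, s1_large, window=8, stride=4):
    # C1 unit: max over a local spatial window and over two neighbouring scales.
    # s1_small, s1_large: 2-D maps of S1 responses for the same orientation.
    h = min(s1_small.shape[0], s1_large.shape[0])
    w = min(s1_small.shape[1], s1_large.shape[1])
    merged = np.maximum(s1_small[:h, :w], s1_large[:h, :w])    # max over the two scales
    rows = []
    for i in range(0, h - window + 1, stride):
        rows.append([merged[i:i + window, j:j + window].max()  # max over local position
                     for j in range(0, w - window + 1, stride)])
    return np.array(rows)

# Toy S1 maps for one orientation at two neighbouring filter sizes.
rng = np.random.default_rng(0)
c1 = c1_pool(rng.random((32, 32)), rng.random((30, 30)))
print(c1.shape)   # a coarser map, tolerant to small shifts and a change in size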

Neurons in the third layer of the model (S2) take their inputs from C1 neurons. As a result, we get filters for more complex shapes (denoted P1, P2, ...) composed of elementary segments. For each shape there are 4 sets of filters of different sizes; filters within a set differ in their spatial position ("looking" at different parts of the image).

Neurons of the fourth layer of the model (C2) select the maximum activation among the S2 neurons that filter the same shape but at different sizes and spatial positions. The result is a set of filters for more complex shapes that are invariant to position and size.
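
Putting the last two stages together in the same hedged style: an S2 unit compares a patch of C1 responses against a stored prototype (one of the shapes P1, P2, ...), and a C2 unit takes the global maximum of that response over all positions and scale bands, giving a position- and size-invariant feature. The Gaussian similarity used here is one common choice in HMAX-style implementations, not necessarily the exact form of the original model, and the C1 maps are simplified to a single orientation.

import numpy as np

def s2_response(c1_patch, prototype, sigma=1.0):
    # S2 unit: similarity of a patch of C1 responses to a stored prototype shape.
    d2 = np.sum((c1_patch - prototype) ** 2)
    return np.exp(-d2 / (2 * sigma ** 2))

def c2_feature(c1_maps, prototype, patch=4):
    # C2 unit: global max of the S2 response over all positions and scale bands.
    best = -np.inf
    for c1 in c1_maps:                       # one C1 map per scale band
        h, w = c1.shape
        for i in range(h - patch + 1):
            for j in range(w - patch + 1):
                best = max(best, s2_response(c1[i:i + patch, j:j + patch], prototype))
    return best

# Toy example: two C1 scale bands and one 4x4 prototype.
rng = np.random.default_rng(1)
c1_maps = [rng.random((6, 6)), rng.random((5, 5))]
prototype = rng.random((4, 4))
print(c2_feature(c1_maps, prototype))        # one scalar per prototype: the final feature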

References:

T. Serre, "Robust object recognition with cortex-like mechanisms," IEEE Trans. on Pattern, Aug 2006.

http://cbcl.mit.edu/publications/ps/MIT-CSAIL-TR-2006-028.pdf