Part two of the article is about how an artificial neuron works. Correct me if I'm wrong.
Judging from the article, this particular implementation of the neuron contains the following:
1. Weights for all neurons whose outputs feed into this neuron.
2. A weighted sum that combines those inputs.
3. An activation function, which produces the final output value of this neuron.
Although the code doesn't compile, it is written quite clearly. See the neuron class for details.
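To make the three parts listed above concrete, here is a minimal sketch of what such a neuron class might look like. It is not the author's class from the article, just an illustration of input weights, a weighted sum, and a sigmoid activation:

// Minimal illustrative neuron: weights, weighted sum, activation.
#include <cmath>
#include <cstdio>
#include <vector>

class Neuron
{
public:
    std::vector<double> weights;   // one weight per incoming connection
    double              output;    // value after activation

    explicit Neuron(std::size_t inputs) : weights(inputs, 0.5), output(0.0) {}

    // 1) weighted sum of the incoming signals
    double WeightedSum(const std::vector<double> &inputs) const
    {
        double sum = 0.0;
        for (std::size_t i = 0; i < inputs.size(); ++i)
            sum += inputs[i] * weights[i];
        return sum;
    }

    // 2) activation function (logistic sigmoid) squashes the sum into (0, 1)
    double Activate(const std::vector<double> &inputs)
    {
        output = 1.0 / (1.0 + std::exp(-WeightedSum(inputs)));
        return output;
    }
};

int main()
{
    Neuron n(3);
    double out = n.Activate({0.2, -0.7, 1.0});
    std::printf("neuron output: %f\n", out);   // a value in (0, 1)
}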
Okay. And the neuron activation function? That is, the function that maps the value into the range between 0 and 1, or between -1 and 1?
Yes, that's right. The author gives the names of the most frequently used ones; you can read more about them on Wikipedia or elsewhere online.
I am also interested in this topic myself; I will dig into it in more detail later, as time permits.
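For reference, the two ranges mentioned above correspond to the two most common squashing functions: the logistic sigmoid maps any input into (0, 1), and the hyperbolic tangent maps it into (-1, 1). A small sketch:

#include <cmath>
#include <cstdio>

double Sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }  // range (0, 1)
double Tanh(double x)    { return std::tanh(x); }                // range (-1, 1)

int main()
{
    for (double x : {-5.0, 0.0, 5.0})
        std::printf("x = %5.1f  sigmoid = %.4f  tanh = %+.4f\n",
                    x, Sigmoid(x), Tanh(x));
}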
Yeah, that's right.
It seems that the weight coefficients by which the neurons' input values are multiplied arise as a result of "training" the network. That is, they do not exist at first and then they appear, but how exactly is not yet clear.
Like you, I have not worked with neurons before, but after reading the article and studying the code carefully, all such questions disappeared.
The initial weight values for the neurons are set randomly, or loaded from a file where they were saved earlier. Then, during the learning process, all weights are recalculated based on the error between the target value and the value at the output of the last neuron. The recalculation of the weights is performed in each neuron independently (see the part of the article where the neuron is described and look at the code of the neuron itself).
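A rough sketch of that idea, not the article's exact code: weights start random (or come from a file), and after each pass they are corrected from the error between the target and the output. The update shown here is a simple delta rule for one sigmoid neuron; the article's network propagates the error back through every layer, but each neuron still adjusts its own weights.

#include <cmath>
#include <cstdio>
#include <cstdlib>
#include <vector>

int main()
{
    std::vector<double> inputs  = {0.3, 0.8};
    std::vector<double> weights = {std::rand() / (double)RAND_MAX,   // random start,
                                   std::rand() / (double)RAND_MAX};  // or loaded from a file
    double target = 1.0, rate = 0.5;

    for (int epoch = 0; epoch < 100; ++epoch)
    {
        // forward pass: weighted sum + sigmoid activation
        double sum = inputs[0] * weights[0] + inputs[1] * weights[1];
        double out = 1.0 / (1.0 + std::exp(-sum));

        // error between target and output drives the weight correction
        double error = target - out;
        double delta = error * out * (1.0 - out);        // sigmoid derivative
        for (std::size_t i = 0; i < weights.size(); ++i)
            weights[i] += rate * delta * inputs[i];      // each weight adjusted independently
    }
    std::printf("w0 = %f, w1 = %f\n", weights[0], weights[1]);
}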
The article is interesting. Can you explain in simple terms:
Part two of the article is about how an artificial neuron works. Correct me if I'm wrong.
Good evening, Peter.
Internally, the neuron consists of two functions:
1. First, we calculate the sum of all incoming signals, taking their weight coefficients into account. That is, we take the value at each input of the neuron, multiply it by the corresponding weight, and add up the resulting products.
This gives us a value that is fed to the input of the activation function.
2. The activation function converts the resulting sum into a normalised output signal. It can be a simple threshold (logic) function or one of the various sigmoid functions; the latter are more widely used because they change state more smoothly.
Communication between neurons is organised as a direct transfer of the output value of one neuron to the input of the next. In this case, as in point 1, the input value of a neuron is weighted by its corresponding weight coefficient.
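A small illustration of that last point (assumed, not taken from the article): each neuron in the next layer takes the previous layer's outputs as its inputs, multiplies them by its own weights, sums them, and applies the activation function.

#include <cmath>
#include <cstdio>
#include <vector>

static double Sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

// one neuron: weighted sum of the previous layer's outputs, then activation
static double Fire(const std::vector<double> &prev, const std::vector<double> &w)
{
    double sum = 0.0;
    for (std::size_t i = 0; i < prev.size(); ++i)
        sum += prev[i] * w[i];
    return Sigmoid(sum);
}

int main()
{
    std::vector<double> layer1 = {0.9, 0.1};                  // outputs of the first layer
    std::vector<std::vector<double>> w2 = {{0.4, -0.6},       // weights of the two neurons
                                           {0.7,  0.2}};      // in the second layer
    std::vector<double> layer2;
    for (const auto &w : w2)
        layer2.push_back(Fire(layer1, w));                    // one neuron's output feeds the next layer

    std::printf("layer 2 outputs: %f %f\n", layer2[0], layer2[1]);
}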
Another question: what is neurotNam, the method of creating neurons in a layer? It is not declared anywhere, and the logic of why the initial value of a neuron is set to the remainder of division by 3, minus 1, is not clear.
The errors in the code have been corrected and the file in the article has been replaced.
The line in question assigns the initial output value of the neuron and can be replaced by a constant. This value will be overwritten at the first forward recalculation of the neural network.
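A hypothetical sketch of what that means (the names here are invented, not the article's): the value written into each newly created neuron is only a placeholder, and since the first forward pass overwrites it, any constant would serve equally well.

#include <cstdio>
#include <vector>

struct Neuron { double value; };

int main()
{
    std::vector<Neuron> layer(5);
    for (std::size_t i = 0; i < layer.size(); ++i)
        layer[i].value = (double)(i % 3) - 1.0;   // placeholder; could just as well be a constant, e.g. 0.0

    for (const Neuron &n : layer)
        std::printf("%+.1f ", n.value);           // placeholder values before the first recalculation
    std::printf("\n");
}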