"New Neural" is an Open Source neural network engine project for the MetaTrader 5 platform.
Shall we build radial basis function networks?
They have a serious drawback: although they learn quickly, they generalize poorly to unseen data.
A general course (from the site http://www.intuit.ru/department/expert/neuro/ , where registration is required; if you don't want to register, use the nickname Nic_Touch, password zdraste01)
Lectures http://www.softcraft.ru/neuro/ni/p00.shtml
Mapping examples http://www.statsoft.ru/home/products/version6/snn.htm
Books on models and training methods
u - the activation function input
y - the steepness coefficient applied inside the exponent.
Wherever there is an exponent, you must avoid computing exponents of positive arguments, so as not to get huge values that overflow. For example, the sigmoid is better computed this way:
double x = -y*u;
double e = exp(-fabs(x));
if(x<=0) return(1./(1.+e));
if(x>0)  return(e/(1.+e));
These formulas are derived quite simply. If the exponent argument is negative, the formula is left unchanged. If it is positive, the numerator and denominator are both multiplied by the exponent of the same argument with a negative sign.
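Putting the snippet together as a complete function (a sketch in C++; in MQL5 one would use MathExp/MathAbs — the function and parameter names here are my own):

```cpp
#include <cassert>
#include <cmath>

// Overflow-safe sigmoid 1/(1 + exp(-y*u)), following the derivation above.
// exp() is only ever called with a non-positive argument, so the
// intermediate value never overflows. y is the steepness coefficient.
double Sigmoid(double u, double y)
{
   double x = -y * u;
   double e = std::exp(-std::fabs(x));   // MathExp(-MathAbs(x)) in MQL5
   if(x <= 0.0)
      return 1.0 / (1.0 + e);            // here e == exp(x), formula unchanged
   return e / (1.0 + e);                 // numerator and denominator multiplied by exp(-x)
}
```

Both branches are algebraically the same function; the second is just the first with top and bottom multiplied by exp(-x), so the largest value ever produced is 1.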
Strangely enough, the sigmoid itself works correctly even with large negative inputs, but the hyperbolic tangent fails, which is why I have added a shifted sigmoid alongside the classical algorithms. It works over the same range as the hyperbolic tangent, but is faster and has no problems with #IND.
In addition, when the slope is adjusted (at small values of the coefficient y) the hyperbolic tangent does not reach [-1;1]; the shifted sigmoid has no such problem.
Anyone who wants to is welcome to refine the hyperbolic tangent, but I think the function is unpromising: the result has to be cached because the exponent is computed twice, range checks are still needed, and on top of that it fails to reach its limits when the slope is adjusted.
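The exact formula of the "shifted sigmoid" is not spelled out in the thread; presumably it is the ordinary sigmoid rescaled to [-1;1], i.e. s(u) = 2/(1 + exp(-y*u)) - 1, which is mathematically identical to tanh(y*u/2) but needs only one exp() and, computed the overflow-safe way, cannot produce #IND. A sketch under that assumption:

```cpp
#include <cassert>
#include <cmath>

// Assumed "shifted sigmoid": s(u) = 2/(1 + exp(-y*u)) - 1.
// Equivalent to tanh(y*u/2), but uses a single exp() of a
// non-positive argument, so no overflow and no #IND.
double ShiftedSigmoid(double u, double y)
{
   double x = -y * u;
   double e = std::exp(-std::fabs(x));
   double s = (x <= 0.0) ? 1.0 / (1.0 + e) : e / (1.0 + e);
   return 2.0 * s - 1.0;
}
```

The tanh identity follows from 2/(1+exp(-x)) - 1 = (1 - exp(-x))/(1 + exp(-x)) = tanh(x/2).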
My conclusion: the hyperbolic tangent is a bust, the shifted sigmoid rules.
seconded.
It is possible to make a simpler implementation of a sigmoid with the [-1;1] range, but that implementation also has a problem with #IND, so it is better to add a couple of simple operations than to write numerous checks.
Here + - / were added: 3 extra operations versus a lot of checks.
This is the best option, both for the convenience of the working range [-1;1] and in terms of speed. Its domain of definition is the entire real line.
This is exactly the activation function I have been using lately, after trying a lot of alternatives and benchmarking their execution speed.
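The formula being praised here was evidently posted as an image and is lost in this extraction. A well-known fast sigmoid that matches every stated property (range [-1;1], domain the whole real line, a handful of arithmetic operations with no exp() and no #IND) is the softsign x/(1 + |x|) — shown purely as a plausible candidate, not as the poster's actual formula:

```cpp
#include <cassert>
#include <cmath>

// Softsign: a fast sigmoid-shaped function on (-1;1), defined on the
// whole real line. No exponent, so no overflow and no #IND; only an
// abs, an add and a divide. Offered as an assumed reconstruction.
double SoftSign(double x)
{
   return x / (1.0 + std::fabs(x));
}
```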
And this one is my favorite in the echo network:
By the way, we need a sigmoid-like function.
The requirements: a tractable form of the function itself and of its derivative (not too hard to compute), and an equally tractable form of the inverse function and of its derivative.
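For the classical logistic sigmoid all four required pieces have simple closed forms — a sketch to illustrate the requirement list (in production the overflow-safe variant from earlier in the thread would be used; the function names are my own):

```cpp
#include <cassert>
#include <cmath>

// The logistic sigmoid and the four required closed forms:
double Sigma(double x)        { return 1.0 / (1.0 + std::exp(-x)); }
// Derivative expressed through the function value: s' = s*(1 - s).
double SigmaDeriv(double x)   { double s = Sigma(x); return s * (1.0 - s); }
// Inverse (the logit), defined for 0 < s < 1.
double SigmaInv(double s)     { return std::log(s / (1.0 - s)); }
// Derivative of the inverse: 1 / (s*(1 - s)).
double SigmaInvDeriv(double s){ return 1.0 / (s * (1.0 - s)); }
```

Expressing the derivative through the already-computed function value s, rather than through x, is what makes backpropagation with this sigmoid cheap.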