"New Neural" is an Open Source neural network engine project for the MetaTrader 5 platform. - page 14
1) The output scheme [-1; 0; 1] has one flaw: in theory all three output values should be equally probable, but in practice it is very hard to hold a hyperbolic tangent at zero (or a sigmoid at 0.5), and it will keep trying to jump away (a common workaround is sketched after this list).
That may be because I only gave it as an example.
2) In the "Statistics for Trader" by Bulashev there is a scheme for position (order) efficiency evaluation, we can apply this scheme and train the network in trading signals supply, while trawls, breakeven are all elements of the TS not related to the grid.
3) Filters are an element of preprocessing (preparing the training examples). They are necessary, but we have to keep the two things separate: if you shove preprocessing into the network algorithm, you will not get a universal engine.
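To illustrate the problem in point 1, here is a minimal sketch (plain C++; the function name and the threshold value are my own illustration, not part of the project) of the usual workaround: a dead zone around zero, so that the tanh output maps to -1/0/+1 only when it is decisive.

// Map a tanh-activated output in (-1, 1) to a trade signal in {-1, 0, +1}.
// Without the dead zone the output almost never sits exactly at 0, so the
// "stay out" class is effectively unreachable; the threshold fixes that.
int SignalFromTanh(double out, double deadZone = 0.3)
{
   if (out >  deadZone) return  1;  // buy
   if (out < -deadZone) return -1;  // sell
   return 0;                        // no trade
}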
No, filters in this case are part of the trading logic, not data preprocessing.
I propose not to shove the networks into the algorithm, but to make it possible to train the network as part of the overall logic of the TS. What do you think the output of the NS is? Only the final buy/sell prediction?
So ATR, RSI and moving averages will set the context? Even across the inputs of several TS? That is blind curve-fitting with no chance of working.
You need something that will really make money, or an example where the NS is only one of the elements of the TS; and then what do you train it on?
P.S. By the way, systems built on two moving averages with some non-standard filtering show quite good results on many pairs (no NS is needed there :))
1) That was not a dig at you; I was just emphasizing the importance of this point.
2) The output of the NS can be a signal of any interpretation. In a trading context it can be a classification of market conditions (good/bad, trending/flat, etc.) as well as specific trading signals; by the way, nobody forbids classifying the signal of a particular indicator, for example: "the MA is currently giving a bad signal". Having trained the network on such a signal, it can then be used in a committee. It was suggested above to create a convenient interface for combining networks into committees. Trade efficiency is only a particular case of postprocessing.
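A minimal sketch of what such a committee interface could look like (C++; the class and method names are my own illustration, not the project's actual API): each member network produces a signal, and the committee averages and thresholds them.

#include <vector>

// Hypothetical minimal interface for a trained network.
struct INetwork
{
   virtual double Predict(const std::vector<double> &inputs) = 0;
   virtual ~INetwork() {}
};

// A committee averages the members' outputs and thresholds the mean,
// so individually weak classifiers can be combined into one signal.
class Committee
{
   std::vector<INetwork*> m_members;
public:
   void Add(INetwork *net) { m_members.push_back(net); }

   int Vote(const std::vector<double> &inputs, double threshold = 0.5)
   {
      if (m_members.empty()) return 0;   // no members, no signal
      double sum = 0.0;
      for (INetwork *net : m_members)
         sum += net->Predict(inputs);
      double mean = sum / m_members.size();
      if (mean >  threshold) return  1;  // buy
      if (mean < -threshold) return -1;  // sell
      return 0;                          // abstain
   }
};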
I want to teach the NS to trade with your TS, adding a few degrees of freedom.
"You need something that will really make money"
Well, this particular filter is trivial; making it is no problem at all. The algorithm is simple. We run the TS and collect the inputs and the necessary parameters (MA, RSI, ATR) at the entry points or in some neighbourhood of them.
Then we feed all the collected parameters to the input, and as the target output we take either the trade result in pips, or simply +1 if it was profitable and -1 if not. Then we feed all of that into a trivial 3-layer nonlinear perceptron and train it.
Voila.
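A minimal sketch of that recipe (C++; the structure and function names are my own illustration): one training sample per entry point, labeled by the outcome of the trade that followed. The perceptron itself is omitted, since building exactly that engine is what this thread is about.

#include <vector>

// One training example: indicator values at the entry point,
// labeled by the outcome of the trade that followed.
struct Sample
{
   double ma, rsi, atr;  // parameters collected when the TS entered
   double target;        // +1 profitable, -1 losing (or the result in pips)
};

// Build the training set from the logs of one test run of the TS.
std::vector<Sample> BuildTrainingSet(const std::vector<double> &maAtEntry,
                                     const std::vector<double> &rsiAtEntry,
                                     const std::vector<double> &atrAtEntry,
                                     const std::vector<double> &profitPips)
{
   std::vector<Sample> set;
   for (size_t i = 0; i < profitPips.size(); ++i)
   {
      Sample s = { maAtEntry[i], rsiAtEntry[i], atrAtEntry[i],
                   profitPips[i] > 0 ? 1.0 : -1.0 };
      set.push_back(s);
   }
   return set;   // feed this to a 3-layer perceptron and train
}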
A committee is only part of the solution. How do we conveniently and efficiently implement training of such NSs that are only part of the logic of a particular system? They cannot be trained separately, because there is no separate training sample for them.
Yes, you can do it that way, but it is a rather backwards way of doing it )))
For example, a TS with parameters: do we repeat this exercise for every optimization run? OK, that process can probably be bent into shape and automated somehow.
Or the other way round: the entry filter is ordinary Boolean logic, and the NS does the buying/selling.
But in principle all of this can be worked around and implemented one way or another. The question is convenience, clarity and portability for others to use.
Does working with the NS come down only to choosing its topology? The training method also plays an important role, and topology and training are closely related.
Every user has their own imho, so you cannot take half of the decision-making on yourself.
We need to create a network designer that does not limit the user to a set of presets, and is as universal as possible.
In the network construction scheme I proposed, the training method does not depend on the topology!
Since the network itself knows where everything comes from and where it goes, error propagation is automatic and the programmer does not have to bother with it.
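A minimal sketch of that idea (C++; the structures are my own illustration of the principle, not the project's code): if every connection stores its source and target, one generic backward pass can walk the same links in reverse, whatever the topology.

#include <vector>
#include <cstddef>

// Each link knows its source and target neuron, so the forward and the
// backward pass can both be driven from the same connection list.
struct Link { size_t from, to; double weight; };

struct Net
{
   std::vector<double> value, delta;  // one entry per neuron
   std::vector<Link>   links;         // the topology as an explicit list

   // Generic error propagation: assumes the links are stored in forward
   // (topological) order and delta[] is pre-seeded with the output error;
   // activation derivatives are omitted for brevity.
   void Backward()
   {
      for (size_t i = links.size(); i-- > 0; )
         delta[links[i].from] += links[i].weight * delta[links[i].to];
   }
};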
sergeev 2011.10.19 17:06:50: Will a two-dimensional array be enough for a variety of topologies and for visual understanding?
I replied the other day, but having thought it over at leisure:
To build a network, you will need a connection table (a sketch of one is given after this post).
Here is an example for a three-layer MLP: layer zero is the inputs, the first layer has two neurons, the second layer has one neuron.
The first three columns are created by consecutively listing all neurons and all neuron inputs; the second pass sets the matching, with one exception: if the "connection layer" is greater than or equal to the neuron's own "layer", the output index must be greater than 0, i.e. a feedback signal can only be taken from a delay operator.
With such a connection table you can set the topology even at random, and that alone is an indicator of universality.
Actually, I was thinking of storing the layer number in the neuron itself and making the numbering sequential for a one-dimensional array, but for now it is better to discuss the general scheme and leave the details for later.
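A possible reconstruction of such a connection table from the description above (C++; the column names, their order, and the assumption of two network inputs are mine): one row per neuron input plus the coordinates of the source it is wired to, with the delay-operator rule as a validity check.

#include <vector>

// One row per neuron input: the neuron's own coordinates plus the
// coordinates of the signal source this input is connected to.
struct Row
{
   int layer, neuron, input;         // filled by listing all neurons/inputs
   int srcLayer, srcNeuron, srcOut;  // filled on the second matching pass
};

// The rule from the post: a connection to the same or a later layer
// (srcLayer >= layer) is feedback, and feedback may only be taken
// through a delay operator, i.e. a source output index greater than 0.
bool RowIsValid(const Row &r)
{
   return !(r.srcLayer >= r.layer && r.srcOut <= 0);
}

// The 2-neuron / 1-neuron MLP described above, assuming two inputs in
// layer zero; output index 0 means the source's current (undelayed) output.
std::vector<Row> mlp = {
   {1, 0, 0,  0, 0, 0}, {1, 0, 1,  0, 1, 0},   // layer-1 neuron 0
   {1, 1, 0,  0, 0, 0}, {1, 1, 1,  0, 1, 0},   // layer-1 neuron 1
   {2, 0, 0,  1, 0, 0}, {2, 0, 1,  1, 1, 0},   // layer-2 (output) neuron
};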
"In the scheme I proposed, the training method does not depend on the topology! Since the network itself knows where everything comes from and where it goes, error propagation is automatic and the programmer does not have to bother with it."
I don't believe it (c) :)