How to form the input values for the NS correctly. - page 23

 
StatBars wrote >>

I'm not developing an NS; right now I'm looking for optimal inputs and outputs to build a training sample. I think proper sampling matters more than the NS itself - there are plenty of NS implementations in different languages on the web...

Right. With good inputs, architecture is not a problem. You could say: inputs are everything, architecture is nothing.


Here, the gentlemen picked up suitable inputs and got proper results with MTS "Combo":




[screenshot: Strategy Tester results]


This is with the EA trading a fixed lot - I haven't added MM to it yet.
zxc, 11.05.2008 14:17

Valio wrote >>

I also tried my hand at this miracle. I spent a week on it, debugged the algorithm in the basic version and added a few functions of my own to basicTradingSystem, i.e. remade the "basic BTS" in the author's language. The results on historical data are fantastic: profit factor from 8 to 12, expected payoff around 1000 - though that is with my own indicator. I tested it on EUR H1 over about half a year. The next month after that - the result

"the month after - the result" ???

Valio, so what result did you get on the forward? Very interesting!

It's just that I reworked this EA too, and at first the profit factor on the euro over a whole year (not just half a year) on H1 was over 18! But the forwards turned out, to put it mildly, not so good...

Now, after another revision, the profit factor in optimization is lower (about 10), but the forward now holds up decently: over more than 3 months of forward testing the profit factor is above 2.5. I'm still working on it - I think it can become a very interesting Expert Advisor.

 
Reshetov wrote >>

Right. With the right inputs, the architecture is no longer a problem. You could say: inputs are everything, architecture is nothing.


Here, the gentlemen picked up the right inputs and got the appropriate results with MTS "Combo":

I agree with you to some extent. But network architecture plays a big role... e.g. RBF networks are much better at solving some interpolation problems.

 
rip wrote >>

I agree with you on some points. But network architecture plays a big role... RBF networks, for example, are much better at solving some interpolation problems.

Applied to trading, interpolation and approximation problems are absolutely useless, because the market changes all the time and quotes are not smooth functions. What we need to solve here are extrapolation problems, so that the trading strategy passes forward tests instead of merely fitting the history. We don't need to know what value the price took between such-and-such dates, because that is already known without any interpolation.


So don't waste your time on interpolations and architectures. Besides, interpolation and approximation can be done by other methods much more easily and accurately.


Choose adequate inputs so that pattern classification can be performed even on an elementary architecture. After that there is no longer any need to select a matching architecture. Trying to do it the other way round is just a waste of time.


For building a house, the most important thing is the foundation, not the finish. Although the finish looks more attractive than the foundation.


The same goes for NS architecture. It certainly adds functionality, but only if the inputs are adequate. If they are inadequate, no plaster will save the house once the foundation collapses.

 
Reshetov wrote >>

The most important thing for building a house is the foundation, not the finish. Although the finish does look more attractive than the foundation.

So does the architecture of the NS. It certainly adds functionality, but only if the inputs are adequate. If they are inadequate, then plaster will not save the house from collapse if the foundation collapses.

OK, I agree - the input and output signals are important. It is the problem statement that defines the NS architecture, and pattern classification is just one such problem.

Say, why can't the signal be the determination of the sign of the next bar plus the extrema of that same bar? What is wrong with that?


Even for classification, the important question is which NS you take and how you prepare the data. A network is not a precise tool: it cannot give you the result to the nearest hundredth, but it can give criteria relative to which another tool will do the exact calculation.
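Reading rip's suggestion as the construction of training targets, here is a minimal sketch; the Bar struct and the neighbour-based "extremum" test are my assumptions, not rip's definitions:

#include <cstdio>
#include <vector>

struct Bar { double open, high, low, close; };

int main() {
    // toy history; in a real sample these come from quotes
    std::vector<Bar> bars = {
        {1.300, 1.310, 1.290, 1.305}, {1.305, 1.320, 1.300, 1.310},
        {1.310, 1.315, 1.295, 1.300}, {1.300, 1.325, 1.298, 1.320},
        {1.320, 1.330, 1.315, 1.325}};

    for (size_t i = 0; i + 2 < bars.size(); ++i) {
        const Bar& next = bars[i + 1];
        // target 1: sign of the next bar's body
        int sign = next.close > next.open ? 1 : (next.close < next.open ? -1 : 0);
        // target 2: does the next bar set a local extremum among its neighbours?
        bool highExt = next.high > bars[i].high && next.high > bars[i + 2].high;
        bool lowExt  = next.low  < bars[i].low  && next.low  < bars[i + 2].low;
        std::printf("bar %zu: sign=%+d highExt=%d lowExt=%d\n",
                    i, sign, (int)highExt, (int)lowExt);
    }
    return 0;
}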

 
Reshetov wrote >>

For trading, interpolation and approximation problems are absolutely useless, because the market is changing all the time and quotes are not smooth functions. Here we need to solve extrapolation problems, so that the trading strategy passes the forward tests rather than being limited to fitting the history. We do not need to know what value the price had between such-and-such dates, because it is already known without any interpolation.

So don't waste your time on interpolations and architectures. Moreover, interpolation and approximation can be done by various other methods, much more easily and accurately.


Choose adequate inputs so that pattern classification can be performed even on an elementary architecture. After that there is no need to select the corresponding architecture. Trying to do the opposite is only a waste of time.


For building a house, the most important thing is the foundation, not the finish. Although the finish looks more attractive than the foundation.


So does the architecture of the NS. It certainly adds functionality, but only if the inputs are adequate. If they are inadequate, plaster will not save the house from collapse if the foundation collapses.

I absolutely agree. As one esteemed colleague (Steve Ward) told me: "Look for inputs" )))))

 
TheXpert wrote >>

Hold on!!! I already have a ready-made VC++ lib.

There are just 2 problems:

1. a dependency on Boost - I want to get rid of it; better to do the serialization manually, it's buggy anyway.

2. something is off with the adaptive step.


Why reinvent the wheel? Especially since it has:

1. an MLP with the ability to build tree structures.

2. std::valarray + aggressive optimization of operations for faster computation (illustrated just below).

3. an adaptive step.

4. patterns with auto-normalization.

5. plenty of room for extension.



Eh?
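As for item 2, here is a tiny illustration of the std::valarray style (my sketch, not TheXpert's code): whole-array expressions replace hand-written element loops, and the standard explicitly permits implementations to optimize such expressions aggressively.

#include <cstdio>
#include <valarray>

int main() {
    std::valarray<double> w    = {0.10, -0.20, 0.30};   // weights
    std::valarray<double> grad = {0.50,  0.10, -0.40};  // gradient for one pattern
    double rate = 0.01;                                 // learning step

    // one whole-array expression instead of an element loop;
    // compound assignment updates w in place
    w -= rate * grad;

    std::printf("w = {%f, %f, %f}\n", w[0], w[1], w[2]);
    return 0;
}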

Yes, that's all very nice - thanks for the offer. But, as they say, it's better to do it once yourself :)

I really learned a lot this weekend.

Especially once I managed to reduce the number of training iterations from 10e7 to about 10e4.

I did it in two ways.

1. The neuron with the largest error is corrected at double speed (I tried the neuron with the smallest error instead - that was worse).

2. If a neuron's correction is smaller than some minimum (for example, 10e-6), the correction is increased 10 times. (Both tweaks are sketched below.)

I liked it very much. :)
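A sketch of those two tweaks applied to a per-neuron delta-rule correction; all names and the base rate are illustrative, not sergeev's actual code:

#include <cmath>
#include <cstdio>
#include <vector>

struct Neuron { double error; double correction; };

void applyTweaks(std::vector<Neuron>& layer, double baseRate) {
    // tweak 1: find the neuron with the largest absolute error...
    size_t worst = 0;
    for (size_t i = 1; i < layer.size(); ++i)
        if (std::fabs(layer[i].error) > std::fabs(layer[worst].error))
            worst = i;

    for (size_t i = 0; i < layer.size(); ++i) {
        // ...and correct it at double speed
        double rate = (i == worst) ? 2.0 * baseRate : baseRate;
        double corr = rate * layer[i].error;
        // tweak 2: boost vanishing corrections tenfold
        if (std::fabs(corr) < 1e-6) corr *= 10.0;
        layer[i].correction = corr;
    }
}

int main() {
    std::vector<Neuron> layer = {{0.40, 0.0}, {-0.90, 0.0}, {0.0000001, 0.0}};
    applyTweaks(layer, 0.1);
    for (const Neuron& n : layer)
        std::printf("error=% .7f correction=% .8f\n", n.error, n.correction);
    return 0;
}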

Well, as for using someone else's code - until you yourself understand why you need something, you don't need it yet.

Especially since speed is important here, and adding functionality at the expense of speed is unacceptable. In my opinion, it's better to write a net for the specific task. Not without proper objects and a well-thought-out structure, of course, but...

Points 2 and 3 are interesting, though. Is there any way to find out which methods you use?

-----------------------------------

I've read the posts. Too bad it's still the same. :) Everyone just writes HOW IMPORTANT THE INPUTS ARE!!! "Architecture is nothing - inputs are everything", "Look for inputs", etc.

But nothing concrete. Maybe the gurus would care to share? Eh?

 
rip wrote >>

>> What's a lib?

It's my own; I posted it on RSDN a while back. I haven't got it to SourceForge yet, and it still needs some work...

 
sergeev wrote >>

I did this in two ways.

1. The neuron with the largest error is corrected at double speed (I tried the neuron with the smallest error instead - that was worse).

2. If a neuron's correction is smaller than some minimum (for example, 10e-6), the correction is increased 10 times.

I liked it very much. :)

Yeah, cool - why not build it yourself, indeed?

Well, as for using someone else's code - until you yourself understand why you need something, you don't need it yet.

Especially since speed is important here, and adding functionality at the expense of speed is unacceptable. In my opinion, it's better to write a net for the specific task. Not without proper objects and a well-thought-out structure, of course, but...

As for speed - I've squeezed out what I can; I think my code can be sped up by 3-5% at most, and that won't be easy :).

Regarding items 2 and 3 - that's where it gets interesting. Is there any way to find out which methods are used?

2. "C++ language", Bjorn Straustrup, look for aggressive optimization, the point is to reduce copy operations.

3. V. Golovko's lecture notes - try searching for his papers, or google "adaptive step learning"; I can't give a specific link right now (one common scheme is sketched below).

5. Extension without sacrificing speed - templates rule :).
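On the adaptive step: one common scheme (a delta-bar-delta flavour; whether this is the exact variant in Golovko's notes is an assumption on my part) gives each weight its own step, which grows while the gradient keeps its sign and shrinks when the sign flips. A sketch:

#include <cstdio>

struct AdaptiveWeight {
    double w = 0.0, step = 0.01, prevGrad = 0.0;

    void update(double grad) {
        if (grad * prevGrad > 0)      step *= 1.1;  // same direction: speed up
        else if (grad * prevGrad < 0) step *= 0.5;  // sign flip: slow down
        w -= step * grad;                           // gradient-descent move
        prevGrad = grad;
    }
};

int main() {
    AdaptiveWeight aw;
    const double grads[] = {0.8, 0.7, 0.6, -0.3, 0.2};  // toy gradient stream
    for (double g : grads) {
        aw.update(g);
        std::printf("w=% .5f step=%.5f\n", aw.w, aw.step);
    }
    return 0;
}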

 
sergeev wrote >>

-----------------------------------

I've read the posts. Too bad it's still the same. :) Everyone just writes HOW IMPORTANT THE INPUTS ARE!!! "Architecture is nothing - inputs are everything", "Look for inputs", etc.

And zero specifics. Maybe the gurus will share? Eh?

More concretely, I can advise how to look for adequate inputs.


Take the simplest perceptron (see How to find a profitable trading strategy) and feed its inputs various indicators and combinations of them. Whatever gives the largest profit factor on this very perceptron - i.e. the best fit when testing with a constant lot (no MM) - is very likely to pass forward testing on a more complex architecture as well. Why is easy to explain: the perceptron is a linear classifier, so we obtain inputs on which the patterns are linearly separable. The architecture then adds classification by non-linear criteria on top of that, and we get an improved result.
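A schematic of that recipe; the perceptron and the profit-factor bookkeeping are illustrative, and the data and weights are toy placeholders rather than the code from the linked article:

#include <cmath>
#include <cstdio>
#include <vector>

// bare linear perceptron: >0 means buy, <0 means sell
double perceptron(const std::vector<double>& x, const std::vector<double>& w) {
    double s = w[0];                                  // bias
    for (size_t i = 0; i < x.size(); ++i) s += w[i + 1] * x[i];
    return s;
}

int main() {
    // per bar: two indicator readings, plus the constant-lot result the
    // tester would report for the trade taken on that bar's signal
    std::vector<std::vector<double>> inputs = {{0.3, -1.2}, {-0.7, 0.4},
                                               {1.1,  0.2}, {-0.2, -0.9}};
    std::vector<double> tradeResult = {12.0, -8.0, 15.0, 6.0};
    std::vector<double> w = {0.1, 1.0, -0.5};         // one optimizer candidate

    double grossProfit = 0.0, grossLoss = 0.0;
    for (size_t i = 0; i < inputs.size(); ++i) {
        if (perceptron(inputs[i], w) == 0.0) continue;   // no signal, no trade
        double r = tradeResult[i];
        if (r > 0) grossProfit += r; else grossLoss += -r;
    }
    // the indicator set (and weights) with the largest profit factor wins
    std::printf("profit factor = %.2f\n",
                grossLoss > 0 ? grossProfit / grossLoss : grossProfit);
    return 0;
}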


If you do it the other way round, you get nothing. A sophisticated architecture consumes immense resources and immediately searches for pattern features by non-linear separability, ignoring linear separability. And non-linearity without linearity is nothing but curve-fitting. The result is a waste of time.


One more tidbit. If the net is trainable, it should never be trained to the end; it must always remain under-trained. For example, if full training takes 1000 epochs, divide by 3 and you get about 300 epochs. That is quite enough. Why? If we train the network to the end, it will only be suitable for stationary environments. Financial instruments, however, are non-stationary environments with regime switching: over time the market can partially switch from one quasi-stationary state to another while mostly remaining in the old state, or it can return to a previous state. Fully training the net is therefore nothing but a naked fit to some temporary regime.
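The "divide by 3" rule is trivial to wire into a training loop; Net and trainOneEpoch below are hypothetical stand-ins:

#include <cstdio>

struct Net {};                       // stand-in for a real network
void trainOneEpoch(Net&) {}          // stand-in for one training pass

int main() {
    Net net;
    const int epochsToConverge = 1000;            // what full training would need
    const int epochBudget = epochsToConverge / 3; // the "divide by 3" rule
    for (int epoch = 0; epoch < epochBudget; ++epoch)
        trainOneEpoch(net);                       // stop while still under-trained
    std::printf("stopped at %d of %d epochs\n", epochBudget, epochsToConverge);
    return 0;
}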


And finally, a concrete counterargument for the nerds who think NS interpolation capabilities are needed for trading: take any repainting indicator or oscillator, and you get a wonderful interpolation of history without any neural networks or tricky architectures. Of course, traders shun repainting indicators, because what suits interpolation or approximation does not suit extrapolation under non-stationarity.



 
Reshetov wrote >> One more tidbit. If the net is trainable, it must never be trained to the end; it must always remain under-trained. For example, if full training takes 1000 epochs, divide by 3 and you get about 300 epochs. That is quite enough. Why? If we train the network to the end, it will only be suitable for stationary environments. Financial instruments, however, are non-stationary environments with regime switching: the market can partially switch from one quasi-stationary state to another while mostly remaining in the old state, or return to a previous state.
I would strengthen this advice even further: divide by 10. For some reason the thread about stochastic resonance comes to mind. Training the net to the end may drive the objective function into a deep minimum, i.e. into a stable state. Stable states are not typical of financial markets at all; their states are quasi-stable, ready to turn into a catastrophe (a trend) at any moment under the influence of even slight "noise". But this is just philosophizing...