Machine learning in trading: theory, models, practice and algo-trading - page 628

 
Nikolay Demko:

No, I was saying that you can't mix market data with the results of the network.

In other words, your network processes quotes, yet you also feed it data about whether the last trade was successful. That is heterogeneous data; you can't mix it together.

And in general, evaluating whether the network worked well or not is a separate unit (in GA I used to call it a fitness function; in NS it is called an error function, but the essence is the same).

Suppose you train the network with backprop: it turns out the error becomes part of the data, a tautology ("buttery butter"). I hope you see what I mean.
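A minimal sketch of that separation, with all data and names invented for illustration: the error function consumes the network's output but is never appended to the network's inputs.

```python
import numpy as np

def mse(predictions, targets):
    # The error function is a separate unit: it judges the network,
    # it does not become part of the network's data.
    return np.mean((predictions - targets) ** 2)

def make_features(quotes, window=10):
    # Inputs are built from market data only: no trade results, no past errors.
    return np.array([quotes[i - window:i] for i in range(window, len(quotes))])

quotes = np.cumsum(np.random.randn(200))   # stand-in for a price series
X = make_features(quotes)                  # network inputs: quotes only
y = quotes[10:]                            # training targets
predictions = X.mean(axis=1)               # stand-in for a network's output
print("error:", mse(predictions, y))       # evaluated separately, never fed back
```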

Yes, I got it... At first I want to train it simply in the MT5 optimizer: that will let me get trade and equity results immediately and feed them back to the net, without any complicated programming.

As for the architecture, it can be redesigned, but so far I have no other variants, because I haven't even "felt out" this one yet. That it will show at least some results is certain, but what kind of results is the question :)

 
Maxim Dmitrievsky:

I know all about that; cross-validation is also a fitting, just a more sophisticated one.

A recurrent net also loops back on itself and sometimes can't learn.

And I don't quite understand: you say you can't feed network outputs to the inputs, and then you write about recurrence... :) that's exactly what it does, it eats its own outputs.

A recurrent net is, in the simplest case, an ordinary MLP that eats its own outputs.
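A minimal sketch of that simplest case, with invented weights and stand-in data: an ordinary MLP whose previous output is appended to its next input.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(6, 6))    # hidden weights: 5 market inputs + 1 fed-back output
W2 = rng.normal(size=(1, 6))    # output weights

def step(x_market, prev_out):
    x = np.concatenate([x_market, [prev_out]])  # the net eats its own output
    h = np.tanh(W1 @ x)
    return float(W2 @ h)

out = 0.0
for t in range(100):
    x_market = rng.normal(size=5)   # stand-in for current market features
    out = step(x_market, out)       # previous output becomes part of the next input
```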

On cross-validation I agree, but there are more sophisticated methods. At the same time, cross-validation gives acceptable results despite the simplicity of the method.
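For market data, the usual simple variant is a walk-forward split, so each fold validates on strictly later data. A sketch with stand-in data, assuming scikit-learn is available:

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

X = np.random.randn(500, 10)   # stand-in features
y = np.random.randn(500)       # stand-in targets

scores = []
for train_idx, test_idx in TimeSeriesSplit(n_splits=5).split(X):
    # fit any model on the past, score it on the immediate future
    w, *_ = np.linalg.lstsq(X[train_idx], y[train_idx], rcond=None)
    resid = X[test_idx] @ w - y[test_idx]
    scores.append(float(np.mean(resid ** 2)))

print("fold MSEs:", scores)    # stability across folds matters more than any single number
```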

Although, taken as a whole, an NS is a fitting: a universal approximator. And we are still at the stage in the development of NS science where it has not been reliably established how to find the point at which we can say the NS has learned the dependence rather than merely fitted the data.

This is the problem of representing a complex function of one variable by a set of simple functions of many variables.

And if you solve this problem, you're actually building an AI.

 
Nikolay Demko:

On cross-validation I agree, but there are more sophisticated methods.

Although, taken as a whole, an NS is a fitting: a universal approximator. And we are still at the stage in the development of NS science where it has not been reliably established how to find the point at which we can say the NS has learned the dependence rather than merely fitted the data.

This is the problem of representing a complex function of one variable by a set of simple functions of many variables.

And if you solve this problem, you're actually building an AI.

This is all too complex to picture at once, let alone picture all the connections in the NS and how they will interact with each other.

We don't need an AI, but at least some kind of reaction to market changes would be nice, with some "memory".

 
Maxim Dmitrievsky:

This is all too complex to picture at once, let alone picture all the connections in the NS and how they will interact with each other.

We don't need an AI, but at least some kind of reaction to market changes would be nice, with some "memory".

If you don't like kittens, maybe you just don't know how to cook them )).

An NS will approximate and even generalize any data; the main thing is that the data contain what you're looking for.

This means that, besides choosing the type of NS, it is no less important to prepare the data correctly.

As you can see, the task is interdependent: what data to feed depends on the type of NS, and which NS to choose depends on the data you have prepared for it.

But although this problem is circular, it is solvable. GA, for example, works the same way: initially the algorithm knows nothing about the data, and by gradually refining the problem it converges to a robust solution.
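A minimal GA sketch of that idea, with an invented fitness target: the population starts with zero knowledge and converges purely through the fitness score.

```python
import numpy as np

rng = np.random.default_rng(1)
target = rng.normal(size=8)                     # hypothetical unknown optimum

def fitness(individual):
    return -np.sum((individual - target) ** 2)  # higher is better

pop = rng.normal(size=(50, 8))                  # random start: zero knowledge of the data
for generation in range(200):
    order = np.argsort([fitness(ind) for ind in pop])
    parents = pop[order[-10:]]                  # selection: keep the 10 best
    children = parents[rng.integers(0, 10, 40)] + rng.normal(scale=0.1, size=(40, 8))
    pop = np.vstack([parents, children])        # elitism plus mutated offspring

print("best fitness:", max(fitness(ind) for ind in pop))
```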

It is the same here: systematize your research, keep a log, and you will succeed.

 
Maxim Dmitrievsky:

Yes, I got it... At first I want to train it simply in the MT5 optimizer: that will let me get trade and equity results immediately and feed them back to the net, without any complicated programming.

As for the architecture, it can be redesigned, but so far I have no other variants, because I haven't even "felt out" this one yet. That it will show at least some results is certain, but what kind of results is the question :)

Maxim, you really shouldn't train the network in the MT optimizer. An NS trainer and the optimizer are completely different algorithms with completely different optimality criteria.

If you are still using that NS structure you drew earlier, it is too weak for the market. I've already written that I succeeded only when I got to a 15-20-15-10-5-1 structure, and that is for only one type of deal. I also did everything by the methods described by Haykin, i.e. nothing new, no tricks.

Simpler structures trained poorly.
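For reference, a sketch of that 15-20-15-10-5-1 structure in Keras (my choice of library; Yuriy refers to Haykin's methods, not to any particular framework):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(15,)),                   # 15 input features
    tf.keras.layers.Dense(20, activation="tanh"),  # hidden layers per the
    tf.keras.layers.Dense(15, activation="tanh"),  # 15-20-15-10-5-1 scheme
    tf.keras.layers.Dense(10, activation="tanh"),
    tf.keras.layers.Dense(5, activation="tanh"),
    tf.keras.layers.Dense(1, activation="tanh"),   # one output: one type of deal
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```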

 
Yuriy Asaulenko:

Maxim, you really shouldn't train the network in the MT optimizer. An NS trainer and the optimizer are completely different algorithms with completely different optimality criteria.

If you are still using that NS structure you drew earlier, it is too weak for the market. I've already written that I succeeded only when I got to a 15-20-15-10-5-1 structure, and that is for only one type of deal. I also did everything by the methods described by Haykin, i.e. nothing new, no tricks.

Simpler structures trained poorly.

But nothing prevents me from adding another one on top of this one. The point is not the depth of the net, but making it with feedback. This is my whim now; I'm an artist, that's how I see it :) the classic approach isn't interesting.

Bolting all of that onto a net with backprop is a pain in the ass... it's better to keep it simple :)

Because this net is trained step by step... make a step, get feedback, and so on, until the whole set of actions and results has been generalized.
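A minimal sketch of that step-and-feedback loop; the update rule and all names here are invented for illustration, not Maxim's actual scheme:

```python
import numpy as np

rng = np.random.default_rng(2)
weights = rng.normal(scale=0.1, size=5)

def act(features):
    return np.tanh(features @ weights)   # trading signal in [-1, 1]

for step in range(1000):
    features = rng.normal(size=5)        # stand-in market snapshot
    signal = act(features)
    price_move = rng.normal()            # stand-in outcome of the step
    reward = signal * price_move         # feedback: signed result of the action
    # crude policy-gradient-style nudge: reinforce actions that paid off
    weights += 0.01 * reward * (1 - signal ** 2) * features
```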

You can just take a smaller history and everything will be fine, and you can scale it up afterwards.

 
Maxim Dmitrievsky:

But nothing prevents me from adding another one on top of this one. The point is not the depth of the net, but making it with feedback. This is my whim now; I'm an artist, that's how I see it :) the classic approach isn't interesting.

Bolting all of that onto a net with backprop is a pain in the ass... it's better to keep it simple :)

Because this net is trained step by step... make a step, get feedback, and so on, until the whole set of actions and results has been generalized.

So, I wrote that every N epochs I stopped the BP, ran the tests, and then continued training further. I realize twenty-four hours of training is a long time, but that conversation was a couple of months ago.
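A sketch of that routine; the model's train/evaluate calls are assumed names, not a real API:

```python
N_EPOCHS_PER_CHECK = 10   # the "N" in "every N epochs"

def train_with_checks(model, train_data, test_data, total_epochs=200):
    for epoch in range(1, total_epochs + 1):
        model.train_one_epoch(train_data)        # hypothetical trainer call
        if epoch % N_EPOCHS_PER_CHECK == 0:
            score = model.evaluate(test_data)    # pause BP, run the tests
            print(f"epoch {epoch}: out-of-sample score {score:.4f}")
            # then training simply continues; nothing is reset
```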

But it's up to the artist, of course). Don't shoot the pianist, he plays as he knows how.

PPS: Actually, you need not just a lot of training data, but a very large amount. On a small sample size the NS will not extract anything useful.

 
Yuriy Asaulenko:

So, I wrote that every N epochs I stopped the BP, ran the tests, and then continued training further. I realize twenty-four hours of training is a long time, but that conversation was a couple of months ago.

But it's up to the artist, of course). Don't shoot the pianist, he plays as he knows how.

Yes, that's more words than work; it can be redone in 2 hours )) I'll do it tonight, maybe.

All I need to achieve is slightly more stable and clearer results on the forward test; apart from that, everything works.

 
Maxim Dmitrievsky:

That's more words than work; it can be redone in 2 hours )) I'll do it tonight, maybe.

All I need to achieve is slightly more stable and clearer results on the forward test; apart from that, everything works.

I had added a postscript to my previous post, but since the page has turned over, I'm duplicating it here.

In fact, you need not just a lot of training data, but a very large amount. On a small sample size the NS will not extract anything useful.

 
Nikolay Demko:

I do apologize for the jab, but reread your message; it comes across as quite ambiguous.
In general you're right, but only about the first layer of the neural network. If the feedback goes to the second and subsequent layers, or to parallel network layers in general, your statement loses its force.
In that case Maxim should think about deepening the network and routing the feedback into hidden layers.
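A sketch of that suggestion, as my own illustration: an Elman-style context loop, where the feedback enters a hidden layer rather than the input layer.

```python
import numpy as np

rng = np.random.default_rng(3)
n_in, n_hid = 10, 16
W_in  = rng.normal(scale=0.1, size=(n_hid, n_in))   # input -> hidden
W_ctx = rng.normal(scale=0.1, size=(n_hid, n_hid))  # hidden -> hidden feedback
W_out = rng.normal(scale=0.1, size=(1, n_hid))      # hidden -> output

h = np.zeros(n_hid)                                  # context: the hidden state
for t in range(100):
    x = rng.normal(size=n_in)                        # stand-in market features
    h = np.tanh(W_in @ x + W_ctx @ h)                # feedback enters the hidden layer
    y = float(W_out @ h)                             # output never re-enters the input
```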

And as for:

As you can see, the task is interdependent: what data to feed depends on the type of NS, and which NS to choose depends on the data you have prepared for it.
It's the same thing. MLPs are no longer relevant; deep learning has been the trend for a long time now. And a single network is quite capable of processing heterogeneous data; the main thing is the architecture.
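A sketch of that last point, with invented shapes and names: one network with separate branches for the heterogeneous inputs, merged in a deeper layer.

```python
import tensorflow as tf

quotes_in = tf.keras.Input(shape=(32,), name="quotes")        # price-based features
trades_in = tf.keras.Input(shape=(4,), name="trade_results")  # trade-outcome features

q = tf.keras.layers.Dense(16, activation="relu")(quotes_in)   # each data type gets
t = tf.keras.layers.Dense(8, activation="relu")(trades_in)    # its own branch
merged = tf.keras.layers.concatenate([q, t])                  # merged deeper in the net
out = tf.keras.layers.Dense(1, activation="tanh")(merged)

model = tf.keras.Model(inputs=[quotes_in, trades_in], outputs=out)
model.compile(optimizer="adam", loss="mse")
```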