Machine learning in trading: theory, models, practice and algo-trading - page 412

 
Aleksey Terentev:
My solutions were initially based on OpenNN (C++).
Now I have learned a lot and moved to Keras (Python).

Mainly I deal with deep learning for classification and prediction problems.

So we are working in the same area of ML. Keras is a good package, especially since it has been ported to R.

I just didn't understand: what is the question about?

Good luck

 
Vladimir Perervenko:

So we are working in the same area of ML. Keras is a good package, especially since it has been ported to R.

I just didn't understand: what is the question about?

Good luck

I just decided to join the discussion. =)
 
Maxim Dmitrievsky:


Now this is interesting... it means additional tests will have to be carried out; maybe this RNN does not make as much sense as was thought before.

The only advantage is that the weights are selected in the optimizer at the same time as the other parameters of the system.

I'm trying to find out how well it computes within the range of known data. The multiplication table, for example.

Update.

Additional experiments performed after modifying the Expert Advisor showed the following results, compared to an ordinary 3-5-1 MLP.

Different formulas were used for checking:

//double func(double d1,double d2,double d3 ){return d1*d2;} // error 0-2%, because multiplication is built into the formulas. The 2% appears because the genetic optimizer does not always stop at the ideal solution; an absolutely exact solution with error = 0 also exists.
//double func(double d1,double d2,double d3 ){return d1*d2*d3;} // error 0-2%, multiplication is built into the formulas; with the MLP 1%
//double func(double d1,double d2,double d3 ){return MathSin(d1);} // error 2-6%; with the MLP 0.1%
//double func(double d1,double d2,double d3 ){return MathSin(d1)*MathSin(d2)*MathSin(d3);} // error 2%; with the MLP 1%
//double func(double d1,double d2,double d3 ){return MathSin(d1)*MathSin(d2)*MathSqrt(d3);} // error 3-4%; with the MLP 1%
//double func(double d1,double d2,double d3 ){return MathSin(d1)*MathPow(d2,.33)*MathSqrt(d3);} // error 4-8%; with the MLP 1.5%
//double func(double d1,double d2,double d3 ){return MathPow(d1,2)*MathPow(d2,.33)*MathSqrt(d3);} // error 4-8%; with the MLP 1.5%

As a result, we can conclude that the ordinary MLP has 2-3 times lower error than Reshetov's RNN. Perhaps part of this error is due to the fact that the MT5 terminal's genetic optimizer does not always stop at the ideal solution.
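
For illustration, a minimal MQL5 sketch of this kind of check over a known data range. func() is one of the test formulas above; Predict() is a hypothetical placeholder for whichever approximator (Reshetov's RNN or the MLP) is being evaluated, and the [0;10] input range is an assumption:

//--- test target: one of the formulas listed above
double func(double d1, double d2, double d3) { return d1 * d2; }

//--- hypothetical placeholder: replace the body with the model under test
double Predict(double d1, double d2, double d3) { return d1 * d2; }

//--- mean relative error (in percent) over random points in [0;10]
double MeanRelativeError(const int samples = 1000)
  {
   double sum  = 0.0;
   int    used = 0;
   for(int i = 0; i < samples; i++)
     {
      double d1 = 10.0 * MathRand() / 32767.0;
      double d2 = 10.0 * MathRand() / 32767.0;
      double d3 = 10.0 * MathRand() / 32767.0;
      double target = func(d1, d2, d3);
      if(target == 0.0)
         continue;                 // skip points where relative error is undefined
      sum += MathAbs(Predict(d1, d2, d3) - target) / MathAbs(target);
      used++;
     }
   return (used > 0) ? 100.0 * sum / used : 0.0;
  }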
 
elibrarius:

I'm trying to find out how well it computes within the range of known data. The multiplication table, for example.

Update.

Additional experiments performed after modifying the Expert Advisor showed the following results, compared to an ordinary 3-5-1 MLP.

Different formulas were used for testing:

//double func(double d1,double d2,double d3 ){return d1*d2;} // error 0-2%, because multiplication is built into the formulas. The 2% appears because the genetic optimizer does not always stop at the ideal solution; an absolutely exact solution with error = 0 also exists.
//double func(double d1,double d2,double d3 ){return d1*d2*d3;} // error 0-2%, multiplication is built into the formulas; with the MLP 1%
//double func(double d1,double d2,double d3 ){return MathSin(d1);} // error 2-6%; with the MLP 0.1%
//double func(double d1,double d2,double d3 ){return MathSin(d1)*MathSin(d2)*MathSin(d3);} // error 2%; with the MLP 1%
//double func(double d1,double d2,double d3 ){return MathSin(d1)*MathSin(d2)*MathSqrt(d3);} // error 3-4%; with the MLP 1%
//double func(double d1,double d2,double d3 ){return MathSin(d1)*MathPow(d2,.33)*MathSqrt(d3);} // error 4-8%; with the MLP 1.5%
//double func(double d1,double d2,double d3 ){return MathPow(d1,2)*MathPow(d2,.33)*MathSqrt(d3);} // error 4-8%; with the MLP 1.5%

As a result, we can conclude that the ordinary MLP has 2-3 times lower error than Reshetov's RNN. Perhaps part of this error is due to the fact that the MT5 terminal's genetic optimizer does not always stop at the ideal solution.

Yes, interesting results, perhaps a full enumeration would be more accurate
 
Maxim Dmitrievsky:

Yes, interesting results, perhaps a full enumeration would be more accurate
It would be, but only after a few weeks or months... 8 selectable coefficients with a 1% step gives 100^8 = 10^16 iterations, and with that many passes the terminal itself switches to genetic optimization.
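
To put a number on it: 100 grid values per coefficient (a 1% step) raised to the 8th power is 100^8 = (10^2)^8 = 10^16 combinations. In MQL5 such coefficients are usually exposed as input variables so that the strategy tester's genetic optimizer can search them all together; a minimal sketch with made-up names:

//--- 8 selectable coefficients; in the tester each is optimized
//--- over 0.00..1.00 with step 0.01, i.e. 100 values per coefficient
input double w1 = 0.5;
input double w2 = 0.5;
input double w3 = 0.5;
input double w4 = 0.5;
input double w5 = 0.5;
input double w6 = 0.5;
input double w7 = 0.5;
input double w8 = 0.5;
//--- an exhaustive search would need 100^8 = 10^16 passes, which is why
//--- the terminal automatically switches to the genetic algorithm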
 
elibrarius:
It would be, but only after a few weeks or months... 8 selectable coefficients with a 1% step gives 100^8 = 10^16 iterations, and with that many passes the terminal itself switches to genetic optimization.

Can you tell me: a classification MLP from ALGLIB, as I understand it, requires at least 2 outputs. How do I work with this? There is no help or description anywhere... )
 
Maxim Dmitrievsky:

Can you tell me: a classification MLP from ALGLIB, as I understand it, requires at least 2 outputs. How do I work with it?)

Yes, 2, from the help: http://alglib.sources.ru/dataanalysis/neuralnetworks.php

A special case is neural networks with a linear output layer and SOFTMAX normalization of the outputs. They are used for classification problems in which the outputs of the network must be non-negative and their sum must be strictly equal to one, which allows them to be used as the probabilities of assigning the input vector to one of the classes (in the limiting case, the outputs of the trained network converge to these probabilities). The number of outputs of such a network is always at least two (a limitation dictated by elementary logic).

I haven't dealt with it yet; I'm still experimenting with the linear one.
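
For reference, a rough sketch of the two-output softmax case using the ALGLIB port shipped with MQL5 (<Math\Alglib\alglib.mqh>). The wrapper names (CAlglib::MLPCreateC1, MLPTrainLM, MLPProcess) and the matrix accessors follow the standard port but may differ between builds, and the data here are random placeholders:

#include <Math\Alglib\alglib.mqh>

void SoftmaxExample(void)
  {
   //--- 3 inputs, 5 hidden neurons, 2 softmax outputs (Buy / Sell)
   CMultilayerPerceptronShell net;
   CAlglib::MLPCreateC1(3, 5, 2, net);

   //--- per the ALGLIB docs, classification nets take the class number
   //--- (0 or 1) in the last column of the training matrix
   int npoints = 100;
   CMatrixDouble xy;
   xy.Resize(npoints, 4);
   for(int i = 0; i < npoints; i++)
     {
      xy[i].Set(0, MathRand() / 32767.0);              // placeholder inputs
      xy[i].Set(1, MathRand() / 32767.0);
      xy[i].Set(2, MathRand() / 32767.0);
      xy[i].Set(3, (MathRand() > 16383) ? 1.0 : 0.0);  // placeholder class label
     }

   int info;
   CMLPReportShell rep;
   CAlglib::MLPTrainLM(net, xy, npoints, 0.001, 2, info, rep);

   //--- the trained net returns two non-negative numbers summing to one:
   //--- y[0] = probability of class 0 (Buy), y[1] = probability of class 1 (Sell)
   double x[], y[];
   ArrayResize(x, 3);
   ArrayResize(y, 2);
   x[0] = 0.1; x[1] = 0.2; x[2] = 0.3;
   CAlglib::MLPProcess(net, x, y);
   PrintFormat("P(Buy)=%.3f  P(Sell)=%.3f", y[0], y[1]);
  }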
 
elibrarius:

Yes, 2, from the help: http://alglib.sources.ru/dataanalysis/neuralnetworks.php

A special case is neural networks with a linear output layer and SOFTMAX normalization of the outputs. They are used for classification problems in which the outputs of the network must be non-negative and their sum must be strictly equal to one, which allows them to be used as the probabilities of assigning the input vector to one of the classes (in the limiting case, the outputs of the trained network converge to these probabilities). The number of outputs of such a network is always at least two (a limitation dictated by elementary logic).


Ah, so in other words we feed in the probability of one outcome or the other, for example the probability to buy is 0.9 and to sell is 0.1; one array serves as a buffer for the buy probabilities and another for the sell probabilities, we fill them with these values, and after training the network will output the buy and sell probabilities into them separately. Do I understand correctly?
 
Maxim Dmitrievsky:

Ah, so in other words we feed in the probability of one outcome or the other, for example the probability to buy is 0.9 and to sell is 0.1; one array serves as a buffer for the buy probabilities and another for the sell probabilities, we fill them with these values, and after training the network will output the buy and sell probabilities into them separately. Do I understand correctly?

From what I have read, I have the same understanding.

Although I don't know what its advantage is... with 1 output we also get probabilities: near 0 (or -1) means Buy and near 1 means Sell (or the other way round, depending on how you train it).

Maybe it is useful when there are 3 or more outputs? After all, 1 output would be difficult to use for 3 or 4 classes (although you could also encode 0, 0.5 and 1 as the Buy, Wait, Sell classes).
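
For illustration, a small sketch of the two encodings being contrasted here, with made-up helper names and arbitrary 0.25 / 0.75 thresholds: a single output encoding the classes as 0, 0.5 and 1, versus one output per class where the class with the largest value wins:

#define CLASS_BUY   0
#define CLASS_WAIT  1
#define CLASS_SELL  2

//--- variant 1: a single output with classes encoded as 0, 0.5 and 1
int ClassFromSingleOutput(const double out)
  {
   if(out < 0.25) return CLASS_BUY;
   if(out < 0.75) return CLASS_WAIT;
   return CLASS_SELL;
  }

//--- variant 2: one output per class (e.g. softmax probabilities);
//--- the predicted class is the index of the largest output
int ClassFromProbabilities(const double &p[])
  {
   int best = 0;
   for(int i = 1; i < ArraySize(p); i++)
      if(p[i] > p[best])
         best = i;
   return best;
  }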

 
elibrarius:

From what I have read, I have the same understanding.

Although I don't know what its advantage is... with 1 output we also get probabilities: near 0 (or -1) means Buy and near 1 means Sell (or the other way round, depending on how you train it).

Maybe it is useful when there are 3 or more outputs? After all, 1 output would be difficult to use for 3 or 4 classes (although you could also encode 0, 0.5 and 1 as the Buy, Wait, Sell classes).


Yeah, they probably made it that way so you can have more than 2 classes... though at that point it is closer to clustering, and you could use other methods like k-means :)