Machine learning in trading: theory, models, practice and algo-trading - page 2033

 
Maxim Dmitrievsky:
Writing neural networks in the terminal is not an option at all: any function there can suddenly behave differently than expected. Use a ready-made, tested one.
Well, an ordinary net trains fine) with recurrence I'm still figuring out how to calculate the gradient.
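For the gradient with recurrence, the usual route is backpropagation through time. A minimal numpy sketch of a vanilla tanh RNN, with made-up names and shapes (not code from any library discussed here):

```python
import numpy as np

def rnn_forward(x_seq, h0, Wx, Wh, b):
    # x_seq: (T, n_in), h0: (n_h,). Returns all hidden states, hs[0] == h0.
    hs = [h0]
    for x in x_seq:
        hs.append(np.tanh(x @ Wx + hs[-1] @ Wh + b))
    return hs

def rnn_backward(x_seq, hs, dhs, Wx, Wh):
    # dhs[t]: gradient of the loss w.r.t. hs[t+1] coming from the output layer.
    dWx, dWh, db = np.zeros_like(Wx), np.zeros_like(Wh), np.zeros_like(hs[0])
    dh_next = np.zeros_like(hs[0])            # gradient carried back through time
    for t in reversed(range(len(x_seq))):
        dh = dhs[t] + dh_next                 # external grad + grad from step t+1
        dz = dh * (1.0 - hs[t + 1] ** 2)      # back through tanh
        dWx += np.outer(x_seq[t], dz)
        dWh += np.outer(hs[t], dz)
        db += dz
        dh_next = dz @ Wh.T                   # pass to the previous time step
    return dWx, dWh, db
```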
 
Aleksey Vyazmikin:

Show me a picture of what these tree clusters look like; I still don't understand what we're talking about.

Why open it? :) I just make a mini copy with a similar structure for debugging.

I've remade it several times; after unpacking it takes about 6 GB.

Day of week, day of month, hour, minute, ...the same for the exit..., deal duration in minutes, SL, TP, result ±1
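For what it's worth, a tiny sketch of how a deal table like that could be laid out; the column names and the sample row are my own guesses at the description, not the actual file:

```python
import pandas as pd

columns = [
    "entry_day_of_week", "entry_day_of_month", "entry_hour", "entry_minute",
    "exit_day_of_week", "exit_day_of_month", "exit_hour", "exit_minute",
    "duration_min", "sl", "tp", "result",     # result is the +/-1 label
]
deals = pd.DataFrame([[2, 15, 10, 30, 2, 15, 14, 5, 215, 50, 100, 1]],
                     columns=columns)
print(deals)
```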
 
Alexander Alexeyevich:
Well, an ordinary net trains fine) with recurrence I'm still figuring out how to calculate the gradient.
On its own it's unlikely to show good results; it's usually recommended to stack it with convolutional layers.
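One hedged reading of "stack with convolutional": a Conv1D front-end that compresses the raw sequence before the recurrent layer. The input shape and layer sizes below are arbitrary placeholders:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 8)),            # 128 time steps, 8 features per step
    layers.Conv1D(32, kernel_size=5, padding="causal", activation="relu"),
    layers.MaxPooling1D(pool_size=2),        # shorten the sequence before recurrence
    layers.LSTM(64),
    layers.Dense(1, activation="sigmoid"),   # e.g. direction of the next move
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```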
 
Maxim Dmitrievsky:

Do you want to write the network yourself?

This one has a minimum of words and a maximum of Python code, though it's in English.

https://datascience-enthusiast.com/DL/Building_a_Recurrent_Neural_Network-Step_by_Step_v1.html

These are just regular digital filters with a bunch of filter coefficients
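In the sense that, with a linear activation, a one-unit recurrent cell h[t] = a*h[t-1] + b*x[t] is literally a first-order IIR filter. A small check of that analogy (the coefficients are purely illustrative):

```python
import numpy as np
from scipy.signal import lfilter

x = np.random.randn(500)          # some input series
a, b = 0.9, 0.1                   # "filter coefficients" / recurrent weights

h = np.zeros_like(x)              # recurrent form
h_prev = 0.0
for t in range(len(x)):
    h_prev = a * h_prev + b * x[t]
    h[t] = h_prev

y = lfilter([b], [1.0, -a], x)    # same recursion as an IIR filter: y[t] = a*y[t-1] + b*x[t]
print(np.allclose(h, y))          # True
```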

Maxim Dmitrievsky:
Writing neural networks in the terminal is not an option at all: any function there can suddenly behave differently than expected. Use a ready-made, tested one.

Why?

 
Renat Akhtyamov:

These are just regular digital filters with a bunch of filter coefficients

why?

That's what I'm saying) the main thing is that everything is calculated correctly
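"Calculated correctly" is usually verified with a numerical gradient check: compare the analytic gradient against central differences on a toy loss. A sketch, where the loss function and parameters are placeholders:

```python
import numpy as np

def numerical_grad(f, w, eps=1e-6):
    # central-difference estimate of df/dw, one coordinate at a time
    g = np.zeros_like(w)
    for i in range(w.size):
        w_plus, w_minus = w.copy(), w.copy()
        w_plus.flat[i] += eps
        w_minus.flat[i] -= eps
        g.flat[i] = (f(w_plus) - f(w_minus)) / (2 * eps)
    return g

# example: loss = 0.5*||w||^2, whose analytic gradient is w itself
w = np.random.randn(5)
loss = lambda v: 0.5 * np.sum(v ** 2)
print(np.max(np.abs(numerical_grad(loss, w) - w)))   # should be ~1e-10 or smaller
```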
 
Renat Akhtyamov:

These are just regular digital filters with a bunch of filter coefficients

why?

Because it's a semi-poker jap.
 
Alexander Alexeyevich:
That's what I'm saying) the main thing is that everything is calculated correctly

If you have experience with networks in general, you may have tried training them in MQL or in plain C++ out of curiosity; if not, give it a try and see for yourself.

 
Maxim Dmitrievsky:
Because it's a semi-poker jap

As I recall, it seems they promised to add WinML with ONNX)
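On the ONNX side, the workflow would presumably be: train in Python, export a .onnx file, and let whatever runtime (WinML, onnxruntime, perhaps one day the terminal) load it. A toy PyTorch export as a sketch; the model and file name are made up:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # toy net
model.eval()
dummy = torch.randn(1, 8)                      # one sample with 8 features
torch.onnx.export(model, dummy, "toy_model.onnx",
                  input_names=["features"], output_names=["score"])
```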

 
Maxim Dmitrievsky:
On its own it's unlikely to show good results; it's usually recommended to stack it with convolutional layers.

These are interesting questions. What do you mean by "stack"? How do you tell which architecture (ensembles, model trees) is better? By what metrics do you judge that, just by the final result? How do you properly combine, say, the same LSTM recurrence with CatBoost? And is it even necessary, is it worth it...
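One hedged answer to the LSTM-plus-CatBoost question is plain stacking: the LSTM's score becomes one more feature for the boosting model. Everything below (data, shapes, hyperparameters) is a placeholder, and in a real setup the LSTM score should be produced out-of-fold to avoid leakage, which this sketch skips:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from catboost import CatBoostClassifier

T, F = 64, 6
X_seq = np.random.randn(1000, T, F).astype("float32")   # sequences for the LSTM
X_tab = np.random.randn(1000, 10)                        # ordinary tabular features
y = np.random.randint(0, 2, 1000)                        # toy +/- labels as 0/1

lstm = models.Sequential([layers.Input(shape=(T, F)),
                          layers.LSTM(32),
                          layers.Dense(1, activation="sigmoid")])
lstm.compile(optimizer="adam", loss="binary_crossentropy")
lstm.fit(X_seq, y, epochs=2, batch_size=64, verbose=0)

# stack: append the LSTM score to the tabular features, let CatBoost finish the job
lstm_score = lstm.predict(X_seq, verbose=0)
X_stacked = np.hstack([X_tab, lstm_score])
cb = CatBoostClassifier(iterations=200, depth=4, verbose=False)
cb.fit(X_stacked, y)
```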
 
I've been tweaking the optimizer lately, mostly in the area of metrics. I've piled so much into it that I'm actually proud. I'm a real charmer, somebody hold me back :-)