Machine learning in trading: theory, models, practice and algo-trading - page 1547
I don't know, it's always different.
Hmm, so maybe think about how you could find that out?
By the way, I can build a model in R on your data, if you are interested in comparing the efficiency of the methods.
It's already impossible to do anything better there; the model is perfect and confirms the random nature of the quotes.
Further improvements can only come from different methods of working with random processes, which is what I wrote about above.
Random solutions for random processes seem to me too risky an approach in their very ideology...
I'm going back to something I've wanted to do for a long time: ML + stochastic processes.
http://www.turingfinance.com/random-walks-down-wall-street-stochastic-processes-in-python/
The topic is interesting, especially Merton's jump-diffusion model or some variation of it. It seems that, unlike ordinary diffusion, it is not reduced (by time discretization) to an autoregression, or only in some non-trivial way. Sliding-window calculations for a whole portfolio may turn out to be prohibitively expensive.
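For context, a minimal Python sketch of simulating Merton jump-diffusion price paths (the parameter names and values are illustrative assumptions, not anything from the thread):

import numpy as np

def merton_jump_paths(s0=1.0, mu=0.05, sigma=0.2, lam=0.5,
                      jump_mu=-0.1, jump_sigma=0.15,
                      T=1.0, steps=252, n_paths=1000, seed=0):
    # Merton jump-diffusion: geometric Brownian motion plus a compound
    # Poisson process of lognormal jumps, simulated on a fixed time grid.
    rng = np.random.default_rng(seed)
    dt = T / steps
    # Jump compensator so that mu stays the expected drift of the price
    k = np.exp(jump_mu + 0.5 * jump_sigma ** 2) - 1.0
    # Diffusion part of the log-returns
    diffusion = ((mu - lam * k - 0.5 * sigma ** 2) * dt
                 + sigma * np.sqrt(dt) * rng.standard_normal((n_paths, steps)))
    # Jump part: Poisson number of jumps per step, normal log-jump sizes
    n_jumps = rng.poisson(lam * dt, size=(n_paths, steps))
    jumps = (jump_mu * n_jumps
             + jump_sigma * np.sqrt(n_jumps) * rng.standard_normal((n_paths, steps)))
    log_paths = np.cumsum(diffusion + jumps, axis=1)
    return s0 * np.exp(np.hstack([np.zeros((n_paths, 1)), log_paths]))

paths = merton_jump_paths()
print(paths.shape)  # (1000, 253): 1000 paths, 252 steps plus the starting point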
A random forest is a fit to history, with no room for adjustment. I squeezed every option out of random forests a year ago.
Linear regression has a much better chance of producing a profit. When training, you shouldn't feed it raw prices; feed it relative prices (returns).
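As an illustration of "relative prices", a minimal sketch of training a linear regression on lagged returns instead of raw price levels (the synthetic data and the lag count of 5 are assumptions made just for the example):

import numpy as np
from sklearn.linear_model import LinearRegression

# Placeholder price series: a lognormal random walk instead of real quotes
rng = np.random.default_rng(0)
prices = 100 * np.exp(np.cumsum(0.001 * rng.standard_normal(1000)))

# Relative prices: simple returns rather than absolute price levels
returns = prices[1:] / prices[:-1] - 1.0

# Lagged feature matrix: predict the next return from the previous 5 returns
n_lags = 5
X = np.column_stack([returns[i:len(returns) - n_lags + i] for i in range(n_lags)])
y = returns[n_lags:]

model = LinearRegression().fit(X, y)
print(model.score(X, y))  # in-sample R^2; close to zero on a pure random walk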
Pytorch = TensorFlow 1.x + Keras + Numpy = TensorFlow 2.0
What network configuration do you like?
The constructor is cool!
For example, many people mindlessly use "activation functions" even when they are not needed. An "activation function" = converting data into a certain range of values with partial or complete loss of information; it's like a hash function for a file.
If the input is already normalized data, the "activation functions" between layers aren't needed at all. In Alglib you can't get rid of the "activation function".
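To illustrate the "compression into a range" point, a tiny sketch with purely illustrative numbers:

import torch

x = torch.tensor([0.5, 2.0, 10.0, 100.0])
print(torch.tanh(x))  # ~[0.4622, 0.9640, 1.0000, 1.0000]: large inputs become indistinguishable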
I have a whole change-control system in Jenkins + MLflow for enumerating variants and storing results.
Right now the configuration is like this:
Of course, I didn't immediately figure out how to train the network on the GPU given the data-transfer latency. Now my code is optimized and trains 100 times faster than the original version by reducing the number of data uploads to the GPU.
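A minimal PyTorch sketch of that idea: upload the whole training set to the GPU once instead of copying every batch from host memory inside the loop (the tensor sizes and model here are assumptions made just for the example):

import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Illustrative data and model
X = torch.randn(100_000, 32)
y = torch.randn(100_000, 1)
model = torch.nn.Linear(32, 1).to(device)
opt = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = torch.nn.MSELoss()

# One host-to-GPU transfer up front, instead of one per batch
X_gpu, y_gpu = X.to(device), y.to(device)

for epoch in range(10):
    for i in range(0, len(X_gpu), 1024):
        xb, yb = X_gpu[i:i + 1024], y_gpu[i:i + 1024]  # slicing stays on the GPU
        opt.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        opt.step()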
But what about a recurrent layer? LSTM or GRU?
Does your NN predict successfully on the forward period? If so, it would be interesting to see the signal, or at least tester results with a forward test.
Maybe I will, but right now I want to fully test my version. I only need to add one line of code to change the network structure. We're not translating text here, we're recognizing an event in the price history.
https://pytorch.org/docs/stable/nn.html - pick any layer you like and add it in one line.
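For instance, in a torch.nn model definition a layer really is one line; the sizes and the added layer below are just an example:

import torch.nn as nn

model = nn.Sequential(
    nn.Linear(64, 128),
    nn.ReLU(),
    nn.Linear(128, 128),  # adding or removing a layer is literally one line here
    nn.ReLU(),
    nn.Linear(128, 1),
)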