Machine learning in trading: theory, models, practice and algo-trading - page 1453
The faces seem to be the same, and the questions and answers are the same too... It's like Groundhog Day or something )))
ZZ (ZigZag) is at 80% again...
An unsophisticated person would suspect some kind of conspiracy, or a problem with their own head.
It seems this is already the 3rd or even 4th "wave" about the same thing; I can't say exactly, because I haven't read more than 2/3 of the posts.
WHAT'S GOING ON, GUYS?
If we're talking about the English-language literature, I don't know what you nerds are doing here, but scientists are still studying fractional Brownian motion to model volatility. There are no more accurate methods for describing market movements yet. That is, starting with Black and Scholes and moving on to newer research.
https://tpq.io/p/rough_volatility_with_python.html
https://www.quantstart.com/articles/derivatives-pricing-ii-volatility-is-rough#ref-gatheral
So far all I see from you is a discussion of candle-colour predictions, zigzags and other kindergarten nonsense.
There's a yuima package for R that has all this stuff: fractional Brownian motion, Lévy flights, etc. There is a book about it, which may be useful if only for the bibliography.
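For anyone who wants to poke at this without R/yuima, here is a minimal numpy sketch of fractional Brownian motion simulated via the Cholesky factor of its covariance; the grid size, Hurst exponent and seed are purely illustrative choices, not anything taken from the links above.

```python
import numpy as np

def fbm_path(n=500, hurst=0.1, T=1.0, seed=0):
    """One path of fractional Brownian motion B_H on [0, T], simulated exactly
    (at O(n^3) cost) from the Cholesky factor of its covariance matrix
    cov(B_H(t), B_H(s)) = 0.5 * (t**(2H) + s**(2H) - |t - s|**(2H))."""
    t = np.linspace(T / n, T, n)          # strictly positive grid; B_H(0) = 0 is appended below
    tt, ss = np.meshgrid(t, t)
    cov = 0.5 * (tt**(2 * hurst) + ss**(2 * hurst) - np.abs(tt - ss)**(2 * hurst))
    L = np.linalg.cholesky(cov)
    rng = np.random.default_rng(seed)
    return np.concatenate(([0.0], L @ rng.standard_normal(n)))

path = fbm_path(n=500, hurst=0.1)         # H around 0.1 is the "rough volatility" regime
```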
haha
I mean, it's not fantastic for ZZ, I don't argue: there it's easy to get 95%, but it's useless. What would be fantastic is 65% quality at predicting purely future price changes, without the past ones on which ASR directly depends.
But if the price is clearly over 55%, then I guess I screwed up, because I can't predict much more than 50%; yet I have ZZ, where the price is just as "cool". What does that mean? That it's possible to trade on SB (a random walk)?
1. If it's easy, give a concrete example with numbers, if you know how.
2. No need to advise ("take this", "look at that"); do it yourself and prove your statement with concrete examples. And as for the reference to the "big brothers"... You could have written more simply: "A man told me so."
There are too many clever chatterboxes around.
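Since a concrete example with numbers was requested, here is a toy sketch on simulated data only, illustrating the SB (random walk) point: even a simple "repeat the last sign" rule guesses the direction of the next increment at about 50%, and on a genuine random walk no classifier can do systematically better.

```python
import numpy as np

rng = np.random.default_rng(42)
steps = rng.choice([-1, 1], size=100_000)      # increments of a symmetric random walk (SB)

# Naive rule: predict that the next step repeats the sign of the previous one
hits = (steps[1:] == steps[:-1]).mean()
print(f"directional accuracy on a random walk: {hits:.3f}")   # ~0.500, as expected
```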
Hello Vladimir!
How are you doing with the script I sent you, have you tried experimenting with it? Maybe you've developed the idea and this approach to regression further?
In private.
Try 2 layers and reduce the number of neurons in the layers, down to 1 in each layer.
Before the white vertical line is the sample, after it is OOS.
The more neurons, the higher the chance of overfitting (more degrees of freedom); keep reducing the number of neurons for as long as the network can still produce at least somewhat sane results.
That is, the cleaner the information at the inputs and the coarser the net, the better.
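The thread doesn't say which network library is in use, so purely as a sketch: with scikit-learn's MLPRegressor, a deliberately tiny 2-layer net and a chronological sample/OOS split might look like this (the lagged-return features, layer sizes and data are all made up for illustration).

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Illustrative data: white-noise "returns", 5 lags as inputs, next return as target
rng = np.random.default_rng(0)
returns = rng.standard_normal(2_000) * 0.001
X = np.column_stack([returns[i:i - 5] for i in range(5)])   # lags t-5 .. t-1
y = returns[5:]                                             # return at t

# Chronological split: first part = sample, last part = OOS (no shuffling)
X_tr, X_oos, y_tr, y_oos = train_test_split(X, y, test_size=0.3, shuffle=False)

# Deliberately small net: 2 hidden layers with a handful of neurons each,
# i.e. few degrees of freedom and therefore less room to overfit
model = MLPRegressor(hidden_layer_sizes=(3, 3), max_iter=5_000, random_state=0)
model.fit(X_tr, y_tr)
print("in-sample R^2:     ", model.score(X_tr, y_tr))
print("out-of-sample R^2: ", model.score(X_oos, y_oos))
```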
Man, are you right.
Have the results improved?
Thanks for the yuima pointer, there are some models at the end I've never even heard of, I'll read it.
There's a lot of stuff in there. For example, compound Poisson processes, the very thing Alexander from the TP thread keeps trying to invent and never quite does )
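For reference, a compound Poisson process is just a Poisson-distributed number of jumps with i.i.d. sizes accumulated over time; a minimal numpy sketch, with the intensity and jump distribution chosen arbitrarily:

```python
import numpy as np

def compound_poisson(T=1.0, lam=50.0, jump_scale=0.01, n_grid=1_000, seed=1):
    """X(t) = sum of the N(t) jumps that occurred by time t, where N is a Poisson
    process with intensity lam and jump sizes are i.i.d. Normal(0, jump_scale)."""
    rng = np.random.default_rng(seed)
    n_jumps = rng.poisson(lam * T)                      # total number of jumps on [0, T]
    jump_times = np.sort(rng.uniform(0.0, T, n_jumps))  # given N(T), times are uniform on [0, T]
    jump_sizes = rng.normal(0.0, jump_scale, n_jumps)
    grid = np.linspace(0.0, T, n_grid)
    X = np.array([jump_sizes[jump_times <= t].sum() for t in grid])
    return grid, X

grid, X = compound_poisson()
```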
Have the results improved?
The scatter (sawtooth amplitude) of the balance has increased slightly and the frequency of trades has dropped, but on the forward period it stays stable for quite a long time. I've tried 20, 50 and 1000 neurons in 2 layers: it immediately goes downhill, or turns into some kind of chaos, even though on the training period it lines up evenly. I also tried 30 layers of 10 neurons, same thing. I put 3 neurons in 2 layers and it's stable ))))
Put it on a real account, I'll check it out.
I'm tired of wiping away the drool, it's too beautiful.
What time period does this cover? And what's the secret technology?