Machine learning in trading: theory, models, practice and algo-trading - page 2124
How Ivakhnenko recommends splitting the data so that the model trains properly:
http://www.ievbras.ru/ecostat/Kiril/Library/Book1/Content393/Content393.htm
This is not enough, because later you may get new data from a completely different distribution.
I used to do things like that in my old bots long ago.
I agree...
By the way, it's a cool book, pure Soviet school, and everything is there: trees, networks, rsa, modelling, hypotheses, and all in clear Russian.
Topical reading ))))
Ibid, beginning of the opus:
http://www.ievbras.ru/ecostat/Kiril/Library/Book1/Content0/Content0.htm#Ref
An analysis by V.P. Leonov of a variety of memes similar in construction [URL: http://www.biometrica.tomsk.ru/lis.htm — URL stands for Uniform Resource Locator; this is how references given as Internet addresses without a year of publication will be marked in the reference list] confirms the ideas expressed by V.V. Nalimov [1989] about the probabilistic distribution of meanings. The following traditional transformations of memes in the scientific environment can be distinguished:
are you really reading this?
Awesome book; for beginners in ML it's the best.
http://gmdh.net/articles/theory/bookInductModel.pdf
A big plus is that linear models always converge: the squared-error loss is convex, so any local minimum is also the global one. That's why the method is still relevant.
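The convergence point above can be illustrated with a small sketch (synthetic data, not from the thread): for a linear model with squared loss, the Hessian is positive semidefinite, so gradient descent from any starting point reaches the same optimum as the closed-form solution.

```python
import numpy as np

# Hypothetical toy data to illustrate convexity of linear least squares.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=100)

# The Hessian of 0.5*||Xw - y||^2 is X^T X, which is positive
# semidefinite for any X -- that is what guarantees convexity.
H = X.T @ X
eigvals = np.linalg.eigvalsh(H)
assert eigvals.min() >= -1e-9  # no negative curvature anywhere

# Gradient descent from an arbitrary start converges to the same
# weights as the closed-form least-squares solution.
w_closed, *_ = np.linalg.lstsq(X, y, rcond=None)
w = np.zeros(3)
for _ in range(5000):
    w -= 0.01 * X.T @ (X @ w - y) / len(y)

print(np.allclose(w, w_closed, atol=1e-4))
```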
what's wrong?
It's a great book; for beginners in ML it's perfect.
That splitting scheme will not work for time series. It's analogous to mixing the train set with the test set: adjacent points let the model peek at the test.
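The peeking effect is easy to demonstrate with a synthetic sketch (hypothetical data, not from the thread): on an autocorrelated series, a shuffled split leaves each test point a train neighbour one step away in time, so even a trivial copy-the-nearest-neighbour "model" looks far better than it does on an honest chronological split.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
y = np.cumsum(rng.normal(size=n))  # random walk: heavily autocorrelated

def nn_error(train_idx, test_idx):
    """Predict each test point with the temporally nearest train point."""
    err = []
    for i in test_idx:
        j = train_idx[np.argmin(np.abs(train_idx - i))]
        err.append(abs(y[i] - y[j]))
    return float(np.mean(err))

# Shuffled (random) split: test points sit right next to train points.
idx = rng.permutation(n)
shuffled_err = nn_error(np.sort(idx[:1600]), np.sort(idx[1600:]))

# Chronological split: the test period is strictly after the train period.
chrono_err = nn_error(np.arange(1600), np.arange(1600, n))

# The shuffled split reports a much smaller error; the gap is the leak.
print(shuffled_err < chrono_err)
```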
If you remove the autocorrelation of the features, it's fine.
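A quick way to check that point (synthetic series, lag-1 autocorrelation as the diagnostic — both are assumptions for illustration): features built on raw prices are heavily autocorrelated, while differencing to returns largely removes it.

```python
import numpy as np

rng = np.random.default_rng(2)
price = np.cumsum(rng.normal(size=5000)) + 100.0  # synthetic "price" walk

def lag1_autocorr(a):
    """Lag-1 autocorrelation of a 1-D series."""
    a = a - a.mean()
    return float(np.dot(a[:-1], a[1:]) / np.dot(a, a))

raw = lag1_autocorr(price)            # close to 1.0 for the raw series
diffed = lag1_autocorr(np.diff(price))  # near 0 after differencing
print(raw, diffed)
```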
Yes, I understand, but the idea itself is good: split not merely by statistical properties, but evenly across the test and the train.
If all the points from both the test and the train are ranked in one common list (rearranged according to some pattern), that means they are mixed. That's my understanding: the test must not be mixed with the train in any way.
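One common way to enforce "the test must not mix with the train" on a time series is a walk-forward split with a gap (embargo) between the two sets, so no test point sits next to a train point. A minimal sketch, assuming index order equals time order; the function name and the gap length are illustrative choices, not from the thread.

```python
import numpy as np

def walk_forward_splits(n, n_splits=5, gap=10):
    """Yield (train_idx, test_idx) pairs where the train period lies
    strictly before the test period, separated by `gap` points that
    belong to neither set."""
    fold = n // (n_splits + 1)
    for k in range(1, n_splits + 1):
        train_end = k * fold
        test_start = train_end + gap
        test_end = min(test_start + fold, n)
        yield np.arange(train_end), np.arange(test_start, test_end)

splits = list(walk_forward_splits(100, n_splits=3, gap=5))
for train_idx, test_idx in splits:
    # The embargo guarantees no train point is adjacent to a test point.
    assert train_idx.max() + 5 < test_idx.min()
    print(len(train_idx), int(test_idx.min()), int(test_idx.max()))
```

Each successive fold extends the training window forward, which matches how the model would actually be retrained and evaluated in live trading.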