Machine learning in trading: theory, models, practice and algo-trading - page 3164
I got teary-eyed, I was laughing so hard )
I asked Bard to write in Russian and he made a mistake; it happens, Russian is not my native language, I basically only use it here...
and he answered me )
Do you understand?
He started trolling me )))
That's just brutal ))))
He's not trolling you.
You wrote "ruski" - that's Serbian for "Russian".
That's why he's writing to you in Serbian.
Ahh))))
An interesting article about trees with reinforcement learning inside them (Reinforcement Learning Trees, RLT).
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4760114/
I tested the algorithm on market data...
The algorithm runs stably on new data compared to Forest...
it doesn't overfit: on all validation samples the result is either better than on the test sample or much better; I have not seen worse...
Accuracy is on average 2-4% better than Forest's. If Forest gets 0.58, RLT gets ~0.62.
Anyway, judging by the first tests the algorithm is worthwhile, but it takes a long time to train...
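The protocol described above (train once, then score successive chunks of new data) can be sketched like this. sklearn's RandomForestClassifier stands in for "Forest", and the data is synthetic; the RLT implementation being compared against is the poster's own and isn't shown:

```python
# Sketch of the train-once / score-many-validation-chunks check described above.
# Synthetic data; sklearn's RandomForestClassifier plays the role of "Forest".
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 10))
# a weak signal on two of the ten features, plus noise
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=5000) > 0).astype(int)

# train on the first 1000 observations only
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X[:1000], y[:1000])

# then score each later chunk of "new" observations separately
for start in range(1000, 5000, 1000):
    chunk = slice(start, start + 1000)
    acc = accuracy_score(y[chunk], model.predict(X[chunk]))
    print(f"obs {start}-{start + 1000}: accuracy {acc:.3f}")
```

If the per-chunk accuracy stays roughly flat instead of decaying, that is the "stable on new data" behaviour the post is claiming.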
Homemade. The possibilities for experimentation are unlimited....
Yeah, there's no point in discussing homemade ones.
Why waste time on homemade ones? There are dozens of non-homemade ones, with algorithms that millions of users actually rely on in practice...
According to their theory, there are supposed to be some "strong" features that work well, and the problem is to separate them from the rest of the "weak" ones. In their field, genetics, that's probably the case. But our situation is obviously different: features are roughly equal in strength, often collinear, and their strength ranking can change over time.
In general, if it were only a matter of selecting informative features, San Sanych, with his secret method, would have become a trillionaire long ago ).
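The "roughly equal, collinear features" point is easy to demonstrate: give a random forest two noisy copies of the same signal and it splits the importance between them, so neither copy looks "strong" on its own. A minimal sketch with synthetic data (all features and numbers invented for illustration):

```python
# Two collinear copies of one signal share the forest's importance between
# them, while a pure-noise feature gets almost none. Synthetic data only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
signal = rng.normal(size=3000)
X = np.column_stack([
    signal + 0.1 * rng.normal(size=3000),  # feature A: noisy copy of signal
    signal + 0.1 * rng.normal(size=3000),  # feature B: another noisy copy
    rng.normal(size=3000),                 # feature C: pure noise
])
y = (signal > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
imp = model.feature_importances_
print(f"A={imp[0]:.2f}  B={imp[1]:.2f}  C={imp[2]:.2f}")
```

A and B end up with similar mid-sized importances rather than one of them dominating, which is exactly why a "find the strong feature" selection strategy struggles here.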
Well, the algorithm really does work: it is more stable, accuracy is better, kappa is better... in other words, it works better...
and it works both after 1,000 new observations and after 20,000... and the error is either the same or better.
the features are about the same in strength
Well, here I can't agree, going by the feature importances this algorithm produces.
I don't discuss packages; I'm only suggesting we discuss ideas.
Because with a homemade one I can experiment and do things that aren't included in those algorithms - black boxes.
Lately I've seen quite a few articles with "homemade" trees. As a rule, they don't write the trees from scratch but build on the rpart package from R, so one doesn't preclude the other (homemade on top of packages).
I don't think there's a Python analogue of that package, so saving a homemade tree to ONNX will surely be problematic.
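ONNX converters exist for the mainstream libraries (e.g. skl2onnx for scikit-learn models), but a hand-rolled tree indeed has none out of the box. One workaround is to serialize the tree yourself. A toy sketch, stdlib only, with an invented two-split tree:

```python
# A homemade decision tree as nested dicts: no ONNX converter needed,
# plain JSON round-trips it. The tree structure here is invented.
import json

tree = {
    "feature": 0, "threshold": 0.5,
    "left":  {"leaf": 1},  # x[0] <= 0.5 -> class 1
    "right": {"feature": 1, "threshold": -0.2,
              "left": {"leaf": 0}, "right": {"leaf": 1}},
}

def predict(node, x):
    # walk down until a leaf node is reached
    while "leaf" not in node:
        node = node["left"] if x[node["feature"]] <= node["threshold"] else node["right"]
    return node["leaf"]

blob = json.dumps(tree)        # portable text representation
restored = json.loads(blob)    # ...and back
print(predict(restored, [0.3, 0.0]))   # -> 1
print(predict(restored, [0.9, -0.5]))  # -> 0
```

That doesn't give you MetaTrader's ONNX integration, of course; the evaluator itself would have to be reimplemented on the consuming side.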