Machine learning in trading: theory, models, practice and algo-trading - page 3164

 
mytarmailS #:

I got teary-eyed, I was laughing so hard.)

I asked Bard to write in Russian; I wrote it with a mistake, it happens. Russian isn't my native language, essentially I only use it here...

and he answered me.)


Do you understand?

He started trolling me )))

That's just brutal))))

He's not trolling you.

You wrote "ruski" - that's "Russian" in Serbian.

That's why he's answering you in Serbian.

 
Dmytryi Nazarchuk #:

He's not trolling you.

You wrote "ruski" - that's "Russian" in Serbian.

That's why he's answering you in Serbian.

Ahh))))

 
mytarmailS #:

An interesting article about trees and reinforcement learning in them ("Reinforcement Learning Trees", RLT):

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4760114/

I tested the algorithm on market data...

The algorithm is stable on new data compared to Forest...

The algorithm doesn't overfit: on all validation samples the result is either better than on the test sample or much better; I haven't seen it worse...

Accuracy is on average 2-4% better than Forest's. If Forest gets 0.58, RLT gets ~0.62.


Anyway, judging by the first tests the algorithm is worthwhile, but it takes a long time to train...
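A minimal sketch in R of how such a comparison can be set up, assuming the CRAN RLT package that accompanies the paper plus randomForest; the data, the split, and the RLT argument and output names are assumptions to check against the package documentation, not the actual script from the test above:

# Sketch: compare RLT (reinforcement learning trees) with a plain random
# forest on a train/validation split. x and y are stand-ins for real
# market features and labels.
library(RLT)            # assumed: CRAN package by the paper's author
library(randomForest)

set.seed(1)
n  <- 2000
x  <- data.frame(matrix(rnorm(n * 20), n, 20))
y  <- factor(ifelse(x[, 1] + rnorm(n) > 0, "up", "down"))
tr <- 1:1500            # earlier part = train, the rest = validation

rf  <- randomForest(x[tr, ], y[tr], ntree = 500)
rlt <- RLT(x[tr, ], y[tr], model = "classification",
           ntrees = 500, reinforcement = TRUE)  # argument names assumed per the RLT docs

mean(predict(rf, x[-tr, ]) == y[-tr])             # Forest accuracy
mean(predict(rlt, x[-tr, ])$Prediction == y[-tr]) # RLT accuracy; result field may vary by version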

 
Forester #:
Homemade. The possibilities for experimentation are unlimited....

Yeah... there's no point in discussing homemade ones.

Why waste time on homemade ones? There are dozens of non-homemade ones with algorithms used in practice by millions of users...

 
mytarmailS #:

I tested the algorithm on market data...

The algorithm is stable on new data compared to Forest...

The algorithm doesn't overfit: on all validation samples the result is either better than on the test sample or much better; I haven't seen it worse...

Accuracy is on average 2-4% better than Forest's, so if Forest gets 0.58, RLT gets ~0.62.


Anyway, judging by the first tests the algorithm is worthwhile, but it takes a long time to train...

Their theory assumes there are some "strong" features that work well, and the only problem is to separate them from the remaining "weak" ones. In their field, genetics, that's probably the case. But our situation is clearly different: features are roughly equal in strength, often collinear, and their strength ranking can change over time.

In general, if it were only a matter of selecting informative features, San Sanych with his secret method would have become a trillionaire long ago).
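The instability point is easy to illustrate with a minimal synthetic sketch in R (everything here is made up, no real market data): three collinear features of roughly equal strength get their random-forest importance ranking shuffled between two "time windows".

# Sketch: collinear, roughly equal-strength features produce unstable
# importance rankings across time windows.
library(randomForest)

set.seed(2)
n <- 1000
z <- rnorm(n)                                  # the one real signal
x <- data.frame(f1 = z + rnorm(n, sd = 0.5),   # three collinear proxies of it
                f2 = z + rnorm(n, sd = 0.5),
                f3 = z + rnorm(n, sd = 0.5))
y <- factor(z + rnorm(n) > 0)

w1 <- 1:500; w2 <- 501:1000                    # two "time windows"
imp1 <- importance(randomForest(x[w1, ], y[w1]))
imp2 <- importance(randomForest(x[w2, ], y[w2]))
cbind(window1 = imp1[, 1], window2 = imp2[, 1]) # the ranking typically flips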

 
Aleksey Nikolayev #:

Their theory assumes there are some "strong" features that work well, and the only problem is to separate them from the remaining "weak" ones. In their field, genetics, that's probably the case. But our situation is clearly different: features are roughly equal in strength, often collinear, and their strength ranking can change over time.

In general, if it were only a matter of selecting informative features, San Sanych with his secret method would have become a trillionaire long ago).

Well, the algorithm really does work: it's more stable, accuracy is better, kappa is better... in other words, it works better...

and it holds up both after 1,000 new observations and after 20,000... the error is either the same or better.
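That kind of check can be sketched in R like this (a minimal stand-in: a fixed model evaluated on growing out-of-sample horizons; the data and model here are placeholders, not the actual setup):

# Sketch: train once, then measure accuracy on successively larger
# batches of "new" observations without retraining.
library(randomForest)

set.seed(3)
n <- 25000
x <- data.frame(matrix(rnorm(n * 10), n, 10))
y <- factor(x[, 1] + rnorm(n) > 0)

fit <- randomForest(x[1:5000, ], y[1:5000], ntree = 300)

for (h in c(1000, 5000, 20000)) {              # 1k ... 20k new observations
  idx <- 5000 + seq_len(h)
  cat(h, "new obs, accuracy:",
      mean(predict(fit, x[idx, ]) == y[idx]), "\n")
}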

Aleksey Nikolayev #:

features are roughly equal in strength

Well, here I can't agree.

Here is the feature importance from this algorithm:

[attached importance plot]
 
СанСаныч Фоменко #:

Yeah... there's no point in discussing homemade ones.

Why waste time on homemade ones? There are dozens of non-homemade ones with algorithms used in practice by millions of users...

Because I can experiment and do things that aren't built into those algorithms, which are black boxes.
I'm not discussing packages; I'm proposing to discuss only ideas.
 
Forester #:
Because I can experiment and do things that aren't built into those algorithms, which are black boxes.
I'm not discussing packages; I'm proposing to discuss only ideas.
And how many of you have implemented the ideas discussed here?
And how many got better results than with a ready-made library?

The number is about zero, isn't it?

A library, on the other hand, is reproducible code: anyone can run it and get the same, real result in the form of a yes/no answer, plus added experience and knowledge.

And discussions are just a waste of time: we discussed and discussed, then forgot; nobody even wrote a line of code, just like grandmothers on a bench, and the time is gone, and life with it... And no knowledge or experience was gained.

 
It feels like knowledge is diminishing; blame it on package thinking.
 
СанСаныч Фоменко #:

Yeah... there's no point in discussing homemade ones.

Why waste time on homemade ones? There are dozens of non-homemade ones with algorithms used in practice by millions of users...

Lately I've seen quite a few articles with "homemade" trees. As a rule, the trees aren't written from scratch but built on the rpart package in R. So the one doesn't preclude the other (homemade on top of packages).

There is no analogue of that package in Python, I think, so saving such a homemade tree to ONNX will surely be problematic.
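For reference, this is roughly what a "homemade" tree on top of rpart looks like: the custom part is usually the data preparation and the pruning rule, while the growing itself is delegated to rpart. A minimal sketch with synthetic stand-in data:

# Sketch: grow a full tree with rpart, then apply a custom pruning rule
# (here: pick the cp that minimizes cross-validated error).
library(rpart)

set.seed(4)
n <- 1000
d <- data.frame(f1 = rnorm(n), f2 = rnorm(n))
d$y <- factor(d$f1 + 0.5 * d$f2 + rnorm(n) > 0)

tree <- rpart(y ~ ., data = d, method = "class",
              control = rpart.control(cp = 0, minsplit = 20))

best.cp <- tree$cptable[which.min(tree$cptable[, "xerror"]), "CP"]
pruned  <- prune(tree, cp = best.cp)
printcp(pruned)          # cross-validation table of the pruned tree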
