Machine learning in trading: theory, models, practice and algo-trading - page 2365

 
Aleksey Nikolayev:

Then it's time to switch to R)

the language is too nauseating, like sour saury.

then julia, if you want it faster than python

 
Aleksey Vyazmikin:

Whatever you feed in, that's what comes out.

Whatever you feed in will be mush, always!!!

The rules that appeared during training on matrix X will never work in the future because of the non-stationarity of the market.

The rules are tied to the indices of the matrix columns, and the indices "float" constantly because of non-stationarity...

The repeatability of the rules will be about zero all the time...

How else can I explain it? I've already said it in words and pictures, and it all went right past...

Aleksey Vyazmikin:

Okay, I get it. You don't need help.

Help with what?

 
Maxim Dmitrievsky:

The language is too nauseating, like sour saury.

Are you sure it's the language? ))

 
Maxim Dmitrievsky:

the language is too nauseating, like sour saury.

then julia, if you want it faster than python.

Some things that I now like very much seemed disgusting at first - coffee, caviar, wasabi, rock music, etc.) I can't speak for saury, but people do eat surströmming.)

My personal choice is C and the C interpreter from CERN's ROOT, but I had to switch to R because a lot of the mathematical statistics stuff exists only there.

The important thing is that R packages are mostly written by mathematicians rather than programmers, unlike in Python or in our MQL5 - that makes them much more sensible.)

 
Aleksey Nikolayev:

Some things that I now like very much seemed disgusting at first - coffee, caviar, wasabi, rock music, etc.) I can't speak for saury, but people do eat surströmming.)

My personal choice is C and the C interpreter from CERN's ROOT, but I had to switch to R because a lot of the mathematical statistics stuff exists only there.

The important thing is that R packages are mostly written by mathematicians rather than programmers, unlike in Python or in our MQL5 - that makes them much more sensible.)

Probably, but I'm not a mathematician, thank God.

 
mytarmailS:

No matter what you feed in, it will be mush, always!!!

The rules that appeared during training on matrix X will never work in the future because of the non-stationarity of the market.

The rules are tied to the indices of the matrix columns, and the indices "float" constantly because of non-stationarity...

The repeatability of the rules will be about zero all the time...

How else can I explain it? With words and pictures...

So the predictors need to be checked for stability and comparability across different time intervals.
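One way to sketch such a stability check - my own illustrative example, not anyone's actual pipeline - is a two-sample Kolmogorov-Smirnov comparison of a predictor's distribution in two time windows (hand-rolled here to stay dependency-light; scipy.stats.ks_2samp does the same and adds a p-value):

```python
import numpy as np

def ks_statistic(a, b):
    """Max distance between the empirical CDFs of two samples."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return np.max(np.abs(cdf_a - cdf_b))

rng = np.random.default_rng(0)
# toy predictor: same distribution in both windows vs. a drifted second window
stable = ks_statistic(rng.normal(0, 1, 1000), rng.normal(0, 1, 1000))
drifted = ks_statistic(rng.normal(0, 1, 1000), rng.normal(0.5, 1, 1000))
print(f"stable windows: {stable:.3f}, drifted windows: {drifted:.3f}")
```

A predictor whose statistic stays small between adjacent intervals is at least comparable over time; a large value flags exactly the "floating" behavior complained about above.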

mytarmailS:

Help with what?

Computational resources.

 
Maxim Dmitrievsky:

I already wrote how to reduce the serial correlation in a sliding window almost to zero when preparing the data.

Remind me, how? MGC?

Or just throw out the correlated columns and keep one of them?
 
elibrarius:

Remind me, how? MGC?

Or just throw out the correlated columns and keep one of them?

I looked through mgc to see if there is serial correlation;

if there is, then remove the runs of correlated samples, and/or run it through a GMM, which automatically makes the distribution more normal.

It's not about the correlation between features, it's about the correlation of samples with themselves over time - that's why it's called serial.

Some local specialists are afraid of it and reject features in sliding windows; they just don't know how to clean a dataset)
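A rough sketch of the row-thinning part of this idea (the GMM step is omitted; the distance rule and threshold are my own illustrative choices, not the actual procedure described above): drop a sample whenever a sliding-window feature has barely moved since the last kept sample, and watch the lag-1 serial correlation fall.

```python
import numpy as np

def lag1_autocorr(x):
    """Lag-1 serial correlation: Pearson correlation of x[t] with x[t-1]."""
    x = np.asarray(x, dtype=float)
    return np.corrcoef(x[:-1], x[1:])[0, 1]

def thin_correlated_rows(x, min_change):
    """Keep a sample only once the feature has moved at least `min_change`
    away from the last kept sample (drops runs of near-duplicate rows)."""
    kept = [0]
    for i in range(1, len(x)):
        if abs(x[i] - x[kept[-1]]) >= min_change:
            kept.append(i)
    return np.array(kept)

rng = np.random.default_rng(1)
returns = rng.normal(0.0, 1.0, 2000)
# sliding-window feature: 20-bar moving average of returns -- overlapping
# windows make consecutive rows highly serially correlated by construction
feat = np.convolve(returns, np.ones(20) / 20, mode="valid")

before = lag1_autocorr(feat)
idx = thin_correlated_rows(feat, min_change=0.3)
after = lag1_autocorr(feat[idx])
print(f"lag-1 autocorr before={before:.2f}, after={after:.2f}, "
      f"kept {len(idx)}/{len(feat)} rows")
```

The point of the toy: the serial correlation comes from overlapping windows, not from the features themselves, so thinning the rows attacks it directly.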

 

after such a decorrelation the models work over the entire depth of history (without spread), but they don't work with the spread

why, and where the error is - my idea hasn't moved past that point

and no one has given me a hint
 
Maxim Dmitrievsky:

I looked through mgc to see if there is serial correlation;

if there is, then remove the runs of correlated samples, and/or run it through a GMM, which automatically makes the distribution more normal.

It's not about the correlation between features, it's about the correlation of samples with themselves over time - that's why it's called serial.

Some local specialists are afraid of it and reject features in sliding windows; they just don't know how to clean a dataset)

I see - it's a kind of time compression, throwing out rows where almost nothing happened. That probably makes sense. But there can't be very many such rows - 5%? And mostly at night?