Machine learning in trading: theory, models, practice and algo-trading - page 1377

 
Grail:

Absolutely right, the more data the better (less than 100k rows is noise), but you also need to consider that the properties of the market change over time, and how to take that into account in training is a big secret.

I've tried evenly decreasing the weight of the rows, but I haven't noticed any improvement. What other options are there?
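The evenly decreasing row weights mentioned above could be sketched roughly like this (a minimal illustration on synthetic data, assuming a generic scikit-learn classifier; the linear 0.1-to-1.0 ramp is my assumption, not the poster's exact scheme):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1_000
X = rng.normal(size=(n, 5))                         # synthetic features
y = (X[:, 0] + rng.normal(size=n) > 0).astype(int)  # synthetic labels

# Linearly decreasing importance for older rows:
# oldest row gets weight 0.1, newest row gets weight 1.0
w = np.linspace(0.1, 1.0, n)

clf = LogisticRegression()
clf.fit(X, y, sample_weight=w)
print("in-sample accuracy:", clf.score(X, y))
```

Most scikit-learn estimators accept `sample_weight` in `fit`, so the same idea carries over to boosting or forest models unchanged.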

 
elibrarius:

I've tried evenly decreasing the weight of the rows, but I haven't noticed any improvement. What other options are there?

Something must have been done wrong; it should work better, although you do have to reconfigure the classifier, since the optimum without time weighting is different.

You can also try dividing the training set into, say, 10 fragments, training each with settings that are more or less optimal on average for all of them, evaluating on a 10% test block (close to the end, i.e. the present), and then using the classification quality, scaled into the range 0.1–1, as the weights for the fragments. You can also use a sliding window with some step, of course, to get smoother weights. By the way, the dynamics of these weights is itself a very important feature, related to changes in market regimes.
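The fragment idea above can be sketched as follows (a sketch on synthetic data; the classifier choice, the fragment count, and the exact 0.1–1 rescaling formula are all illustrative assumptions):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 5_000
X = rng.normal(size=(n, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)

# Hold out the last 10% as the common test block ("the present")
split = int(n * 0.9)
X_train, y_train = X[:split], y[:split]
X_test, y_test = X[split:], y[split:]

# Split the training part into 10 chronological fragments and
# score a model trained on each fragment against the common test block
accs = []
for X_frag, y_frag in zip(np.array_split(X_train, 10),
                          np.array_split(y_train, 10)):
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(X_frag, y_frag)
    accs.append(clf.score(X_test, y_test))
accs = np.array(accs)

# Rescale the per-fragment accuracies into 0.1..1 to use as weights
span = accs.max() - accs.min() or 1.0
weights = 0.1 + 0.9 * (accs - accs.min()) / span
print(weights.round(2))
```

A sliding-window variant would replace `np.array_split` with overlapping index ranges; the weight series over fragments is what the poster suggests watching as a regime-change feature.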

 
Grail:

Something must have been done wrong; it should work better, although you do have to reconfigure the classifier, since the optimum without time weighting is different.

You can also try dividing the training set into, say, 10 fragments, training each with settings that are more or less optimal on average for all of them, evaluating on a 10% test block (close to the end, i.e. the present), and then using the classification quality, scaled into the range 0.1–1, as the weights for the fragments. You can also use a sliding window with some step, of course, to get smoother weights. By the way, the dynamics of these weights is itself a very important feature, related to changes in market regimes.

I don't quite understand the idea. Is it what Vladimir advised, that is, after training on part of the data, assign weights to the test segment?
 

I don't quite understand the idea. Is it what Vladimir advised, that is, after training on part of the data, assign weights to the test segment?

That's about right, if I understand you correctly. Train on the first slice, get the accuracy and use it as a weight; the accuracies of all the slices are scaled into -1..1, and so on for every slice. You can use overlapping slices with a sliding window; that means more computation, but IMHO 10–20 slices are enough for a sample of 500k rows.


PS: the weight is assigned not to the test slice (there is only one, at the end, closest to the present) but to the training slice, the one on the left in the picture.


 
Grail:

That's about right, if I understand you correctly. Train on the first slice, get the accuracy and use it as a weight; the accuracies of all the slices are scaled into -1..1, and so on for every slice. You can use overlapping slices with a sliding window; that means more computation, but IMHO 10–20 slices are enough for a sample of 500k rows.


PS: the weight is assigned not to the test slice (there is only one, at the end, closest to the present) but to the training slice, the one on the left in the picture.


Ah, so it's the opposite of Vladimir's approach. He assigned weights to the rows in those test pieces and, by constantly shifting them, assigned weights to the whole sample; there each row ends up with its own weight.

Whereas in your case: train on a small piece of the training set (20–50k rows), check it against the test, and if the test result is better or worse than the average over all the pieces, change the weight of all the rows in that training piece accordingly. Here all the rows in a slice have the same weight.

Do I understand your idea correctly now?

 

Into the piggy bank, probably interesting... I haven't watched it yet, but I liked the lecture's description.

https://www.lektorium.tv/lecture/14232

Why trade in an efficient market? Models of efficiency and predictability
  • www.lektorium.tv
Why trade in an efficient market? Models of efficiency and predictability. Kirill Ilinski discusses the relationship between market efficiency and the predictability of price movements of financial instruments. The lecture covers the popular view that an efficient market contradicts technical analysis, as well as various approaches to describing...
 
Maxim Dmitrievsky:

Into the piggy bank, probably interesting... I haven't watched it yet, but I liked the lecture's description.

https://www.lektorium.tv/lecture/14232

I liked the lecturer, but not the lecture, though he presents well. I lasted 25 minutes :). It's aimed at a different audience. He will surely say something interesting later on, but two hours of watching...

 
Yuriy Asaulenko:

I liked the lecturer, but not the lecture, though he presents well. I lasted 25 minutes :). It's aimed at a different audience. He will surely say something interesting later on, but two hours of watching...

I haven't figured it out yet, either.

He gives a lot of lectures. If you watch from the first hour, you get a deep analysis of market structure and models.

Interesting in general. He's a quant from JP Morgan, or something like that.

 
elibrarius:

Ah, so it's the opposite of Vladimir's approach. He assigned weights to the rows in those test pieces and, by constantly shifting them, assigned weights to the whole sample; there each row ends up with its own weight.

Whereas in your case: train on a small piece of the training set (20–50k rows), check it against the test, and if the test result is better or worse than the average over all the pieces, change the weight of all the rows in that training piece accordingly. Here all the rows in a slice have the same weight.

Do I understand your idea correctly now?

Yes, within a slice the weight is the same. First, each slice is trained and tested on the single test block at the end, and the result is recorded for that training slice; then we take the average over all the slices and divide by the spread, and that gives the weights.
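The averaging step described here might look like this (a sketch: the accuracy values are made up, and taking the "spread" to be the standard deviation is my assumption, which conveniently produces the -1..1-scale weights mentioned earlier):

```python
import numpy as np

# Hypothetical per-slice accuracies on the common end-of-sample test block
accs = np.array([0.55, 0.60, 0.52, 0.58, 0.63, 0.57, 0.54, 0.61, 0.59, 0.56])

# Deviation of each slice from the average, divided by the spread
# (standard deviation), i.e. a z-score per training slice
weights = (accs - accs.mean()) / accs.std()
print(weights.round(2))
```

Slices that beat the average accuracy get positive weights and below-average slices get negative ones, matching the better/worse-than-average rule discussed above.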

 
Maxim Dmitrievsky:

Into the piggy bank, probably interesting... I haven't watched it yet, but I liked the lecture's description.

https://www.lektorium.tv/lecture/14232

Financial math presented without Ito's stochastic calculus looks very mysterious and vague.