Machine learning in trading: theory, models, practice and algo-trading - page 1538

 
mytarmailS:

dtw, spectral analysis... a bunch...

I managed to create an algorithm that sees the same patterns regardless of their scale, i.e. looking at a single chart it picks up the pattern on both the 1-minute and the weekly timeframe, and it can actually make predictions, but there is still a lot of work to do

I started reading about dtw, but I didn't understand how to apply it to financial time series or why I would need it ) but the topic seems interesting

 
Maxim Dmitrievsky:

I started reading about dtw, but I didn't understand how to apply it to financial time series or why I would need it ) but the topic seems interesting

voice and music files are compressed using dtw, and they are time series too

;)

 
Igor Makanu:

voice and music files are compressed using dtw, and they are time series too

;)

but what's the point of compressing financial time series :)

 
Maxim Dmitrievsky:

but what's the point of compressing financial time series :)

Well, I have already studied this topic; in my own words it comes down to this:

the general value of dtw is that it is a correct algorithm for compressing time series - specifically time series, not just any arbitrary data

and if we know how to compress data correctly, what do we get - packets? images? - or call them data patterns - it is exactly these data patterns that make speech recognition algorithms possible

that is how I see dtw being used

In principle dtw can be applied to financial time series; imho, if there is no data loss after the transformation (i.e. the inverse transformation is possible), then it makes sense to try it on financial time series - as they say, what if?

PS: I read an article about this a couple of years ago: https://habr.com/ru/post/135087/
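
(A minimal sketch of the dtw alignment being discussed, assuming plain Python with numpy; this is an illustration, not the poster's code. It compares two patterns of different lengths, which is exactly the "same pattern at different scales" situation mentioned earlier in the thread.)

    import numpy as np

    def dtw_distance(a, b):
        """Classic dynamic-time-warping distance between two 1-D series."""
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = abs(a[i - 1] - b[j - 1])
                # extend the cheapest of the three allowed alignments
                cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                     cost[i, j - 1],      # deletion
                                     cost[i - 1, j - 1])  # match
        return cost[n, m]

    # two "patterns" of different length but similar shape
    fast = np.array([1.0, 1.2, 1.5, 1.3, 1.0])
    slow = np.array([1.0, 1.1, 1.2, 1.4, 1.5, 1.4, 1.2, 1.0])
    print(dtw_distance(fast, slow))   # small value despite the different lengths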

 
Igor Makanu:

Well, I have already studied this topic; in my own words it comes down to this:

the general value of dtw is that it is a correct algorithm for compressing time series - specifically time series, not just any arbitrary data

and if we know how to compress data correctly, what do we get - packets? images? - or call them data patterns - it is exactly these data patterns that make speech recognition algorithms possible

that is how I see dtw being used

In principle dtw can be applied to financial time series; imho, if there is no data loss after the transformation (i.e. the inverse transformation is possible), then it makes sense to try it on financial time series - as they say, what if?

PS: I read an article about this a couple of years ago: https://habr.com/ru/post/135087/

Well, it will be possible to dig into this some time later, yes. Maybe to extract the same patterns from returns.

On the other hand, why bother if there is a neural network
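
(A small illustration of the "patterns from returns" idea, assuming numpy and a synthetic price array; the window length and the plain Euclidean distance are arbitrary choices, and the dtw_distance sketch above could be swapped in as the distance.)

    import numpy as np

    prices = np.cumsum(0.5 * np.random.randn(500)) + 100.0   # placeholder price series
    returns = np.diff(np.log(prices))                        # log returns

    win = 20
    latest = returns[-win:]                                  # the most recent pattern

    best_i, best_d = None, np.inf
    for i in range(len(returns) - 2 * win):                  # skip the latest window itself
        candidate = returns[i:i + win]
        d = np.linalg.norm(candidate - latest)               # Euclidean here; dtw would also work
        if d < best_d:
            best_i, best_d = i, d

    print(f"most similar historical window starts at bar {best_i}, distance {best_d:.4f}")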
 
Maxim Dmitrievsky:

On the other hand, why bother if there is a neural network

I don't know, I abandoned studying all of this; it's interesting, but imho there should be, and there are, simpler ways to understand what's what in the market

about NNs - you know that data preprocessing matters more than the configuration or the type of NN; imho, dtw is the correct preprocessing of time series (when you are processing precisely time series!!!) - it is the order of the data that matters!

take the same dtw in speech processing - the sequence / alternation of the letters matters, doesn't it? ;)


UPD:

if during training you just feed the NN a sliding window of time-series data (bars), then imho it is an illusion that, because we drew the inputs as 1, 2, 3 ... N, the NN will treat the data as the ordered sequence we intended; inside the network all the inputs get mixed together, so imho, for the NN it is not a sliding window at all
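
(To illustrate the sliding-window remark above: a sketch in PyTorch with arbitrary layer sizes, contrasting a feed-forward net, which sees the flattened window as an unordered feature vector, with a recurrent layer, which consumes the same window bar by bar. It is an illustration of the point being argued, not anyone's trading model.)

    import torch
    import torch.nn as nn

    window = 32                                   # bars per sliding window
    x = torch.randn(8, window, 1)                 # batch of 8 windows, 1 feature per bar

    # feed-forward net: the window is flattened, order survives only in learned weights
    mlp = nn.Sequential(nn.Flatten(), nn.Linear(window, 16), nn.ReLU(), nn.Linear(16, 1))
    y_mlp = mlp(x)                                # shape (8, 1)

    # recurrent net: the same window is processed bar by bar, in order
    lstm = nn.LSTM(input_size=1, hidden_size=16, batch_first=True)
    out, (h, c) = lstm(x)                         # out: (8, 32, 16), last hidden: (1, 8, 16)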

 
Igor Makanu:

I don't know, I abandoned studying all of this; it's interesting, but imho there should be, and there are, simpler ways to understand what's what in the market

about NNs - you know that data preprocessing matters more than the configuration or the type of NN; imho, dtw is the correct preprocessing of time series (when you are processing precisely time series!!!) - it is the order of the data that matters!

take the same dtw in speech processing - the sequence / alternation of the letters matters, doesn't it? ;)

I'm not knowledgeable in these matters; as far as I know, recurrent and convolutional neural networks have long been used for this. For example, Google's seq2seq (natural language processing) algorithm. Wouldn't dtw look pale against that background? I wouldn't want to waste time in the chair for nothing :)
 
Maxim Dmitrievsky:
I'm not knowledgeable in these matters; as far as I know, recurrent and convolutional neural networks have long been used for this. For example, Google's seq2seq (natural language processing) algorithm. Wouldn't dtw look pale against that background? I wouldn't want to waste time in the chair for nothing :)

I have read about recurrent and convolutional ones; well, sort of yes, but all the examples are, as always, about image recognition, and there the tricks with palette compression and the like begin

I don't know, sound waves are closer to time series than pictures - they are also non-stationary, while pictures seem to lose more information to compression during processing than anyone tries to recover - supposedly that way the NN learns faster and better
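
(For reference, the picture-vs-series difference the post is gesturing at largely comes down to convolution dimensionality; a short PyTorch sketch with arbitrary channel counts, purely as an illustration.)

    import torch
    import torch.nn as nn

    # 2-D convolution, as in the usual image examples: (batch, channels, height, width)
    img_layer = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3)
    images = torch.randn(4, 3, 64, 64)
    img_features = img_layer(images)          # (4, 8, 62, 62)

    # 1-D convolution over a sound wave or a price window: (batch, channels, length)
    seq_layer = nn.Conv1d(in_channels=1, out_channels=8, kernel_size=5)
    series = torch.randn(4, 1, 256)           # 4 windows of 256 bars/samples
    seq_features = seq_layer(series)          # (4, 8, 252): local patterns along time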

 
Igor Makanu:

I have read about recurrent and convolutional ones; well, sort of yes, but all the examples are, as always, about image recognition, and there the tricks with palette compression and the like begin

I don't know, sound waves are closer to time series than pictures - they are also non-stationary, while pictures seem to lose more information to compression during processing than anyone tries to recover - supposedly that way the NN learns faster and better

NLP and seq2seq - those are for speech, sounds and the like.
 
Maxim Dmitrievsky:
NLP and seq2seq - those are for speech, sounds and the like.

I haven't heard of those at all, although I have read a lot of material! I'll look them up tomorrow, thanks.