Machine learning in trading: theory, models, practice and algo-trading - page 2661

 
mytarmailS #:
Cool article https://pair-code.github.io/understanding-umap/

And what's the fun in that?

 
Vladimir Perervenko #:

And what's the fun in that?

Well, my main point is that it's pretty hard to recognise something in a UMAP decomposition.

For example, for a recognition algorithm to realise that two front legs are one class, "front leg",

you have to do a lot of transformations...


1) Split the UMAP components into pieces (clusters); DBSCAN is unlikely to handle this correctly (for this task).

2) Rescale the mammoth legs so the comparison is invariant to size (we will skip this stage here).

3) Match the legs to each other correctly, by some as-yet-unknown algorithm, plus centring.

4) Rotate the legs into a more correct position.

5) Mirror the legs where needed for a more correct position.

6) Now we need to align the legs and remove the main distortions. I think the legs could be decomposed with principal component analysis and the first principal component removed from them; in theory that should strip out the main distortions (I have not illustrated this).

7) And only then can you measure the distance/proximity between the legs to realise that they are similar and can be assigned to one class, "front legs" (a rough sketch of such a pipeline is below).
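
Here is a minimal sketch of such a pipeline in Python, purely as an illustration and not anyone's actual method. It assumes the 3-D mammoth point cloud from the pair-code demo is already loaded as a NumPy array called `points`, and that umap-learn, scikit-learn and SciPy are available. Procrustes analysis stands in for steps 3)-5) (centring, rotation, mirroring), and dropping the first principal component stands in for step 6).

```python
import numpy as np
import umap                                    # pip install umap-learn
from sklearn.cluster import DBSCAN
from sklearn.decomposition import PCA
from scipy.spatial import procrustes

# `points` is assumed to be the (N, 3) mammoth point cloud, already loaded.
# 1) project to 2-D with UMAP, then split the embedding into pieces.
emb = umap.UMAP(n_neighbors=15, min_dist=0.1).fit_transform(points)
labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(emb)
pieces = [emb[labels == k] for k in set(labels) if k != -1]

def normalize(piece, n=200):
    """2) resample to a fixed number of points, centre and scale,
    so size differences no longer matter. Ordering by x is a crude
    stand-in for step 3 (true point correspondence is the hard part)."""
    ordered = piece[np.argsort(piece[:, 0])]
    idx = np.linspace(0, len(ordered) - 1, n).astype(int)
    c = ordered[idx] - ordered[idx].mean(axis=0)
    return c / np.linalg.norm(c)

def drop_first_pc(x):
    """6) remove the first principal component (the main distortion)."""
    pc1 = PCA(n_components=2).fit(x).components_[0]
    return x - np.outer(x @ pc1, pc1)

def leg_distance(a, b):
    """3)-5) Procrustes handles centring, rotation and mirroring;
    7) the remaining disparity is the proximity between two pieces."""
    a_al, b_al, _ = procrustes(normalize(a), normalize(b))
    return np.linalg.norm(drop_first_pc(a_al) - drop_first_pc(b_al))

# Two pieces that end up close under this distance would be candidates
# for the same class, e.g. "front leg":
# d = leg_distance(pieces[0], pieces[1])
```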

 
mytarmailS #:

Well, my main point is that it's pretty hard to recognise something in a UMAP decomposition.

For example, for a recognition algorithm to realise that two front legs are one class, "front leg",

you have to do a lot of transformations...

Poor elephant.
 
Maxim Dmitrievsky #:
Poor elephant.

I'm already like that elephant myself, with a square head)))

 
mytarmailS #:

I'm already like that elephant myself, with a square head)))

Yes, in theory everything is clear to us, for example where its legs are and where its head is, but to the algorithm nothing is clear, it's just a set of points.

It's the same with features for bots.

 
Maxim Dmitrievsky #:

Yes, in theory everything is clear to us, for example where its legs are and where its head is, but to the algorithm nothing is clear, it's just a set of points.

It's the same with features for bots.

That's why we need invariance in the broadest sense, as in computer vision: the algorithm itself should segment, then stretch, shrink, rotate and distort the pieces, and only then compare them.

https://robwhess.github.io/opensift/

https://www.analyticsvidhya.com/blog/2019/10/detailed-guide-powerful-sift-technique-image-matching-python/#:~:text=SIFT%20helps%20locate%20the%20local,detection%2C%20scene%20detection%2C%20detection%2C%20etc.
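
For anyone curious, here is a minimal sketch of what the SIFT links above boil down to in practice, using OpenCV's built-in SIFT (available since OpenCV 4.4) rather than the opensift library itself; the image file names are hypothetical, just for illustration.

```python
import cv2  # pip install opencv-python (SIFT is built in since OpenCV 4.4)

# Hypothetical file names, just for illustration.
img1 = cv2.imread("pattern.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("chart.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)   # keypoints + 128-d descriptors
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test: keep matches whose best neighbour is clearly better
# than the second best; this is what makes the matching robust to scale,
# rotation and moderate distortion.
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]
print(f"{len(good)} scale/rotation-invariant matches found")
```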
 
Maxim Dmitrievsky #:

It's the same with features for bots.

Exactly! I'm not worried about the elephant.

The market is not static. It will never be like yesterday.
 
mytarmailS #:
Exactly! I'm not worried about the elephant.

The market is not static. It will never be like yesterday.

Yes, it's interesting.

The example shows a model of an elephant, but if you try to make a camel out of these parts, it probably won't work.

---

Similar patterns appear on market charts all the time: "elephants", "camels", "bunnies". But they all come in different sizes. Yet the patterns are real and they repeat constantly.

At my age it is difficult to dig deeply into analytical processes as complex as these elephants, but I will say that it is interesting.

 
Uladzimir Izerski #:

Yes, it's interesting.

The example shows a model of an elephant, but if you try to make a camel out of these parts, it probably won't work.

Well, a camel has no tusks or trunk, and an elephant has no humps, so what is the point of making a camel out of an elephant?
 
Uladzimir Izerski #:


Similar patterns appear on market charts all the time: "elephants", "camels", "bunnies". But they all come in different sizes. Yet the patterns are real and they repeat constantly.

But a human can see them, while a primitive algorithm cannot,
because the primitive algorithm has no invariance to size, slope, bend and so on, whereas the brain does.
Roughly speaking, no matter how you train the AI, it still knows only what it has seen before and expects only what it has seen before, while the market never repeats itself exactly.

But this can be fixed by adding invariance to the algorithm, and such algorithms exist, as I showed above... It's just bloody difficult for a humanist like me.
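
To make that a bit more concrete, here is a tiny, hypothetical sketch of one cheap kind of invariance for price patterns: resample each window to a common length and z-score it, so the comparison ignores absolute size and level. Pure NumPy, purely an illustration, with no claim that this alone is enough.

```python
import numpy as np

def normalize_pattern(prices, n=50):
    """Resample to n points (time-scale invariance) and z-score
    (level and amplitude invariance)."""
    x_old = np.linspace(0.0, 1.0, len(prices))
    x_new = np.linspace(0.0, 1.0, n)
    p = np.interp(x_new, x_old, prices)
    return (p - p.mean()) / (p.std() + 1e-12)

def pattern_distance(a, b):
    """Distance between two windows after removing size/level differences."""
    return np.linalg.norm(normalize_pattern(a) - normalize_pattern(b))

# Example: the same "shape" at different scales is now recognised as close.
small = np.sin(np.linspace(0, np.pi, 30)) * 0.001 + 1.10   # 30-bar bump
big   = np.sin(np.linspace(0, np.pi, 90)) * 0.050 + 1.30   # 90-bar bump
print(pattern_distance(small, big))   # ~0 despite different size and level
```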