Machine learning in trading: theory, models, practice and algo-trading - page 2761

 
СанСаныч Фоменко #:

"Meaningful" is by the pictures I cited, which is what makes"informative markup a ficha-targeted at once"

And what do you mean by the word "meaningful"?

Well, if they do it right away, then OK. I don't remember that being there. What's the article called? I'll read it later
 
Maxim Dmitrievsky #:
Well, if they do it right away, then OK. I don't remember that being there. What's the article called? I'll read it later

Here, by VLADIMIR PERERVENKO. He has a systematically complete series of articles, starting with data mining. My point of view coincides with his in many respects, except for the model itself, which I consider unreasonably complex for our needs.

Deep Neural Networks (Part II). Development and Selection of Predictors
  • www.mql5.com
The second article in the series on deep neural networks examines the transformation and selection of predictors when preparing data for model training.
 
СанСаныч Фоменко #:

"Meaningful" is by the pictures I cited, which is what makes the "informative feature-target markup" right away

The picture from here https://www.mql5.com/ru/articles/3507 is actually called: Fig. 12. Variation and covariance of the 2 train sets

it is one step from covariance to correlation.... (but you are a genius and offended at everyone, so google it yourself).... good luck polishing your conceptual apparatus... once you understand the meaning of the words, the pseudo-genius of your jargon and the falsity of your supposed arguments will dissipate in a flash... you cannot change the logic by shouting.

-- in general, the thread has not changed: people are still shouting themselves hoarse trying to proclaim their genius, reinventing the wheel, "pioneers", so to speak...

Machine learning in trading: theory, models, practice and algo-trading
  • 2022.09.27
  • www.mql5.com
Good afternoon everyone, I know there are machine learning and statistics enthusiasts on the forum...
 
СанСаныч Фоменко #:

Here, by VLADIMIR PERERVENKO. He has a systematically complete series of articles, starting with data mining. My point of view coincides with his in many respects, except for the model itself, which I consider unreasonably complex for our needs.

I didn't see any markup of the target for specific features there. We take an increment with an arbitrary lag; it will be informative only for certain targets and uninformative for others.

I just checked which features are more suitable for specific targets.
 
Maxim Dmitrievsky #:
I didn't see any markup of the target for specific features there. We take an increment with an arbitrary lag; it will be informative only for certain targets and uninformative for others.

I just checked which features are more suitable for specific targets.

I don't understand that. What does markup mean?

A target-predictor pair is related, and the pair exists precisely because it is related. Such pairs are quite hard to find. The stronger the link, the smaller the fitting error. For another target, the predictor problem is different.
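A minimal sketch of this point, on synthetic data (the predictor, the target and their relationship are illustrative assumptions, not from the thread): it measures the strength of the predictor-target link with mutual information and compares it to the cross-validated fitting error of a simple classifier.

```python
# Sketch: the stronger the predictor-target link, the smaller the fitting error.
# Synthetic data; "predictor" and "target" are illustrative names only.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2000
target = rng.integers(0, 2, n)                  # binary target, e.g. sign of a future increment
for noise in (0.5, 1.5, 5.0):                   # the link weakens as the noise grows
    X = (target + rng.normal(0, noise, n)).reshape(-1, 1)
    mi = mutual_info_classif(X, target, random_state=0)[0]
    err = 1 - cross_val_score(LogisticRegression(), X, target, cv=5).mean()
    print(f"noise={noise}: mutual information={mi:.3f}, fitting error={err:.3f}")
```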

 
СанСаныч Фоменко #:

I don't understand that. What does markup mean?

A target-predictor pair is related, and the pair exists precisely because it is related. Such pairs are quite hard to find. The stronger the link, the smaller the fitting error. For another target, the predictor problem is different.

Initially, your features are unrelated to the target labels, because the target labels are the signs of increments, i.e. meaningless labels

Then, from tens or hundreds of features, you choose the ones most relevant to these targets. This is the most inefficient approach, but it has its place.
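A minimal sketch of the two steps just described, under stated assumptions (a synthetic random-walk price, lagged returns as the candidate features, the sign of the next increment as the target): it ranks dozens of candidate features by mutual information with that particular target.

```python
# Sketch: label = sign of the next increment, then rank many candidate features
# (returns over arbitrary lags) by their relevance to that particular target.
# All data and column names are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(1)
price = pd.Series(1000.0 + np.cumsum(rng.normal(0, 1, 3000)))

fwd = price.shift(-1) - price                   # the next increment
features = pd.DataFrame({f"ret_{lag}": price.pct_change(lag) for lag in range(1, 51)})

data = pd.concat([features, np.sign(fwd).rename("sign")], axis=1).dropna()
X, y = data.drop(columns="sign"), data["sign"].astype(int)

mi = mutual_info_classif(X, y, random_state=0)
ranking = pd.Series(mi, index=X.columns).sort_values(ascending=False)
print(ranking.head(10))                         # the lags most "suitable" for this target
```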

Say you classify cats and dogs, two classes. And at the input, as features, you give camel hooves, fish tails, titmice, teaspoons, the speed of light and so on. Of course, sometimes you hit the mark, but it is very difficult.

The situation is complicated by the fact that your cats and dogs are mixed up too, because the sign of an increment is not the specific object being predicted, but only a small part of it, for example a leg. And that leg can be a dog's, even though at that moment you see it as a cat's.

Hence you have either a hard brute-force search over everything and anything, or targets that are inherently constructed from the features.

In his book, de Prado made a first attempt at class markup via the triple barrier, to separate the classes more clearly. But this approach still seems naive to me.
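For reference, a simplified sketch of triple-barrier labelling in the spirit of de Prado's "Advances in Financial Machine Learning" (assumptions: fixed symmetric horizontal barriers and a fixed vertical barrier in bars; the full method scales the barriers by volatility and samples events rather than labelling every bar):

```python
# Simplified triple-barrier labelling: +1 if the upper barrier is touched first,
# -1 if the lower barrier is touched first, 0 if the time (vertical) barrier
# expires first. Barrier sizes and horizon are illustrative assumptions.
import numpy as np
import pandas as pd

def triple_barrier_labels(price: pd.Series, pt: float = 0.01,
                          sl: float = 0.01, horizon: int = 20) -> pd.Series:
    labels = pd.Series(index=price.index, dtype=float)
    for i in range(len(price) - horizon):
        window = price.iloc[i + 1:i + 1 + horizon]
        ret = window / price.iloc[i] - 1.0          # path of returns after bar i
        hit_up = ret[ret >= pt].index.min()         # first touch of the upper barrier
        hit_dn = ret[ret <= -sl].index.min()        # first touch of the lower barrier
        if pd.isna(hit_up) and pd.isna(hit_dn):
            labels.iloc[i] = 0                      # vertical barrier hit first
        elif pd.isna(hit_dn) or (not pd.isna(hit_up) and hit_up < hit_dn):
            labels.iloc[i] = 1                      # upper barrier hit first
        else:
            labels.iloc[i] = -1                     # lower barrier hit first
    return labels

# Usage on a synthetic random walk:
rng = np.random.default_rng(2)
price = pd.Series(1000.0 + np.cumsum(rng.normal(0, 1, 1000)))
print(triple_barrier_labels(price, pt=0.003, sl=0.003).value_counts(dropna=False))
```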
 
Maxim Dmitrievsky #:
Initially, your features are unrelated to the target labels, because the target labels are the signs of increments, i.e. meaningless labels

Then, from tens or hundreds of features, you choose the ones most relevant to these targets. This is the most inefficient approach, but it has its place.

Say you classify cats and dogs, two classes. And at the input, as features, you give camel hooves, fish tails, titmice, teaspoons, the speed of light and so on. Of course, sometimes you hit the mark, but it is very difficult.

The situation is complicated by the fact that your cats and dogs are mixed up too, because the sign of an increment is not the specific object being predicted, but only a small part of it, for example a leg. And that leg can be a dog's, even though at that moment you see it as a cat's.

Hence you have either a hard brute-force search over everything and anything, or targets that are inherently constructed from the features.
I hope I'm wrong, but my impression is that features are not understood here in quite the same way.

 
Valeriy Yastremskiy #:
I hope I'm wrong, but my impression is that features are not understood here in quite the same way.
Features are what is fed to the neural network's input, and class labels are what is fed to its output.

A feature should represent partial information about the object being classified, that's why it is a feature. A distinguishing mark, if you will.

The way I see it, as long as it is not defined what exactly is being classified, all these 100 fancy ways of fitting will give the same result
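A bare-bones sketch of that separation of roles, on toy data (the data and the network size are assumptions for illustration only): features go to the input, class labels supervise the output.

```python
# Sketch: features at the input, class labels at the output.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 4))                 # 4 features describing each object
y = (X[:, 0] + X[:, 1] > 0).astype(int)       # class label of each object

net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
net.fit(X, y)                                 # learn the mapping features -> labels
print(net.predict(X[:5]), y[:5])
```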
 
Maxim Dmitrievsky #:
Features are what is fed to the neural network's input, and class labels are what is fed to its output.

A feature should represent partial information about the object being classified, that's why it is a feature. A distinguishing mark, if you will.

The way I see it, as long as it is not defined what exactly is being classified, all these 100 fancy ways of fitting will give the same result

Are indirect features possible? For example, cats and dogs often fight, but dogs more often chase cats. We are given two objects and their movements. The task: determine which of them is a cat and which is a dog, checking against the facts once and then determining on its own, each subsequent time, who is who. We know for sure that one of them is a cat and the other is a dog, but we cannot see their silhouettes or hear them, we cannot even see their tracks, only the coordinates of their movement. We feed the neural network the movement of the objects back and forth (BUY-SELL). In the process of "thinking" and multiplying weights, the neural network worked out that one object always runs ahead and the other behind it (MA_5[0] > MA_10[0]), and made an assumption: is it the dog moving ahead now? It checked this against the actual data, got the answer (NO), corrected itself, assumed it was the cat, checked it - (YES). Now the neural network knows how to determine who is the cat and who is the dog from the fight and the movement of the objects. At the same time, it was not given paws, tufts of hair, teeth, barking or meowing as input.

That is, it seems the neural network can be fed many things, and it will find something, and find it in such a way (like Hercule Poirot) that it gives the necessary answer. That is, the feature in this case does not represent partial information about the object being classified, yet a solution is still possible.
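A sketch of this "indirect feature" idea under stated assumptions (synthetic prices; the only input is whether MA_5 is above MA_10; the target is the direction of the next bar): no paws, hair or barking at the input, just the relative movement.

```python
# Sketch: a single indirect feature (MA_5 above MA_10 or not) fed to a small
# network to predict next-bar direction. Data and target are illustrative.
import numpy as np
import pandas as pd
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(4)
price = pd.Series(1000.0 + np.cumsum(rng.normal(0, 1, 3000)))

ma5, ma10 = price.rolling(5).mean(), price.rolling(10).mean()
feature = (ma5 > ma10).astype(int)               # who is "running ahead"
target = (price.shift(-1) > price).astype(int)   # did the price rise on the next bar?

data = pd.concat([feature.rename("ma5_gt_ma10"), target.rename("up")], axis=1).iloc[10:-1]
net = MLPClassifier(hidden_layer_sizes=(4,), max_iter=1000, random_state=0)
net.fit(data[["ma5_gt_ma10"]], data["up"])
print("in-sample accuracy:", round(net.score(data[["ma5_gt_ma10"]], data["up"]), 3))
```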

 
Ivan Butko #:

Are indirect features possible? For example, cats and dogs often fight, but dogs more often chase cats. We are given two objects and their movements.

In the process of "thinking" and multiplying weights, the neural network worked out that one object always runs ahead and the other behind it (MA_5[0] > MA_10[0]), and made an assumption: is it the dog moving ahead now?

Now the neural network knows how to determine who is the cat and who is the dog from the fight and the movement of the objects. At the same time, it was not given paws, tufts of hair, teeth, barking or meowing as input.

That is, the feature in this case does not represent partial information about the object being classified, yet a solution is still possible.

these are not features - they are the dynamics of a process developing in time - a dynamic (time) series ...

while dependencies are studied on stationary series ...

(though time can also be called a feature - an exogenous one; the time factor is what adds the dynamics).

you did not get a meow or hair at the input, but you did get a smoothing of the trajectory - neural networks don't care what you approximate - it's just that dynamics always shows its result with a lag - precisely because it needs the time factor as a window to collect a sample and estimate the rate of change of the dependent variable over time ... BUT the dependence (on time) must actually be there for the dynamics to be analysed (and that is exactly what you put into the model you describe - if you call things by their proper names in the model, i.e. what is a factor and what you want to know/estimate, there will be less scribbling and (mis)understanding of each other on the forum)...

a linear equation shows velocity (the tangent at a point of the trajectory curve), a quadratic (parabola) will also show acceleration... and the convergence of (f - a)^2 is evaluated over time and shows the result on a finite segment of this time window - MLE (maximum likelihood estimation) always works the same way, whether approximating statics or smoothing dynamics.
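A short sketch of that point, with illustrative data and window length: a parabola is fitted by least squares (which coincides with the maximum-likelihood estimate under Gaussian noise) to the last `window` points; its first derivative at the window's end estimates velocity, its second derivative estimates acceleration, and the window itself is the lag referred to above.

```python
# Sketch: least-squares quadratic over a time window -> velocity and acceleration.
# The series and the window length are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(5)
price = 1000.0 + np.cumsum(rng.normal(0, 1, 500))   # synthetic trajectory

window = 50
t = np.arange(window)
a2, a1, a0 = np.polyfit(t, price[-window:], deg=2)  # minimises the sum of (f - a)^2

velocity = 2 * a2 * (window - 1) + a1               # f'(t) at the last point of the window
acceleration = 2 * a2                               # f''(t), constant for a parabola
print(f"velocity ~ {velocity:.3f}, acceleration ~ {acceleration:.3f}")
```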

unless you think about what you are looking at - a factor (qualitative/quantitative) or its dynamics (+ the time factor) - you cannot distinguish dependencies from patterns of development - and therefore you do not understand what you are analysing, whether it is what you really need, and what depends on what... nor the limitations of the type of analysis - analyses of dynamics ALWAYS show their results with a lag.

really, these are wearying arguments about who looks at what crookedly, sees it crookedly and interprets it crookedly, and how crookedly he himself understands his own interpretations and tries to convince others, some in the posts above even foaming at the mouth.... what kind of scientific dispute can we talk about? if you abstract everything and everyone to such an extent that you twist meanings with your freedom of speech -- there is no freedom of speech in the natural sciences! there are exact formulations and their exact meanings ... not the pseudoscientific knowledge you promote here out of ignorance of the basic fundamentals (and try to present as arguments)

you create models (curves) like that without knowing what to put at the output (what you want to know) as a result of the modelling ... on which factors are you interested in this dependence?

too often everything on this thread is subjective, so it is impossible to get to objectivity (which is the true and main goal of modelling).
