Yes, that's basically what I'm doing, as a more or less workable option.
On another similar model I also sometimes observe small divergences.
But not as prolonged as in the screenshot above; rather short-lived. It made me wonder why that happens.
I tried this model and saw an even more prolonged divergence.
So I don't understand where this divergence comes from: an incorrect model, or low-quality source data?
I don't see the logic of what to do next.
Either I should bring the input data to approximately normal,
or I should sift through different models.
But having gone to the trouble of writing this model, it's not so easy to just give up and throw it away ))
Inadequate model
I just can't understand the following anomaly and why it happens.
I computed an orthogonal model, which is supposed to be better than ordinary least squares (OLS).
I obtained the starting coefficients.
Then the model parameters (coefficients) are adjusted by a median algorithm, i.e. a kind of robustness against outliers is built in.
The model describes the initial series well.
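The post does not say which median algorithm refines the coefficients. As a purely hypothetical illustration of median-based robustness, here is a Theil-Sen line fit (slope = median of all pairwise slopes), which a single outlier barely moves:

```python
import numpy as np

def theil_sen(x, y):
    """Robust line fit y ~ a + b*x: slope is the median of all pairwise slopes."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i in range(n) for j in range(i + 1, n)
              if x[j] != x[i]]
    b = np.median(slopes)
    a = np.median(y - b * x)  # robust intercept from the residual medians
    return a, b

# One gross outlier barely moves the fit:
x = np.arange(10.0)
y = 2.0 * x + 1.0
y[5] = 100.0                  # outlier
a, b = theil_sen(x, y)
```

Here the fitted slope stays at 2 and the intercept at 1 despite the corrupted point; an OLS fit on the same data would be pulled far off.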
What is an "orthogonal" model? Are you doing a decomposition over a system of orthogonal functions? Then look at the weight with respect to which they are orthogonal: the anomalous behaviour may depend on it, for example at the edges of the orthogonality interval.
No, it's not a function decomposition.
It is orthogonal regression: at each calculation step, the slope angle (phi) of the normal is computed.
The normal is the shortest segment from the line to a point.
The angle (phi) is then used to calculate the model coefficients.
[Figure: orthogonal fit vs. least-squares fit in a Cartesian coordinate system]
I probably really will need to check the values of these angles at the anomalous locations.
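For reference, a minimal sketch of such an angle-based TLS fit (assuming a plain unweighted fit with both axes in the same units; all names here are mine, not the poster's). The angle phi below is the slope angle of the fitted line itself; the normal's angle differs from it by 90 degrees:

```python
import numpy as np

def tls_line(x, y):
    """Fit y ~ a + b*x by orthogonal (total least squares) regression.

    The slope comes from the principal-axis angle phi:
        tan(2*phi) = 2*Sxy / (Sxx - Syy)
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x.mean(), y.mean()
    sxx = np.sum((x - xc) ** 2)
    syy = np.sum((y - yc) ** 2)
    sxy = np.sum((x - xc) * (y - yc))
    phi = 0.5 * np.arctan2(2.0 * sxy, sxx - syy)  # slope angle of the fitted line
    b = np.tan(phi)
    a = yc - b * xc        # the TLS line always passes through the centroid
    return a, b, phi

# sanity check: a noise-free line is recovered exactly
x = np.linspace(0.0, 5.0, 50)
a, b, phi = tls_line(x, 3.0 * x - 2.0)
```

Using arctan2 rather than a plain division keeps the formula valid even when Sxx = Syy, where the naive tangent expression would divide by zero.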
So call it by its common name, like TLS, instead of inventing your own.
And what is the point of it if the axes have different dimensions (units)?
What do you mean?
Orthogonal regression, orthogonal model: what is there to be confused about?
Yes, it's TLS, with a median refinement.
The figures are just an example; they are not related to the problem.
The axes in the figures have the same dimensions; only the scale of the drawings differs a bit.
That is not critical for understanding orthogonality.
Orthogonal regression, orthogonal model: what is there to be confused about?
Yes, I agree, that was the wrong thing to say.
In https://www.mql5.com/ru/forum/368720#comment_22203978, the bottom of the figure, where the "anomalous" divergence starts, falls almost exactly on a price jump. There any regression (linear or non-linear; both are just representations of Y as a function of x) breaks down, and the misfit grows sharply. The approximation error of both trigonometric and algebraic polynomials is proportional to the modulus of continuity (by the Jackson-Stechkin inequality; see the wiki article "Modulus of continuity"), which measures how close a function's behaviour is to that of a continuous function. In the case shown in that figure, the discrete counterpart of the modulus of continuity increases sharply near zero, i.e. the series behaves like a discontinuous function.
The change of the coefficients in the expansion (in the linear case Y is decomposed over two functions, Y1(x) = 1 and Y2(x) = x, with coefficients a and b: Y(x) = a + b*x) is then slow [continuous], because of the median smoothing. The coefficient values acquired on the jump do not rush back to the values they would have had if your method had started the approximation from some point after the jump, or if the jump were replaced by a slower price move to the same level.
By the way, it would be interesting to see pictures like the ones you posted at https://www.mql5.com/ru/forum/368720/page2#comment_22207994 for the particular case where the price changed almost in a single jump.
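This "memory of the jump" can be reproduced in a toy simulation. This is an illustration only, not the poster's actual model: the series, the window sizes, and the running-median smoothing of the slope history are all my assumptions.

```python
import numpy as np

def tls_slope(x, y):
    """Orthogonal-regression (TLS) slope via the principal-axis angle phi."""
    xc, yc = x.mean(), y.mean()
    sxy = np.sum((x - xc) * (y - yc))
    sxx = np.sum((x - xc) ** 2)
    syy = np.sum((y - yc) ** 2)
    return np.tan(0.5 * np.arctan2(2.0 * sxy, sxx - syy))

t = np.arange(200, dtype=float)
y = 0.01 * t + np.where(t >= 100, 5.0, 0.0)   # slow trend plus a step jump at t = 100

fit_win, med_win = 20, 31                      # assumed window sizes
slopes = np.array([tls_slope(t[i - fit_win:i], y[i - fit_win:i])
                   for i in range(fit_win, len(t))])
# running median over the slope history, as a stand-in for "median smoothing"
smoothed = np.array([np.median(slopes[max(0, k - med_win):k + 1])
                     for k in range(len(slopes))])
```

The raw slope is distorted only while the jump sits inside the fit window and then snaps back to the true trend, while the median-smoothed slope stays elevated well after the raw slope has recovered: the jump values keep occupying the middle of the median window.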
Thank you for the lucid and comprehensive explanation!
I also suspected a mismatch at the moment of the jump, but I couldn't formulate it correctly.
Since median smoothing is indeed applied, the memory of the jump persists, depending on the window size.
I haven't worked with scatter plots on mql5 yet; I'm still learning. It would be interesting to see such graphs too.
I don't know how soon I'll be able to show the graph; as soon as I figure out the coordinates, I will.
Without median smoothing, on the raw coefficients, that does seem to hold,
but then you get this kind of recovery picture
Added.
I forgot to clarify: for now the raw data is simply the logarithm, without any further transformation, in order to expose the weaknesses.
Are logarithmic increments not good enough?
Multivariate normality is needed there. You can't get that cheaply. )