Regression equation

 
Mathemat:
Interesting, interesting. Candid, remember my thread about the Inhabited Island, with a metamodel of a quasi-stationary process (there were differential equations there, and we even pulled a rabbit out of a hat)? This is something very similar. The noosphere does exist after all, and the thoughts in it are shared...

I remember, how could I not.

But you used to call the island Uninhabited :)

 
Prival:

If you do it in MQL, you'll get in a lot of trouble. There are no matrix operations...


https://www.mql5.com/ru/articles/1365
 


I've seen it. It's a lot of work, and thank you for it. But research, and what we have here is research, is better done in another language, one that actually has matrix operations...

P.S. I must have missed the Uninhabited Island thread. I'd like to read something intelligible on it...

 
Prival:

http://www.nsu.ru/ef/tsy/ecmr/quantile/quantile.htm

If you do it in MQL, you'll get in a lot of trouble. There are no matrix operations...

Matrix operations can be reduced to ordinary arithmetic operations in each particular case :)

In general, the article suggests searching for the model parameters with the simplex method, but that method is known to take exponentially long in the worst case relative to the dimensionality of the problem. So it seems to me that this is the first direction to work on. By the way, it seems to be the only article on this subject in Russian, and in itself it is of rather low quality (probably someone's term paper or thesis :)
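To illustrate the point about matrix operations reducing to plain arithmetic, here is a minimal sketch (my own, not code from the thread or the article): for a straight-line fit y ~ a + b*x, the OLS solution collapses into two scalar formulas from the normal equations, so no matrix library is needed. The function name FitLineOLS is hypothetical.

// Straight-line OLS without any matrix operations: the 2x2 normal
// equations reduce to two scalar formulas. Assumes the x values are
// not all equal (otherwise the denominator is zero).
void FitLineOLS(const double &x[], const double &y[], double &a, double &b)
  {
   int    n  = ArraySize(x);
   double sx = 0, sy = 0, sxx = 0, sxy = 0;
   for(int i = 0; i < n; i++)
     {
      sx  += x[i];
      sy  += y[i];
      sxx += x[i] * x[i];
      sxy += x[i] * y[i];
     }
   b = (n * sxy - sx * sy) / (n * sxx - sx * sx);  // slope
   a = (sy - b * sx) / n;                          // intercept
  }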

 

I wish someone would write a simplex method in MQL... I'm too lazy to do it myself!

Well, and an ellipsoid method would be great :)))
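Until someone does write that simplex, here is a crude stand-in sketch (mine, and explicitly not the simplex method): a line can be fitted to a given quantile by minimizing the pinball loss directly with coordinate descent. All names (Pinball, LineLoss, FitLineQuantile) are illustrative.

// Pinball (quantile) loss: the objective that quantile regression
// minimizes. tau in (0,1) selects the quantile; u is a residual.
double Pinball(double u, double tau)
  {
   return (u >= 0.0) ? tau * u : (tau - 1.0) * u;
  }

// Total pinball loss of the line y ~ a + b*x over the sample.
double LineLoss(const double &x[], const double &y[], double a, double b, double tau)
  {
   double s = 0.0;
   for(int i = 0; i < ArraySize(x); i++)
      s += Pinball(y[i] - (a + b * x[i]), tau);
   return s;
  }

// Crude coordinate descent with a shrinking step: slow, and no
// substitute for a proper LP solver, but matrix-free and adequate
// for small samples.
void FitLineQuantile(const double &x[], const double &y[], double tau,
                     double &a, double &b)
  {
   a = 0.0;
   b = 0.0;
   double step = 1.0;
   while(step > 1e-8)
     {
      double best = LineLoss(x, y, a, b, tau);
      if(LineLoss(x, y, a + step, b, tau) < best)      a += step;
      else if(LineLoss(x, y, a - step, b, tau) < best) a -= step;
      else if(LineLoss(x, y, a, b + step, tau) < best) b += step;
      else if(LineLoss(x, y, a, b - step, tau) < best) b -= step;
      else step *= 0.5;   // no single-axis move helps: refine the step
     }
  }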

 
alsu:

I will try to explain it theoretically, as I am not yet ready to present the calculation results; they are still raw.

...By approximating with the least-squares method (OLS) we force the regression polynomial to "cling" not only to the normal part of the process but also to the Poisson outliers, hence the low prediction efficiency which, generally speaking, we need for . On the other hand, by taking quantile polynomials we get rid of the second, Poisson part of the process completely: the quantiles simply do not react to it at all. Thus, by identifying the places where the regression misses significantly, we can localize the "failures" almost online with a high degree of confidence (we probably cannot predict them yet, as there is no suitable model, at least not that I have :).

I still haven't seen constructive criticism of the "poverty" of OLS...

;)

 
FreeLance:

I still haven't seen constructive criticism of the "poverty" of OLS...

;)

I left the sentence unfinished in the middle, I must be getting old :))) Just don't read the part starting with "which".

The criticism, as some readers of the thread have already realised, is aimed at the peculiarities of OLS, namely: (a) its poor performance when dealing with processes of a non-Gaussian nature (the OLS estimate is not efficient in that case), and (b) the inability of OLS to "separate" the two processes, Gaussian and non-Gaussian: the method responds to the additive mixture as a whole, whereas the least-absolute-deviations method or quantile regression responds only to the Gaussian part, thereby separating the second component out of the process.

And in general, OLS is used so widely only because it is much simpler to compute. Meanwhile, in real life many problems call for other methods, but people, whether out of laziness or out of ignorance, apply OLS wherever they can...
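A quick numerical illustration of points (a) and (b), my own sketch rather than anything from the thread: estimate the constant level of a sample contaminated by two large outliers. The mean (the OLS estimate of a constant) is dragged towards the outliers; the median (the 0.5 quantile) barely reacts.

// Script sketch: mean vs. median on an outlier-contaminated sample.
// The mean is the OLS estimate of a constant level; the median is
// the 0.5-quantile estimate. Illustrative only.
void OnStart()
  {
   // 10 "normal" points around 1.0 plus 2 large outliers.
   double y[12] = {0.9, 1.1, 1.0, 0.95, 1.05, 1.02, 0.98, 1.04, 0.97, 1.01, 8.0, 9.0};
   int n = ArraySize(y);

   double mean = 0.0;
   for(int i = 0; i < n; i++)
      mean += y[i];
   mean /= n;

   ArraySort(y);   // sort in place (the mean is already computed)
   double median = (n % 2 == 1) ? y[n / 2] : 0.5 * (y[n / 2 - 1] + y[n / 2]);

   Print("mean = ", mean, "  median = ", median);
   // Expected: the mean is pulled to ~2.25, the median stays near 1.0.
  }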

 
I don't remember where I got this from, but I thought OLS was just the maximum likelihood method applied to a Gaussian quantity. I might be wrong.
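That recollection is correct; here is the standard one-line check. For the model y_i = f(x_i; \theta) + \varepsilon_i with i.i.d. Gaussian errors \varepsilon_i \sim \mathcal{N}(0, \sigma^2), the log-likelihood is

\ln L(\theta) = -\frac{n}{2}\ln(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}\bigl(y_i - f(x_i;\theta)\bigr)^2,

so

\arg\max_{\theta} \ln L(\theta) = \arg\min_{\theta} \sum_{i=1}^{n}\bigl(y_i - f(x_i;\theta)\bigr)^2.

Maximizing the likelihood over \theta is therefore exactly minimizing the sum of squares. The same machinery with Laplace (double-exponential) errors yields the least-absolute-deviations fit, i.e. the 0.5-quantile regression discussed above.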
 
alsu:

And in general, OLS is used so widely only because it is much simpler to compute. Meanwhile, in real life many problems call for other methods, but people, whether out of laziness or out of ignorance, apply OLS wherever they can...

I think what is exploited is the quadratic function's property of having a minimum... i.e. the derivative is zero at that point.

That is why all the analytically derived methods of computing the function's parameters work regardless of the function's domain of definition. I wrote about this problem once.

But! If you fit the best OLS parameters for a function defined on [-1, 1], you're in for trouble.

You may get worse ones. The minimum of the deviation may turn into a maximum.

Again, I note - this applies to the "head-on" methods.

And since you are "using" your own distribution, and it probably ;) does not, in its derivation, reduce to computing the OLS minimum over the parameters, and, what is especially important, is defined within "remarkable" limits - a minimum of the likelihood can occur.

Try normalising the data so that no squared deviation is less than 1.

;)
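For the record, the arithmetic behind that advice, spelled out:

u^2 < |u| \quad\text{iff}\quad 0 < |u| < 1, \qquad u^2 > |u| \quad\text{iff}\quad |u| > 1,

so squaring de-emphasizes deviations smaller than 1 and amplifies larger ones; scaling the data so that every squared deviation exceeds 1 removes the first regime.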

But the main issue remains out of the frame - suppose you have fitted the distribution parameters.

How is it extrapolated to the quotes? By arctangent?

DDD

 
What about the choice of polynomial?