Matstat Econometrics Matan - page 11

 
Aleksey Nikolayev:

At the request of the topic starter, I will continue about the maximum likelihood principle. For brevity, I will use the English abbreviation MLE (maximum likelihood estimation).


By definition, the likelihood is the density of the joint distribution. For a sample of size N, it is a numerical function on an N-dimensional space. In addition, it depends on the parameters to be determined (estimated).

Consequently, the question arises: where does this function come from? The answer is "however it happens to"), because it is impossible to cover the whole variety of ways.


Likelihood is the probability of falling within a confidence interval. That is what likelihood is, in simple terms, without any joint distribution density.

 
Alexei Tarabanov:

Likelihood is the probability of falling within a confidence interval. That is what likelihood is, in simple terms, without any joint distribution density.

What exactly is difficult for you about the concept of density?

 

As always - first for normality, then for stationarity and then... as usual...

Oh, and by the way, not everyone calls Gaussian noise white. White refers to the spectrum (uncorrelated samples), Gaussian to the amplitude distribution; they are different properties.

 
Actually, I haven't seen any mention of using SB (random walks) and other noise in quant publications to create anything tradable, except simulations for testing. And that method is recognized as ineffective. Well, there is also simulating something unreal to compare with the real thing and show that it looks similar to the eye, but that is absolutely useless. Quants are used to thinking more realistically and looking for an effect where it really exists. Econometrics is very useful in that it can predict a sine wave, and it is easily replaced by machine learning. And no one has yet figured out what to replace machine learning with. At this point we could end all philosophizing on this topic as unproductive and leading nowhere 😁

In general, those studying SB and econometrics evolve roughly as follows: Kulibin -> Econometrician -> Kulibin/Econometrician = experimentalist Kulibin
 
Doctor:

I read your opus; you've practically proved that no amount of tick manipulation changes the persistence of the series. (chuckles) Congratulations.

What does persistence have to do with it? I didn't mention that word at all... Doc, I'm sorry, but you're even dumber than I thought... It was about maximal preservation of the structure of the series, and negentropy (or entropy) is, as is known, what accounts for structure.

It has been shown that dealing with M1 and higher timeframes is no different from dealing with a driftless Wiener process. And one should apply quite different methods to it than when working with ticks and thinned ticks.

People report success with the Warlock method with certain tweaks....

They have their own hangout and you are quite superfluous in it because you do not understand anything.

You should not touch the market at all, you do not like it, and it does not like you.

I'll stop talking to you.

 
Aleksey Nikolayev:

3) The standard version of the MLE.
Often used as a definition of MLE, but this narrows the applicability of the method too much.
The assumption used is that all random variables in the sample
a) are independent and
b) have the same univariate distribution with density p(x,a),
where a is the parameter to be estimated.
Then the likelihood function L=p(x1,a)*p(x2,a)*...*p(xn,a), where n is the sample size.
Substitute the sample (in the first sense) for the x's, obtain L=L(a), and look for the a_max at which L reaches its maximum.
Note that we can maximise LL(a)=log(L(a)) instead of L(a), because the logarithm is a monotonic function and, conveniently, turns the product into a sum.

For an example, consider the exponential distribution p(x,a)=a*exp(-a*x), log(p(x,a))=log(a)-a*x,
derivative by parameter d(log(p(x,a)))/da=1/a-x.
Thus we need to solve the equation (1/a - x1) + (1/a - x2) + ... + (1/a - xn) = 0, i.e. n/a - (x1 + x2 + ... + xn) = 0 -> a_max = n/(x1 + x2 + ... + xn).
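As a quick numerical check of the closed-form result above, here is a minimal Python sketch (variable names, seed, and tolerances are my own) comparing a_max = n/(x1+...+xn) with a brute-force grid maximisation of the log-likelihood:

```python
import math
import random

random.seed(0)

# Simulate a sample from an exponential distribution with a known rate a_true.
a_true = 2.0
xs = [random.expovariate(a_true) for _ in range(1000)]
n, s = len(xs), sum(xs)

# Closed-form MLE derived above: a_max = n / (x1 + ... + xn).
a_closed = n / s

# Log-likelihood LL(a) = n*log(a) - a*(x1 + ... + xn).
def log_likelihood(a):
    return n * math.log(a) - a * s

# Brute-force maximisation on a grid, for comparison only.
grid = [0.001 * i for i in range(1, 10000)]  # a in (0, 10)
a_grid = max(grid, key=log_likelihood)

print(a_closed, a_grid)  # both should be close to a_true = 2.0
```

The grid search agrees with the analytic formula up to the grid step, which is all the check is meant to show.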

4) Next time I will describe how the method of minimising the sum of absolute deviations is obtained instead of least squares (OLS))

So we are maximising at the centre of the distribution? Essentially at zero sigma?
Or will the maximum not always be around zero sigma?

Forum on trading, automated trading systems and strategy testing

Matstat-Econometrics-Matan

Alexei Tarabanov, 2021.05.14 22:25

Likelihood is the probability of falling within a confidence interval. That is what likelihood is, in simple terms, without any joint distribution density.


And is it the same thing?
Is the probability that the variable comes from a normal distribution == maximum likelihood?
 
Alexander_K2:

And, about econometrics, matstat and matan (God, what names!) I support Automat - this nonsense is applicable only if the individual has grasped the physics of the process. Otherwise it is all nonsense and it is not worth paying attention to.

Amen.

No offence.

They do not understand this. What is more, they do not understand physics at all. It's no use to them.

Let's leave them alone. Let them frolic. We'll watch.

 
Maxim Dmitrievsky:
As a matter of fact, I haven't seen any mention of using SB (random walks) and other noise in quant publications to create anything tradable, except simulations for testing. And that method is recognized as ineffective. Well, there is also simulating something unreal to compare with the real thing and show that it looks similar to the eye, but that is absolutely useless. Quants are used to thinking more realistically and looking for an effect where it really exists. Econometrics is very useful in that it can predict a sine wave, and it is easily replaced by machine learning. And no one has yet figured out what to replace machine learning with. At this point we could end all philosophizing on this topic as unproductive and leading nowhere 😁

In general, those studying SB and econometrics evolve roughly as follows: Kulibin -> Econometrician -> Kulibin/Econometrician = experimentalist Kulibin

No quant will ever publish a working model or approach. Generally they sign an NDA when they get hired.

What they publish either doesn't work anymore or has never worked, but is interesting in terms of theory.

 
Roman:

So we are maximising at the centre of the distribution? Essentially at zero sigma?
Or will the maximum not always be around zero sigma?

Forget about the normal distribution) Not for good, just for a while) It keeps popping up, but in fact there are plenty of distributions, both tabulated and unnamed)

The point of MLE is that we have an infinite number of models "numbered" by the parameter. Based on the result of the experiment (the sample, in the numerical sense), we choose from them the one that maximises the likelihood. Likelihood (the distribution density) is a basic concept of probability theory (it follows directly from its axioms), and one can only get used to applying it, without trying to explain it through other, less basic concepts.

The MLE method is so basic that it has even migrated into machine learning (along with the implicit notion of a joint distribution of features and responses))

This leaves the question of which parametric family of models to work with. This question is usually practical and depends on the object in question.
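To illustrate the point about choosing a parametric family, here is a toy sketch of my own (the data, seed, and candidate families are my assumptions, not from the thread): fit both an exponential and a normal model to the same data by MLE and compare the maximised log-likelihoods; the family that matches the data better attains the larger value.

```python
import math
import random

random.seed(3)

# Toy data: actually exponential with rate 1.5 (an assumption for the demo).
xs = [random.expovariate(1.5) for _ in range(1000)]
n, s = len(xs), sum(xs)

# Candidate family 1: exponential p(x,a) = a*exp(-a*x).
a_hat = n / s                                   # MLE from the earlier post
ll_exp = n * math.log(a_hat) - a_hat * s        # maximised log-likelihood

# Candidate family 2: normal with MLE mean and variance.
mu = s / n
var = sum((x - mu) ** 2 for x in xs) / n
ll_norm = -0.5 * n * (math.log(2 * math.pi * var) + 1)

print(ll_exp > ll_norm)  # exponential should fit exponential data better
```

In practice one would also penalise the number of parameters (e.g. AIC/BIC), but the raw comparison already shows how the "which family?" question can be answered empirically.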

Roman:

And is it the same?

Probability that a variable from a normal distribution == maximum likelihood ?

The confidence interval belongs to the realm of interval estimation of a parameter, where one does not find a particular value of the parameter but an interval into which it falls with a given probability. For example, everyone considers only the numerical value of the Hurst exponent and is very happy that it is not equal to 0.5. But in fact one must show that, with high probability, the Hurst exponent falls into an interval which does not contain 0.5. This is usually a big problem)
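To make that concrete, here is a rough sketch (the estimator choice, seed, and all names are mine, not from the thread): estimate the Hurst exponent of a pure random walk via the variance-of-increments scaling Var(x[t+tau] - x[t]) ~ tau^(2H), repeat over many simulated walks, and look at the spread of the estimates. The resulting interval straddles 0.5, so a single point estimate slightly off 0.5 proves nothing by itself.

```python
import math
import random

def hurst_var(x, lags=(1, 2, 4, 8, 16, 32)):
    """Variance-of-increments Hurst estimate: slope of 0.5*log Var vs log tau."""
    pts = []
    for tau in lags:
        d = [x[i + tau] - x[i] for i in range(len(x) - tau)]
        m = sum(d) / len(d)
        v = sum((u - m) ** 2 for u in d) / len(d)
        pts.append((math.log(tau), 0.5 * math.log(v)))
    # ordinary least-squares slope through the (log tau, 0.5 log Var) points
    k = len(pts)
    mx = sum(a for a, _ in pts) / k
    my = sum(b for _, b in pts) / k
    return sum((a - mx) * (b - my) for a, b in pts) / sum((a - mx) ** 2 for a, _ in pts)

random.seed(1)
ests = []
for _ in range(200):                      # 200 independent random walks
    x = [0.0]
    for _ in range(2000):
        x.append(x[-1] + random.gauss(0, 1))
    ests.append(hurst_var(x))

ests.sort()
lo, hi = ests[4], ests[195]               # empirical ~95% range of the estimator
print(lo, hi)                             # should straddle 0.5 for a random walk
```

The spread (lo, hi) is exactly the kind of interval the post is asking for: only a Hurst estimate whose whole interval avoids 0.5 is evidence of persistence or anti-persistence.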

MLE belongs to the realm of point estimation of a parameter. The problem is slightly different, but, like the previous one, its solution relies on the notion of the joint sampling distribution (in the second sense). Hence the statement "I know confidence intervals, but I don't know the joint distribution density" consists of two mutually exclusive claims)
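To connect the two notions on the thread's own exponential example: the point estimate is a_max = n/(x1+...+xn), and an asymptotic 95% confidence interval around it can be built from the MLE's approximate standard error a/sqrt(n) (this particular construction is my illustration, not something from the thread). A quick simulation checks that such an interval covers the true rate about 95% of the time:

```python
import math
import random

random.seed(2)

a_true = 3.0   # assumed true rate for the simulation
n = 200        # sample size per trial
z = 1.96       # normal quantile for a 95% interval
trials = 500
hits = 0

for _ in range(trials):
    xs = [random.expovariate(a_true) for _ in range(n)]
    a_hat = n / sum(xs)               # point estimate (the MLE)
    half = z * a_hat / math.sqrt(n)   # asymptotic std. error of the MLE is ~a/sqrt(n)
    if a_hat - half <= a_true <= a_hat + half:
        hits += 1

coverage = hits / trials
print(coverage)  # should be close to 0.95
```

Both the point estimate and the interval come from the same sampling distribution, which is the post's point: one cannot have confidence intervals without the joint distribution underneath.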

I suggest that you go through the methods one at a time, rather than making an incomprehensible jumble of them.

 
denis.eremin:

No quant will ever publish a working model or approach. Generally, they sign an NDA when they are hired.

What they publish either doesn't work anymore or has never worked, but is interesting in terms of theory.

That does not invalidate the first thesis. Besides, perfectly workable models do get published.