Machine learning in trading: theory, models, practice and algo-trading - page 206
Interesting fact.
The definitions of the gamma distribution density in the Russian translation of Johnson N.L., Kotz S., Balakrishnan N., Continuous Univariate Distributions, Vol. 1, differ from the earlier English edition:
the English version appears to contain a typo (the signs differ).
This is not a misprint. If you take the trouble to look through several different gamma tutorials, you will see that the "reference" definitions differ: in some the support includes zero, in others it does not.
You do not bother to look at these materials or cite them, unlike @Quantum.
And you refer to Excel and Python instead of giving a concrete example.
So far you are only exercising your wit.
Don't forget to cite the answer from R (if you get one, of course).
How the developers of R explain their results:
dgamma(0,0.5,1)=inf
pgamma(0,0.5,1)=0
If the point 0 is included in the support (as seen in the definition), dgamma gives an infinite density at x=0, and then, when integrating to obtain pgamma(x, 0.5, 1), the infinity is treated as zero, as if it did not exist.
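The behavior described above can be reproduced from the textbook density alone. Below is a minimal sketch, assuming nothing about R's internals, only the formula f(x) = x^(a-1) e^(-x/s) / (s^a Γ(a)); the function name gamma_pdf is invented for the illustration:

```python
import math

def gamma_pdf(x, shape, scale=1.0):
    """Textbook Gamma density f(x) = x^(a-1) * exp(-x/s) / (s^a * Gamma(a))."""
    if x < 0:
        return 0.0
    if x == 0:
        # For shape < 1 the factor x^(shape-1) diverges, so the density is
        # infinite at the boundary -- the same convention R's dgamma follows.
        # For shape == 1 (the exponential) the limit is 1/scale; above 1 it is 0.
        return math.inf if shape < 1 else (1.0 / scale if shape == 1 else 0.0)
    a, s = shape, scale
    return x ** (a - 1) * math.exp(-x / s) / (s ** a * math.gamma(a))

print(gamma_pdf(0.0, 0.5))  # inf, matching dgamma(0, 0.5, 1) in R
```

So dgamma(0, 0.5, 1) = Inf is simply the one-sided limit of the density, not an arithmetic accident.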
How the R developers would explain their results:
Good question, but why ask it here on the forum? According to Renat, you have a whole team of scientists parsing the R algorithms; ask them, and then tell us. Understanding the R sources is your team's direct responsibility if you want to make a full-fledged port.
It now seems to me that all your "analysis of R algorithms" amounts to writing functions with the same parameters as in R, implemented from university textbooks without going into the details. That then leads to misunderstandings like calling "0^0=1" an error.
If the current trend continues, you will end up with functions that have an R-like interface but behave differently under certain conditions because of the different implementations. And anyone who tries to transfer code from R to MQL will get different results, will tire of hunting for the reason, and will give up on the whole thing. Unit tests will reveal only a small fraction of such differences, because they cover only common, non-problematic data.
This is a very strange approach: copying the interface of R, writing your own implementation of the functions without even studying the R sources, and checking the results against Wolfram. What do you even hope to achieve this way?
What you are doing I would call "a self-written MQL statistical library, with an interface copied from R, adapted to Wolfram in unspecified situations". All the other words about R in https://www.mql5.com/ru/articles/2742 are just marketing and have nothing to do with R. Disappointment.
and then, when integrating to obtain pgamma(x, 0.5, 1), the infinity is treated as zero, as if it did not exist.
x=1*10^(-90)
The number is very small but not zero, and no indeterminate forms are involved.
Wolfram gives the same result:
PDF[GammaDistribution[0.5,1], 1*10^(-90)]
5.6419×10^44
CDF[GammaDistribution[0.5,1], 1*10^(-90)]
1.12838×10^-45
Now, paraphrasing your question, without all the infinities in the formulas:
How can you integrate dgamma, which returns huge values like 5.641896e+44, and end up with a very small number, 1.128379e-45?
The answer: you don't.* Integrating dgamma() is not how pgamma() is computed; other formulas are used, and the infinity from dgamma() never enters the calculation.
I understand the calculation of pgamma(0, 0.5, 1) in this case as follows: if you take the infinite set of numbers [0; Inf) and randomly choose one of them, what is the chance of choosing a number <= 0? The answer is 1/Inf, i.e. 0, which matches the result of pgamma(). Correct me if something is wrong; my intuition for infinities and limits is not great.
* I stumbled there, underestimating the rate at which the result of dgamma() shrinks as x decreases; please ignore that statement.
We misled ourselves about R, myself included. You can certainly blame MetaQuotes for this misconception, but the truth is different.
People who use R may remember the history of its ascent to Olympus. Derived wholesale from S in 1993, the whole R system remained "widely known in narrow circles" for another 10 years. Only 10 years after its creation, with 20 years of S history behind it, did a gradual ascent begin in the early 2000s; R broke into the top ten five years ago, and today its only competitor is Python. Today R is a huge system and the de facto standard in the field of statistics.
Hence the conclusion: a true analogue of R within MQL5 is impossible.
What are we dealing with?
We are dealing with a very positive process: the development of MQL5 in terms of mathematical functions. If MetaQuotes manages to enrich the set of mathematical functions with analogues of the functions from R's stats package, this process should only be welcomed. And taking R, rather than any other mathematical package, as the model for imitation is a very good choice. But this is not an import of functions from R: these are newly written functions that are analogues, and they have the right to coincide with the original or not. A mismatch does not detract at all from the importance of the work MetaQuotes has started. And people who decide to port code from R to MQL5 should remember that it is a different implementation from R, with its own nuances, its own bugs, and its own language environment.
So there is no need to compare anything. MQL5 is being extended with statistical functions, and that is fine. If a plot method is added on top of this, it will be a revolution in MQL5's graphics facilities.
PS
You, I, and many other users of R will avoid disappointment in only one case: if the terminal is rewritten and its programming language becomes R.
The first version of plot has already appeared: https://www.mql5.com/ru/forum/97153/page10#comment_3831485
You will have to put up with errors in R. Belief in its infallibility is a bad companion. We have also debunked the myth about the speed of calculations in R: the code there is written very simply and carelessly.
The error in using AS 243 is indisputable, proved by our research into the quality of the results, and confirmed by external materials.
Right now you are arguing only about zero, and you will have to concede here too. You are already trying hard to steer away from the point by suggesting we argue about other things.
Once again: we have done a quality job, dealt with the topic, and covered everything with tests.
In the R language, which correctly traces its lineage to S and has existed for at least 15 years, or even 20, the code for every statistical function is provided by people with advanced degrees: professors and associate professors in statistics departments, largely at American universities. Their calculations are not simply accepted into commits because they were done for free; they are accompanied by scientific publications in refereed journals. And this applies to even the least important function and package. This matters! For example, when I use a function to find the power of a test, I have to supply an argument for the effect size. I read in the documentation that the pooled standard deviation is computed in such-and-such a way. I go online, find the author of the method, read about it... and then I can reason about the results of this function.
dgamma is based on the binomial distribution code provided by Catherine Loader. Her article on the method dates from 2000. You can read it.
And now a question for MQL: you write your own algorithms, though clearly you are borrowing almost all of them. In rare cases you say that an algorithm is not accurate enough, that another algorithm is described in such-and-such a journal, and that you will use that instead. What about the other algorithms? Do you state in the documentation that you are borrowing them? I doubt you are going to reinvent the probability calculation for the binomial distribution...
Are there references like the following in your documentation?
pwr.t2n.test {pwr} R Documentation
Power calculations for two samples (different sizes) t-tests of means
Description
Compute power of tests or determine parameters to obtain target power (similar to power.t.test).
Usage
pwr.t2n.test(n1 = NULL, n2 = NULL, d = NULL, sig.level = 0.05, power = NULL,
             alternative = c("two.sided", "less", "greater"))
Arguments
n1
Number of observations in the first sample
n2
Number of observations in the second sample
d
Effect size
sig.level
Significance level (Type I error probability)
power
Power of test (1 minus Type II error probability)
alternative
a character string specifying the alternative hypothesis; must be one of "two.sided" (default), "greater" or "less"
Details
Exactly one of the parameters 'd','n1','n2','power' and 'sig.level' must be passed as NULL, and that parameter is determined from the others. Notice that the last one has non-NULL default so NULL must be explicitly passed if you want to compute it.
Value
Object of class '"power.htest"', a list of the arguments (including the computed one) augmented with 'method' and 'note' elements.
Note
'uniroot' is used to solve power equation for unknowns, so you may see errors from it, notably about inability to bracket the root when invalid arguments are given.
Author(s)
Stephane Champely <champely@univ-lyon1.fr> but this is a mere copy of Peter Dalgaard work (power.t.test)
References
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale,NJ: Lawrence Erlbaum.
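For readers without R at hand, the role uniroot plays in the Note above can be sketched in Python: solve the power equation for an unknown by bracketed root finding. This toy version uses the normal approximation rather than pwr's exact noncentral-t computation, and the names power_two_sample_z and solve_d are invented for the illustration:

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def power_two_sample_z(d, n1, n2):
    """Approximate power of a two-sided two-sample test at sig.level = 0.05,
    using the normal approximation (pwr.t2n.test uses the noncentral t)."""
    z_crit = 1.959963984540054  # qnorm(0.975)
    ncp = d * math.sqrt(n1 * n2 / (n1 + n2))
    return norm_cdf(ncp - z_crit) + norm_cdf(-ncp - z_crit)

def solve_d(target_power, n1, n2, lo=1e-10, hi=10.0, tol=1e-10):
    """Bisection on the power equation -- the job uniroot does in pwr.
    Power is monotone increasing in d on this bracket, so bisection converges."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if power_two_sample_z(mid, n1, n2) < target_power:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

d = solve_d(0.80, n1=30, n2=40)
print(round(power_two_sample_z(d, 30, 40), 4))  # 0.8
```

As the R documentation warns, a root finder like this fails with an error when the supplied arguments cannot bracket a root, which is exactly the kind of behavior a port must reproduce faithfully.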
And if you borrow code without specifying its source and the author of the method, your work is plagiarism. And how the statistical community will look at you if some Quantum publishes a dissenting article about an "incorrect" density function derived from Catherine's work is a big question. I don't think they will...
For the gamma family of functions:
GammaDist {stats} R Documentation
The Gamma Distribution
Description
Density, distribution function, quantile function and random generation for the Gamma distribution with parameters shape and scale.
Usage
dgamma(x, shape, rate = 1, scale = 1/rate, log = FALSE)
pgamma(q, shape, rate = 1, scale = 1/rate, lower.tail = TRUE,
log.p = FALSE)
qgamma(p, shape, rate = 1, scale = 1/rate, lower.tail = TRUE,
log.p = FALSE)
rgamma(n, shape, rate = 1, scale = 1/rate)
Arguments
x, q
vector of quantiles.
p
vector of probabilities.
n
number of observations. If length(n) > 1, the length is taken to be the number required.
rate
An alternative way to specify the scale.
shape, scale
Shape and scale parameters. Must be positive, scale strictly.
log, log.p
logical; if TRUE, probabilities/densities p are returned as log(p).
lower.tail
logical; if TRUE (default), probabilities are P[X ≤ x], otherwise, P[X > x].
Details
If scale is omitted, it assumes the default value of 1.
The Gamma distribution with parameters shape = a and scale = s has density
f(x)= 1/(s^a Gamma(a)) x^(a-1) e^-(x/s)
for x ≥ 0, a > 0 and s > 0. (Here Gamma(a) is the function implemented by R's gamma() and defined in its help. Note that a = 0 corresponds to the trivial distribution with all mass at point 0.)
The mean and variance are E(X) = a*s and Var(X) = a*s^2.
The cumulative hazard H(t) = - log(1 - F(t)) is
-pgamma(t, ..., lower = FALSE, log = TRUE)
Note that for smallish values of shape (and moderate scale) a large part of the mass of the Gamma distribution is on values of x so near zero that they will be represented as zero in computer arithmetic. So rgamma may well return values which will be represented as zero. (This will also happen for very large values of scale, since the actual generation is done for scale = 1.)
Value
dgamma gives the density, pgamma gives the distribution function, qgamma gives the quantile function, and rgamma generates random deviates.
Invalid arguments will result in return value NaN, with a warning.
The length of the result is determined by n for rgamma, and is the maximum of the lengths of the numerical arguments for the other functions.
The numerical arguments other than n are recycled to the length of the result. Only the first elements of the logical arguments are used.
Note
The S (Becker et al, 1988) parametrization was via shape and rate: S had no scale parameter. In R 2.x.y scale took precedence over rate, but now it is an error to supply both.
pgamma is closely related to the incomplete gamma function. As defined by Abramowitz and Stegun 6.5.1 (and by 'Numerical Recipes') this is
P(a,x) = 1/Gamma(a) integral_0^x t^(a-1) exp(-t) dt
P(a, x) is pgamma(x, a). Other authors (for example Karl Pearson in his 1922 tables) omit the normalizing factor, defining the incomplete gamma function γ(a,x) as gamma(a,x) = integral_0^x t^(a-1) exp(-t) dt, i.e., pgamma(x, a) * gamma(a). Yet others use the 'upper' incomplete gamma function,
Gamma(a,x) = integral_x^Inf t^(a-1) exp(-t) dt,
which can be computed by pgamma(x, a, lower = FALSE) * gamma(a).
Note however that pgamma(x, a, ...) currently requires a > 0, whereas the incomplete gamma function is also defined for negative a. In that case, you can use gamma_inc(a,x) (for Γ(a,x)) from package gsl.
See also https://en.wikipedia.org/wiki/Incomplete_gamma_function, or http://dlmf.nist.gov/8.2#i.
Source
dgamma is computed via the Poisson density, using code contributed by Catherine Loader (see dbinom).
pgamma uses an unpublished (and not otherwise documented) algorithm 'mainly by Morten Welinder'.
qgamma is based on a C translation of
Best, D. J. and D. E. Roberts (1975). Algorithm AS91. Percentage points of the chi-squared distribution. Applied Statistics, 24, 385-388.
plus a final Newton step to improve the approximation.
rgamma for shape >= 1 uses
Ahrens, J. H. and Dieter, U. (1982). Generating gamma variates by a modified rejection technique. Communications of the ACM, 25, 47-54,
and for 0 < shape < 1 uses
Ahrens, J. H. and Dieter, U. (1974). Computer methods for sampling from gamma, beta, Poisson and binomial distributions. Computing, 12, 223-246.
References
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) The New S Language. Wadsworth & Brooks/Cole.
Shea, B. L. (1988) Algorithm AS 239, Chi-squared and incomplete Gamma integral, Applied Statistics (JRSS C) 37, 466-473.
Abramowitz, M. and Stegun, I. A. (1972) Handbook of Mathematical Functions. New York: Dover. Chapter 6: Gamma and Related Functions.
NIST Digital Library of Mathematical Functions. http://dlmf.nist.gov/, section 8.2.
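The relation quoted in the Note above, P(a, x) = pgamma(x, a), can be illustrated with the classic series for the lower incomplete gamma function, γ(a,x) = x^a e^(-x) Σ_n x^n / (a(a+1)...(a+n)). This is a sketch of the mathematics only; it is not R's pgamma, which (per the Source section) uses an unpublished algorithm by Morten Welinder:

```python
import math

def reg_lower_incomplete_gamma(a, x, terms=200):
    """Regularized lower incomplete gamma P(a, x) = gamma(a, x) / Gamma(a),
    via the series gamma(a, x) = x^a * e^-x * sum_n x^n / (a*(a+1)*...*(a+n)).
    Adequate for moderate x; real implementations switch algorithms by region."""
    if x <= 0:
        return 0.0
    total, term = 0.0, 1.0 / a
    for n in range(terms):
        total += term
        term *= x / (a + n + 1)
    # Multiply by x^a * e^-x / Gamma(a), done in log space for stability
    return total * math.exp(a * math.log(x) - x - math.lgamma(a))

# For shape 1/2, P(0.5, x) has the closed form erf(sqrt(x)), a handy cross-check
print(reg_lower_incomplete_gamma(0.5, 2.0))  # close to erf(sqrt(2))
```

Cross-checking such a series against closed forms (erf for a = 1/2, 1 - e^(-x) for a = 1) is exactly the kind of verification a port of these functions needs.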
So, dear ones, porting code is a task on an entirely different level from assembling that code out of an array of works by statistical scientists.
This part is scientific nonsense:
6. Detected calculation errors in R
While testing the calculations in R, an error was found in the density function for the Gamma, ChiSquare and Noncentral ChiSquare distributions at the point x=0.
The probability of the gamma distribution at x=0 is computed correctly (gamma_cdf=0), but by the definition of the gamma probability density, the density of the gamma distribution (the dgamma() function in R) at x=0 should equal 0 (while it shows gamma_pdf=1).
For the ChiSquare and Noncentral ChiSquare functions, the probability density at x=0 is also computed with an error:
At x=0 the dchisq() function returns the non-zero values 0.5 and 0.3032653, while the pchisq() function computes the probabilities correctly (they equal 0).
It should be called: a difference in conventions for evaluating density functions at the boundary point of one-sided distributions. And you should explain to statisticians why you stick to a different convention, and not at the level of a third-year student ("because Wolfram says so").
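The disputed values are in fact the right-hand limits of the densities as x → 0+, which supports reading them as a convention rather than a bug. A quick check using the standard closed forms (it is an assumption here that the two quoted dchisq values correspond to df = 2 with ncp = 0 and ncp = 1):

```python
import math

# Chi-square with df = 2 is Exponential(rate = 1/2): f(x) = 0.5 * exp(-x/2),
# so the density at the boundary is f(0) = 0.5
chisq2_at_0 = 0.5 * math.exp(-0.0 / 2)

# Noncentral chi-square with df = 2 and noncentrality ncp has
# f(0) = 0.5 * exp(-ncp/2); ncp = 1 reproduces the quoted 0.3032653
ncp = 1.0
nc_chisq2_at_0 = 0.5 * math.exp(-ncp / 2)

# Gamma with shape = 1, scale = 1 is Exponential(1): f(0) = exp(-0) = 1
gamma1_at_0 = math.exp(-0.0)

print(chisq2_at_0)     # 0.5
print(nc_chisq2_at_0)  # 0.3032653298563167
print(gamma1_at_0)     # 1.0
```

All three "errors" are exactly the limiting values of the respective densities at the boundary, so R's choice is self-consistent.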
Now, this is the only real blunder I have found that qualifies as an error:
To calculate the probability of the noncentral Student's t-distribution, the R language uses algorithm AS 243, proposed by Lenth [6]. The advantage of this method is the fast recurrent calculation of the terms of an infinite series involving incomplete beta functions. But article [7] showed that, due to an error in the accuracy estimate when summing the terms of the series, this algorithm produces errors (Table 3 in [7]), especially for large values of the noncentrality parameter delta. The authors of [7] proposed a corrected recurrent algorithm for calculating the probability of the noncentral t-distribution.
Our MQL5 statistical library uses the correct algorithm for calculating the probabilities, from article [7], which gives accurate results.
I'm telling you: you are trying as hard as you can to avoid a concrete discussion.
All right, at least you acknowledged one mistake. You only forgot to admit that we have experts capable of running checks, understanding the problem, and finding a more correct solution.
Alexei, wait for an answer from R. And notice how you have stopped answering @Quantum's questions. He is deliberately and neatly leading you toward a known goal.
So far, on our side are Mathematica + Wolfram Alpha + Matlab + MQL5, while on yours is the open-source R, whose code is sloppily written and not at all polished, which is not what you would expect from a 20-year-old project.