This is what we are talking about.
The researchers have chosen a period without a clear trend, which is why the results are interesting.
Briefly - what is in the screenshot?
Right now the discussion is about how the algorithm works.
As for applicability, some task will surely turn up for which it is useful. It won't do for clustering prices, though.
Bayesian regression is essentially the same as a Probabilistic Neural Network (PNN) or General Regression Neural Network (GRNN). If you don't like the normal error distribution, you can replace the exponential kernel with any other distance function, for example exp(-|distance|), exp(-distance^n), etc.; the result will not change much. A rapidly decreasing distance function gives higher weights to closer events in the past. I dabbled with this network and its variants. As a regression it is not particularly suitable. As a classifier it is better, but the result of using it in the market is still no better than any other tool, or a coin flip. Look it up in the forum on 4. People discussed it there in their time.
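A minimal sketch of the kernel-weighted averaging described above (a Nadaraya-Watson / GRNN-style estimator; the function name and toy data are mine, not from the post), showing how easily the kernel can be swapped:

```python
import numpy as np

def kernel_predict(x_train, y_train, x_query, bandwidth=1.0, kernel="normal"):
    """GRNN-style prediction: a kernel-weighted average of past targets.
    A rapidly decreasing kernel gives higher weight to closer events."""
    d = np.abs(np.asarray(x_train, dtype=float) - x_query) / bandwidth
    if kernel == "normal":
        w = np.exp(-d ** 2)   # exponential kernel of squared distance
    else:
        w = np.exp(-d)        # e.g. exp(-|distance|); result changes little
    return float(np.sum(w * np.asarray(y_train, dtype=float)) / np.sum(w))
```

On symmetric data both kernels agree exactly, e.g. `kernel_predict([0, 1, 2], [0, 1, 2], 1.0)` returns 1.0 with either kernel, which illustrates the point that the choice of distance function changes little.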
Especially do not believe university articles on market trading. Most of these articles are written by students to satisfy PhD requirements (3-4 papers plus a thesis). The same applies to the sciences: millions of student articles, and zero value. Trust people working in these industries. Any trader with experience knows more than an MIT professor.
From here: http://datareview.info/article/10-tipov-regressii-kakoy-vyibrat/
Due to the assumption of normality of errors, I question the applicability of this method to financial markets.
In addition, in any model that estimates the dependence only as a hyperplane, there is a chance of missing a non-linear edge that could make the model profitable.
Which error assumption do you think might be appropriate for financial markets?
"Due to the assumption of normality of errors, I question the applicability of this method to financial markets."
Financial markets sell and buy. Errors happen, and that's normal.
// Double treatment of the comment on the quote
)
So the original post can be interpreted differently.
Any mathematical processing of a quote, or any other interpretation of it, is the same thing and should not be done!
Yes! That's the kind of home-style interpretation that's going on here.
But I still wonder who will be the first to produce a result.
No one will.
One should use methods for which the shape of the error density does not matter: non-parametric methods.
In my experiments I don't do regression on price values (or their transformations) at all; I predict the sign, though you could say that is also part of the price information.
My errors look like this:
     0     1
0  0.58  0.42
1  0.43  0.57
Or roughly as originally written:
1 - true, 0 - error: 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1
And the resulting probability distribution should be as different as possible from 0.5 / 0.5.
If such outcomes are mutually independent, we arrive at a binomial distribution, for which there are many, many formulas and statistical tests.
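Under that independence assumption, the outcome sequence above can be checked with a plain binomial test; here is a sketch using only the standard library (the 0.5 null hypothesis means the predictor is no better than a coin flip):

```python
import math

def binom_pvalue(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance of getting at
    least k correct sign predictions by pure coin flipping."""
    return sum(math.comb(n, i) * p ** i * (1 - p) ** (n - i)
               for i in range(k, n + 1))

outcomes = [1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1]    # 1 = correct sign, from the post
pval = binom_pvalue(sum(outcomes), len(outcomes))  # 7 correct out of 11
# pval is about 0.27, so this short sample is statistically
# indistinguishable from a 0.5 / 0.5 coin flip
```

With only 11 trials even 7 hits is unconvincing; the distribution needs to be "as different as possible from 0.5 / 0.5" over a much longer sample before the test rejects the coin-flip null.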
But if I'm going to build some kind of regression model for price, the assumption about the PDF form for the errors shouldn't affect me.
UPD: https://en.wikipedia.org/wiki/Errors_and_residuals
https://en.wikipedia.org/wiki/Robust_statistics
We don't know the error distribution for forex at all. Formally, and strictly speaking, errors are the differences between modelled values and the model values obtained on the general population, i.e. purely theoretical quantities. Residuals are the differences between modelled values and model values on the available sample, but they will hardly be normal either, since financial time series (their returns, to be exact) are not normal (!): they are fat-tailed and leptokurtic, and it is very difficult to model such a fat-tailed, peaked series.
I even took the trouble to plot, for hourly increments, the original distribution (the turquoise one) and a normal distribution with the same mean and sd parameters. As you can see, it is far from normal, and it comes nowhere near passing a normality test.
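The same point can be reproduced numerically. A sketch using a Student-t sample as a stand-in for fat-tailed hourly returns (an assumption on my part; the post used real quote increments):

```python
import numpy as np

def excess_kurtosis(x):
    """Excess kurtosis: near 0 for a normal sample, large and
    positive for a fat-tailed ("peaked") one."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return float(np.mean(x ** 4) / np.mean(x ** 2) ** 2 - 3.0)

rng = np.random.default_rng(0)
fat_tailed = rng.standard_t(df=3, size=100_000)   # stand-in for returns
gaussian = rng.normal(fat_tailed.mean(), fat_tailed.std(), size=100_000)

# The heavy-tailed sample shows strongly positive excess kurtosis,
# while the normal sample with the same mean/sd stays near zero.
```

This is exactly the gap the plot shows: matching mean and sd is not enough, because the tails and the peak of the return distribution are what a normal fit cannot capture.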
Methods that rely on normality of errors are classical 20th-century methods, such as linear regression and analysis of variance. But we can do without them.
Read the wiki.)