Bayesian regression - Has anyone made an EA using this algorithm? - page 39

 
-Aleks-:
That's what you need to think about: for the data to be similar, in my opinion you should take a pattern rather than simply a window of n bars.
I absolutely agree. How many bars to take for the analysis is the Achilles' heel not only of the regressions under discussion. Although I want to calculate not the regression but the probabilities, using Bayes' formula. For now I will simply take the current window of n bars, and at the testing and trial stage I will take the periods from one volatility spike to the next for the likelihood function and the prior probabilities. That is usually the interval between important news events.
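For what it's worth, here is a rough sketch in Python of how such a calculation over a window of n bars could look. The window size and the "direction of the previous bar" feature are purely illustrative assumptions on my part, not the method being discussed:

```python
# Minimal sketch: posterior probability of an up bar given that the previous
# bar was up, estimated from the last n bars via Bayes' formula.
# The feature choice and window size are illustrative assumptions.
import numpy as np

def bayes_up_probability(closes, n=200):
    c = np.asarray(closes[-(n + 1):], dtype=float)
    up = np.diff(c) > 0                      # True where a bar closed higher

    prior_up = up.mean()                     # prior P(up) over the window
    prev_up, cur_up = up[:-1], up[1:]
    # likelihoods: P(previous bar up | current bar up/down)
    lik_up = prev_up[cur_up].mean() if cur_up.any() else 0.5
    lik_down = prev_up[~cur_up].mean() if (~cur_up).any() else 0.5

    evidence = lik_up * prior_up + lik_down * (1.0 - prior_up)
    return lik_up * prior_up / evidence if evidence > 0 else prior_up
```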
 
Alexey Burnakov:
I was recently discussing the history and development of linear regression with colleagues. To make a long story short, initially there were few data and few predictors, and ordinary linear regression coped, given certain assumptions. Then, with the development of information technology, the amount of data grew and the number of predictors could easily exceed tens of thousands. Under these conditions ordinary linear regression will not help - it will overfit. Hence the regularised versions, the versions robust to distributional requirements, and so on.
This is partly correct. L2 regularisation does not help to reduce the number of predictors in the model. Neurocomputing initially used Hebb's learning rule, which led to unbounded growth of the neural network weights. Then, realising that the brain has limited resources for growing and maintaining the weights of its neural subunits, L2 regularisation was added in the 60s and 80s. That kept the weights bounded, but a lot of negligibly small weights remained. This is not how the brain works: a neuron is not connected to every other neuron, not even by negligible weights - it has only a limited number of connections. Then, in the 2000s, L1 and L0 regularisations were introduced, which allow sparse connections. Crowds of scientists began to use linear programming with L1 regularisation for everything from image coding to neural models that describe brain processes quite well. Economists still lag behind the rest of the sciences because of their "arrogance" (everything has already been invented before us) or simply a poor grasp of mathematics.
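A small sketch of the sparsity point in Python, with sklearn's Ridge and Lasso standing in for L2 and L1 regularisation on toy data of my own: L2 shrinks the weights but keeps almost all of them non-zero, while L1 drives most of them exactly to zero.

```python
# Toy comparison: 50 candidate predictors, only 2 of which actually matter.
import numpy as np
from sklearn.linear_model import Ridge, Lasso

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 50))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.standard_normal(200)

ridge = Ridge(alpha=1.0).fit(X, y)   # L2 penalty
lasso = Lasso(alpha=0.1).fit(X, y)   # L1 penalty

print("non-zero weights, ridge:", int(np.sum(np.abs(ridge.coef_) > 1e-6)))
print("non-zero weights, lasso:", int(np.sum(np.abs(lasso.coef_) > 1e-6)))
```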
 
Vladimir:
This is partly correct. L2 regularisation does not help to reduce the number of predictors in the model. Neurocomputing initially used Hebb's learning rule, which led to unbounded growth of the neural network weights. Then, realising that the brain has limited resources for growing and maintaining the weights of its neural subunits, L2 regularisation was added in the 60s and 80s. That kept the weights bounded, but a lot of negligibly small weights remained. This is not how the brain works: a neuron is not connected to every other neuron, not even by negligible weights - it has only a limited number of connections. Then, in the 2000s, L1 and L0 regularisations were introduced, which allow sparse connections. Crowds of scientists began to use linear programming with L1 regularisation for everything from image coding to neural models that describe brain processes quite well. Economists still lag behind the rest of the sciences because of their "arrogance" (everything has already been invented before us) or simply a poor grasp of mathematics.
The only thing I could have got wrong about L2 is that it limits the weights. And that is ridge (Tikhonov) regression. https://www.quora.com/What-is-the-difference-between-L1-and-L2-regularization

But sometimes L1 also becomes preferable because the penalty is on absolute errors rather than squared ones. Squared errors give too much weight to long tails; that is, for quotes with heavy tails, summing squared residuals can hurt the quality of the model. Well, that's just talk.
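A rough illustration of that in Python, on a toy series with Student-t noise as a stand-in for heavy-tailed quotes: the same line is fitted twice, once by minimising squared residuals (L2 loss) and once by minimising absolute residuals (L1 loss), and the two estimates can be compared against the true slope.

```python
# Toy example: L2 (least squares) vs L1 (least absolute deviations) loss
# on data with heavy-tailed noise; everything here is made up for illustration.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = 2.0 * x + rng.standard_t(df=2, size=x.size)   # heavy-tailed noise

slope_l2, intercept_l2 = np.polyfit(x, y, 1)      # minimises squared residuals

def l1_loss(p):
    return np.abs(y - (p[0] * x + p[1])).sum()    # sum of absolute residuals

slope_l1, intercept_l1 = minimize(l1_loss, x0=[0.0, 0.0], method="Nelder-Mead").x

print(f"L2 slope: {slope_l2:.2f}, L1 slope: {slope_l1:.2f} (true slope 2.0)")
```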
 
Vladimir:
Economists still lag behind the rest of the sciences because of their "arrogance" (everything has already been invented before us) or simply a poor grasp of mathematics.
Yes. I personally talked to a manager (a software development manager) who previously worked for a stockbroker. He said price increments are considered normal, and that's that. The methods and misconceptions of the last century are still in use. I told him there is no normality there - not a single test passes. He didn't even know what I was talking about. But then he is not a hardcore mathematician, he is a development manager.
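Checking that claim on real data takes only a few lines; a sketch, assuming a one-dimensional array of price increments (or log-returns) has already been loaded from somewhere:

```python
# Normality checks for a series of price increments.
import numpy as np
from scipy import stats

def check_normality(returns):
    r = np.asarray(returns, dtype=float)
    jb = stats.jarque_bera(r)         # sensitive to skew and kurtosis
    sw = stats.shapiro(r[:5000])      # Shapiro-Wilk is meant for smaller samples
    print(f"Jarque-Bera p-value:  {jb.pvalue:.2e}")
    print(f"Shapiro-Wilk p-value: {sw.pvalue:.2e}")
    print(f"excess kurtosis: {stats.kurtosis(r):.2f} (0 for a normal distribution)")
```

On actual return series both p-values typically come out far below any reasonable significance level.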
 
I have a suspicion that an indicator line (without digging too deep - a plain moving average, for example) is roughly a regression line. At least, it is a rough approximation of one.
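That suspicion is easy to check numerically. The OLS line fitted to a window passes through the point (mean of x, mean of y), so a simple moving average over a window equals that window's regression line evaluated at the window's centre (which is also why the SMA lags the line's endpoint by half a window). A quick check on a made-up random-walk series:

```python
# SMA over a window vs the OLS regression line of that window, evaluated
# at the window's centre - they coincide (the data here are synthetic).
import numpy as np

rng = np.random.default_rng(2)
closes = np.cumsum(rng.standard_normal(500)) + 100.0
n = 20

for t in range(n, len(closes), 100):
    window = closes[t - n:t]
    x = np.arange(n)
    slope, intercept = np.polyfit(x, window, 1)
    sma = window.mean()
    reg_mid = slope * x.mean() + intercept
    print(f"t={t}: SMA={sma:.4f}  regression at window centre={reg_mid:.4f}")
```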
 
Alexey Burnakov:
Yes. I personally talked to a manager (a software development manager) who previously worked for a stockbroker. He said price increments are considered normal, and that's that. The methods and misconceptions of the last century are still in use. I told him there is no normality there - not a single test passes. He didn't even know what I was talking about. But then he is not a hardcore mathematician, he is a development manager.
So what if there is no normality? Even some development manager writes about it, and Vladimir wrote about it here. How can you use regression at all if you don't understand its principles or meaning? You walk around like a zombie in the dark night with this normality/non-normality. The distribution may be made of cubes, squares, zigzags or shaped like a Repin painting - the ability of a regression to predict does not depend on that.
 
Dmitry Fedoseev:
So what if there is no normality? Even some development manager writes about it, and Vladimir wrote about it here. How can you use regression at all if you don't understand its principles or meaning? You walk around like a zombie in the dark night with this normality/non-normality. The distribution may be made of cubes, squares, zigzags or shaped like a Repin painting - the ability of a regression to predict does not depend on that.
That's why we walk around at night: it's easier to think then. Even executives know this. )
 
Yuri Evseenkov:
I absolutely agree. How many bars to take for the analysis is the Achilles' heel not only of the regressions under discussion. Although I want to calculate not the regression but the probabilities, using Bayes' formula. For now I will simply take the current window of n bars, and at the testing and trial stage I will take the periods from one volatility spike to the next for the likelihood function and the prior probabilities. That is usually the interval between important news events.

And what will the probability express: the forecast for the next bar, or the direction of movement of the next few bars?

 
In general, we should first decide on the purpose of building the regression: to find a curve that describes the selected slice of the market as accurately as possible, or to predict the future price position? How can the quality of the approximation determine the accuracy of the forecast?
 
Vasiliy Sokolov:
In general, we should first decide on the purpose of building the regression: to find a curve that describes the selected slice of the market as accurately as possible, or to predict the future price position? How can the quality of the approximation determine the accuracy of the forecast?

And how can you build a curve that describes the past as accurately as possible and, at the same time, predicts the future as accurately as possible?

Or how can you predict the future without analysing the past?

Approximation is the analysis of the past.
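A toy illustration of that question about approximation quality versus forecast accuracy, on a synthetic random walk rather than market data: a higher-degree polynomial approximates the selected slice more closely than a straight line, yet its out-of-sample forecast is usually far worse.

```python
# In-sample fit vs out-of-sample forecast for polynomials of different degree
# on a synthetic random walk; all numbers here are illustrative.
import numpy as np

rng = np.random.default_rng(3)
prices = np.cumsum(rng.standard_normal(260)) + 100.0
past, future = prices[:200], prices[200:]

t = np.linspace(0.0, 1.3, 260)   # rescaled time axis keeps polyfit well conditioned
t_past, t_future = t[:200], t[200:]

for degree in (1, 10):
    coeffs = np.polyfit(t_past, past, degree)
    fit_mae = np.abs(np.polyval(coeffs, t_past) - past).mean()
    forecast_mae = np.abs(np.polyval(coeffs, t_future) - future).mean()
    print(f"degree {degree:2d}: in-sample MAE {fit_mae:.2f}, out-of-sample MAE {forecast_mae:.2f}")
```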