Machine learning in trading: theory, models, practice and algo-trading
Sorry, but you have not yet shown your competence in any of my questions; at least I have not seen it.
And there's no need to write like Captain Obvious and turn everything upside down just to seem important again.
You're confusing the basics. Learn - you need it, for your own good. Learn instead of snapping back...
And I don't need to prove anything to you.
Another cop-out... instead of answering the simple question of whether or not a random forest can extrapolate :D
I gave a clear example: it can't. People argued with me, but no one could explain why, and now it turns out I have to go read a book, because no one knows shit :)
It was a very simple question for those who know a lot about ML, but it turned out that no one does.
And of course everyone is dumb, especially on Habr and all those who write articles, and apparently Leo Breiman is dumb too.
I'm learning and have drawn my own conclusions for myself; I don't need anything more.
If you pose the question like that, you are already demonstrating your level of understanding and awareness.
That's it, goodbye )
Goodbye.)
Bless you. Learn.
Almost right; there is also a bias term that is additionally added to the result.
Most likely the y1, y2, y3 values belong to the hidden layer, and those values themselves should also be used as inputs for the next layer.
Or, if Y1, Y2, Y3 are output values, then several output neurons are used for classification: for example, if the largest value among Y1, Y2, Y3 is Y1, the result is "class 1"; if the largest is Y2, "class 2"; if Y3, "class 3". If the network is used for regression instead of classification, there is only one output neuron. If there are only two classes, you can also manage with a single output neuron (if the result is < 0.5, class 1; if >= 0.5, class 2). A small sketch of these decision rules follows.
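A minimal sketch of those two rules in Python, just for illustration; the y values are made up:

# Sketch of the decision rules described above; the y values are made up.
import numpy as np

def classify_multiclass(y):
    # Pick the class whose output neuron has the largest value (argmax).
    return int(np.argmax(y)) + 1   # classes numbered 1..3

def classify_binary(y):
    # Single-output rule for two classes: threshold at 0.5.
    return 1 if y < 0.5 else 2

print(classify_multiclass([0.2, 0.7, 0.1]))  # -> 2, because Y2 is the largest
print(classify_binary(0.8))                  # -> 2, because 0.8 >= 0.5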
It is very easy to add a sigmoid as the activation function of a neuron; you need a function of the form f(x) = 1 / (1 + e^(-x)).
And with it you already have a full-fledged network with a hidden layer (three perceptrons) and one output perceptron.
result = perceptron4[0]
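A minimal sketch in Python of what such a network could look like; the weights here are random placeholders, and the perceptron4 name is chosen only to match the fragment above:

# Sketch: a 3-hidden-neuron network with sigmoid activation and a bias term.
# All weights and biases are hypothetical placeholders.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([0.5, -1.2, 0.3])           # input vector
W1 = np.random.randn(3, 3)               # hidden-layer weights (3 perceptrons)
b1 = np.random.randn(3)                  # hidden-layer biases
W2 = np.random.randn(1, 3)               # output-layer weights
b2 = np.random.randn(1)                  # output-layer bias

hidden = sigmoid(W1 @ x + b1)            # y1, y2, y3 of the hidden layer
perceptron4 = sigmoid(W2 @ hidden + b2)  # the single output perceptron
result = perceptron4[0]
print(result)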
Thanks for the reply! It's quite informative for me.
I take it that is a bias neuron? The description says it helps when the input is zero. What do you think the bias neuron is for, and what weights should it take? Essentially it is just another weight.
And what is better: to check against the threshold value before the sigmoid transformation, or after?
The number of weights in a network can be in the tens of thousands or more. In MQL and R there are special libraries for creating and training neural networks; it is better to work with those rather than program your own network from scratch.
I meant that, for example, in MQL4 it was possible to optimize up to 15 parameters simultaneously, and in MQL5 more.
And it turns out that one layer is tuned first, then the second layer together with the already-optimized first one, and so on. It would be nice to optimize all the layers at once, but there is not enough computing power for that.
I have a suspicion that when the layers are optimized one by one, some patterns are simply no longer visible to the system.
Even if one layer is fitted well, the next layers are built on the assumptions of the first layer (a toy illustration of this is below).
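A toy illustration of that greedy, layer-by-layer tuning; the "loss" here is a hypothetical surface, chosen only to show why freezing the first layer at its individually best value can miss the joint optimum:

# Toy illustration: greedy layer-by-layer tuning vs. tuning both at once.
import itertools

def loss(w1, w2):
    # Hypothetical loss whose minimum needs w1 and w2 to move together.
    return (w1 * w2 - 1.0) ** 2 + (w1 - w2) ** 2

grid = [i / 10 for i in range(-20, 21)]

# Greedy: tune layer-1 weight alone (layer 2 fixed at 0), then layer 2.
w1_greedy = min(grid, key=lambda w1: loss(w1, 0.0))
w2_greedy = min(grid, key=lambda w2: loss(w1_greedy, w2))

# Joint: tune both at once (what the poster says there is no power for).
w1_joint, w2_joint = min(itertools.product(grid, grid),
                         key=lambda w: loss(w[0], w[1]))

print("greedy:", loss(w1_greedy, w2_greedy))  # 1.0 -- stuck
print("joint: ", loss(w1_joint, w2_joint))    # 0.0 -- finds w1 = w2 = 1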
Another cop-out... instead of answering a simple question: can random forests extrapolate or not :D
And you could just as well ask: are random forests sweet or salty? In general, you can ask a bunch of idiotic questions and even find references for them on the internet.
There would be no need to answer at all if several otherwise systematically educated forum members were not making a mush of the subject.
Random forests CANNOT extrapolate, because the word EXTRAPOLATION does not apply to them at all. Random forests, like other machine learning models, can predict future values, but that is NOT extrapolation; moreover, the term EXTRAPOLATION is not applicable in statistics at all.
And here's why.
Originally, the term EXTRAPOLATION applied to functions, ordinary functions that have a formula.
For example: y = a + b*x.
You can use this formula to calculate the values of the function inside its original domain of definition (interpolation) and outside it (extrapolation).
There are no such formulas in statistics.
And the whole "can a random forest extrapolate" business comes from this, because in statistics the analogue looks like: y ~ a + b*x.
To distinguish a linear regression from a linear function, a tilde is used instead of an equals sign.
This distinction captures the fact that "a" in the linear equation is not the same as "a" in the linear regression, which is exactly what the tilde indicates. The same applies to "b".
While in the first equation "a" is a constant, in the second equation "a" is a mathematical expectation: an estimate that comes with a variance and with a probability for the null hypothesis that the value of "a" we see does not actually exist. If that probability is greater than 10%, the value of "a" can be disregarded.
Now to your question:
- Can we extrapolate from the regression equation?
- No, you can't. But you can predict the future value of a random variable, which will take a value within a confidence interval. If that confidence interval is 95% (5% significance for the null hypothesis), then we get a "y" inside that confidence interval. And if the estimate of "a" comes with a variance comparable to the value itself, then you can't predict anything at all.
I hope I have explained in enough detail why your question, which makes sense for functions, makes no sense in statistics. A small sketch of the interval idea follows.
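For what it's worth, a minimal sketch of "predicting within a confidence interval" using Python's statsmodels; the data here is synthetic, invented only for the illustration:

# Sketch: fitting y ~ a + b*x on synthetic data and predicting a future
# value as an interval rather than a single number.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 100)
y = 2.0 + 0.5 * x + rng.normal(0, 1, size=x.size)

X = sm.add_constant(x)        # the intercept column plays the role of "a"
model = sm.OLS(y, X).fit()
print(model.params)           # estimates of "a" and "b"
print(model.pvalues)          # null-hypothesis probabilities for each

x_new = np.array([[1.0, 12.0]])   # [intercept, new x value]
pred = model.get_prediction(x_new)
print(pred.predicted_mean)        # the point forecast
print(pred.conf_int(alpha=0.05))  # the 95% confidence interval around it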
Now, pay attention, there was no such question... )
There was such a question, as pointed out, for example, by
Dr. Trader:
Extrapolation implies predicting new data beyond the predictor values known during training.
I'll add: not beyond the predictors, but beyond the targets, because if it is only the predictors, we are dealing with interpolation, not extrapolation.
So, a random forest can interpolate (it doesn't even need to normalize the inputs), but it can't extrapolate.
In statistics, extrapolation is the extension of established past trends to a future period (extrapolation in time is used for prospective population calculations), or the extension of sample findings to another part of the population that has not been observed (extrapolation in space).
If you take a regression tree, it cannot extend its results to NEW data, such as quotes above 1.4500: it will always produce a forecast of 1.4500 and never more, and never less than 1.3000, because it was trained on targets in the range 1.3000-1.4500, for example. That follows from the principle by which decision trees are built.
Unlike trees, linear regression and a neural network can easily handle this because they are built on different principles
Once again: new data outside the training interval can be fed to the inputs of a trained RF and it interpolates it perfectly well. But it does not extrapolate on the output, i.e. the predicted values will never go beyond the interval of targets on which it was trained (a sketch of this is below).
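A quick sketch of that behavior with scikit-learn; the data is synthetic, with the 1.3000-1.4500 target range borrowed from the example above:

# Sketch: a random forest's regression output never leaves the range of
# the targets it was trained on, while a linear regression goes beyond it.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
x_train = rng.uniform(0, 10, size=(500, 1))
y_train = 1.3 + 0.015 * x_train.ravel()   # targets span 1.3000..1.4500

rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(x_train, y_train)
lin = LinearRegression().fit(x_train, y_train)

x_new = np.array([[20.0], [50.0], [-10.0]])  # far outside the training inputs
print(rf.predict(x_new))    # stays within roughly [1.3, 1.45]
print(lin.predict(x_new))   # ~1.60, 2.05, 1.15 -- extends past the range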
You didn't understand anything from my post. Nothing at all.
I regret your presence in this thread.