Machine learning in trading: theory, models, practice and algo-trading - page 3354
The probabilities of the model are given by the sigmoid, not this.
Yeah, well, what number do you put in the function, where does it come from?
How did you establish that the classifier gives correct probabilities, and not just values in a range? Do you read what is being written to you?
I've checked it many times. This is the basis for the TC.
Again, if it's not, the model is overtrained.
What you get at the output of the models is not class probabilities. An analogy is regression, which gives a single value. A classifier works on the same principle: it gives a raw value passed through a sigmoid, not a probability.
By passing it through the sigmoid we get the class, not the probability of the class.
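To make the distinction being argued here concrete, a minimal sketch (the raw scores are invented for illustration, not output of any actual model): a binary classifier emits a raw score, the sigmoid squashes it into (0, 1), and only thresholding that value yields a class label.

```python
import numpy as np

def sigmoid(raw):
    # map a raw model score (logit) into the open interval (0, 1)
    return 1.0 / (1.0 + np.exp(-raw))

# hypothetical raw outputs of a binary classifier
raw_scores = np.array([-2.0, -0.1, 0.3, 4.0])

scores = sigmoid(raw_scores)            # values in (0, 1)
classes = (scores >= 0.5).astype(int)   # thresholding gives the class label
print(classes)  # -> [0 0 1 1]
```

Whether the sigmoid-transformed value deserves to be called a "probability" is exactly what the thread is disputing; the code only shows where the class label comes from.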
Are you going to answer a question with a question? I know the unambiguous answer, if anything.
Do you know how the value in the leaves of a CB model is obtained, and can you reproduce it?
The point is that probabilities are estimated from history, but only a theory with a representative sample could guarantee that they will stay that way. We don't have such a sample. Therefore, any adjustments in this direction will not give accuracy on new data. A correction may still be relevant because debris gets into the leaves, and that is what needs correcting, by raising or lowering the sigmoid classification threshold.
Or again, it's not clear what it's about.
If you have found something clever, please share :)
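The "raising or lowering the sigmoid classification point" mentioned above amounts to nothing more than a movable decision threshold. A hedged sketch (the scores and the 0.7 threshold are arbitrary assumptions, not values derived from any model):

```python
import numpy as np

def classify(raw_scores, threshold=0.5):
    # sigmoid-transform raw scores, then cut at an adjustable threshold;
    # raising the threshold above 0.5 demands more confident "class 1" calls
    probs = 1.0 / (1.0 + np.exp(-np.asarray(raw_scores)))
    return (probs >= threshold).astype(int)

raw = [0.2, 0.8, 1.5, -0.4]
default = classify(raw)                 # threshold at 0.5 -> [1 1 1 0]
strict = classify(raw, threshold=0.7)   # shifted point    -> [0 0 1 0]
```

Shifting the threshold changes which borderline examples get classified as class 1; it does not make the underlying values better probabilities.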
I was hoping someone would at least google the tip.
Somehow you're not paying attention to my posts and focusing on probabilities. It doesn't matter what the probability is called; what matters is that if it doesn't improve, the model is overtrained and goes in the bin. The prediction error on OOV, OOS and VNU should be about the same.
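The "errors should be about the same" criterion could be checked mechanically; a minimal sketch (the 0.05 tolerance and the example error rates are arbitrary assumptions):

```python
def looks_overtrained(err_train, err_oos, tol=0.05):
    # flag the model when out-of-sample error exceeds training error
    # by more than the chosen tolerance
    return (err_oos - err_train) > tol

print(looks_overtrained(0.30, 0.31))  # similar errors -> False
print(looks_overtrained(0.10, 0.45))  # large gap      -> True
```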
Here's another histogram
A different algorithm gives a different histogram, although the labels and predictors are the same. If you are looking for some theoretical probability, implying that different classification algorithms will produce the same histograms... that makes no sense to me, since you have to work with specific algorithms: they are what will predict, and they are what must be evaluated, not some theoretical ideal. The main evaluation here is the overfitting of the model, not the closeness of the probabilities to some theoretical ideal.
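The point that different algorithms shape the output distribution differently on the same data can be seen even with synthetic scores. A sketch (the beta-distributed "predictions" are invented for illustration, not real model output):

```python
import numpy as np

rng = np.random.default_rng(0)
# two fake classifiers scoring the same 1000 examples:
preds_a = rng.beta(2.0, 2.0, size=1000)   # hump-shaped, mass near 0.5
preds_b = rng.beta(0.5, 0.5, size=1000)   # U-shaped, mass near 0 and 1

bins = np.linspace(0.0, 1.0, 11)
hist_a, _ = np.histogram(preds_a, bins=bins)
hist_b, _ = np.histogram(preds_b, bins=bins)
# same labels and predictors could sit behind both models,
# yet the two histograms have very different shapes
```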