Machine learning in trading: theory, models, practice and algo-trading - page 3029
What is quantisation?)
There was some code a while back from CatBoost.
In the context that I mention, it is a piecewise evaluation of a range of data in order to identify a piece (quantum segment) whose probability of belonging to one of the classes is x per cent higher than the average over the whole range.
There was some code a while back from CatBoost.
Take a look at it and you'll see what's going on.
There was some code a while back from CatBoost.
It's a complicated description. Sort the column and divide it into 32 parts, for example; if there are duplicates, they are all thrown into the same quantum. If the column contains only 0 and 1, there will be 2 quanta, not 32 (because of the duplicates).

How's the boosting and profit maximisation going?
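The rule described above can be sketched in a few lines. This is only an illustration of the idea (equal-count split points, duplicates merged), not CatBoost's actual implementation; the function name and defaults are my own:

```python
import numpy as np

def quantise(column, n_quanta=32):
    """Sort a column and place equal-count borders; duplicate values
    always fall into the same quantum, so there may be fewer borders."""
    xs = np.sort(np.asarray(column, dtype=float))
    # candidate equal-count split positions along the sorted column
    pos = (np.arange(1, n_quanta) * len(xs)) // n_quanta
    borders = np.unique(xs[pos])        # duplicate values collapse together
    borders = borders[borders > xs[0]]  # a border at the minimum splits nothing
    return borders                      # k borders define k + 1 quanta
```

For a 0/1 column this leaves a single border, i.e. 2 quanta instead of 32, exactly as described; a column with many distinct values keeps all 31 internal borders, i.e. 32 quanta.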
No luck so far, especially for boosting.) It needs smoothness, so that there is a gradient and a Hessian. Profit is not smooth like that, so you need to think about how to smooth it.
The local variant of the single tree, which I wrote about here recently, is enough for me for now.
In the context I mention - a piecewise evaluation of a range of data to identify a chunk (quantum segment) whose probability of belonging to one of the classes is x per cent greater than the average over the whole range.
In essence, it turns out that a tree is separately constructed on each predictor.
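The segment search described above (one predictor at a time, keeping the pieces whose class frequency beats the overall average by some margin) can be sketched roughly as follows. Everything here is illustrative: the function name, the quantile-based bin edges, and the `min_lift` threshold are my assumptions, not the poster's exact algorithm:

```python
import numpy as np

def find_quantum_segments(x, y, n_bins=32, min_lift=0.05):
    """For one predictor x and binary labels y, return the overall
    class-1 rate and the bins whose class-1 rate exceeds it by at
    least min_lift (an absolute margin, e.g. 0.05 = 5 points)."""
    overall = y.mean()                 # average P(class=1) over the whole range
    edges = np.unique(np.quantile(x, np.linspace(0, 1, n_bins + 1)))
    segments = []
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        # last bin is closed on the right so the maximum is not dropped
        mask = (x >= lo) & ((x <= hi) if i == len(edges) - 2 else (x < hi))
        if mask.any():
            p = y[mask].mean()
            if p - overall >= min_lift:
                segments.append((lo, hi, p))
    return overall, segments
```

Running this over every predictor independently is, in effect, the "one tree per predictor" view: each column is scored on its own, without interactions.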
No luck so far, especially for boosting.) You need smoothness there, so you need a gradient and a Hessian. Profit is not smooth like that, so we need to think about how to smooth it.
The local variant of the single tree, which I wrote about here recently, is enough for me for now.
Have you watched the video? The one I gave you the link to?
The man there was just talking about how to convert a non-smooth tree into a smooth one via RL.
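One common way to give "profit" the gradient and Hessian that boosting needs is to make the position size a smooth function of the model's raw score, e.g. a sigmoid. The sketch below is my own illustration of that idea (it is not the method from the video, which used RL): loss_i = -r_i * sigmoid(s_i), where r_i is the trade's return. The function follows the usual custom-objective convention of returning per-sample gradient and Hessian:

```python
import numpy as np

def smooth_profit_objective(scores, trade_returns):
    """Negative smoothed profit: loss_i = -r_i * sigmoid(s_i).
    Returns (grad, hess) w.r.t. the raw scores."""
    sig = 1.0 / (1.0 + np.exp(-scores))
    grad = -trade_returns * sig * (1.0 - sig)                     # d loss / d s
    hess = -trade_returns * sig * (1.0 - sig) * (1.0 - 2.0 * sig) # d2 loss / d s2
    # this loss is not convex, so the Hessian can go non-positive;
    # clamping it is a crude safeguard most boosters need
    hess = np.maximum(hess, 1e-6)
    return grad, hess
```

The clamp is the weak point: a non-convex objective like this only gives the booster a usable second-order approximation locally, which is one reason smoothing profit directly is hard.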
Folk wisdom says you can't see the forest for the trees. I wonder whether you can see a tree by picking at its leaves. I'm not even asking about the forest.
Is this the only algorithm you know? Or is it the most efficient? Why are you fixated on it?
It's a passing thought.
Good luck
The question is quite relevant. For me, the answer is roughly that if the predictors are homogeneous (e.g. pixels of a picture or the last N candles), then the shape of the classes can be arbitrary, so the rules are not very appropriate. If the predictors are heterogeneous (e.g., price and time), then the classes are more likely to have a rectangular shape given by the rules.
Of course, there is no clear justification for this, just a hypothesis.
Did you watch the video? The one I linked to?
There the man was just talking about how to convert non-smooth to smooth via RL
It's different maths, I think. I can't explain it well because I don't fully understand it myself. In boosting the gradient is taken with respect to the function, but in the video it is the usual gradient with respect to the network weights.
The data range, or the range of values of the feature?
The range of values of the predictor that describes the data.
I have practically described the algorithm here - there is a picture with RSI.
That's a complicated description. Sort the column and divide it into 32 parts, for example; if there are duplicates, they are all thrown into the same quantum. If the column contains only 0 and 1, there will be 2 quanta, not 32 (because of the duplicates).
You mean the method, and I mean the goal. The methods can differ. Let me put it this way - empirical methods are often better than mathematical ones. Perhaps because we don't have complete data on the general population.