Machine learning in trading: theory, models, practice and algo-trading - page 2975
So far the attempts have been unsuccessful. Do you want to give it a try?
No. Try it yourself (the next calculations are running anyway and they will take time); the languages are similar.
You have given the simplest method - yes, it is not difficult.
The other methods are just as simple in essence. I described a couple of options in the text: skipping duplicates, and recalculating the base quantum size to account for duplicates. There is also splitting by range rather than by count, and combinations of these methods.
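As a rough illustration of the two splitting schemes mentioned (by count, with borders collapsed by duplicates removed, and by range), here is a minimal sketch; the function names are mine, not from any library:

```python
import numpy as np

def quantize_equal_count(x, n_bins):
    """Split by count: each bucket gets ~len(x)/n_bins values.
    Long duplicate runs can collapse several borders into one,
    so duplicated borders are skipped afterwards."""
    xs = np.sort(np.asarray(x, dtype=float))
    step = len(xs) / n_bins
    borders = [float(xs[int(i * step)]) for i in range(1, n_bins)]
    return sorted(set(borders))          # skip duplicated borders

def quantize_equal_range(x, n_bins):
    """Split by range: equal-width intervals between min and max,
    regardless of how many values fall into each interval."""
    lo, hi = float(np.min(x)), float(np.max(x))
    return [float(b) for b in np.linspace(lo, hi, n_bins + 1)[1:-1]]

x = np.array([1, 1, 1, 1, 2, 3, 5, 8, 13, 21])
print(quantize_equal_count(x, 4))   # -> [1.0, 3.0, 8.0] (a border was absorbed by duplicates)
print(quantize_equal_range(x, 4))   # -> [6.0, 11.0, 16.0]
```

Combining the two (range splitting inside count-based segments, or vice versa) is then a matter of applying one function to the output segments of the other.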
The problem is not the mechanical execution but how to calculate the non-uniform grids. Probability density distributions are constructed there, and it is these that are then quantised, as I understand it.
I don't have much free time and prefer to spend it coding what I already understand how to do. Maybe, if this were the last remaining task, I would sit down and grind at it for days on end, but for now I prefer to move on to other directions of this project. It is a good thing that the quantisation tables can be exported.
Probability density distributions are constructed there and they are already quantised,
What quantisation method? They're all on that page.
The GreedyLogSum method, for example: you can see that the grid is not uniform. I assume a lognormal distribution is fitted to the sample metrics by approximation, and then the grid is somehow built on it. I can't read the formulas.
Here are the formulas in detail.
You can talk, but I'm not sure you can hear.
Yes, the decision tree idea could be a working idea for building a quantum table. Thank you for the idea!
Even supposing I find a suitable (and unfamiliar to me) package and manage to build a tree, I would then need to deal with loops in R and with saving the trees. And in what format are they saved? Probably as rules, which means I would need to write a parser that transforms those rules into the required format.
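The decision-tree idea can also be sketched without any external package or rule parser: a tiny greedy one-dimensional "tree" whose split thresholds become the quantum table. This is my own illustration (the names `best_split` and `tree_borders` are hypothetical), not the R workflow discussed above:

```python
import numpy as np

def best_split(x, y):
    """Return the threshold that minimises within-group variance of y,
    or None if no split is possible (all x values equal)."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    best_t, best_score = None, np.inf
    for i in range(1, len(xs)):
        if xs[i] == xs[i - 1]:
            continue                      # cannot split inside a duplicate run
        left, right = ys[:i], ys[i:]
        score = left.var() * len(left) + right.var() * len(right)
        if score < best_score:
            best_score, best_t = score, float((xs[i - 1] + xs[i]) / 2)
    return best_t

def tree_borders(x, y, depth=2):
    """Recursively split a single feature by target variance and
    collect the thresholds as a quantisation table."""
    if depth == 0 or len(x) < 4 or len(np.unique(y)) < 2:
        return []
    t = best_split(x, y)
    if t is None:
        return []
    left = x <= t
    return sorted(tree_borders(x[left], y[left], depth - 1)
                  + [t]
                  + tree_borders(x[~left], y[~left], depth - 1))

x = np.arange(10, dtype=float)
y = (x >= 5).astype(float)            # target jumps at x = 5
print(tree_borders(x, y, depth=2))    # -> [4.5]
```

The same thresholds could of course come from an R tree; the point is only that the borders are the tree's split points, however they are obtained.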
Wouldn't it be easier for me to solve the problem with a histogram of uniform 0.5% intervals, merging histogram columns that are similar in metrics/conditions?
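A sketch of that suggestion, assuming "combining columns" means merging adjacent histogram bins whose target statistics are similar; the function names and the mean-target criterion are my assumptions, not anything from the thread:

```python
import numpy as np

def percentile_grid(x, step=0.5):
    """Borders at every `step` per cent of the sample (0.5% -> 200 bins)."""
    qs = np.arange(step, 100, step)
    return np.unique(np.percentile(x, qs))

def merge_similar(borders, x, y, tol=0.01):
    """Drop a border when the mean target on its two sides differs by
    less than tol, i.e. merge neighbouring bins with similar metrics."""
    bins = np.digitize(x, borders)
    means = [y[bins == b].mean() for b in range(len(borders) + 1)]
    return [float(borders[i]) for i in range(len(borders))
            if abs(means[i] - means[i + 1]) >= tol]

x = np.linspace(0.0, 1.0, 10000)
y = (x > 0.5).astype(float)            # target changes only at x = 0.5
borders = percentile_grid(x)           # 199 borders, 0.5% apart
kept = merge_similar(borders, x, y)
print(len(borders), len(kept), kept)   # almost all borders merge away
```

On this toy target only the border next to 0.5 survives; on real data the tolerance would decide how aggressively the 200 bins collapse.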
And in general, my initial question was about metrics that characterise the part of the sample falling into a quantum cut-off. If you have no ideas in this direction, or don't want to think about it, just say so.
Otherwise it all turns into the usual posturing here; that's why this thread has turned to rubbish.
That simple function will also produce a grid that is non-uniform in values. Uniform is Uniform.
∑_{i=1}^{n} log(weight_i), where:
- n — the number of distinct objects in the bucket;
- weight_i — the number of times object i is repeated in the bucket.
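The quoted score can be computed directly. This is only a sketch of the formula as written, not CatBoost's actual implementation (the function name is mine):

```python
import math
from collections import Counter

def greedy_log_sum_score(bucket):
    """Sum of log(weight) over the distinct values in a candidate bucket,
    where weight is the repeat count of each value."""
    counts = Counter(bucket)
    return sum(math.log(w) for w in counts.values())

# A bucket full of duplicates scores higher than an all-distinct one,
# which is consistent with the grid being compressed where values repeat:
print(greedy_log_sum_score([1, 1, 1, 1, 2, 2]))  # log(4) + log(2) ≈ 2.079
print(greedy_log_sum_score([1, 2, 3, 4, 5, 6]))  # six terms of log(1) -> 0.0
```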
It works with the number of repeats/duplicates, so it is all much the same. I couldn't find the function itself (on a quick browse), so I can't say for sure... I described the options for accounting for duplicates earlier; I think it is one of them, or something close.
I think it is just a matter of taking the weights of repeated values into account: a kind of bulkiness appears there, and the grid is compressed on that segment.
I think you would be able to figure it out!
Maybe, but I don't see anything worthwhile in it. I don't use quantisation at all; I prefer to work with float data.
As far as I understand, "quantisation" (histograms) is used in boosting for speed, so that there are fewer candidate splits. If so, the approach is good for its universality, but it may be bad in a particular case: the real boundary may be lost.
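A toy demonstration of that last point: with an equal-width grid, the nearest available split can miss the true boundary. The 0.37 boundary and the helper name are made up for the example:

```python
import numpy as np

def bin_borders_uniform(x, n_bins):
    """Equal-width histogram borders, as histogram-based boosting uses for speed."""
    lo, hi = float(x.min()), float(x.max())
    return np.linspace(lo, hi, n_bins + 1)[1:-1]

# Suppose the true decision boundary of this toy target is x = 0.37.
x = np.linspace(0.0, 1.0, 1001)
y = (x > 0.37).astype(int)

borders = bin_borders_uniform(x, 10)     # 0.1, 0.2, ..., 0.9
# After quantisation, the tree can only split at one of the borders;
# the closest available one is 0.4, so the real boundary 0.37 is lost.
closest = float(borders[np.argmin(np.abs(borders - 0.37))])
print(closest)   # -> 0.4
```

With more bins the error shrinks, which is exactly the speed/precision trade-off being discussed.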