Machine learning in trading: theory, models, practice and algo-trading - page 3030
Essentially, it turns out that a tree is built on each predictor separately.
Yes, that's how C4.5 trees are built for discrete values. One split.
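To make the idea concrete, here is a minimal sketch (my own illustration, not C4.5 itself - it uses Gini impurity instead of gain ratio): a single-split "stump" fitted to each predictor independently.

```python
import numpy as np

def gini(labels):
    # Gini impurity of a label vector (0 for a pure node)
    if len(labels) == 0:
        return 0.0
    p = np.bincount(labels) / len(labels)
    return 1.0 - np.sum(p ** 2)

def best_single_split(x, y):
    # One split on one predictor: pick the threshold with the lowest weighted impurity
    best_thr, best_score = None, np.inf
    for thr in np.unique(x):
        left, right = y[x <= thr], y[x > thr]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if score < best_score:
            best_thr, best_score = thr, score
    return best_thr, best_score

# Fit one stump per predictor and compare them
X = np.random.rand(200, 3)             # three hypothetical predictors
y = (X[:, 0] > 0.5).astype(int)        # target actually depends on the first one
for j in range(X.shape[1]):
    thr, score = best_single_split(X[:, j], y)
    print(f"predictor {j}: split at {thr:.3f}, weighted impurity {score:.3f}")
```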
You're talking about the method, I'm talking about the goal. There can be different methods. Let me put it this way: empirical methods are often better than mathematical ones. Probably because we don't have complete data on the general population.
For non-stationary data there is no concept of a "general population" at all, nothing but tails. That's the whole problem, which is why any estimates obtained in training are extremely difficult to reproduce in the future.
We don't know it. More precisely, we don't know the true density of the distribution; we only observe samples from it - hence such fluctuations...
I don't live by concepts :)
So tell me what you call a phenomenon that we cannot observe because we are inside it, even though it was completed long ago in the far reaches of space....
For non-stationary data there is no concept of a "general population" at all, nothing but tails. That's the whole problem, which is why any estimates obtained in training are extremely difficult to reproduce in the future.
That's right, SanSanych.
Non-stationary data are always subject to the cumulative effects of other non-stationary data, on which the tails will depend.
The range of predictor values that describes the data.
Here I have practically described the algorithm - there is also a picture with RSI.
I get it. Separate everything and everyone and study it separately.
I don't understand why they're quantum.
Because the kid doesn't live by concepts, as he wrote )
Well, it's probably the translators' fault - a matter of terminology.
There is quantisation and its various methods; the table containing the division points - the quantum table - that term comes from the CatBoost documentation.
The quantum segments come from the quantum table, but the extreme ones have limits - that term is already my own invention.
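To show roughly what I mean by a quantum table and quantum segments, here is a hand-made sketch (quantile-based borders are my own assumption for illustration; CatBoost itself offers several border-selection methods):

```python
import numpy as np

def build_quant_table(x, n_borders=8):
    # Division points ("quantum table") for one predictor, here simply interior quantiles
    qs = np.linspace(0, 1, n_borders + 2)[1:-1]
    return np.unique(np.quantile(x, qs))

x = np.random.randn(1000)                  # hypothetical predictor values
borders = build_quant_table(x)
segments = np.digitize(x, borders)         # index of the quantum segment for each value
# The outer segments are open-ended: below the first border and above the last one
print("borders:", np.round(borders, 3))
print("values per segment:", np.bincount(segments))
```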
Not quantum, but quantised, probably - as here.
5.4 Quantisation of convolutional neural networks
Classically, due to obvious optimisation difficulties, quantisation of neural networks does not operate on plain integers, but rather on an approximation of floating-point numbers through integers. An approach widely used in the literature [52, 54, 60] for approximating floating-point numbers by integers of arbitrary bit depth is the algorithm proposed in Google's GEMMLOWP library [59]. Given an input array X, boundary values [v_min, v_max] and a number of bits M, the result is defined as follows:
scale = (v_max - v_min) / 2^M,    (14)
zero_point = round(min(max(-v_min / scale, 0), 2^M)),    (15)
output = round(X / scale + zero_point).    (16)
Thus, for each array of floating-point numbers we obtain an integer array output, an integer zero_point that exactly represents zero, and a double-precision number scale that defines the quantisation scale.
https://dspace.susu.ru/xmlui/bitstream/handle/0001.74/29281/2019_401_fedorovan.pdf?sequence=1&isAllowed=y
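If it helps, formulas (14)-(16) read directly as code; this sketch assumes the boundary values are [v_min, v_max] and the bit depth enters as 2^M, as in the usual GEMMLOWP-style scheme.

```python
import numpy as np

def quantise(X, v_min, v_max, M):
    # Integer approximation of floating-point values, formulas (14)-(16)
    levels = 2 ** M
    scale = (v_max - v_min) / levels                          # (14)
    zero_point = round(min(max(-v_min / scale, 0), levels))   # (15) integer that represents zero exactly
    output = np.round(X / scale + zero_point).astype(int)     # (16)
    return output, zero_point, scale

def dequantise(output, zero_point, scale):
    # Approximate recovery of the original floating-point values
    return (output - zero_point) * scale

X = np.array([-0.9, -0.1, 0.0, 0.4, 0.95])
q, zp, s = quantise(X, v_min=-1.0, v_max=1.0, M=8)
print(q, zp, s)
print(dequantise(q, zp, s))
```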
I'm telling you, it's a matter of translation - they're all synonyms. Here are the CatBoost settings.
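For example, the quantisation-related parameters look like this (values are illustrative only; border_count and feature_border_type are the names used in the CatBoost documentation):

```python
from catboost import CatBoostClassifier

# border_count sets how many division points (borders) are built per numeric feature,
# feature_border_type selects the method used to place them.
model = CatBoostClassifier(
    iterations=500,                      # illustrative value
    depth=6,                             # illustrative value
    border_count=254,                    # number of borders per feature
    feature_border_type='GreedyLogSum'   # border-selection method
)
```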