Machine learning in trading: theory, models, practice and algo-trading - page 3519
The original idea is that if there is some structure in the labeling of the data and the dataset is profitable, that structure will persist on new data.
Whereas if there is no structure in the labels, the model is simply fitted to noise.
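A quick way to test this idea is to compare out-of-sample results on the real markup against the same markup with shuffled labels: if the labels only encode noise, the two runs should look about the same. The sketch below is a minimal illustration on synthetic data; the model, split and dataset are my own assumptions, not anyone's actual setup.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def oos_score(X, y, seed=0):
    # Train on the first half of the series, score on the held-out second half
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, shuffle=False)
    model = GradientBoostingClassifier(random_state=seed).fit(X_tr, y_tr)
    return accuracy_score(y_te, model.predict(X_te))

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))
y = (X[:, 0] + 0.1 * rng.normal(size=2000) > 0).astype(int)  # markup with real structure

print("OOS accuracy, real labels:    ", round(oos_score(X, y), 3))
print("OOS accuracy, shuffled labels:", round(oos_score(X, rng.permutation(y)), 3))  # same labels, structure destroyed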
I changed the automatic partitioning method, and entropy decreased compared with the other approach.
I will do more detailed tests with results later. For now it's hard to compare.
The problem is not the features, but the quality of the labeling.
Unfortunately, the problem is in both.
You need other datasets; you can run the calculation on yours.
You have to measure it on the training set before training and see how this metric relates to OOS trading. When entropy decreases, OOS results should improve.
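As a sketch of that workflow (everything below is an assumption for illustration: the toy data, the random-forest model and the simple pattern-entropy statistic standing in for "the metric"): measure the statistic on the training labels only, train, and then check how it lines up with out-of-sample results.

import numpy as np
from collections import Counter
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

def pattern_entropy(y, m=3):
    # Shannon entropy (bits) of length-m label patterns in the sequence y
    counts = Counter(tuple(y[i:i + m]) for i in range(len(y) - m + 1))
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
n, split = 3000, 2000
X = rng.normal(size=(n, 8))
smooth = np.convolve(X[:, 0], np.ones(5) / 5, mode="same")

markups = {
    "persistent": (smooth > 0).astype(int),    # labels come in runs and are partly learnable from X
    "random":     rng.integers(0, 2, size=n),  # pure-noise markup
}

for name, y in markups.items():
    h = pattern_entropy(y[:split])             # measured on the train part, before training
    clf = RandomForestClassifier(random_state=0).fit(X[:split], y[:split])
    oos = accuracy_score(y[split:], clf.predict(X[split:]))
    print(f"{name:10s}  train-label entropy={h:.3f}  OOS accuracy={oos:.3f}")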
I took three samples and measured the metric. Or what do you mean?
Different samples and a different labeling.
Isn't this metric affected by class balance?
ChatGPT:
Permutation entropy measures the complexity, or unpredictability, of a time series based on the ordinal patterns (permutations) of its values. The measure is based on how frequently the different permutations occur in the data series.
Whether permutation entropy depends on class balance comes down to which classes appear in your time series and how often they occur. If the classes occur about equally often, the distribution of permutation patterns tends to be more even, which gives a higher permutation entropy.
However, if one or more classes significantly dominate the others, the distribution of patterns becomes more uneven, and hence the permutation entropy is lower.
Thus, balanced classes can be expected to give a higher permutation entropy and unbalanced classes a lower one, although this depends on the specific data and its distribution.
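For reference, here is a minimal permutation-entropy sketch in the Bandt-Pompe (ordinal pattern) sense. The order m, the delay and the two toy series are my own illustrative assumptions; a regular series should score well below an unpredictable one.

import math
import numpy as np
from collections import Counter

def permutation_entropy(x, m=3, delay=1, normalize=True):
    # Shannon entropy of the ordinal patterns (permutations) of order m found in x
    x = np.asarray(x, dtype=float)
    patterns = Counter(
        tuple(np.argsort(x[i:i + m * delay:delay]))
        for i in range(len(x) - (m - 1) * delay)
    )
    p = np.array(list(patterns.values()), dtype=float)
    p /= p.sum()
    h = float(-(p * np.log(p)).sum())
    return h / math.log(math.factorial(m)) if normalize else h

rng = np.random.default_rng(0)
noise = rng.normal(size=5000)                    # unpredictable series
wave = np.sin(np.linspace(0, 20 * np.pi, 5000))  # highly regular series

print("white noise:", round(permutation_entropy(noise), 3))  # close to 1, near-maximal complexity
print("sine wave:  ", round(permutation_entropy(wave), 3))   # much lower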