Machine learning in trading: theory, models, practice and algo-trading - page 635
With your help, maybe we will figure it out :-) So, if I understood correctly, we should pick those inputs whose metric hovers near zero, on either side. Is that right?
:)))) In this case we should call the Sorcerer for help :)))).
I can only say one thing: it is the negentropy that governs the trend/flat state. A trend is the "memory" of the process, the "tail" of its distribution, and there the negentropy is huge, while in a flat market it is close to zero. I am only starting to work with it myself, but I understand the importance of this little-studied parameter.
What can I say. A preliminary result...
If you choose the model by the principle of proximity to zero, no matter from which side, then out of 24 trades there are 9 errors.
If I choose by the principle of the smallest value, there are only 7 mistakes out of 24. Also, with an extremely negative entropy the model was right once and wrong once. But again, this is a naive entropy calculation, while what we really need is mutual information. I think it is this metric that can clarify a lot: which models are garbage and which are on a pedestal.
Can someone explain what you need to do with the data to calculate mutual information (MI)?
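Not an answer from the thread, but one standard recipe for the question above: estimate mutual information from two columns by binning them into a joint histogram and using I(X;Y) = H(X) + H(Y) - H(X,Y). A minimal Python sketch (the bin count and toy data are my assumptions, not anything from the thread):

```python
import numpy as np

def mutual_information(x, y, bins=10):
    """Estimate I(X;Y) in bits from two columns of data.

    Uses I(X;Y) = H(X) + H(Y) - H(X,Y), with all entropies
    computed from histogram-based probability estimates.
    """
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = joint / joint.sum()          # joint probabilities
    p_x = p_xy.sum(axis=1)              # marginal of X
    p_y = p_xy.sum(axis=0)              # marginal of Y

    def entropy(p):
        p = p[p > 0]                    # 0 * log(0) is taken as 0
        return -np.sum(p * np.log2(p))

    return entropy(p_x) + entropy(p_y) - entropy(p_xy.ravel())

# toy check: a column against a noisy copy of itself vs. independent noise
rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = x + 0.1 * rng.normal(size=5000)    # strongly dependent: MI is large
z = rng.normal(size=5000)              # independent: MI is near zero
print(mutual_information(x, y), mutual_information(x, z))
```

Note that the histogram estimator has a small positive bias on independent data, so "near zero" in practice means a small positive number that shrinks with more samples.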
Brilliant, Mikhail!
If you manage to build a trading system with entropy/negentropy before I do, put up a PAMM account or a signal! I will be the first to sign up for it, honestly.
If you choose a model by the principle of closeness to zero, no matter from which side, then 9 mistakes out of 24 trades.
These statistics are not enough: the sample should be increased at least 100-fold.
If we follow the principle of the smallest value, only 7 errors out of 24.
Try the largest value: maybe it will not be much worse?
Unfortunately I cannot optimize on these metrics, because I use an out-of-the-box optimizer, but pre- and post-processing (model selection), I think, I can. I need help calculating mutual information on an example, though. After some research we could draw conclusions. At the very least we could answer the main question: whether these metrics are relevant in data preparation before training, and in model selection after training...
Alexander, can you explain it?
Well, that's just me... a quick note. Personally I think the following...
If with the help of mutual information we can select the inputs that carry the maximum information about the outputs, then models built on such inputs will work more often than not. And then, while the model is running, use MI on the feedback to track the moment when the model loses relevance. After all, that may be temporary: I have noticed that after a series of errors the model starts working correctly again, as if nothing had happened. I think a metric like MI can help us all in such a difficult case... All that remains is to calculate the conditional entropy... Does anyone know how to do that with two columns of Excel?
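The thread never answers the two-columns question, but the textbook identity H(Y|X) = H(X,Y) - H(X) needs nothing beyond counting values and pairs. A hedged sketch in Python rather than Excel/VBA (the toy data is mine):

```python
import numpy as np
from collections import Counter

def conditional_entropy(x, y):
    """H(Y|X) = H(X,Y) - H(X), in bits, for two discrete columns."""
    def entropy(items):
        counts = np.array(list(Counter(items).values()), dtype=float)
        p = counts / counts.sum()          # empirical probabilities
        return -np.sum(p * np.log2(p))
    # joint entropy of the pair column minus marginal entropy of x
    return entropy(list(zip(x, y))) - entropy(x)

# toy example: y is fully determined by x, so H(Y|X) must be 0
x = [0, 0, 1, 1, 0, 1]
y = [1, 1, 0, 0, 1, 0]   # y = 1 - x
print(conditional_entropy(x, y))   # → 0.0
```

The same counting logic translates to a spreadsheet: one pivot over column X, one pivot over the concatenated pair (X, Y), then the two entropy sums and a subtraction.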
You think I've been up all night doing nothing? No, I've been working in VBA. I can't say I'm a guru, but I can already pull off quite a few tricks. I've got the entropy calculation in there; all that's left is the conditional entropy and it's done...
See, Mikhail, how I do it:
I calculate the probabilities of occurrence of an event, i.e. of this or that increment in the time series.
For example, for the AUDCAD pair:
Then, for a certain sample size of consecutively obtained increments, I calculate the negentropy according to the formula from https://ru.wikipedia.org/wiki/Негэнтропия
I noticed that when H(x) increases sharply, the trend starts.
But, I repeat, my research is only at the very beginning, and it is still a long way from the loud statements I usually like to make.
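The procedure described above (estimate increment probabilities, then apply the negentropy formula from the Wikipedia page, J(X) = H(Gaussian with the same variance) - H(X)) could be sketched as follows. This is my Python reading of it, not the author's Excel setup; the histogram estimator, bin count, and test distributions are assumptions:

```python
import numpy as np

def negentropy(increments, bins=30):
    """J(X) = H(gaussian with same variance) - H(X), in nats.

    H(X) is a histogram estimate of differential entropy; the
    Gaussian term is the closed form 0.5*ln(2*pi*e*sigma^2).
    In theory J >= 0, since the Gaussian maximizes entropy
    among distributions with a given variance.
    """
    x = np.asarray(increments, dtype=float)
    counts, edges = np.histogram(x, bins=bins)
    width = edges[1] - edges[0]
    p = counts / counts.sum()
    p = p[p > 0]
    h_x = -np.sum(p * np.log(p)) + np.log(width)   # differential entropy
    h_gauss = 0.5 * np.log(2 * np.pi * np.e * x.var())
    return h_gauss - h_x

rng = np.random.default_rng(1)
gauss = rng.normal(size=10_000)
bimodal = np.concatenate([rng.normal(-3, 1, 5_000), rng.normal(3, 1, 5_000)])
print(negentropy(gauss))     # close to 0: a Gaussian has zero negentropy
print(negentropy(bimodal))   # clearly positive: far from Gaussian
```

On this reading, "huge negentropy in a trend" would mean the increment distribution drifting far from the Gaussian shape, e.g. growing a heavy one-sided tail.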
Surprisingly, you talk about negentropy as a separate calculation; I simply calculate the entropy and it comes out negative. How do you explain that?
And about extreme values you are absolutely right. In my observations there are two extrema among 25 signals: one is -923 and the other is -1233, and exactly those signals were super-trends.
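One plausible explanation for the entropy "coming out negative" (a guess, assuming differential entropy over continuous increments is what is being computed): unlike discrete Shannon entropy, differential entropy goes negative as soon as the distribution is narrow enough, and raw FX increments are very narrow. A small check using the Gaussian closed form:

```python
import numpy as np

def gaussian_diff_entropy(sigma):
    """Differential entropy of N(0, sigma^2) in nats: 0.5*ln(2*pi*e*sigma^2)."""
    return 0.5 * np.log(2 * np.pi * np.e * sigma**2)

# Differential entropy turns negative once sigma < 1/sqrt(2*pi*e) ≈ 0.242,
# so tiny increment scales give large negative values.
for sigma in (1.0, 0.01, 0.0001):   # 0.0001 ~ typical FX increment scale
    print(f"sigma={sigma}: H = {gaussian_diff_entropy(sigma):.2f} nats")
```

So a negative entropy value on increments is not a bug in itself; it mostly reflects the scale of the data, which is one more reason to compare against the same-variance Gaussian (negentropy) rather than read the raw sign.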
I don't know yet. I look at negentropy as an additional parameter alongside the Hurst exponent, skewness, kurtosis, etc., and this parameter is the most mysterious and, how shall I put it, awesome, yes.