Machine learning in trading: theory, models, practice and algo-trading - page 3518
There is, in general, an ingeniously simple way to determine whether your ML model will work on new data.
To do this, you only need the set of labels :) because they are a binarised representation of the original series.
If the model overfits, it means it should be trained less.
It seems that if I am not heard, then others are not heard either - everyone here is talking to himself.
I wrote about the quantum segment - in fact, it is simply the range of some predictor, nothing more. We take that range, take all the zeros and ones from the sample, weight them for class balance, and plot the balance: if the label is "1" we add +1; if it is "0" we add -1*K_Balance.
In effect, this is a graph of how the probability bias changes over time, across the chronology of the sample.
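The balance curve described above can be sketched as follows. This is a minimal interpretation, not the author's exact code: the name `label_balance_curve` and the choice of K_Balance as the ratio of ones to zeros are assumptions.

```python
import numpy as np

def label_balance_curve(labels):
    """Cumulative 'probability bias' curve over the chronology of a
    binary label series. Each label '1' adds +1; each label '0'
    subtracts K_Balance, chosen here (an assumption) so that a
    perfectly mixed series drifts around zero."""
    labels = np.asarray(labels)
    n_ones = labels.sum()
    n_zeros = len(labels) - n_ones
    k_balance = n_ones / n_zeros if n_zeros else 1.0  # class-balance weight
    steps = np.where(labels == 1, 1.0, -k_balance)
    return np.cumsum(steps)

# Example: six ones and four zeros -> K_Balance = 1.5,
# so the curve ends exactly at zero; local drifts show the bias
curve = label_balance_curve([1, 1, 0, 1, 0, 1, 1, 0, 0, 1])
```

A persistent upward or downward drift in this curve over some predictor range would indicate a stable probability bias there.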
Check it out for yourself - it does not seem to be empty, according to my first tests.
Topological Data Analysis (TDA) for predicting crises on the Russian stock market. #sber, #rosn (youtube.com)
There is, in general, an ingeniously simple way to determine whether your ML model will work on new data.
To do this, you only need the set of labels :) because they are a binarised representation of the original series.
Need proofs with graphs :)
This is an experiment illustrating that even a relatively good classification does not guarantee a profit.
The code contains an array (sequence) of "-1" (loss) and "1" (profit) that sums to zero. But there are two normal distributions: the losing financial results are drawn from one and the profitable ones from the other.
The experiment is repeated 100 times, and we observe the different balance curves and the histogram of their distribution.
By default, the distribution for profit is larger than for loss; you can play with different variants - code below. There are no fixed stop levels, so this experiment shows very clearly that a model cannot be evaluated by balance alone or by classification metrics alone; some kind of composite criterion is needed.
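The experiment can be sketched like this. This is a reconstruction from the description above, not the original code: the trade count, the means (1.2 for profit vs 1.0 for loss) and the standard deviation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_TRADES = 200    # half wins, half losses -> the classification sums to zero
N_RUNS = 100      # repetitions of the experiment
MU_PROFIT, MU_LOSS = 1.2, 1.0   # profit distribution slightly "bigger"
SIGMA = 0.5

final_balances = []
for _ in range(N_RUNS):
    # a sequence of +1 (profit) and -1 (loss) that sums to zero
    outcome = rng.permutation(np.repeat([1, -1], N_TRADES // 2))
    # the money amount is drawn from a different normal for wins vs losses
    amounts = np.where(
        outcome == 1,
        rng.normal(MU_PROFIT, SIGMA, N_TRADES),
        -rng.normal(MU_LOSS, SIGMA, N_TRADES),
    )
    balance = np.cumsum(amounts)   # one balance curve
    final_balances.append(balance[-1])

# the 100 balance curves and the histogram of final_balances would be
# plotted here, e.g. plt.hist(final_balances, bins=20)
```

Even though every run has exactly 50% winning trades, the spread of the final balances across runs is wide, which is the point of the experiment.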
Does LSTM work with radically unbalanced classes - say, a ten-to-one ratio between the two classes?
Post a sample - at the least, I will estimate its training potential with my method.
Need proofs with graphs :)
No proofs yet, because I sample randomly, and the entropy of such a series always tends to 1, i.e. it looks random.
Check it out for yourself - it does not seem to be empty, according to my first tests.
I don't get it.
No proofs yet, because I sample randomly, and the entropy of such a series always tends to 1, i.e. it looks random.
If I find a correlation between the results on OOS and an estimate of the labels - for example, via entropy - that will speak for itself.
PE is the permutation entropy of the labels before training.
Below it is the R2 of the model trained on them, taking OOS into account.
All the values are very similar, and it is impossible to determine anything :)
That is because the partitioning method is always the same. You need other datasets - you can calculate it on your own data. Example calculation.
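The permutation entropy mentioned above can be computed with a short sketch of the standard Bandt-Pompe definition (the order-3 window is an assumption; the posts do not state which order was used). For a fully random series the normalized value tends to 1, as noted above.

```python
import math
from collections import Counter

def permutation_entropy(series, order=3, normalize=True):
    """Permutation entropy: Shannon entropy of the distribution of
    ordinal patterns of length `order` found in the series, normalized
    by log2(order!) so the result lies in [0, 1]."""
    patterns = Counter()
    for i in range(len(series) - order + 1):
        window = series[i:i + order]
        # ordinal pattern: indices of the window sorted by value
        # (ties are broken by position, which matters for binary labels)
        pattern = tuple(sorted(range(order), key=lambda k: window[k]))
        patterns[pattern] += 1
    total = sum(patterns.values())
    pe = -sum((c / total) * math.log2(c / total) for c in patterns.values())
    if normalize:
        pe /= math.log2(math.factorial(order))
    return pe
```

A monotone series yields a single ordinal pattern and hence PE = 0, while a well-shuffled series spreads mass over many patterns and pushes PE towards 1.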