Machine learning in trading: theory, models, practice and algo-trading - page 3187
PS: In general, if there is interest in trying to find differences between the two series, I can provide them.
Have a look at what I wrote to you. I will only be able to look at it myself in the autumn.
Forester#:
I ran an experiment on the sample for which I published the GIFs; that sample already contains 47% ones, and the data are summarised in a table.
Description of the content of the columns:
Let me explain: for one predictor, more than one quantum segment can be selected in total, and these segments must not overlap in the range of the predictor's values.
What I don't like is that roughly 50% of the targets remain in place after shuffling, which can negatively affect the evaluation of the result.
In fact, quite a lot of quantum segments were found on the random targets, but since they apparently formed clusters, their coordinates overlapped across the different tables. After selecting non-overlapping ranges, the quality (utility) of those quantum segments turned out to be about 10 times worse than that of the original ones. Accordingly, on the sample with the original target, on average 3.5 times more quantum cuts were found across the different predictors.
What do you think of the results?
Added:
The plot of the binary sequences for the random target and the original one looks like this
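The selection of non-overlapping quantum segments described above can be sketched as a simple greedy pass: keep the best-scoring segments first and discard any segment whose range overlaps an already-kept one. This is only an illustration under assumed names; the actual selection rule and the quality metric are not specified in the post.

```python
# Hypothetical sketch: selecting non-overlapping "quantum segments"
# (value ranges of a predictor) by a quality score.
# A segment is (lo, hi, score); the greedy rule is an assumption.

def select_non_overlapping(segments):
    """Greedily keep the best-scoring segments whose ranges do not overlap."""
    chosen = []
    # Consider the highest-quality segments first.
    for lo, hi, score in sorted(segments, key=lambda s: s[2], reverse=True):
        if all(hi <= c_lo or lo >= c_hi for c_lo, c_hi, _ in chosen):
            chosen.append((lo, hi, score))
    return sorted(chosen)  # sort back by range for readability

segments = [(0.0, 0.3, 5.0), (0.2, 0.5, 9.0), (0.6, 0.9, 4.0), (0.4, 0.7, 1.0)]
print(select_non_overlapping(segments))
```

Here the segment (0.0, 0.3) is dropped because it overlaps the higher-scoring (0.2, 0.5), which matches the effect described in the post: overlapping cluster-like segments are thinned out, and the surviving set can be much smaller than the raw count.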
Question for Alexei. I am not well versed in statistical theory; I just suggested shuffling the target instead of generating a random one.
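The difference between shuffling and generating can be shown in a few lines: permuting the real target keeps the class balance (e.g. the sample's 47% share of ones) exactly intact and only destroys the link between predictors and target, whereas generating a fresh random target would also change the balance. A minimal sketch:

```python
import random

# Sketch of the idea above: shuffling (permuting) the real target instead of
# generating a synthetic random one preserves the class balance exactly,
# so only the predictor-target relationship is destroyed.

random.seed(42)
target = [1] * 47 + [0] * 53      # same 47% share of ones as in the sample
shuffled = target[:]
random.shuffle(shuffled)          # permute in place

print(sum(target), sum(shuffled))  # the number of ones is unchanged
```

This is exactly why shuffled targets are the usual choice for permutation-style tests: any structure the model still finds cannot come from the class proportions.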
I see.
I have another suggestion for you: what if we make the forest-construction process more controllable and take a specific subsample, defined by a selected quantum segment, as the root for each tree?
Make the depth about 2-3 splits, so that each leaf contains at least 1% of the examples of the class being classified.
I think the model will be more stable.
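The suggestion above can be sketched with an off-the-shelf shallow tree: restrict the training rows to those falling inside a pre-selected segment of one predictor, limit the depth to 3, and require at least 1% of the subsample per leaf. All names, the synthetic data, and the segment bounds below are assumptions for illustration, not the poster's actual setup.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Sketch: fit a shallow tree only on rows whose predictor value falls into a
# previously selected "quantum segment", with a minimum leaf size of 1% of
# the subsample so that leaves stay statistically meaningful.

rng = np.random.default_rng(0)
X = rng.random((1000, 5))                       # 5 synthetic predictors
y = (X[:, 0] + 0.1 * rng.standard_normal(1000) > 0.5).astype(int)

lo, hi = 0.4, 0.8                               # an assumed segment on X[:, 0]
mask = (X[:, 0] >= lo) & (X[:, 0] < hi)         # the segment's subsample

tree = DecisionTreeClassifier(max_depth=3,      # ~2-3 splits deep
                              min_samples_leaf=0.01,  # >= 1% of rows per leaf
                              random_state=0)
tree.fit(X[mask], y[mask])
print(tree.get_depth(), int(mask.sum()))
```

Whether such segment-rooted shallow trees are actually more stable than an ordinary forest would need to be tested on the real sample; the sketch only shows that the constraints are easy to enforce.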
Ten simulations are nothing; you need thousands for statistical significance.
I am also not ready to give an expert opinion on this particular case; I only pointed out possible problems and common ways of solving them.
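One concrete reason ten shuffles are too few: with N random shuffles, the smallest empirical p-value a permutation-style test can ever report is 1 / (N + 1), so ten simulations cannot even reach the conventional 0.05 level. A quick check:

```python
# Resolution of a permutation-style test: with N shuffles the smallest
# reportable empirical p-value is 1 / (N + 1).

for n in (10, 100, 1000, 10000):
    print(n, 1.0 / (n + 1))
```

With n = 10 the floor is about 0.091, which is above 0.05; only in the hundreds or thousands of shuffles does the test gain enough resolution to distinguish a genuine effect from chance.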
Thousands would take too many computational resources: one pass takes about 40 minutes, with the main calculation running on a video card.
I generally thought that this test only lets you check whether such clusters can appear across different ranges of the predictor.
What actually needs to be looked at is the probability of hitting the particular range of a quantum segment that was already selected initially.
And I would still like to hear an opinion on how much the targets should differ, in percentage terms, for such a test to be reliable.
You're doing something senseless and merciless. Saber at least got it done in half an hour and forgot about it.
Keep your assessments of other people's performance to yourself, especially when you don't understand what the other person is doing.
I am open to constructive criticism, but none is coming from you.
You're talking nonsense. It has been written several times that on random data you will get ANY result. Open your eyes and see. Nothing to add :)
If you think the market is random, then why are you wasting your time? No model will work, except by chance.