Machine learning in trading: theory, models, practice and algo-trading - page 895

 
Maxim Dmitrievsky:

What was funny about it? That the other word is out of place, and not even similar-sounding?

You don't get it... There's no point in explaining it.

 
Maxim Dmitrievsky:

In the second article, genetics and brute-force enumeration of a pile of parameters are replaced by a single vector of values (inputs to the forest, i.e. predictors), and the outputs are selected to maximize the number of profitable trades. Other criteria can be introduced by adjusting the reward function (drawdown, Sharpe ratio, anything).

The algorithm optimizes any strategy to very high quality in just a few optimizer passes (usually 5), as opposed to GA and, even more so, exhaustive search. In the example given in the article this takes seconds. Moreover, increasing the number of predictors barely increases the number of optimization passes. I recommend trying it for your League of Strategies in the other thread; you could also devise even more efficient optimization algorithms for the League based on the proposed approach. The regular optimizer can be discarded as obsolete, especially when used without cross-validation (walk-forward): it loses by a multiple in speed at the very least, and is no better in quality. If we replace the forest with a neural network with k-fold, we get an analogue of walk-forward, and a very fast one. But I haven't gotten around to that yet.
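The "k-fold as an analogue of walk-forward" idea above can be sketched with time-ordered splits, where each fold trains only on the past and tests on the following segment. This is a minimal illustration in Python/scikit-learn (not the article's actual code); the data, feature count, and target rule are all invented:

```python
# Sketch: walk-forward-style evaluation of a forest classifier using
# expanding, time-ordered train/test splits. Synthetic data throughout.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import TimeSeriesSplit

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                          # 5 predictors, 500 bars
y = (X[:, 0] + rng.normal(size=500) > 0).astype(int)   # toy target

scores = []
for train_idx, test_idx in TimeSeriesSplit(n_splits=5).split(X):
    # Each split trains on an expanding window of past bars and
    # tests on the segment immediately after it.
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    scores.append(model.score(X[test_idx], y[test_idx]))

print([round(s, 2) for s in scores])                   # out-of-sample accuracy per fold
```

Unlike ordinary shuffled k-fold, `TimeSeriesSplit` never lets the model see future bars during training, which is the property that makes it resemble walk-forward.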

Mutual information is an entropy-based measure of dependence between the target variable and the predictors, the same thing you showed in the picture as a table of predictor importance. But you can simply use recursive feature elimination in the forest and watch the errors: sort the predictors and remove the uninformative ones. (Google the details.)
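Both approaches mentioned above can be shown side by side: ranking predictors by mutual information with the target, and recursive feature elimination driven by a forest. This is a hedged Python/scikit-learn illustration, not the poster's code; the data and the "only the first two features matter" rule are synthetic:

```python
# Sketch: mutual information scores vs. recursive feature elimination (RFE)
# with a random forest. Synthetic data; features 0 and 1 carry the signal.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE, mutual_info_classif

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 6))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # only features 0 and 1 matter

# Mutual information between each predictor and the target (non-negative).
mi = mutual_info_classif(X, y, random_state=1)
print("mutual information:", np.round(mi, 3))

# Recursively drop the least important features until two remain.
rfe = RFE(RandomForestClassifier(n_estimators=50, random_state=1),
          n_features_to_select=2)
rfe.fit(X, y)
print("selected feature indices:", np.where(rfe.support_)[0])
```

In practice one would sort the mutual-information scores (or the RFE ranking) and cut off the uninformative tail, exactly as the post suggests.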

upd

After reading the article I have more questions than answers; not all of the code is clear, but I understood that I would need to completely rewrite my TS to implement the approach described in the article. I guess I haven't reached the article's level yet.

It turns out that in Deductor Studio you can build your own tree in manual and semi-automatic modes, which really fascinated me! However, the process is labor-intensive (the interface lacks drag and drop), though working with the data in this mode lets you see the patterns better. A real shortcoming is the inability to change a rule: if the sample says it is 1, you cannot change it to 0, whereas I would have moved some rules toward zero based both on how rarely they fire and on their credibility, checked the statistics right away, and only then done everything through processing scripts. Maybe there are other similar programs with such features, where you can quickly build tree nodes by hand?
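One programmatic alternative to hand-editing trees in a GUI like Deductor Studio is to fit a shallow tree and print its rules as text, then adjust depth or thresholds in code and refit. This is only an illustrative Python/scikit-learn sketch with synthetic data, not a replacement for Deductor's manual mode:

```python
# Sketch: fit a shallow decision tree and dump its rules as readable text,
# so the split thresholds and leaf classes can be inspected and tuned.
# Synthetic data: the true rule is simply "f0 > 0.5".
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(2)
X = rng.uniform(size=(200, 2))
y = (X[:, 0] > 0.5).astype(int)

tree = DecisionTreeClassifier(max_depth=2, random_state=2).fit(X, y)
print(export_text(tree, feature_names=["f0", "f1"]))
```

The printed rules show each split condition and leaf, so a "semi-automatic" workflow is possible: inspect the text, constrain the tree (e.g. `max_depth`, `min_samples_leaf`), and refit until the rules look credible.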

 
Mihail Marchukajtes:

Well, here are the first mistakes... Okay, then let's define the moment the mark crosses from below as the buy signal and from above as the sell signal. But only on one bar, i.e. we keep only the moment of crossing the 50 mark. I hope this happens within one bar?

In fact, by your rule, once the price has crossed the 50% mark from below, it is by the same token no longer below that level. So crossing the 50% mark from below would be enough for a buy, and the reverse for a sell. How do you like this plan?

Work on the open of a new bar. The entry is not that important; in my system garbage is eliminated by statistics, so I don't consider it a mistake. The crossing won't give anything substantial, except that it will reduce the frequency of entry signals, but their quality will drop as well.

Mihail Marchukajtes:
I'll add: why do you need a constant signal? If the signal is constant, I think the most suitable moment for analysis is when it changes from buy to sell, and that moment of change is limited to one bar. Think about it: why analyze everything while the signal is a buy, if the main thing is the reversal? That is, the key moment is the signal change, and it can be isolated.

It is wrong to be in the market all the time: the risks are too high.

 
Aleksey Vyazmikin:

Work on the open of a new bar. The entry is not that important; in my system garbage is eliminated by statistics, so I don't consider it a mistake. The crossing won't give anything substantial, except that it will reduce the frequency of entry signals, but their quality will also fall.

It is a mistake: it is wrong to be in the market all the time, the risks are excessive.

Well, you said that your signal is constant. Sorry, but I see a misunderstanding here, so I can hardly help you... The misunderstanding is too big...

 
Mihail Marchukajtes:

Well, you said that your signal is constant. Sorry, but I see a misunderstanding, so I can hardly help you... It's too big a misunderstanding...

I explained that the decision is actually made by a cascade of filters. All I want from ML is to identify the market regimes where it is worth engaging particular filters and where it is not. I need to organize what is already working, not wait for a miracle from a black box.

 
Aleksey Vyazmikin:

I explained that the decision is actually made by a cascade of filters. All I want from ML is to identify the parts of the market where it is worth using particular filters and where it is not. I need to streamline what is already working, rather than wait for a miracle from a black box.

Decision making should be transferred entirely to the neural network instead of the filters; then it would make sense. Here huge resources are spent on a simple yes-or-no question, and you want the network to identify which filters to use in which market. I think that is overcomplicated. It can be done, but it is easier to simply build a proper model instead of your filters. IMHO!

 
Aleksey Vyazmikin:

I have more questions than answers after reading the article; not all of the code is clear, but I understood that I would need to rewrite the whole TS to implement the approach described in the article. I guess I haven't reached the article's level yet.

It turns out that in Deductor Studio you can build your own tree in manual and semi-automatic modes, which really fascinated me! However, the process is labor-intensive (the interface lacks drag and drop), though working with the data in this mode lets you see the patterns better. A real shortcoming is the inability to change a rule: if the sample says it is 1, you cannot change it to 0, whereas I would have moved some rules toward zero based both on how rarely they fire and on their credibility, checked the statistics right away, and only then done everything through processing scripts. Maybe there are other similar programs with such features, where you can more quickly build tree nodes by hand?

No, I have never done that.

I don't really understand why it's needed, because a forest, for example, is already a universal classifier and approximator, and there's nothing in it to fix by hand,

while single trees are rather weak and primitive algorithms.
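The claim above (a single tree is weak compared to a forest) is easy to check empirically. A hedged Python/scikit-learn sketch on synthetic noisy data, purely illustrative of the general point:

```python
# Sketch: cross-validated accuracy of one deep tree vs. a random forest
# on noisy synthetic data. A lone tree tends to overfit the noise,
# while the forest averages many decorrelated trees.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 10))
# Signal lives in the first 3 features, buried in noise.
y = (X[:, :3].sum(axis=1) + rng.normal(size=400) > 0).astype(int)

tree_cv = cross_val_score(DecisionTreeClassifier(random_state=3), X, y, cv=5)
forest_cv = cross_val_score(
    RandomForestClassifier(n_estimators=200, random_state=3), X, y, cv=5)

print(f"single tree mean CV accuracy: {tree_cv.mean():.2f}")
print(f"forest mean CV accuracy:      {forest_cv.mean():.2f}")
```

On data like this the forest's averaging typically recovers several points of accuracy over the lone tree, which is the "single trees are weak" point in miniature.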

 
Alexander_K2:
If you are not able to solve such problems yourself, find the VisSim NeuralNet module for me and I will show you how to do it.
 
Artem:

Artem, my deepest admiration and respect. I owe you the working model of my TS with examples; I'll send it over at the weekend. Just take a look, and if you don't like it, throw it away.

 
Vizard_:

How are we supposed to, if you don't tell us anything? Tell us about your latest development, in detail, with examples.

Why? I'll tell you one thing: Doc helped me port the models from R to MT, and let me tell you, on OOS those models work exactly the same as Reshetov's. Exactly the same. So R models can be trusted. It all comes down to the data feed... It's all the same...