The perfect filter - page 7

 
hrenfx:

Research and trade are completely different activities.

And here I thought the thread was about research.

That's exactly right, and that's exactly how I think of it. Perhaps I gave the wrong impression if it seemed I was getting attached to a particular platform, a language, etc. I experiment with whatever means seem most convenient to me at the moment. Rewriting the logic for another language or another platform is a matter of secondary, maybe even tenth-rate, importance - it is routine, non-scalable work worth pennies. The main thing is the algorithm.

That is why I started thinking about quantitative, visually represented efficiency criteria for a particular filter of a financial time series (BP): a quick, visual way to evaluate the filtering along the whole BP, to see where and how the result builds up and drains away, and not just the final figure as in the tester. You are right that a TS built on some algorithm may drain in one market and make money in another; to understand better where and how that happens, I am looking for a way to assess just the price filter first, and only then think about evaluating the whole complex, taking spread, delays, MM and so on into account.

On the contrary, I advocate platform-invariant algorithms, concise and beautiful in a scientific sense when possible.))

So by default you are preaching to the converted.

 
J.B: ......

Rewriting the logic for another language or another platform is a matter of secondary, maybe even tenth-rate, importance - it is routine, non-scalable work worth pennies. The main thing is the algorithm.

......

I would not go so far as to say that the toolkit is a secondary matter when it comes to implementing the algorithm. Besides, there is a close connection between the implementation tools and the way the logic is built out of them. I, for example, also have to use different environments for research, but that is a drawback rather than an advantage: it would be far better to combine the strengths of many environments and work in a single one, divided into contexts tailored to the specifics of the task.

Only very primitive research can be done on paper, so the modelling environment (platform) matters a great deal. Over time, working with certain tools (a set of functions, code templates, etc.), you get used to them and begin to think in automatisms; having to move to other logical building blocks and rebuild the logic from them all over again is not the most pleasant thing to do. IMHO.

 

With all due respect, I would like to point out that we are drifting away from the main line of the discussion. That is, I am also very interested in debating software, but preferably in a different thread. I understand that this is a lively discussion and people do not follow the context very carefully, but the debate about the usefulness or harm of statistics (as at the beginning of the topic) and about which software is better has, you will agree, little to do with the sub-topic of analysing the effectiveness of filtering algorithms. Sorry for the moralizing.)

 
J.B:

With all due respect, I would like to point out that we are drifting away from the main line of the discussion. That is, I am also very interested in debating software, but preferably in a different thread. I understand that this is a lively discussion and people do not follow the context very carefully, but the debate about the usefulness or harm of statistics (as at the beginning of the topic) and about which software is better has, you will agree, little to do with the sub-topic of analysing the effectiveness of filtering algorithms. Sorry for the moralizing.)

So what is still unresolved? Take the BP, filter it, find the "breakpoints" (negative at the troughs, positive at the tops), and for each bar calculate the sum of all previous breakpoints, deducting the spread. If you want a ratio relative to the ideal, calculate the same thing for a ZZ of the relevant depth and divide. That is all the alchemy. The topic can be closed. What everyone has been looking for is hereby declared found! Amen.
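For the record, a minimal Python sketch of one way to read that recipe. The names, the turning-point detection and the choice to deduct one spread per reversal are assumptions made here for illustration, not anything fixed in the thread; the point is only the mechanics: signed turning points (minus at troughs, plus at tops), a running per-bar sum, and the ratio against the same sum built on an ideal ZZ.

    # Sketch of the recipe above: "breakpoints" are the filter's turning
    # points, taken with a plus sign at tops and a minus sign at troughs;
    # the per-bar running sum of all previous breakpoints, minus one spread
    # per reversal, is the cumulative result; the same sum built on an
    # ideal ZZ gives the benchmark. Illustrative names throughout.

    def turning_points(filtered):
        """(bar index, signed price) for each turning point of the filtered BP."""
        pts = []
        for i in range(1, len(filtered) - 1):
            if filtered[i] > filtered[i - 1] and filtered[i] >= filtered[i + 1]:
                pts.append((i, +filtered[i]))    # top: sell here
            elif filtered[i] < filtered[i - 1] and filtered[i] <= filtered[i + 1]:
                pts.append((i, -filtered[i]))    # trough: buy here
        return pts

    def running_sum(n_bars, pts, spread=0.0):
        """For every bar, the sum of all breakpoints seen so far,
        with `spread` deducted once per breakpoint (one reversal each)."""
        curve, total, k = [], 0.0, 0
        for bar in range(n_bars):
            while k < len(pts) and pts[k][0] <= bar:
                total += pts[k][1] - spread
                k += 1
            curve.append(total)
        return curve

    def efficiency(filter_curve, ideal_curve):
        """Final result of the filter relative to the ideal ZZ result."""
        return filter_curve[-1] / ideal_curve[-1] if ideal_curve[-1] else float("nan")

Feeding running_sum the turning points of the filter and, separately, of a ZZ of the relevant depth, then dividing the two curves (or just their final values), gives the "ratio relative to the ideal" mentioned above.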

 
gunia:

So what is still unresolved? Take the BP, filter it, find the "breakpoints" (negative at the troughs, positive at the tops), and for each bar calculate the sum of all previous breakpoints, deducting the spread. If you want a ratio relative to the ideal, calculate the same thing for a ZZ of the relevant depth and divide. That is all the alchemy. The topic can be closed. What everyone has been looking for is hereby declared found! Amen.

Conceptually it seems right, but either I am being dense or there is something wrong with how the problem is formulated. When I am back in the office after the holidays, I will try to explain in detail, with pictures, what the problem is. Maybe someone will suggest something.

 
J.B:

Conceptually it seems right, but either I am being dense or there is something wrong with how the problem is formulated. When I am back in the office after the holidays, I will try to explain in detail, with pictures, what the problem is. Maybe someone will suggest something.

Well, my job is to point the way; following it is up to you.
 
gunia:
Well, my job is to point the way; following it is up to you.

I am grateful for the guidance. In general, it turns out about the same if we do not have a spread:

If we subtract a spread of 2 old points from each trade (this is EURUSD), then the picture is spoiled:

The picture shows that with this filter the algorithm earns quite steadily during certain daytime hours, when the market is active, and loses at night on the flats. But 95% of the losses are the spread, so I am satisfied: I just need to run this algorithm on an active, trending market and switch it off at night.

P.S. I ran tests with a simple EMA and with MACD: there, even without taking the spread into account, the expected profit (MO) is zero, and with the spread it is a steady drain.
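The "switch it off at night" idea boils down to a time-of-day mask on the signals. A minimal sketch, assuming bar timestamps are available and using the 09:00-20:00 window mentioned further down; the function and parameter names are illustrative.

    from datetime import datetime

    def in_session(t: datetime, start_hour: int = 9, end_hour: int = 20) -> bool:
        """True if the bar time falls inside the allowed trading window."""
        return start_hour <= t.hour < end_hour

    def mask_signals(times, signals, start_hour=9, end_hour=20):
        """Zero out signals (+1 buy / -1 sell / 0 flat) outside the window,
        i.e. the algorithm simply stays flat at night."""
        return [s if in_session(t, start_hour, end_hour) else 0
                for t, s in zip(times, signals)]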


 
J.B:

If we subtract a spread of 2 old points from each trade (this is EURUSD), then the picture is spoiled:

Why such cruelty towards your own handiwork? Forget the fixed spread and do as suggested:

hrenfx:

Build the ZigZags in the same way, but put the tops on the Bid BP and the troughs on the Ask BP. Then the sum of the legs will take the "floating spread" into account.

Take the Bid and Ask data from FXOPEN ECN ...

But even if you do not want to bother with it that way, bear in mind that for the same EURUSD the average spread is ~0.3 pips plus a standard commission of ~0.65 (which can be reduced). So even if you simply deduct a flat spread, it will not be more than one pip.

But there are also other symbols...
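hrenfx's suggestion quoted above is concrete enough to sketch. Below is a toy threshold ZigZag in Python where tops are taken from the Bid series and troughs from the Ask series, so every completed leg already includes the floating spread; the depth value and all names are illustrative assumptions, not the original code.

    def bid_ask_zigzag(bid, ask, depth=0.0030):
        """Toy threshold ZigZag: tops on Bid, troughs on Ask, so each leg has
        the floating spread built in. `depth` is the minimum reversal size."""
        pivots = []                 # (bar index, price, 'top' / 'trough')
        trend = 'up'                # assume an initial up-leg; a sketch, not robust
        hi_i = lo_i = 0             # running extreme indices on Bid (high) / Ask (low)
        for i in range(1, len(bid)):
            if trend == 'up':
                if bid[i] >= bid[hi_i]:
                    hi_i = i                              # new running top on Bid
                elif bid[hi_i] - ask[i] >= depth:         # reversal confirmed
                    pivots.append((hi_i, bid[hi_i], 'top'))
                    trend, lo_i = 'down', i
            else:
                if ask[i] <= ask[lo_i]:
                    lo_i = i                              # new running trough on Ask
                elif bid[i] - ask[lo_i] >= depth:         # reversal confirmed
                    pivots.append((lo_i, ask[lo_i], 'trough'))
                    trend, hi_i = 'up', i
        return pivots

    def legs_sum(pivots):
        """Sum of the leg sizes between consecutive pivots; since tops sit on
        Bid and troughs on Ask, each leg is already net of the spread."""
        return sum(abs(pivots[k + 1][1] - pivots[k][1])
                   for k in range(len(pivots) - 1))

Replacing the fixed 2-point deduction with this sum of legs addresses the "cruelty to your own handiwork" remark: the spread charged is then whatever the Bid/Ask series actually showed at each turning point.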

 
hrenfx:

Why such cruelty towards your own handiwork? Forget the fixed spread and do as suggested:

But even if you do not want to bother with it that way, bear in mind that for the same EURUSD the average spread is ~0.3 pips plus a standard commission of ~0.65 (which can be reduced). So even if you simply deduct a flat spread, it will not be more than one pip.

But there are also other symbols...

Thanks. If it is one pip, then this is nothing short of a grail, provided it is only switched on from 9 am to 8 pm. And this intraday pattern is very stable. Now we need to understand what the catch is. After all, the trend is not even considered in this summator: orders are simply opened at the breakpoints and reversed. If we also filter by trend, I think the efficiency will rise by another 10-20%. There has to be a trap somewhere; it cannot be that simple.

 
Better still, do it without the crude spread at all; then you will sidestep a lot of awkward nuances at once. As for the catch, it is impossible to say without the source code.