Not really - I took ATR with periods of 3 and 100 (and tried 50% and 61.8% of each as the threshold). ATR(100) performed better, of course, which says more about static deviation, but ATR(100) will differ between pairs, while a fixed value of 90 pips for all pairs turned out to be more effective - which surprised me.
But the point is not to add moving averages, then add oscillators, check which ones work, and throw out those that don't.
No one has discussed this approach yet, but in principle I see no reason why it cannot work. If I had enough resources I would test all the standard indicators for usefulness, but I don't, so I use only MAA and ATR - although I have my own version of ATR that I developed in 2010 without knowing ATR existed...
However, this is not about filters or how to select them for optimization, but about how to measure a filter's effectiveness with a mathematical (statistical) method.
This is an interesting thing.
As for why 90 is more effective: my guess is that some lazy global participants simply use the same indicator settings on all pairs. That is only an assumption, though - I have not found a real explanation.
This approach can give good results. Enumerating random strategies is a separate topic that can also give good results, but a machine has to do the enumeration - it is far too time-consuming by hand.
The machine, of course - manual trading only helps to form a hypothesis.
Well yes, there are definitely global features that work - you can't argue with that. There is always a hole in the system. It is actually an interesting pattern. So over what time interval did static values work better than dynamic ones, and what was the system?
I took a period of 3 years and didn't split it further - it might have been a random fluctuation, but then it would have appeared on one pair, whereas here it appeared on 13:
The "average" is the average over all currency pairs, and the "best %" is the percentage showing how often that variant was the best across the currency pairs. To evaluate the effectiveness of filtering, I compare the following indicators:
Profitability_AVR - shows the average profitability of the optimization results.
I posted a table with the obtained data above, but I doubt these figures can be compared directly: i.e. I doubt that the variant "MAf_3_3_100", which has the largest "Profit_AVR", is better than "MAf_Pips", which has an equal "Profit_procplus".
Initially I took a simple approach: find the best value of each indicator and see which filter variant gave the best overall performance, awarding one point per best indicator. Similarly, I used a table expressed in percentages, showing which filter variant gave the best results according to the statistics on each currency pair. I then combined the two tables and summed the points; the variant with the higher total was recognised as the best filter variant.
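The point-based selection described above can be sketched roughly as follows. This is only an illustration: the variant names come from the thread, but the indicator "Trades_plus" and all the numbers are hypothetical placeholders standing in for the real optimization table.

```python
# Point-based ranking of filter variants: one point per "best" indicator.
# All numbers and the "Trades_plus" indicator are hypothetical placeholders.

# indicator -> {variant: value}; higher is assumed better for every indicator
results = {
    "Profit_AVR":      {"MAf_3_3_100": 1250.0, "MAf_Pips": 1100.0, "MAf_ATR": 980.0},
    "Profit_procplus": {"MAf_3_3_100": 58.0,   "MAf_Pips": 61.0,   "MAf_ATR": 55.0},
    "Trades_plus":     {"MAf_3_3_100": 120,    "MAf_Pips": 90,     "MAf_ATR": 100},
}

scores = {variant: 0 for variant in next(iter(results.values()))}
for indicator, by_variant in results.items():
    best = max(by_variant, key=by_variant.get)  # variant with the best value
    scores[best] += 1                           # one point, regardless of margin

print(max(scores, key=scores.get))  # the variant with the most points wins
```

The per-pair percentage table would be scored the same way and its points added into `scores` before picking the winner.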
However, this approach has a number of drawbacks, two of which are obvious:
1. it does not take into account the magnitude of the difference between variants - a difference of 1 per cent and a difference of 10 per cent are treated equally
2. it does not take into account the relative significance of the indicators - Profit_AVR is treated as equivalent to Profit_procplus, but is that really so?
I address the first flaw by calculating each value's deviation from the mean on the worse side - the larger a variant's deviations across the indicators, the greater its chance of being excluded from selection.
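One way this deviation-from-the-mean penalty could look, assuming higher is better for every indicator and using a relative shortfall so indicators on different scales are comparable (the normalization is my assumption, and all numbers are placeholders):

```python
# Penalize variants by how far they fall below each indicator's mean.
# Hypothetical data; only the "worse" side of the mean is counted.
from statistics import mean

results = {
    "Profit_AVR":      {"MAf_3_3_100": 1250.0, "MAf_Pips": 1100.0, "MAf_ATR": 980.0},
    "Profit_procplus": {"MAf_3_3_100": 58.0,   "MAf_Pips": 61.0,   "MAf_ATR": 55.0},
}

penalty = {variant: 0.0 for variant in next(iter(results.values()))}
for indicator, by_variant in results.items():
    m = mean(by_variant.values())
    for variant, value in by_variant.items():
        if value < m:  # below the mean, i.e. on the worse side
            # relative shortfall keeps differently-scaled indicators comparable
            penalty[variant] += (m - value) / m

# The variant with the largest accumulated penalty is the first excluded
print(max(penalty, key=penalty.get))
```

Unlike the one-point scheme, a variant that loses by 10 per cent now accumulates ten times the penalty of one that loses by 1 per cent.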
To address the second flaw I decided to use weights, but the question is how to assign them. I take coefficients from 1 to 7: I decide which indicator is the most significant and award not one point for the best value, but a point multiplied by the coefficient.
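The weighted variant of the scoring could be sketched like this. The weight values are arbitrary placeholders - choosing them is exactly the open question below:

```python
# Weighted point scoring: the point for each "best" indicator is multiplied
# by a significance coefficient from 1 to 7. Weights and data are hypothetical.
results = {
    "Profit_AVR":      {"MAf_3_3_100": 1250.0, "MAf_Pips": 1100.0, "MAf_ATR": 980.0},
    "Profit_procplus": {"MAf_3_3_100": 58.0,   "MAf_Pips": 61.0,   "MAf_ATR": 55.0},
}
weights = {"Profit_AVR": 7, "Profit_procplus": 4}  # placeholder coefficients

scores = {variant: 0 for variant in next(iter(results.values()))}
for indicator, by_variant in results.items():
    best = max(by_variant, key=by_variant.get)
    scores[best] += weights[indicator]  # coefficient instead of a flat point

print(max(scores, key=scores.get))
```

With these placeholder weights a variant that wins only the heavily weighted indicator can outscore one that wins several lightly weighted ones, which is the intended effect.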
Which of the indicators I listed above do you think are more significant, and what weights should they be given?
Since there were no comments after the last post, I have two assumptions: either the topic is not interesting, or I don't understand what I'm writing about. So I decided to post a live example in a file that shows how the data is compared and how the best variants are chosen.
I will be glad if the topic develops further.