Don't tell me then that TA doesn't work - page 16

 
hrenfx:

It is not a question of how effective the subtraction is. Again, the question is specifically about the comparison of SB (random-walk) series vs. real series.

Let's speak frankly.

  1. Any finite SB series (even one generated with a pseudo-random MathRand()) is not 100% SB. That is, it can be a fragment of a completely non-random series.
  2. Any finite real series is not 100% non-SB, since it can always be a fragment of a random walk.

Roughly speaking, Tolstoy's War and Peace may or may not turn up inside a random walk.

So it does not make much sense to talk about SB vs. real series at all.
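The point about finite samples can be sketched numerically. This is a toy illustration of my own (the seed, length, and unit step are arbitrary assumptions, not anything from the thread): even a pure random walk produces long one-sided excursions that look like "trends" on a finite window.

```python
import random

random.seed(7)  # arbitrary seed so the run is repeatable

# Build a finite "SB" (random-walk) series, analogous to prices
# generated with a pseudo-random source such as MathRand().
walk = [0.0]
for _ in range(1000):
    walk.append(walk[-1] + random.choice([-1.0, 1.0]))

# Even a pure random walk shows wide excursions on a finite sample,
# which a TA method can easily mistake for structure.
excursion = max(walk) - min(walk)
print(f"range of a 1000-step random walk: {excursion:.0f}")
```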


SB is there to check for peeking into the future, incorrect testing, and so on. If the method found good solutions on every SB realisation, the cause would be peeking, which can be quite subtle.

The question is not about SB, but about how to evaluate statistically whether the "subtraction" method is useful against fitting. Clearly it will find something on some series and not on others, but that is not yet a statistic, all the more so since some real series are dependent. The question is one of representative statistics, which can be collected by increasing the number of instruments (not particularly feasible on forex), increasing the number of independent systems under test, and deepening the history of each instrument.

 

Please explain: why does the description of the operating principle use two optimisation sections, while in reality three optimisations are run, on section1, on section2, and on section1+section2?
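For clarity, the scheme being questioned can be sketched like this. Everything here is hypothetical: the "system", its `lag` parameter, and the toy backtest are my own stand-ins for whatever the EA actually optimises; only the three-optimisation structure comes from the question.

```python
def profit(returns, lag):
    """Toy backtest: hold long whenever the previous `lag` returns sum > 0."""
    total = 0.0
    for i in range(lag, len(returns)):
        if sum(returns[i - lag:i]) > 0:
            total += returns[i]
    return total

def optimize(returns, lags):
    """Stand-in for any parameter search: best in-sample lag."""
    return max(lags, key=lambda lag: profit(returns, lag))

# Invented return series for the two optimisation sections.
section1 = [0.4, 0.2, -0.1, 0.5, -0.3, 0.2]
section2 = [-0.2, -0.4, 0.1, -0.3, 0.2, -0.1]
lags = [1, 2, 3]

best1 = optimize(section1, lags)               # optimisation on section1
best2 = optimize(section2, lags)               # optimisation on section2
best12 = optimize(section1 + section2, lags)   # third optimisation on both
print(best1, best2, best12)
```

The third run on section1+section2 is a separate optimisation in its own right, which is exactly what the question is about.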

 

Again, we are speaking different languages. A method has been proposed that claims no praise. A method is just a method.

As for statistics: can you show the statistical differences between EURUSD and GBPJPY?

 
hrenfx:

Again, we are speaking different languages. A method has been proposed that claims no praise. A method is just a method.

As for statistics: can you show the statistical differences between EURUSD and GBPJPY?


Good grief, what does praise have to do with it? The point is testing how well the method performs: under what conditions it makes sense to apply it, and whether it makes sense at all. The ultimate goal is practical use, and that requires answering the questions above.
 
So you are not proposing any methodology at all. Feeding in SB series and showing that something is found on them: what is that?
 
hrenfx:
So you are not proposing any methodology at all. Feeding in SB series and showing that something is found on them: what is that?

Are you reading what has already been written? SB checks for peeking and for incorrect testing. Not just this method, but any method. And it demonstrates just as well that it is quite easy to get a positive result on a short history. This is clear to those who know the properties of a random walk, but many do not, and then profit on OOS is necessarily taken for a regularity, especially if it is obtained on several instruments.
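The claim that a positive result on a short history is easy to get can be checked with a small Monte Carlo sketch (my own toy, with an invented "system" that just follows the in-sample drift; the seed and sizes are arbitrary assumptions):

```python
import random

random.seed(1)  # arbitrary seed so the run is repeatable

def random_walk(n):
    """Driftless Gaussian random walk of length n."""
    level, out = 0.0, []
    for _ in range(n):
        level += random.gauss(0.0, 1.0)
        out.append(level)
    return out

def fit_direction(history):
    """In-sample 'optimisation': trade in the direction of past drift."""
    return 1 if history[-1] >= history[0] else -1

wins, trials = 0, 2000
for _ in range(trials):
    series = random_walk(200)
    direction = fit_direction(series[:100])       # fit on history
    oos = direction * (series[-1] - series[100])  # short OOS stretch
    if oos > 0:
        wins += 1

# A driftless walk offers no real edge, yet roughly half of all
# short-history OOS checks still come out positive.
print(f"positive OOS share on SB: {wins / trials:.0%}")
```

About half of the runs "pass" the OOS check despite there being nothing to find, which is exactly why a single positive OOS on a short history proves nothing.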

The methodology is:

"The question is one of representative statistics, which can be collected by increasing the number of instruments (not particularly feasible on forex), increasing the number of independent systems under test, and deepening the history of each instrument."


 

You are proposing to search for common patterns across a huge number of real series. In other words, you propose to solve practically the central task of writing trading systems.

With such a global approach, where there is nothing concrete, only general phrases, you will certainly get nowhere.

 
hrenfx:

You are proposing to search for common patterns across a huge number of real series. In other words, you propose to solve practically the central task of writing trading systems.

With such a global approach, where there is nothing concrete, only general phrases, you will certainly get nowhere.


Are you sure you read everything? Increasing the number of real series is one way, and quite feasible (US stocks alone offer thousands of instruments). Another option is the results of runs by independent trading systems. And even the history of a single instrument yields quite a few nine-month stretches))). How to generalise the results is also clear: for example, the cumulative profit over all of them, or other variants. Collecting the statistics checks not only the method itself but also the boundaries of its applicability: which instruments, what history lengths to take, which systems work better.
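The "cumulative profit over all of them" idea above can be sketched in a few lines. The instrument names and numbers are invented purely for illustration, as if they were per-run OOS profits from independent runs of the same method on several US stocks:

```python
# Hypothetical per-run OOS profits, one list per instrument.
runs = {
    "AAPL": [0.8, -0.3, 1.2],
    "MSFT": [-0.5, 0.4, 0.2],
    "XOM":  [0.1, 0.1, -0.7],
}

# One way to generalise: cumulative profit over everything, plus a
# per-instrument breakdown to probe the boundaries of applicability
# (which instruments and histories the method actually works on).
per_instrument = {name: sum(profits) for name, profits in runs.items()}
total = sum(per_instrument.values())
print(per_instrument, round(total, 2))
```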

And what do you suggest for practical application? Or is it purely theoretical: a method is proposed, but who knows how useful it is or how best to apply it? :)

 

What are "results of runs by independent trading systems"? What do you mean by independent? How many systems do you stop at?

And most importantly, what will the result of such a study tell you? Again, it will say nothing about the absence of fitting, the limits of applicability, and so on.

An EA was suggested above; you can optimise it on a hundred intervals and then "cross" all the variants. Only the result will tell you nothing.

As for practical application, I have already suggested it, more than once, in other threads. Even a stationary combination, whose very existence was denied. And this method is just another method, quite interesting for the simplicity of its implementation and approach, for which the topic starter deserves thanks.

 
hrenfx:

What are "results of runs by independent trading systems"? What do you mean by independent? How many systems do you stop at?

And most importantly, what will the result of such a study tell you? Again, it will say nothing about the absence of fitting, the limits of applicability, and so on.

An EA was suggested above; you can optimise it on a hundred intervals and then "cross" all the variants. Only the result will tell you nothing.

Increasing the amount of properly collected statistics increases the validity of the conclusions drawn from them. Seems simple ;)
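The standard argument behind this is that the standard error of an estimated mean shrinks as 1/sqrt(n) for independent runs. A minimal sketch, assuming an arbitrary per-run profit dispersion of 2.0 (the value is an invented illustration, not a measurement):

```python
import math

# With independent, identically sampled run results, the uncertainty
# of the estimated average profit falls as 1/sqrt(n).
sigma = 2.0  # assumed per-run profit dispersion (arbitrary)
for n in (10, 100, 1000):
    se = sigma / math.sqrt(n)
    print(f"n={n:4d}  standard error of the mean ≈ {se:.3f}")
```

Going from 10 runs to 1000 cuts the uncertainty by a factor of 10, which is why "more properly collected statistics" is not an empty phrase, provided the runs really are independent.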