
 

Let us summarize the intermediate result.

The thread proposes a paradigm of trading-strategy generation and verification that is fundamentally different from the "standard" one (in Pardo's sense). In the standard approach, the optimal strategy is selected by scanning the FP parameters of the strategy itself through numerous test runs on history, then choosing the FP points that optimize the strategy being tuned. In the proposed approach, the analysis part ("block of analysis and preliminary intentions") does not change at all, because the trades stay in place; only the context filter ("decision block") changes.

In principle, something similar to optimization can be achieved here too, if the required form of the context-filter constraints is fixed in advance (for example, all points of the context FP must lie inside a "multidimensional parallelepiped" with adjustable vertex coordinates). But that would only be a particular case of the clustering we want to achieve.
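The parallelepiped-style constraint amounts to a simple membership test. A minimal sketch in Python, where the number of context axes, the bounds, and the deal coordinates are all invented for the example:

```python
# Sketch of the "multidimensional parallelepiped" context filter described
# above: a deal passes if its context-FP point lies inside an axis-aligned
# box. Bounds and deal coordinates are illustrative assumptions.

def inside_box(point, lower, upper):
    """True if a context-FP point lies inside the axis-aligned box."""
    return all(lo <= x <= hi for x, lo, hi in zip(point, lower, upper))

# Example: a 3-parameter context space.
lower = [0.2, -1.0, 10.0]   # lower vertex coordinates (would be optimized)
upper = [0.8,  1.0, 50.0]   # upper vertex coordinates (would be optimized)

deals = [
    [0.5, 0.3, 25.0],   # passes the filter
    [0.9, 0.0, 30.0],   # first coordinate out of range, filtered out
]
accepted = [d for d in deals if inside_box(d, lower, upper)]
print(len(accepted))  # -> 1
```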

Therefore, in our case an optimizer is simply not needed. A single test run with the context filter switched off, outputting the coordinates of the deals in the context FP, is enough; after that the filter clustering can be performed by independent means.
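The workflow (one unfiltered test run, then clustering by independent means) might look like the following sketch, with a toy pure-Python k-means standing in for a real clustering tool; all deal coordinates and profits are invented:

```python
# A minimal sketch of the proposed workflow: one test run has already dumped
# each deal's context-FP coordinates and its profit; we cluster those
# coordinates (toy k-means below) and inspect each cluster's profit factor.

def kmeans(points, centers, iters=20):
    """Plain k-means: assign each point to its nearest center, then move
    each center to the mean of its members."""
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [min(range(len(centers)),
                      key=lambda j: sum((a - b) ** 2
                                        for a, b in zip(pt, centers[j])))
                  for pt in points]
        for j in range(len(centers)):
            members = [pt for pt, lab in zip(points, labels) if lab == j]
            if members:
                centers[j] = [sum(coord) / len(members)
                              for coord in zip(*members)]
    return centers, labels

# (context_x, context_y) per deal, and the deal's profit in points.
deals = [((0.10, 0.10), +30), ((0.20, 0.10), +25), ((0.15, 0.20), -5),
         ((0.90, 0.80), -20), ((0.80, 0.90), -35), ((0.85, 0.85), +10)]
pts = [d[0] for d in deals]
centers, labels = kmeans(pts, centers=[[0.0, 0.0], [1.0, 1.0]])

for j in range(2):
    profits = [p for (pt, p), lab in zip(deals, labels) if lab == j]
    gross_win = sum(p for p in profits if p > 0)
    gross_loss = -sum(p for p in profits if p < 0)   # nonzero in this data
    print(j, round(gross_win / gross_loss, 2))
```

Here the first cluster would be kept as the "profitable context cluster" and the second rejected; in practice the clustering method and the number of clusters are open choices.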

I have the following question: what makes us so confident that the new paradigm will provide us with greater robustness of the strategy?

We assume that in further trading, in real trading, the sequence of deals (we will stick to Candid's variant) passed by the new context filter will show a rather high profit factor. What could happen to make that not so?

Maybe we can try to analyze the internal structure of a "profitable context cluster" by individual trades: number them, mark the profitable ones and, say, look at how trading went inside this cluster on history. If this cluster contains many segments of the trajectory with almost nothing but unprofitable deals, that is a bad sign.
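The streak check suggested here is easy to sketch: take the cluster's deals in time order and measure the longest run of non-profitable ones. A minimal illustration with invented profits:

```python
# Intra-cluster check: number the deals of a "profitable context cluster"
# in time order and look for long segments of losing deals. The profit
# sequence is invented for illustration.

def longest_losing_run(profits):
    """Length of the longest consecutive run of non-positive profits."""
    longest = run = 0
    for p in profits:
        run = run + 1 if p <= 0 else 0
        longest = max(longest, run)
    return longest

cluster_deals = [12, 8, -3, -5, -7, -2, 15, 9, -4, 11]  # profits, in order
print(longest_losing_run(cluster_deals))  # -> 4
```

A long losing run relative to the cluster's size would suggest the cluster's overall profit factor hides a period where the context filter stopped working.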

In short, at least some additional arguments are needed to justify the acceptability of the new paradigm and to set limits on its use.

 
Mathemat >>:

I have the following question: what makes us so confident that the new paradigm will provide us with greater robustness of the strategy?

In general, there is no certainty. But then, there is none in the case of the standard approach either.

I suppose two things are necessary for robustness of this procedure:

- The optimum zone must be real. That is, we must not be dealing with a fluctuation, but with a statistically valid dependence.

- The FP parameters used must themselves be robust. That is, if they do change over time, then slowly enough.
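The first requirement can be checked, for instance, with an exact one-sided binomial test: is the zone's win rate statistically distinguishable from the overall win rate, or is it a fluctuation? A sketch with assumed counts:

```python
# Hedged sketch of the "optimum zone must be real" requirement: test whether
# the zone's win rate could plausibly arise by chance from the overall win
# rate. Uses an exact one-sided binomial tail; all counts are illustrative.

from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

overall_win_rate = 0.50          # win rate of unfiltered deals (assumed)
zone_wins, zone_deals = 38, 50   # deals falling inside the candidate zone

p_value = binom_sf(zone_wins, zone_deals, overall_win_rate)
print(p_value < 0.01)  # -> True: unlikely to be a mere fluctuation
```

A small p-value does not yet make the zone robust (the second requirement still applies), but a large one would mean we are fitting noise.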


That is, a lot of work needs to be done to find and verify the right parameters. Which does not invalidate the usefulness of the theoretical considerations.


P.S. By changing parameters over time, of course, we mean changing their distribution function.

 
Candid >>:

- The optimum zone must be real. That is, we must not be dealing with a fluctuation, but with a statistically valid dependence.

Well, yes, if the outer boundary of the optimum zone looks like a snowflake, you can't count on anything robust.

The optimum zone, if it does migrate in FP space, should not do so too abruptly.

In short, it should be solid. Some kind of hypersphere, hyperparallelepiped or something similar, moving at low speed.
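One possible way to quantify "moving at low speed": split the history into consecutive windows, compute the centroid of the zone's deals in each window, and watch the jump between centroids. A sketch on invented window data:

```python
# Sketch of a migration-speed check for the optimum zone: per-window
# centroids of the zone's context coordinates, and the distance each
# centroid jumps between windows. All window data are invented.

def centroid(points):
    """Coordinate-wise mean of a list of points."""
    return [sum(coord) / len(points) for coord in zip(*points)]

def dist(a, b):
    """Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

windows = [
    [(0.10, 0.20), (0.12, 0.22), (0.11, 0.18)],   # window 1
    [(0.14, 0.21), (0.13, 0.24), (0.15, 0.20)],   # window 2: small drift
    [(0.60, 0.70), (0.62, 0.73)],                 # window 3: abrupt jump
]
cents = [centroid(w) for w in windows]
jumps = [round(dist(a, b), 3) for a, b in zip(cents, cents[1:])]
print(jumps)  # small first jump, large second jump
```

A slow, steady drift would be tolerable; a jump like the last one would mean the zone is not "solid" in the sense discussed above.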

 

Maybe it should be like this:

1) Determine ideal entry points on history, taking into account spread, profit maximisation, number of trades, drawdown, etc. (I am 100% sure it will not look anything like a ZigZag)

2) Using Kohonen Maps or other methods, determine the relation of the obtained deals to the current context (total indicator readings or other)

3) Trade using the patterns found.
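Step 1 of the above can be sketched as a lookahead scan for entries whose subsequent move beats the spread; the prices (in integer points), the spread and the lookahead are illustrative assumptions:

```python
# Sketch of "ideal entry points on history": mark a bar as an ideal buy
# entry if the price within the next few bars rises by more than the
# spread. Prices are in integer points and invented for illustration.

def ideal_entries(prices, spread, lookahead=3):
    """Indices where buying would beat the spread within the lookahead."""
    entries = []
    for i in range(len(prices) - lookahead):
        future_max = max(prices[i + 1 : i + 1 + lookahead])
        if future_max - prices[i] > spread:
            entries.append(i)
    return entries

prices = [13000, 12980, 13050, 13040, 13100, 13090, 13070]
print(ideal_entries(prices, spread=60))  # -> [1]
```

A real version would also handle sell entries, drawdown along the way, and the other criteria listed in step 1; this only shows the shape of the scan.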

 

Andrei, are you hoping that the coordinates in context space identified in step 2 will allow you to enter more accurately?

 
joo wrote >>

Maybe it should be like this:

1) Determine ideal entry points on history, taking into account spread, profit maximisation, number of trades, drawdown, etc. (I am 100% sure it will not look anything like a ZigZag)

2) Using Kohonen Maps or other methods, determine the relation of received deals to the current context (total indicator readings or other)

3) Trade using the patterns found.

It won't work (I tried it myself).

There are many regularities of different durations, plus randomness, and every single ideal entry point may have its cause in one or several regularities masked by randomness. As a result, by highlighting the context we only get a fit to this random mix, rather than isolating the individual patterns and the context of their use. Every pattern has its own context. imho.

 

There is another thing here: as the number of parameters grows, so does the dimensionality of the FP. And with it, the requirements on deal statistics grow. I think that already at 3 parameters we can speak of statistics only in the case of outright pipsing :).
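A rough worked example of this point: with k bins per axis, a d-dimensional FP has k**d cells, so a fixed deal history thins out fast. The bin count and deal count below are assumed:

```python
# Curse-of-dimensionality arithmetic: average deals per FP cell as the
# number of parameters grows. k and the deal count are assumptions.

k = 10            # bins per parameter axis
deals = 5000      # total deals in the test run

for d in range(1, 6):
    cells = k ** d
    print(d, cells, round(deals / cells, 3))  # avg deals per cell
```

Already at d = 3 there are on average only 5 deals per cell under these assumptions, which is why meaningful statistics at that dimensionality would require a deal count that only pipsing can supply.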

On the other hand, it is desirable to have as many parameters as there are degrees of freedom. To hope that there are only 2 or 3 of them, you have to be an incorrigible optimist. Imho, of course. Although there is no indicator of the number of degrees of freedom yet.

This is the real tragedy of any "objective" approach, including the standard one.

 

The boundary parameters of the optimal zone are one way or another transformed into latent parameters of the TS itself, no matter how they were obtained, by the standard approach or by clustering the context. So it turns out that we still have not escaped TS parameterization.

About 2-3 parameters: there is hope that if the system enters mainly on trend sections, these parameters will be "almost enough", because in times of catastrophes the number of degrees of freedom of the market probably decreases significantly (it becomes simpler).

And in general, I would not focus on the number of degrees of freedom. We are not looking for a function that fully explains the market, but only a more or less robust TS on it. It may be wrong sometimes (and it surely will be!), but we can hope that 2-3 parameters will be enough for most cases.

 
joo wrote >>

Probably both. :)

Thanks for the other one, it's easy to do:

A square of 500x500 points = 250,000 points in total. Each point is drawn with a "trend line" object (not a ray, of course): horizontal lines of length 1 are used, i.e. segments linking adjacent points. Why not a point object? Because a point object cannot be seen on a chart. The colour of each object is calculated from the values of its x and y coordinates. Thus there are 250,000 objects in that square, each with its own colour, and MT4 handles it without any problems!

The only limitation is that MT has a limited pool of graphical styles, at most 512, which means you cannot use more colours than that.
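The 512-style budget means the plotted value at each of the 250,000 points has to be quantised to at most 512 colour indices. A sketch of such a quantisation in Python (the value function over the grid is invented):

```python
# Sketch of the colour-budget constraint described above: 250,000 points,
# but at most 512 distinct graphical styles, so each point's value must be
# quantised to one of <= 512 colour indices. The value function is assumed.

SIZE, MAX_STYLES = 500, 512

def colour_index(x, y):
    """Map a point's value (here just a function of x, y) to 0..511."""
    value = (x + y) / (2 * (SIZE - 1))          # normalised to [0, 1]
    return min(int(value * MAX_STYLES), MAX_STYLES - 1)

used = {colour_index(x, y) for x in range(SIZE) for y in range(SIZE)}
print(len(used) <= MAX_STYLES)  # -> True: stays inside the style budget
```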

 
Mathemat wrote >>

I have the following question: what makes us so confident that the new paradigm will provide us with greater robustness of the strategy?

We assume that in further trading, already in real trading, the sequence of deals (let's stick to Candid's variant) passed by the new context filter will show a fairly high profit factor. What could happen to make that not so?

In short, at least some additional arguments are needed to justify the acceptability of the new paradigm and to set limits on its use.

As always, the criterion of truth is practice. Nothing else will give either certainty or validity.

However, I want to draw attention once again to the following point. The new paradigm, as Alexey called it, is only another methodology. A methodologically correct approach can give a positive result only if it rests on correct starting points. (Conversely, a methodologically incorrect approach, which is the sin of classical TA, cannot give a positive result even from correct starting points. Hence it is helpless and "unworkable" in the market.)

Using the FP in accordance with its definition and function is, imho, the methodologically correct approach. However, methodology alone does not produce a positive result!

It must rest on the pair: input-output system plus FP parametrization. This is the right foundation. If even one component of this pair is wrong/inadequate/dependent, the whole construction will not work.

Therefore it makes no sense to discuss the robustness, acceptability, stability of this paradigm in isolation from the same properties of the input-output system and FP parameters.

Mathemat wrote >>

The optimal zone, if it does migrate in the FP space, should not do so too abruptly.

In short, it should be solid. Some kind of hypersphere, hyperparallelepiped or something similar, moving at low speed.

I'd say a hyperellipsoid: it combines radial symmetry with different scales along the axes.

However, if the cluster does migrate through the FP, then most likely the FP parameter system is incomplete or the parameters themselves are chosen incorrectly. IMHO
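The hyperellipsoid test mentioned above can be written as a Mahalanobis-distance membership check, which naturally handles different scales along the axes; the centre, covariance and threshold below are illustrative assumptions:

```python
# Sketch of the hyperellipsoid membership test: a point belongs to the
# optimum zone if its Mahalanobis distance from the cluster centre is below
# a threshold. Centre, variances and threshold are assumptions; a diagonal
# covariance keeps the example simple.

centre = [0.5, 20.0]          # cluster centre in a 2-D context FP
variances = [0.01, 25.0]      # axis scales differ by orders of magnitude
threshold = 2.0               # ellipsoid "radius" in standard deviations

def mahalanobis(point):
    """Distance from the centre, scaled per axis by its variance."""
    return sum((x - m) ** 2 / v
               for x, m, v in zip(point, centre, variances)) ** 0.5

print(mahalanobis([0.55, 22.0]) < threshold)   # -> True, inside the zone
print(mahalanobis([0.90, 20.0]) < threshold)   # -> False, outside
```

With a full (non-diagonal) covariance matrix the same test also captures tilted ellipsoids, i.e. correlations between the FP parameters.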