a trading strategy based on Elliott Wave Theory - page 197

 
But because of this it is possible to increase the capitalization of the opened positions without exceeding a predetermined market risk! And that dramatically increases the profitability (in $ per unit of time) of a multi-currency TS compared to any single-currency TS from that portfolio.


This holds only if p = const for all pairs. And that is unlikely.
Imagine p = 0.55. Then fluctuations of just 2-3 points fundamentally change the situation for the pair. Besides, I am not against diversification in general, but against choosing diversification instead of p = 0.8.

If you had the opportunity to choose, which would you prefer:
1. working with 2-3 indicators that give a prediction reliability of 0.8 and an acceptable frequency of deals, or
2. diversification over a set of instruments with the same prediction reliability of 0.55?
 
Neutron:
But what makes you think that our results diverge?


Sergei, I didn't say that either; I was discussing the details with Yuri. But it did draw you into explaining the details of the experiment. Thank you. :о)

PS: I rather suspect that Yuri has written more than one indicator and is now trying to "fit" them in light of your research (that's a joke :o)).
 
In the code we used identical, uncorrelated indicators, and the Monte Carlo method simply simulated their triggering. All indicators were forcibly polled on every bar, and if all of them had a signal to enter the market at the same time, a position was opened. Then the number of successfully opened positions was counted and divided by the total number of opened positions. That is how the reliability P of the forecast made by the group of indicators was determined.
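Below is a minimal sketch of that experiment as I read it (not the author's original code): N identical, independent indicators are each right about the next bar with probability p; a position is counted only when all of them agree, and the group reliability is the fraction of those positions that turn out correct. The names and the values of p, N and n_bars are illustrative assumptions.

import random

def group_reliability(p=0.55, N=3, n_bars=200_000, seed=1):
    random.seed(seed)
    opened = correct = 0
    for _ in range(n_bars):
        true_dir = random.choice((+1, -1))            # "future" colour of the bar
        signals = [true_dir if random.random() < p else -true_dir
                   for _ in range(N)]                 # each indicator is right with probability p
        if all(s == signals[0] for s in signals):     # unanimous signal -> position opened
            opened += 1
            correct += signals[0] == true_dir
    return correct / opened if opened else float('nan')

print(group_reliability())   # roughly 0.65 for p = 0.55 and N = 3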


Interesting! So I misunderstood the methodology of your experiment. Now I have a lot of questions.

Which positions were considered successfully opened, and which were not? "Success" is a rather indefinite notion: a position that goes in the wrong direction can still turn around, and vice versa.

How did you ensure a fixed probability for your indicators? If you could guarantee it, then they are not from the list of standard ones but something artificial. This is all the more interesting because you experimented on market data, which means the probability p for them fits your definition of a successful opening.

How did you ensure their independence?

Unless, of course, this is all a secret.
 
Even a far from complete analysis of today's most popular TS allows us to state with some certainty that the whole variety of market behaviour reduces, in essence, to predicting the direction of price movement after a position is opened, and the probable amplitude of that movement. The answer to the last point can be obtained statistically from the standard deviation on the chosen timeframe:
s=SQRT{SUM{(Close[i-k]-Open[i-k])^2}/(n-1)}.
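A small illustrative helper (not from the original post) that evaluates this expression for given arrays of bar Open and Close prices on the chosen timeframe:

import math

def candle_std(opens, closes):
    # s = SQRT{ SUM{(Close - Open)^2} / (n - 1) }, as in the formula above
    diffs = [c - o for o, c in zip(opens, closes)]
    return math.sqrt(sum(d * d for d in diffs) / (len(diffs) - 1))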
For an individual trader we can obtain an estimate of the average time spent in the market. So, having generated a price series on a timeframe equal to the average position-holding time, we open a position (if there is an indicator signal) at the open of the next bar and close it at the close of that same bar. Clearly, an adequate solution of this problem maximizes the profitability of the TS.
The code holds the whole price series, so the "indicator" knows the "future" colour of every candle in advance. A random number generator with its expectation shifted by a fixed value then "spoils" the indicator so that the probability of a correct prediction matches the value required by the problem statement. With this definition the type of price series does not matter: it can be a meander of unit amplitude, long enough to satisfy the requirement of statistical validity of the results.
In this setting a positive outcome is counted when the colour of the next bar coincides with the indicator's prediction, and the independence of the indicators follows from the very formulation of the experiment.
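A hedged sketch of that setup (one possible way to implement what is described, not the author's code): the "indicator" has perfect foresight of the next bar's colour on a meander-like series and is deliberately degraded so that it is right with probability p; positions are opened at the next bar's open and closed at the same bar's close. The parameter names and values are assumptions.

import random

def simulate(p=0.8, n_bars=100_000, amplitude=1.0, seed=2):
    random.seed(seed)
    price, wins, profit = 100.0, 0, 0.0
    for _ in range(n_bars):
        move = amplitude * random.choice((+1, -1))     # colour of the coming bar (unit-amplitude meander)
        bar_open, bar_close = price, price + move
        true_dir = +1 if move > 0 else -1
        signal = true_dir if random.random() < p else -true_dir   # perfect foresight, spoiled to accuracy p
        pnl = signal * (bar_close - bar_open)          # open at the bar's Open, close at its Close
        profit += pnl
        wins += pnl > 0
        price = bar_close
    return wins / n_bars, profit

accuracy, total = simulate()
print(accuracy, total)    # the measured accuracy comes out close to the required p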
 
If p=0.55, then to reach a reliability of about 0.8 you may well have to use 7-8 indicators. Where would we find that many independent indicators? And even if we found them, we would have to wait all year for all of them to trigger simultaneously (I am exaggerating on purpose). And all for what? To reduce the drawdown. Let's estimate by how much.
The average drawdown D is roughly proportional to the average duration t of the drawdown raised to the power 1-P, where P is the forecast reliability of the indicator or group of indicators:
D(t)=t^(1-P).
For a multicurrency portfolio, the size of the drawdown depends on the number n of instruments used as:
Dm(t)=SQRT(1/n)*t^(1-P).
In turn, the profitability of a TS that uses the MM principle falls exponentially fast as the drawdown grows. Moreover, recall that the yield (in $ over a long period) of a multi-indicator TS decreases exponentially fast as P grows or, which is the same, as the number n of indicators used grows (see the last post with the picture). Assuming the characteristic time t is comparable in both cases, we find that for a multicurrency TS the logarithm of the return grows with the number of instruments as:
SQRT(n)*const^(1-p).
while in the multi-indicator case it behaves as:
const^(1-P)-n.
The first function grows monotonically as the number of pairs increases, while the second decreases as the number of indicators increases. Consequently, it is better to increase the number of instruments used rather than the number of indicators! That is why I choose many currencies and few indicators.
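As a purely numerical illustration (assumed values of t and P) of the drawdown relations above, D(t) = t^(1-P) and Dm(t) = SQRT(1/n)*t^(1-P):

def drawdown(t, P, n=1):
    # Dm(t) = SQRT(1/n) * t^(1-P); n = 1 gives the single-instrument case D(t)
    return (1.0 / n) ** 0.5 * t ** (1.0 - P)

t, P = 100.0, 0.55            # assumed characteristic time and forecast reliability
for n in (1, 2, 4, 8):
    print(n, round(drawdown(t, P, n), 2))
# the drawdown falls as 1/SQRT(n), which is what allows the capitalization of the
# open positions to be increased without exceeding the predetermined market risk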

Yura, I am well aware of the rather loose rigour of these statements. But you must agree that they at least reflect the general dynamics and allow the criteria of optimal behaviour in the market to be analysed in more detail.
 

You have convinced me quite well. I need to reconsider my intuitive approach in this matter.
There are occasional discussions on this and the parallel MQ forums about the value of mathematics in trading.
I believe that what you have said is enough for even a biased opponent to acknowledge this value.

I can only say one thing about your experiment: very instructive. Logical, structured and, most importantly, simple. Almost obvious. There is something to learn from it. Thank you, Sergey.
 
Portfolio management should not be confused with system-building.
There is a fairly well-developed theory and practice of using many instruments and TS in a portfolio. For example, it is known that an optimal portfolio should consist of minimally correlated instruments or TS, so simply increasing their number to the maximum will not do. They have to be specifically selected, and the amount of capital allocated to each TS has to be managed according to the considerations described above. But the only aim of such diversification is smoothing of the resulting equity (which reduces the risks).
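A sketch (not from the post) of the selection criterion mentioned above: given the equity curves of several TS or instruments, compute the pairwise correlations of their increments and prefer the least correlated ones. The data here are hypothetical and numpy is assumed to be available.

import numpy as np

def correlation_matrix(equity_curves):
    # one row per TS / instrument; correlate the increments of the equity curves
    increments = np.diff(np.asarray(equity_curves, dtype=float), axis=1)
    return np.corrcoef(increments)

# three hypothetical equity curves, just to show the call
curves = np.cumsum(np.random.default_rng(0).normal(size=(3, 500)), axis=1)
print(correlation_matrix(curves).round(2))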
As for building a system from several indicators or patterns: there is a misconception that a system simply outputs UP or DOWN signals. That is certainly not the case. Each system tries to exploit a particular possible scenario of price behaviour. If two systems point to one and the same scenario, they are compatible, and then the more reliable of the two should be chosen. If two systems point to different but somehow overlapping scenarios (e.g. on different timeframes), you will still have to trade one particular scenario (system) rather than a mixture of them, and its probability will remain the same; an effective mixed scenario may not exist at all. We trade different systems by buying and selling at discrete points in time, not by making arbitrary up/down predictions.
 
You are right: equity smoothing is the most valuable idea here. With minimal equity volatility we can increase the size of the positions.
 
After all the discussions I decided to calculate what I had been meaning to do for a long time.
The positive result is that I finally understood the difference and why the centering X[i]=Open[i]-Open[i-1] is done. Accordingly, I understood where I was wrong in my previous posts.

The negative result is that everything is not as it seemed to me.

1. I performed two variants of centering: the one above, and removal of a linear regression fitted over the whole interval. The results are fundamentally different.
For the series X[i] the autocorrelation coefficient r[k] does not depend on the correlation lag k and (except for k=1) does not exceed 0.01. I have not calculated the FAC separately, but for EURUSD at t=5,15,30, etc. the results are the same as those presented by Neutron. And at t=1 it is -0.16, which is a bit higher than Neutron's value.

For the series Y[i] obtained by removing the LR, the picture is completely different: r[k] slowly decreases from 1, reaching 0.70 for GBPUSD M15 and 0.97 (!!!) for EURUSD M1 at k=1000. From my point of view this result makes no physical sense: the autocorrelation of the price series cannot be that strong and fall that slowly. Does it follow that this variant of centering is not appropriate? And why not? Sergei, can you explain what is going on here?

2. I calculated the correlation coefficient of several standard oscillators, as well as my own, with the series X[i]. In all cases I found that r[k] is almost independent of k; differences in the values appear only in the fifth digit (even at k=0). The value of r[k] does depend on the timeframe, though, and the r[k] values of different oscillators also differ from one another.

This is not what I expected. In the worst case I expected the same picture as before: a maximum at k=0, rapidly decreasing towards zero as k increases. The constancy of r[k] at different k makes me think that something is wrong. What?
 
I used only two types of time series:
X[i]=Open[i] and X[i]=Open[i]-Open[i-1].
The autocorrelation coefficient was found using the formula:
r(Step)=SUM{(X[i+k]-X[i-Step+k])*(X[i+Step+k]-X[i+k])}/SUM{(X[i+k]-X[i-Step+k])^2}, where the sum is taken over all series members k=Step...n-Step, n is the total number of series members, and Step is the correlation horizon.
In the first case this is usually called the autocorrelation function, which normally lies between -0.5 and 0; in the second case it is the correlogram, which alternates in sign. Both decay exponentially fast.
Yura, a large and non-decaying autocorrelation is what you get if the constant component is not removed: indeed, all terms of the series are then almost equal (to, say, 1.23).
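For reference, here is a direct transcription (0-based indexing, illustrative only) of the r(Step) formula above, to be applied to either of the two series:

def r(X, Step):
    # r(Step) = SUM{(X[j]-X[j-Step])*(X[j+Step]-X[j])} / SUM{(X[j]-X[j-Step])^2}
    num = den = 0.0
    for j in range(Step, len(X) - Step):
        d_prev = X[j] - X[j - Step]
        d_next = X[j + Step] - X[j]
        num += d_prev * d_next
        den += d_prev * d_prev
    return num / den if den else 0.0

# e.g. X = [Open[i] - Open[i-1] for i in range(1, len(Open))] for the second series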

By the way, I analytically obtained an expression for the probability P of a correct prediction by a group of N independent indicators with arbitrary individual reliabilities p[i]:
P = 1 - 2^(N-1)*PROD{1-p[i]}, where the product is taken over all N indicators.
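Evaluating this expression directly (with illustrative values of p; the seven-indicator case ties in with the "7-8 indicators" remark earlier in the thread):

from math import prod

def group_P(p_list):
    # P = 1 - 2^(N-1) * PROD{1 - p[i]} over the N indicators, as stated above
    return 1 - 2 ** (len(p_list) - 1) * prod(1 - p for p in p_list)

print(group_P([0.8, 0.8]))      # 1 - 2 * 0.2 * 0.2 = 0.92
print(group_P([0.55] * 7))      # about 0.76 for seven indicators with p = 0.55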