Market patterns

 
pantural:

"Finding what you don't know what" is apparently just my case.

As for the nuances of forum communication, filtering out the flood and so on: frankly, the designer forums seemed to me to have more rubbish; the pseudo-bohemian crowd there is touchier and more quarrelsome, while traders, because of the risk, think more soberly (IMHO), apparently the money sobers them up... In general, the flood doesn't bother me much. The main thing is that there is a grain of truth in it.

Why obscurantism? It's a great starting point! Let's think, for a start, about how it can be condensed into a set of inferences that are as rigorous as possible, with which one could process time series and make predictions.

1) Three types of trends.

Quite questionable, in my opinion. You could just as well make it 2, or 5, or 10. There are thousands of large and hundreds of thousands of small influences on price, a flow of trade orders based on arbitrary forecasts, thousands of strategies, all timeframes, and so on. The distribution is close to normal in most parameters. I see no reason to divide it into exactly 3 types, and I don't understand what the point would be. I reject it.

2) Three primary trend phases; same sort of thing. (IMHO) It's a bit far-fetched.

3) The market takes all news into account. Yes, this makes sense. It's a kind of ratification of TA, but no more than that.

4) FIs and their baskets are correlated. Oh!!! Now that's a sweeping thought! Let's figure out exactly how, and what the common ways are to find the most meaningful correlations, relying as far as possible only on the price time series, preferably without additional inputs. Because if the information is public, it has obviously been picked completely clean, i.e. nothing is left; the whole strength lies in the search methods, or rather in the building blocks of such methods. We won't finalize the methods themselves, as that would spoil the idea, but the constructor can be discussed.

5) The volume time series complements the prediction. Also good. As far as I'm concerned, in general we need to find a way to quantify how all the available information relates to the time series under study. And to use all the data, above a certain threshold of course.

6) An unambiguous signal that the trend has ended is very subjective. One "unambiguity" is not the same as another; it depends on how you define it. In short, I don't agree.

The main Dow principle is that the market is more likely to stay in its current state than to change sharply; that is, a trend is statistically more likely to continue than to reverse.

In principle this is something to think about: how to turn it into a brief formal statement and add it to the collection.

Great! A start has been made, I think we can do it!

If the market takes all news into account, then its informational entropy is at a maximum, and therefore it is indistinguishable from a random walk. In such a market you cannot make a profit, and neither can anyone else. The idea that "the market takes everything into account" is the cornerstone of the EMH. For the record, this theory covers at least 97% of the market movements we observe (my subjective, a priori estimate), and that is too much to simply ignore.

The mere fact that instruments are correlated gives us nothing by itself. Imagine a random number generator, and suppose two other time series are tightly correlated with this generator. Looking at these two series we see that they are correlated with each other, even though the only link between them is that very generator. What does knowing the correlation between the two series give us? Nothing, because the behaviour of the RNG underlying their correlation is still undetermined.
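A minimal simulation of this point, as a sketch in Python with numpy (the common driver and the noise levels are assumptions made up for the illustration, not anything from the thread): the two series correlate strongly with each other, yet knowing one of them today says nothing about the other's next step, because the shared driver itself is unpredictable.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# The hidden "random number generator": i.i.d. increments nobody can predict.
driver = rng.normal(size=n)

# Two observed series, each equal to the driver plus its own independent noise.
a = driver + 0.5 * rng.normal(size=n)
b = driver + 0.5 * rng.normal(size=n)

# Strong contemporaneous correlation between the two observed series...
print("corr(a[t], b[t]):  ", round(np.corrcoef(a, b)[0, 1], 2))           # roughly 0.8

# ...but no predictive power of one for the next value of the other,
# because the common driver is itself pure noise.
print("corr(a[t], b[t+1]):", round(np.corrcoef(a[:-1], b[1:])[0, 1], 2))  # roughly 0.0
```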

Trends do tend to maintain their state. But the degree of this "propensity" is extremely small, about 53% in today's markets (this is a calculated and verified value). That is a very small figure. If you learn how to isolate that 3% edge in your TS, you will make money. Against that background, hrenfx's claim that breakout systems are hardly possible looks unprofessional. Moreover, based on some statistics one can argue that in today's markets only trading strategies aimed at level breakouts can survive.
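Not the poster's calculation, but a crude sketch (Python/numpy) of how a number of that sort can be obtained: take the share of bars whose direction is repeated by the next bar. The synthetic AR(1) returns below are an assumption standing in for a weakly trending market; on real data you would pass in close-to-close returns instead.

```python
import numpy as np

def continuation_rate(returns: np.ndarray) -> float:
    """Fraction of bars whose direction (sign) is repeated on the following bar."""
    s = np.sign(returns)
    s = s[s != 0]                    # ignore zero-return bars
    return float(np.mean(s[1:] == s[:-1]))

# Synthetic weakly persistent returns: AR(1) with a small positive coefficient.
rng = np.random.default_rng(1)
n, phi = 100_000, 0.08
r = np.empty(n)
r[0] = rng.normal()
for t in range(1, n):
    r[t] = phi * r[t - 1] + rng.normal()

print(f"continuation rate: {continuation_rate(r):.3f}")   # comes out a little above 0.5
```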

 
C-4:

Trends do tend to maintain their state. But the degree of this "propensity" is extremely small, about 53% in today's markets (this is a calculated and verified value). That is a very small figure.

How was this figure derived, and for which market?

 
Avals:

How was this figure obtained and for which market?

Again, I'm referring to the Hurst exponent (whenever we talk about the probability of a trend continuing or reversing, we are in fact talking about price determinism). That's why I think it's appropriate to mention this value once more (but only in passing) in this thread.

As for the method itself, it is a custom non-parametric numerical method based on the zigzag indicator. The mathematics is rather specific, and I don't think it is worth discussing here. The only important thing is that the calculation results match the test data series with decent accuracy: the coefficient calculated for a series with a known value of H=0.30 came out as 0.34, for a series with H=0.50 it came out as 0.51, and for H=0.70 it came out as 0.70 (i.e. the stronger the trend, the smaller the error). However, as far as I know, there are functions in the R language, in particular in the "pracma" package, which work with no less decent accuracy but use tens of times less computing resources. So the method itself is secondary; the only important thing is that the values obtained on the test samples are close to the expected ones, and therefore we can assume that the values obtained for the markets are also close to the true ones.
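The zigzag-based method itself isn't shown in the thread, so here is only a generic rescaled-range (R/S) estimator of H as a Python sketch, with a sanity check on white-noise increments, where H should come out near 0.5 (plain R/S is known to be biased slightly upward on short windows). In R, the pracma package's hurstexp() is one ready-made alternative.

```python
import numpy as np

def hurst_rs(increments: np.ndarray,
             window_sizes=(16, 32, 64, 128, 256, 512)) -> float:
    """Rescaled-range (R/S) estimate of the Hurst exponent from a series of increments."""
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(increments) - n + 1, n):
            block = increments[start:start + n]
            z = np.cumsum(block - block.mean())    # mean-adjusted cumulative sum
            r = z.max() - z.min()                  # range of the cumulative deviations
            s = block.std(ddof=1)                  # scale of the block
            if s > 0:
                rs_vals.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_vals)))
    slope, _ = np.polyfit(log_n, log_rs, 1)        # H is the slope of log(R/S) vs log(n)
    return float(slope)

# Sanity check: white-noise increments (a plain random walk in price) should give H near 0.5.
rng = np.random.default_rng(2)
print(f"H estimate on white noise: {hurst_rs(rng.normal(size=50_000)):.2f}")
```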

Different markets were tested. In this respect they show similar characteristics: all trendiness values fall in the range 0.50-0.54, though there are other statistics that differ greatly. In the full calculation of H I also obtained some other values which differ a lot from market to market, but what they mean I do not yet know; I am working on it now. There are, however, first guesses. I think I am close to a numerical definition of the Noah/Joseph effect, as well as a coefficient reflecting how noisy a market is (if it holds up, Forex is one of the noisiest markets).

 
pantural:

a) "Finding something without knowing what" is apparently just my case.

b) As for the nuances of forum communication, filtering out the flood and other rubbish: frankly, the designer forums seemed to me to have more rubbish; the pseudo-bohemian crowd there is touchier and more quarrelsome, while traders, because of the risk, think more soberly (IMHO), apparently the money sobers them up... In general, the flood doesn't bother me much. The main thing is that there is a grain of truth in it.

Why obscurantism? It's a great starting point! Let's think, for a start, about how it can be condensed into a set of inferences that are as rigorous as possible, with which one could process time series and make predictions.

1) Three types of trends.

Quite questionable, in my opinion. You could just as well make it 2, or 5, or 10. There are thousands of large and hundreds of thousands of small influences on price, a flow of trade orders based on arbitrary forecasts, thousands of strategies, all timeframes, and so on. The distribution is close to normal in most parameters. I see no reason to divide it into exactly 3 types, and I don't understand what the point would be. I reject it.

...

a) Profit.

b) Traders are more diverse, and they quickly find someone to counterbalance the "pseudo-bohemians" and other such types. Plus the "non-sleepers" (moderators) :).

c) Don't throw the baby out with the bathwater. If you are examining a theory, take into account the time and the market for which, and for whom, it was written. And also which timeframes.

1) Then you are simply a demolisher of theories. There's no need to read any further.

 
C-4:
...

Trends do tend to preserve their state...

and what others have said about it...

The problem is that each market "state" is preserved in its own way; the functions are different. We should first formalize the state-recognition function itself in order to detect a state's presence in a time series. Any state has at least two variables (the time scale, and the time since it began divided by the average lifetime of that state) on which the probability function for predicting the continuation or change of that state depends, and usually there are more variables. The function also depends on how the state is defined: a "trend" can be defined in different ways, and those different trends will be predicted in different ways. This is what turns the science into alchemy, because the space of options explodes and it becomes difficult to define criteria for the dominant (in terms of payoff) directions of research. That is the main trouble.

To begin with, we need to deal with the space of functions (their basic types) for recognizing "states" in time series, and then with the prediction functions attached to them.

Methods for searching for methods of searching for functions are, in my opinion, too heuristic and shamanistic an area; the conversation could degenerate into a discussion (more precisely, a flood) about "positive thinking" and motivation, although the direction is interesting. IMHO.

 
Alex_Bondar:

The problem is that each market "state" is preserved in its own way; the functions are different. We should first formalize the state-recognition function itself in order to detect a state's presence in a time series. Any state has at least two variables (the time scale, and the time since it began divided by the average lifetime of that state) on which the probability function for predicting the continuation or change of that state depends, and usually there are more variables. The function also depends on how the state is defined: a "trend" can be defined in different ways, and those different trends will be predicted in different ways. This is what turns the science into alchemy, because the space of options explodes and it becomes difficult to define criteria for the dominant (in terms of payoff) directions of research. That is the main trouble.

To begin with, we need to deal with the space of functions (their basic types) for recognizing "states" in time series, and then with the prediction functions attached to them.

Methods for searching for methods of searching for functions are, in my opinion, too heuristic and shamanistic an area; the conversation could degenerate into a discussion (more precisely, a flood) about "positive thinking" and motivation, although the direction is interesting. IMHO.

I think you are making it too complicated.

The market has only three states: trend, counter-trend and a state of uncertainty. Any fourth state you invent will turn out to be a combination of those three. That's why I'm willing to grant that there may be three methods of analysis, one per state, but no more than that. Moreover, if a single value defines which of the three states we are in, then these states are described by a single function, and our search reduces to finding that function alone. Thus we arrive at a simple market model which fully agrees both with the observed market and with test data series whose statistics are known in advance.

P.S. The state of uncertainty is simply a certain band around a probability of 0.5. So, if your identification methods are so good that a continuation probability of 0.50001 is already significant for you, then this state is a trend state for you: you know how to work with it and how to extract profit from it. For others, the same state is uncertainty. And the worse the methods these others use, the more of the time the market is in a state of uncertainty for them. I.e. one of the three values is a purely subjective assessment, and the market model can be simplified to two states described by a single statistic.
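A small sketch of that last point (Python; the band width and the crude probability estimator are my own assumptions): whether a window is labelled trend, counter-trend or uncertainty depends only on an estimated continuation probability and on how wide a band around 0.5 you allow yourself.

```python
import numpy as np

def estimate_p_continuation(returns: np.ndarray) -> float:
    """Crude estimate: share of bars whose direction repeated on the next bar."""
    s = np.sign(returns)
    s = s[s != 0]
    return float(np.mean(s[1:] == s[:-1]))

def classify_state(p_cont: float, band: float = 0.02) -> str:
    """Map a continuation probability to one of the three states.

    The band width is the subjective part: the better your identification
    method, the narrower the band you can afford, and the less of the
    market ends up labelled 'uncertainty' for you.
    """
    if p_cont > 0.5 + band:
        return "trend"
    if p_cont < 0.5 - band:
        return "counter-trend"
    return "uncertainty"

rng = np.random.default_rng(3)
window = rng.normal(size=2000)                     # pure noise stand-in for recent returns
print(classify_state(estimate_p_continuation(window)))   # usually "uncertainty" for pure noise
```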

 
C-4:

I think you are making it too complicated.

The market has only three states: trend, counter-trend and uncertainty. Any fourth state you can think of will be made up of these three.

I'm not talking about the general classification, but about what specifically can be predicted, and how. In practice we deal with patterns (vectors, arrays) of price and with functions that compress their description, and these can be partitioned in 100500 different ways; that is not the point. We need the shortest route to identifying and clustering such state recognizers and the methods for predicting them. There are far more than 3 of them, and more than 30. In fact there is no sense in even counting them; they are determined for each FI by statistical methods.

It would be even simpler to say not 3 but 2: the first are the recognition functions, the second are the prediction functions for the recognized states.

But, as often happens, simplicity is deceptive.
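One way to write that "not 3 but 2" decomposition down concretely (a Python sketch; the function names, signatures and the toy rules are all mine): a recognizer maps a price window to a state label, a predictor maps the label plus the window to a forecast, and everything else is a choice of these two functions.

```python
from typing import Callable, Sequence

# The two-function decomposition, written as plain type aliases
# (names and signatures are illustrative, not anything fixed in the thread).
Recognizer = Callable[[Sequence[float]], str]         # price window -> state label
Predictor  = Callable[[str, Sequence[float]], float]  # (state, window) -> expected next move

def naive_recognizer(window: Sequence[float]) -> str:
    """Label the window 'trend' if the net move dominates the bar-to-bar churn."""
    net = abs(window[-1] - window[0])
    churn = sum(abs(b - a) for a, b in zip(window, window[1:])) or 1e-12
    return "trend" if net / churn > 0.5 else "uncertainty"

def naive_predictor(state: str, window: Sequence[float]) -> float:
    """Extrapolate the last move only when a trend was recognized."""
    return (window[-1] - window[-2]) if state == "trend" else 0.0

prices = [1.10, 1.101, 1.103, 1.104, 1.106, 1.108]
state = naive_recognizer(prices)
print(state, naive_predictor(state, prices))
```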

 
Alex_Bondar:

I'm not talking about the general classification, but about what specifically can be predicted, and how. In practice we deal with patterns (vectors, arrays) of price and with functions that compress their description, and these can be partitioned in 100500 different ways; that is not the point. We need the shortest route to identifying and clustering such state recognizers and the methods for predicting them. There are far more than 3 of them, and more than 30. In fact there is no sense in even counting them; they are determined for each FI by statistical methods.

It would be even simpler to say not 3 but 2: the first are the recognition functions, the second are the prediction functions for the recognized states.

But, as often happens, simplicity is deceptive.

At the heart of any pattern lies either a continuation of the trend or a reversal of it. No matter how many patterns there are, they all exploit the same effects. To illustrate this, take any two trend robots based on different principles and run them on the same symbol: their correlation will be high. The reason is that although they use different patterns, they both make money thanks to the same underlying H. It's that simple.
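A sketch of that illustration in Python (the return process, the two rules and their parameters are all assumptions invented for the example, not anyone's actual robots): a moving-average crossover and a channel breakout are built on different principles, yet on the same weakly persistent series their bar-by-bar P&L ends up noticeably correlated, because both only profit when moves continue.

```python
import numpy as np

rng = np.random.default_rng(4)
n, phi = 20_000, 0.05
r = np.empty(n)
r[0] = rng.normal()
for t in range(1, n):
    r[t] = phi * r[t - 1] + rng.normal()           # weakly persistent returns
price = np.cumsum(r)

def sma(x: np.ndarray, w: int) -> np.ndarray:
    """Trailing simple moving average (first w-1 values padded with x[0])."""
    c = np.cumsum(np.insert(x, 0, 0.0))
    out = np.full(len(x), x[0], dtype=float)
    out[w - 1:] = (c[w:] - c[:-w]) / w
    return out

def ma_crossover_positions(p: np.ndarray, fast: int = 10, slow: int = 50) -> np.ndarray:
    """+1 while the fast average is above the slow one, otherwise -1."""
    return np.where(sma(p, fast) > sma(p, slow), 1.0, -1.0)

def breakout_positions(p: np.ndarray, lookback: int = 40) -> np.ndarray:
    """+1 after a new lookback-high, -1 after a new lookback-low, otherwise hold."""
    pos = np.zeros(len(p))
    for t in range(lookback, len(p)):
        window = p[t - lookback:t]
        if p[t] > window.max():
            pos[t] = 1.0
        elif p[t] < window.min():
            pos[t] = -1.0
        else:
            pos[t] = pos[t - 1]
    return pos

# Apply yesterday's position to today's return (no look-ahead) and compare the P&L streams.
pnl_ma = ma_crossover_positions(price)[:-1] * r[1:]
pnl_bo = breakout_positions(price)[:-1] * r[1:]
print("P&L correlation of the two trend rules:",
      round(float(np.corrcoef(pnl_ma, pnl_bo)[0, 1]), 2))
```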
 
pantural:

Hello, everyone. I'm Pantural.

This is my third attempt to settle into Forex. I feel there is something to it, but life keeps bending things its own way, and it never got further than sinking two $200 deposits. Now I think the situation is better than it was 4 years ago; at least the spreads are impressively tight! (oh my god! It is the pantural! wow!)

I've been looking in here for a while now, whenever I get a spare minute. I haven't really gotten into automation yet, and I don't have much experience in manual trading either. But I have matured, like a juicy peach! So I decided to log in and start a dialogue. I'm a hot Caucasian guy, so I expect seriousness and deference.

I read the thread How to Write a Robust Expert Advisor, and some posts encouraged me to investigate.

In particular, I want to define once and for all what is meant by a "market regularity", also called an "inefficiency". As I understand it, efficiency = randomness and, accordingly, inefficiency = non-randomness. Long ago I immediately, without digging into the essence of it, felt as if I understood what it was about, but now I must admit that I no longer understand; everything has come apart. I have to put it back together again.

For instance, here is one comrade asking, in black and white:

Is there a procedure (method, algorithm) for finding such patterns? Or is it a random search through all possible combinations of indicator parameters and Expert Advisors, candlestick patterns, etc., and if the equity curve rises and differs from the price chart, a discovered regularity is declared? Is there a logic and a plan to it?

Please speak without modesty or embarrassment, but no memes or other jokes, please.

This is a good starting point, it is a pity nobody usually replies on this subject.

I can simply write what regularities I have found. I have found 2 patterns, and neither one is a secret.

1) Volatility cyclicality. If we take the APP indicator with a period of 12 and put it on the hourly charts of EURUSD and GBPUSD, we will see a cyclical pattern. It can easily be extrapolated by linear prediction or by the Fourier method. From this we can roughly predict the average volatility over the next 12 hours, with a certain margin of error. What to do with this is not yet clear. (A sketch of this kind of extrapolation is given after point 2.)

2) Every pair has its own dominant fluctuations (i.e. persistent, undamped swings); they change quite slowly and are therefore fairly easy to forecast with some accuracy. For example, we can simply measure the segments (legs) of a zigzag, average their length in pips and in hours, and obtain an individual value for a particular pair at a given moment; we can then say that this average will not change much over the next half of the measurement window. For example, if we averaged over 50 zigzag legs, the average will most likely stay about the same over the next 25. (See the second sketch below.)
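A sketch of point 1 in Python (the data are synthetic, and the volatility proxy is a 12-bar rolling mean of absolute hourly returns rather than any particular indicator; both are assumptions for the illustration): extract the dominant cycle from the volatility series with an FFT and extrapolate it 12 hours ahead.

```python
import numpy as np

rng = np.random.default_rng(5)
hours = np.arange(24 * 90)                               # ~90 days of synthetic hourly bars
# Synthetic returns whose scale follows a 24-hour cycle, standing in for real EURUSD data.
scale = 1.0 + 0.5 * np.sin(2 * np.pi * hours / 24)
returns = scale * rng.normal(size=hours.size)

# Volatility proxy: 12-bar rolling mean of absolute returns.
w = 12
vol = np.convolve(np.abs(returns), np.ones(w) / w, mode="valid")

# Pick the dominant cycle with an FFT and extrapolate it 12 hours ahead.
spec = np.fft.rfft(vol - vol.mean())
freqs = np.fft.rfftfreq(vol.size)                        # in cycles per bar
k = np.argmax(np.abs(spec[1:])) + 1                      # strongest non-constant component
amp = 2 * np.abs(spec[k]) / vol.size
phase = np.angle(spec[k])
t_future = np.arange(vol.size, vol.size + 12)
forecast = vol.mean() + amp * np.cos(2 * np.pi * freqs[k] * t_future + phase)

print("dominant period, hours:", round(float(1 / freqs[k]), 1))             # about 24
print("forecast mean volatility, next 12 hours:", round(float(forecast.mean()), 3))
```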
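And a sketch of point 2 (Python; the swing detector is a plain fixed-threshold zigzag, not the MetaTrader indicator, and the threshold and the synthetic random-walk prices are assumptions): measure the zigzag legs, average the first 50, and compare with the average of the next 25. This only demonstrates the measurement itself; whether the average really drifts slowly enough to be useful is exactly the empirical claim being made.

```python
import numpy as np

def zigzag_legs(prices: np.ndarray, threshold: float) -> list[float]:
    """Absolute sizes of zigzag legs, using a fixed price-reversal threshold."""
    legs = []
    pivot = extreme = prices[0]
    direction = 1                                   # +1: tracking an up-leg, -1: a down-leg
    for p in prices[1:]:
        if direction == 1:
            if p > extreme:
                extreme = p
            elif extreme - p > threshold:           # reversal down confirmed
                legs.append(abs(extreme - pivot))
                pivot, extreme, direction = extreme, p, -1
        else:
            if p < extreme:
                extreme = p
            elif p - extreme > threshold:           # reversal up confirmed
                legs.append(abs(extreme - pivot))
                pivot, extreme, direction = extreme, p, 1
    return legs[1:]                                 # drop the possibly degenerate first leg

rng = np.random.default_rng(6)
prices = 1000.0 + np.cumsum(rng.normal(size=50_000))    # synthetic random-walk "pair"

legs = zigzag_legs(prices, threshold=5.0)
print("mean of first 50 legs:", round(float(np.mean(legs[:50])), 2))
print("mean of next 25 legs :", round(float(np.mean(legs[50:75])), 2))
```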

There you go. I suggest that everyone post their own findings; it will be interesting for all.

 
C-4:

Again, I'm referring to the Hurst exponent (whenever we talk about the probability of a trend continuing or reversing, we are in fact talking about price determinism). That's why I think it's appropriate to mention this value once more (but only in passing) in this thread.

As for the method itself, it is a custom non-parametric numerical method based on the zigzag indicator. The mathematics is rather specific, and I don't think it is worth discussing here. The only important thing is that the calculation results match the test data series with decent accuracy: the coefficient calculated for a series with a known value of H=0.30 came out as 0.34, for a series with H=0.50 it came out as 0.51, and for H=0.70 it came out as 0.70 (i.e. the stronger the trend, the smaller the error). However, as far as I know, there are functions in the R language, in particular in the "pracma" package, which work with no less decent accuracy but use tens of times less computing resources. So the method itself is secondary; the only important thing is that the values obtained on the test samples are close to the expected ones, and therefore we can assume that the values obtained for the markets are also close to the true ones.

Different markets were tested. In this respect they show similar characteristics: all trendiness values fall in the range 0.50-0.54, though there are other statistics that differ greatly. In the full calculation of H I also obtained some other values which differ a lot from market to market, but what they mean I do not yet know; I am working on it now. There are, however, first guesses. I think I am close to a numerical definition of the Noah/Joseph effect, as well as a coefficient reflecting how noisy a market is (if it holds up, Forex is one of the noisiest markets).

Agreed. Forex is influenced by too much information, and it is very patchy, so it looks very much like a random market. For now it seems to me that determining the direction over the next 10 hours is practically impossible; only the approximate character of the movement can be determined. Although I would love to hear from someone who thinks otherwise (and not just an IMHO: I would like to see the method).