You showed a 3-dimensional probability space, as far as I understood, as an example, to make it clear. And in your actual calculations, what is the dimensionality of the probability space, if it's not a secret?
As an example, yes, but it is an actual calculation for one modification of my system. As I wrote, I use (or at least try to use) systems with random structure as the basis. All modifications have only three dimensions: probability density, time, and price (or rather, the price conversion function).
Mine, for example (in a similar problem, but solved in a different way), is 6-dimensional.
If it's not a trade secret, can you tell me more about it?
I'm also curious about your computational resources, i.e. how long the computation takes and what affects the computation time (the forecast range, for example - how much does that matter?).
For one quote it takes 2-3 hours to make a 5-day forecast and detect a trend change :o(
In the picture you posted it's 80-100 steps, which is quite far ahead. But you show 5 points on the forecast charts. How do these relate?
The 5 days are an aggregation. A lot of architectural decisions haven't stabilised yet - still in a creative search :o)
The first "quick" results of the trend-duration statistics (according to their classification system). For almost all quotes it is a Weibull distribution with the following parameters:
Most critical for the system is the duration of trends within 1, 2 or 3 days. It is unlikely that the system is sensitive enough to detect such trends. The probability of a trend lasting 3 days or less is 12%. I thought it would be lower, around 2-7% :o(
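The quoted 12% is just the Weibull CDF evaluated at 3 days. The post does not show its fitted shape and scale, so the parameters below are hypothetical, picked only to reproduce a probability near 12%; a minimal sketch:

```python
import math

def weibull_cdf(t, k, lam):
    """P(T <= t) for a Weibull distribution with shape k and scale lam (days)."""
    return 1.0 - math.exp(-((t / lam) ** k))

# Hypothetical parameters (the post's fitted values are not shown):
# shape k = 1.4, scale lam = 13 days give roughly the quoted 12% for t <= 3.
p3 = weibull_cdf(3.0, 1.4, 13.0)  # ≈ 0.12
```

With a fit in hand, the same one-liner answers the follow-up question directly: how sensitive must the system be, i.e. what fraction of trends is shorter than its detection horizon.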
I don't want to go into detail, and it won't fit in two words, but I'll try. Some functional dependencies are defined along the series; they form the space for calculating the parameters of another space (for example, one of them is the required partitioning of the multidimensional density). In that next space the multidimensional density is constructed and the parameters of its change are calculated. Then, as in your case, a sweep of the 2-dimensional density over time is constructed, according to the established trend (the stochastic trend of the density). Now I'm thinking about the dependency to be established that should determine which function of the density, under which conditions, is the best candidate for the predicted value (when the mean, when the mode, when other functions...). There are 6 dimensions in total (counting nested ones).
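The last step - sweeping a density over forecast time and pulling candidate point forecasts (mean, mode, ...) from each cross-section - can be sketched in miniature. Everything here is a stand-in: a drifting, widening Gaussian plays the role of the real (unknown) model, and the histogram mode is only one crude estimator among the "other functions" mentioned:

```python
import random
import statistics

random.seed(1)

def cross_section(step, n=2000):
    # Hypothetical cross-section of the density at forecast step `step`:
    # a drifting, widening Gaussian stands in for the actual model.
    return [random.gauss(0.1 * step, 1.0 + 0.2 * step) for _ in range(n)]

def hist_mode(sample, bins=30):
    # Crude mode estimate: the centre of the most populated histogram bin.
    lo, hi = min(sample), max(sample)
    w = (hi - lo) / bins
    counts = [0] * bins
    for x in sample:
        counts[min(int((x - lo) / w), bins - 1)] += 1
    return lo + (counts.index(max(counts)) + 0.5) * w

# "Sweep" the density over 5 forecast steps, extracting two candidate
# point forecasts (mean and mode) from each cross-section.
sweep = [(statistics.fmean(s), hist_mode(s))
         for s in (cross_section(k) for k in range(1, 6))]
```

The open design question in the post is then which column of `sweep` (or some other functional of the density) to report as the forecast under which market conditions.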
About the aggregation - roughly understood: the forecast is computed not for 5 points but for more. I can only compute 7-10 points in 2 hours; I use Close. I thought long and hard about (H+L)/2, but decided to think twice before using it, because it is not clear how calculations that float relative to the discrete timeframe can affect a system whose calculation scheme is oriented exactly to discrete timeframes.
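The Close vs (H+L)/2 choice is easy to state concretely. A trivial illustration with made-up OHLC bars (not the author's data): the median price mixes information from the whole bar, but the moments at which H and L occur float inside the bar, which is exactly the concern for a strictly discrete-time calculation scheme.

```python
# Made-up daily OHLC bars: (open, high, low, close).
bars = [
    (1.1000, 1.1050, 1.0980, 1.1030),
    (1.1030, 1.1070, 1.1010, 1.1020),
    (1.1020, 1.1040, 1.0990, 1.1000),
]

# Close is sampled at a fixed, known instant (the bar boundary)...
close_series = [c for (_, _, _, c) in bars]

# ...while the median price (H+L)/2 depends on extrema whose timestamps
# float anywhere inside the bar.
median_series = [(h + l) / 2 for (_, h, l, _) in bars]
```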
The entire calculation is in MT5, but I want to move it into a DLL. The system works with roughly these distributions (cross-section):
Sometimes there are smooth dependencies, but rarely.
Conceptually understood; mine is a bit simpler, closer to the "classics".
About (H+L)/2 - I also thought long and hard, but decided to think again before using it. I ended up switching to Open[] solely because of the calculation schedule.
I complicated the model by including the influence of neighbouring quotes in the forecast, but didn't have time to finish it over the weekend. There are no forecasts now; the "time machine" has been taken apart and the upgrade will take a couple of days. :о(
I think you said that you analyse on the daily timeframe. But 3-day trends do not and cannot exist on the daily TF by definition. On the 4-hour TF a 3-day trend is well defined and can be analysed - exactly as a trend.
A good cause should not have been taken apart...
It's all a matter of classification. I don't use a zig-zag to identify trends, because the price values at local extrema are essentially completely random. To put it another way, the price visits the local extrema of the ZZ so rarely that it is simply impossible to find regularities there. From a practical point of view it is useless - any assumption based on it is a shot in the dark.
A different classification is used, one which gives some hope of predicting trends. But this classification has its subtleties: rarely, a trend of 1 full day can appear on the daily counts (from the date of the trend change). There is nothing to worry about; often such a 1-day trend is a bar with a very large range, i.e. the "directional sum" of increments within it is very large. Then, after the "explosion" (or "shift"), the system enters the market's "rhythm" for a long time and forecasting becomes possible. But the system will most probably be wrong during such periods.
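For contrast with the classification above, here is the most naive possible trend-duration tally: a trend as a run of consecutive same-sign daily changes. This is explicitly NOT the author's classifier (they reject extrema-based schemes), just a baseline showing how duration samples for statistics like the Weibull fit mentioned earlier could be gathered:

```python
def trend_durations(closes):
    """Lengths of runs of same-direction daily changes (naive trend proxy)."""
    durations = []
    run, prev_sign = 0, 0
    for a, b in zip(closes, closes[1:]):
        # A flat day inherits the direction of the ongoing run.
        sign = 1 if b > a else (-1 if b < a else prev_sign)
        if sign == prev_sign or prev_sign == 0:
            run += 1          # run continues (or the very first change)
        else:
            durations.append(run)  # direction flipped: close out the run
            run = 1
        prev_sign = sign
    if run:
        durations.append(run)
    return durations
```

Feeding such durations into a distribution fit is the easy part; the thread's point is that the classifier producing them is where all the sensitivity lives.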