An effective trading strategy based on multi-currency analysis of multiple DCs
But to make a long story short: it is built on the principle of analyzing 15 currency pairs. A forecast is made at each new bar, 12 steps ahead, as I have already written, for High, Low, Close, and also for the trend extracted from Close by a wavelet transform. The forecast of each of these signals is made independently: the first output of the system forecasts only the first bar, the second output the first and second, the third the first, second, and third, and so on up to the 12th output, which forecasts all 12 bars. In all, the expert system has 48 outputs used to make independent decisions. Correspondingly, the shorter the forecast interval, the higher the accuracy. I get the resulting signal by summing 12 values for the first bar, 11 values for the second, 10 for the third, and so on, and I do the same for all 4 signals. Such averaging removes the random noise and errors of the forecasting unit, which are compensated by the summation, so the accuracy is much higher than that of a single one-off forecast. The results obtained are then added to the forecast values made for the previous bars in earlier operation cycles of the system.
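A minimal sketch of this overlapping-forecast averaging, assuming the 12 outputs' predictions are already available as a matrix (the names, shapes, and stand-in values are illustrative, not the author's code):

```python
import numpy as np

# Illustrative sketch only. forecasts[k, j] holds the prediction of
# output k for future bar j (0-based); output k predicts bars 0..k,
# so entries with j > k stay NaN.
rng = np.random.default_rng(0)
forecasts = np.full((12, 12), np.nan)
for k in range(12):
    forecasts[k, : k + 1] = rng.normal(size=k + 1)  # stand-in forecast values

def combine(forecasts: np.ndarray) -> np.ndarray:
    """Average each future bar over every output that covers it:
    bar 0 over all 12 outputs, bar 1 over 11, ..., bar 11 over 1."""
    n = forecasts.shape[1]
    return np.array([forecasts[j:, j].mean() for j in range(n)])

signal = combine(forecasts)  # repeated for High, Low, Close and the trend
```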
Also, if I may ask, why a simple averaging of the predicted 12-bar segments is adopted in the calculations?
After all, as far as I understand it, the closer to the present a forecast is made, the greater its credibility, with the most recent forecast being the most credible.
At the same time, the forecast made 11 bars ago (whose last step still affects the total forecast, i.e. the nearest unformed bar) is the least reliable, and the individual forecasts may differ significantly. Don't you think the forecasts should be averaged into the resulting forecast curve using weights proportional to their reliability, or to their correlation coefficient relative to the simple average of the forecasts?
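For illustration, here is a hedged sketch of the weighting the question proposes; the decreasing weight profile is an assumption of mine, not something fixed by the thread:

```python
import numpy as np

def weighted_combine(per_bar_forecasts: list[float],
                     reliabilities: list[float]) -> float:
    """Weighted average of several forecasts of the same future bar,
    with weights proportional to an assumed reliability measure
    (e.g., recency or correlation against the simple mean)."""
    f = np.asarray(per_bar_forecasts, dtype=float)
    w = np.asarray(reliabilities, dtype=float)
    return float(np.dot(f, w) / w.sum())

# Example: the freshest forecast gets the largest weight.
forecasts = [1.10, 1.12, 1.08]   # made 0, 1 and 2 bars ago (illustrative)
weights   = [1.0, 0.7, 0.4]      # assumed reliability profile
print(weighted_combine(forecasts, weights))
```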
And another question, about the lag of signals between DCs. Can you estimate the lag quantitatively, at least approximately? Is it connected with the "fuzziness", i.e. does lagging alternate with leading, or is there a shift of the overall average of one DC relative to the average of the other, and if so, by how much (in seconds)?
I have not made quantitative estimates of the lags between different brokerage companies' signals, but in my opinion they are connected with "fuzzy" quotes and also with the differences in quote sources and in how they are processed. The fact is that among properly selected brokerage companies the lag is always present, but its degree differs from moment to moment, which shows that many factors influence it. I perceived this lag visually, and even when it was reduced to a minimum it was still detectable.
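Since the question asks for a number in seconds, here is one possible way to get such an estimate; this is my sketch, not something anyone in the thread did, and it assumes both DCs' mid-quotes have already been resampled onto a common one-second grid:

```python
import numpy as np

def estimate_lag_seconds(q_a: np.ndarray, q_b: np.ndarray,
                         max_lag: int = 30) -> int:
    """q_a, q_b: mid-quotes of the same pair from two DCs on a 1-second
    grid. Returns the shift (in seconds) of B relative to A that
    maximises the correlation of their returns."""
    ra, rb = np.diff(q_a), np.diff(q_b)
    best_lag, best_corr = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = ra[lag:], rb[: len(rb) - lag]
        else:
            a, b = ra[: len(ra) + lag], rb[-lag:]
        c = np.corrcoef(a, b)[0, 1]
        if c > best_corr:
            best_lag, best_corr = lag, c
    return best_lag
```

A stable peak at a nonzero lag would indicate a systematic shift between the DCs' averages; a peak that wanders from run to run would match the "fuzzy", alternating lead/lag picture described above.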
As for the idea itself, it still needs support, not in the reasoning, but in the thoughts that come to you from the graphical output. Of course, when there is nothing to reflect on, no thoughts appear; it probably all depends only on the depth of the thinker's thinking.
I am also interested in this topic, but I have not come across any new ideas since then.
What do you mean?
Yeah, then it turns out that the lag you see in the output can be related to the number of clients and the update rate, as well as the time of day and the number of operations, and is ultimately determined by server load; it also reflects sharp drops, a large number of operations on the channel, and many other factors.
The Expert Advisor system has 48 outputs, 12 for each of the signals High, Low, Close, and trend. Each output gives an independent, parallel forecast: the first forecasts only 1 bar ahead, the second 2 bars ahead, etc. All of them are made at the present moment, and the values of the last formed bar of the 15 currency pairs, without any lagged arguments, are used as input parameters for all outputs. Then all forecasts one bar ahead are averaged over all 12 outputs; all forecasts for the second bar over the 11 outputs from 2 to 12, since the first output gives no forecast for the second future bar; and so on. I did not introduce weighting coefficients because my objective with this method was not to improve forecast accuracy per se, since the forecasts of the different outputs did not differ significantly, but to compensate noise and distortions in the forecasting block; they have the same amplitude of oscillation, and this method reduces them considerably.
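The noise-compensation claim can be made precise under a standard assumption (mine, not stated in the thread): if each output's forecast of a given bar is the true signal plus an independent, zero-mean error of equal variance, averaging N such forecasts shrinks the error variance N-fold:

```latex
f_k = s + e_k,\quad \mathbb{E}[e_k]=0,\quad \operatorname{Var}(e_k)=\sigma^2,\quad k=1,\dots,N
\qquad\Longrightarrow\qquad
\operatorname{Var}\!\Big(\frac{1}{N}\sum_{k=1}^{N} f_k - s\Big)=\frac{\sigma^2}{N}.
```

For the first bar (N = 12) the error amplitude drops by a factor of √12 ≈ 3.5; for the last bar (N = 1) there is no reduction, which matches the observation that the shortest horizons are the most accurate.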
OK. Thank you.
The forum prohibits discussion of specific DCs, and does it really matter? It is not difficult to find a suitable one :) The best one is not very popular abroad, and the most popular is one of ours; everyone has his own luck :)
There was a time when, working in a DC, I did a cluster analysis of quotes from 12 banks. At the time it was assumed that one could find a connection between future quote fluctuations and the differences in arrival time and in the values of the quotes from those banks. Even then it was determined that the pattern we discovered could be used only for pipsing, and only in strong market moves. And not only clustering was performed, but also data analysis by several other methods (or expert systems, as the author of the thread calls some of them). So I cannot understand what one can see in the method the author proposes, for analysis and trading even on short periods (never mind medium- and long-term), if practically nothing useful can be found even in clean, "unbrushed" quotes (not yet averaged by the dealing centre)?
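For flavour, here is a minimal sketch of the kind of analysis described, a reconstruction rather than the original study; the data are synthetic and the features (each bank's deviation from the cross-bank mean quote plus arrival-time offsets) are my assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic stand-in data: for each quote event, the 12 banks'
# deviations from their mean quote and their arrival-time offsets.
rng = np.random.default_rng(1)
n_events, n_banks = 500, 12
quote_dev   = rng.normal(scale=1e-4, size=(n_events, n_banks))
arrival_off = rng.exponential(scale=0.2, size=(n_events, n_banks))
features = np.hstack([quote_dev, arrival_off])

# Cluster the events and inspect whether membership relates to the
# next price move; per the post, any such pattern showed up only in
# strong moves and was usable only for pipsing.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
```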
- Yes, as a pipser you'll get hammered by requotes :)