I feel that this wall at MQ cannot be broken through without the support of forum members. The code is short, and the pros should be able to figure it out quickly; there are no flaws in it. It clearly shows that prices obtained through positions arrive much faster than from Market Watch. How MQ can fail to see the obvious, I do not understand.
1. Because of the condition, your test actually counts only a micro-percentage of iterations.
In essence, you count only the anomalies, the cases where the processor is overloaded with other tasks and has shelved the given one, since over 99% of iterations complete in under 1 microsecond.
And even if you set the condition > 0, there is still no objectivity.
2. Timing such fast operations should only be done as the total time of a full loop, not of a single iteration.
3. But since the loop in your example is limited to 10 seconds (why? for ticks I think 0.1 s is quite enough, because it may well happen that 3 ticks arrive within one second, and then all three handlers run for 10 seconds each, in parallel), no timing is needed at all. It is simpler to count how many iterations complete in a given time: the more iterations, the higher the throughput.
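The "count iterations in a fixed window" approach can be sketched in generic C++. QueryBid here is a hypothetical stand-in for the MQL5 price call, not the actual test code:

```cpp
#include <chrono>
#include <cstdint>
#include <cstdio>

// Hypothetical stand-in for the price query being benchmarked
// (SymbolInfoDouble(_Symbol, SYMBOL_BID) in the original MQL5 test).
static volatile double g_bid = 1.2345;
double QueryBid() { return g_bid; }

// Count how many calls complete within a fixed time window: the higher
// the count, the higher the throughput. No per-iteration timing is taken,
// so rare scheduler hiccups average out instead of dominating the result.
std::uint64_t CountIterations(double window_seconds)
{
    using clock = std::chrono::steady_clock;
    const auto deadline = clock::now()
        + std::chrono::duration<double>(window_seconds);
    std::uint64_t count = 0;
    double sum = 0.0;               // accumulate to keep the call from being optimized away
    while (clock::now() < deadline) {
        sum += QueryBid();
        ++count;
    }
    std::printf("sum=%f iterations=%llu\n", sum, (unsigned long long)count);
    return count;
}
```

The figure of merit is the iteration count itself, which is exactly the "the more, the better" comparison described above.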
I modified your code "a bit". I think my variant reflects reality better.
The measurement is done one method at a time, so as not to mix the two variants: even-numbered ticks go to SYMBOL_BID, odd ones to GetBid().
I added sums and their output just in case, as an attempt to keep the compiler from optimizing the calls away.
The output is cumulative.
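The alternating scheme described above (one method per tick, cumulative totals, sums printed to resist compiler optimization) could look roughly like this in C++. BidStandard and BidAlternative are hypothetical stand-ins for SymbolInfoDouble(SYMBOL_BID) and GetBid(), and OnTickSimulated imitates the tick handler:

```cpp
#include <chrono>
#include <cstdint>
#include <cstdio>

// Hypothetical stand-ins for the two ways of getting the Bid price.
static volatile double g_bid = 1.2345;
double BidStandard()    { return g_bid; }
double BidAlternative() { return g_bid; }

// Cumulative totals across "ticks": one method per tick, never both.
static std::uint64_t g_itersA = 0, g_itersB = 0;
static double g_sumA = 0.0, g_sumB = 0.0;  // printed so the calls cannot be elided

// On each simulated tick, run only one variant for a fixed window,
// chosen by tick parity, and accumulate its iteration count.
void OnTickSimulated(std::uint64_t tick_number, double window_seconds)
{
    using clock = std::chrono::steady_clock;
    const auto deadline = clock::now()
        + std::chrono::duration<double>(window_seconds);
    if (tick_number % 2 == 0) {
        while (clock::now() < deadline) { g_sumA += BidStandard();    ++g_itersA; }
    } else {
        while (clock::now() < deadline) { g_sumB += BidAlternative(); ++g_itersB; }
    }
    std::printf("A: iters=%llu sum=%f | B: iters=%llu sum=%f\n",
                (unsigned long long)g_itersA, g_sumA,
                (unsigned long long)g_itersB, g_sumB);
}
```

Because each tick exercises only one variant, the two measurements never interleave within a single window, which is the point of the even/odd split.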
My result:
As you can see, the difference in performance is three times in favour of the standard version.
Does the original version by fxsaber show a GetBid advantage, or is it just a more powerful / less loaded PC?
His variant also showed a GetBid advantage under full CPU load. But my variant shows a threefold advantage for the standard function under the same load.
That is because my variant accounts for the average time over all iterations of getting the Bid price, of which only a tiny fraction are anomalous hangs.
Who knows why the processor bogs down on the standard function (when the delay exceeds 100 µs) in a difficult "minute". Still, the average time is three times lower for the standard function.
So, for example, with if (Interval##A > 100) the picture is this:
whereas with if (Interval##A > 0) it is already quite different, showing a random distribution of anomalous delays between the standard and the alternative way of getting the Bid price,
while my test under the same CPU load shows:
Therefore, I think fxsaber's version of the test is far from objective.
I did not load the CPU with agents but with this script; it was more efficient.
After a slight modification of the fxsaber test, to demonstrate clearly what percentage of iterations actually enters the calculation:
i.e. approximately 0.01%.
You bet.
If the average execution time of SymbolInfoDouble(_Symbol, SYMBOL_BID) is about 50 nanoseconds, only the calls that take over 100,000 nanoseconds are counted.
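The arithmetic behind the "approximately 0.01%" estimate is straightforward: with a ~50 ns average call and a >100,000 ns filter, only rare outliers survive the condition. A minimal helper (my own sketch, not part of either test) makes the counted share explicit:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Fraction of measured call times exceeding a threshold: this is the
// share of iterations that the filtering condition lets into the stats.
double CountedFraction(const std::vector<double>& times_ns, double threshold_ns)
{
    if (times_ns.empty()) return 0.0;
    const std::size_t above = static_cast<std::size_t>(std::count_if(
        times_ns.begin(), times_ns.end(),
        [threshold_ns](double t) { return t > threshold_ns; }));
    return static_cast<double>(above) / static_cast<double>(times_ns.size());
}
```

For example, 9,999 samples at ~50 ns plus a single 150,000 ns outlier give a counted fraction of 1/10,000, i.e. the "approximately 0.01%" figure above.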
We could just as well have made the condition not "more than 100 µs" but "more than 3 µs"; the result would apparently be the same. The idea was a segmented study: under different execution conditions there may be differences across segments and sections. Execution priorities often depend on circumstances: under light load some priorities apply, under high load others, and under critical load the ones that keep the computer from hanging and crashing, with performance taking a back seat.
In general, trading at more than 70% hardware load is wrong; that is almost critical. The load on machines running live EAs should not exceed 60%.
and do you already have HFT brokers?)
Try testing SymbolInfoTick when there is only one symbol in Market Watch and when there are dozens of symbols but you query just one instrument, as in your example.
There is a high probability that the server sends compressed traffic and that SymbolInfoTick experiences these intermittent slowdowns while decompressing the data.
That is, with many symbols the dips in test time should become even more frequent or deeper.
In recent builds, receiving the tick stream has no effect even theoretically. In practice, SymbolInfoTick already works from a cache, but some citizens keep looking for a black cat.
It is not even 80% load in that test: 6 agents are running on 4 cores, i.e. 100% guaranteed.
The only question is how his system's task scheduler handles the situation. And yet the authors claim that the terminal's implementation is to blame.
That is, a situation is artificially created in which the computer is overloaded and literally everything on it slows down, and then claims are made along the lines of "Oh look, why does the terminal lag sometimes?".
Let us close our eyes to the fact that even under such conditions it is "about 0.01%": to hell with the details! It suffices to say that "no one cares about the average temperature across the hospital", "lags cause problems when trading" and "we want HFT".
And of course we want HFT with 20 experts on an old office desktop or a half-dead virtual machine.
PS: PositionSelectByTicket() certainly accesses a shared resource with access synchronisation in its implementation. And if you do not select the position on every call, you are reading a stale price. It was simpler to take a "snapshot" via SymbolInfoDouble.
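One way to picture the trade-off is the following speculative C++ sketch of the two access styles; this is an assumption about the general pattern, not the terminal's actual implementation:

```cpp
#include <atomic>
#include <mutex>

// Hypothetical model of an API that must synchronize on a shared resource
// on every call (the style PositionSelectByTicket presumably uses internally).
struct SharedPriceStore {
    std::mutex m;
    double bid = 1.2345;
    // Every read pays the cost of acquiring the lock.
    double LockedRead() {
        std::lock_guard<std::mutex> g(m);
        return bid;
    }
};

// Hypothetical model of a published "snapshot": reads need no lock, but
// unless the snapshot is refreshed (re-selected), the value may be stale.
struct PriceSnapshot {
    std::atomic<double> bid{1.2345};
    double Read() const { return bid.load(std::memory_order_acquire); }
    void Publish(double b) { bid.store(b, std::memory_order_release); }
};
```

The lock-free snapshot read is what makes the SymbolInfoDouble-style "snapshot" cheaper per call, at the price of possibly reading an old value if no fresh snapshot has been published.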