Features of the mql5 language, subtleties and tricks - page 89
So far I see only one named disadvantage of GetTickCount() - the fact that it is limited by the resolution of the system timer, and that is a serious problem. The rest is of little practical significance.
No one runs ultra-short tests of 15 milliseconds; such results are unstable. You need at least half a second for a test, and only then can you talk about anything.
You are wrong. Within those 15 milliseconds, the GetTickCount() function is called over 6 000 000 times.
My calculation is correct. The GetTickCount() value changes every 1/(2^6)=1/64 seconds (15625 microseconds).
Developers, please confirm.
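As a side note, both claims can coexist: GetTickCount() can be called millions of times within 15 ms, while the value it returns only updates once per system-timer tick. A minimal script sketch (illustrative, not taken from the thread) that checks this with GetMicrosecondCount():

// Spin for ~15 ms and count how many times GetTickCount() is called
// versus how many times its returned value actually changes.
void OnStart()
{
   ulong start = GetMicrosecondCount();
   uint  last  = GetTickCount();
   ulong calls = 0, changes = 0;

   while(GetMicrosecondCount() - start < 15000)   // ~15 ms window
   {
      uint now = GetTickCount();
      calls++;
      if(now != last)
      {
         changes++;
         last = now;
      }
   }
   PrintFormat("GetTickCount(): %I64u calls, %I64u value changes in ~15 ms", calls, changes);
}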
Wrong. During these 15 milliseconds, the GetTickCount() function is called more than 6 000 000 times.
When you measure the performance of real working code (not a spherical horse in a vacuum), you run it for a sufficiently long period of time (hundreds of milliseconds and more). The reason is that the system's performance and workload are constantly changing: they are one thing at one moment and something else the next, so on short intervals the results vary greatly from test to test. That is why a longer interval is needed, and at that scale microseconds no longer matter.
Besides, our quotes come in milliseconds, and pings are also measured in milliseconds, so I do not see where microseconds would be needed. Anyway, that is not the point.
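To illustrate the point about longer runs, here is a hedged sketch of such a measurement (the benchmarked operation and the function name are placeholders, not anything from the thread):

// Run the code under test for at least half a second and report the
// average time per iteration; deliberately no 15 ms bursts.
void BenchmarkLongRun()
{
   const ulong minRunMcs = 500000;               // at least 500 ms
   ulong start = GetMicrosecondCount();
   ulong iterations = 0;
   double sink = 0.0;                            // keeps the loop from being optimized away

   while(GetMicrosecondCount() - start < minRunMcs)
   {
      sink += MathSqrt((double)(iterations + 1)); // placeholder for the code under test
      iterations++;
   }

   ulong elapsed = GetMicrosecondCount() - start;
   PrintFormat("%.3f microseconds per iteration over %I64u iterations (sink=%.1f)",
               (double)elapsed / (double)iterations, iterations, sink);
}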
Now we need to think how to get around the disadvantages of both functions. There is little hope for the developers.
I think it's possible with a formula like this in the first approximation:
So far it's just an idea out loud.
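The formula itself did not survive on this page. Purely as an illustration of the idea (a hypothetical sketch, not the author's code), a first approximation could take a common anchor from both counters and report the microsecond counter relative to the millisecond base:

// Hypothetical hybrid counter (the name is made up): anchor GetTickCount()
// and GetMicrosecondCount() once, then return milliseconds-since-anchor
// plus the microsecond offset.
ulong HybridMicrosecondCount()
{
   static bool  anchored = false;
   static ulong baseMs   = 0;    // GetTickCount() at the anchor point
   static ulong baseMcs  = 0;    // GetMicrosecondCount() at the anchor point

   if(!anchored)
   {
      baseMs   = GetTickCount();
      baseMcs  = GetMicrosecondCount();
      anchored = true;
   }
   // millisecond base plus the microsecond offset accumulated since the anchor
   return baseMs * 1000 + (GetMicrosecondCount() - baseMcs);
}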
Some places where I use microseconds
So, firstly, the timer is a millisecond timer, and secondly, its error is the same as GetTickCount()'s, i.e. about 15 milliseconds, so it is not very clear what the point of microseconds is. Suppose you have calculated an interval to microsecond accuracy, but in reality it will fire several MILLIseconds earlier or later.
And besides, commands get queued up, so execution may come 5 seconds later...
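For what it is worth, the real step of the millisecond timer is easy to observe. A sketch (assuming an Expert Advisor or indicator context) that requests a 1 ms timer and prints the actual spacing of OnTimer() events:

ulong g_prevTimerMcs = 0;          // time of the previous OnTimer() call

int OnInit()
{
   EventSetMillisecondTimer(1);    // request 1 ms; the real step is limited by the system timer
   return(INIT_SUCCEEDED);
}

void OnDeinit(const int reason)
{
   EventKillTimer();
}

void OnTimer()
{
   ulong now = GetMicrosecondCount();
   if(g_prevTimerMcs > 0)
      PrintFormat("actual timer step: %I64u microseconds", now - g_prevTimerMcs);
   g_prevTimerMcs = now;
}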
Now we have to figure out a way to get around the disadvantages of both functions. There is not much hope for the developers.
Alas.
I can only offer you this version of the function:
Why Alas?
Because if the local time is changed or the program simply hangs, the RealMicrosecondCount function may have an error of up to 16 milliseconds. There is no way to avoid that.
But there will be no fatal consequences when switching to or from daylight saving time, changing the time zone, or updating the time via the Internet.
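The function body is not shown on this page, but the idea described above can be sketched as follows (an assumption about the approach, not the original code): take microsecond resolution from GetMicrosecondCount() and keep it tied to GetTickCount(), which does not react to local-time changes, so after a hang or a clock change the residual error is limited to roughly one system-timer tick:

// Sketch only: a RealMicrosecondCount-style counter that re-anchors the
// microsecond counter to GetTickCount() whenever the two diverge by more
// than one system-timer tick (~16 ms). Wraparound of GetTickCount() after
// ~49.7 days is ignored here for brevity.
ulong RealMicrosecondCount_Sketch()
{
   static bool  anchored = false;
   static ulong baseMcs  = 0;
   static ulong baseMs   = 0;

   ulong mcs = GetMicrosecondCount();
   ulong ms  = GetTickCount();

   if(!anchored)
   {
      baseMcs  = mcs;
      baseMs   = ms;
      anchored = true;
   }

   ulong byMcs = mcs - baseMcs;             // elapsed by the microsecond counter
   ulong byMs  = (ms - baseMs) * 1000;      // elapsed by the millisecond counter
   ulong diff  = (byMcs > byMs) ? byMcs - byMs : byMs - byMcs;

   if(diff > 16000)          // hang or local-time change detected: trust GetTickCount()
   {
      baseMcs = mcs - byMs;  // re-anchor so the result continues from the millisecond base
      byMcs   = byMs;
   }
   return byMcs;
}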
I have not checked it yet, but I am not so sure about 16 ms. When I googled the subject, the system timer error is usually given as about 10 ms, or 10-16 ms.
Here is a variant using a high-resolution WinAPI timer, giving an accuracy of 3.8e-07 seconds.
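For reference, such a timer is normally built on QueryPerformanceCounter/QueryPerformanceFrequency from kernel32.dll. A sketch of the import and a helper (DLL imports must be allowed in the terminal; the helper name is illustrative):

#import "kernel32.dll"
int QueryPerformanceCounter(long &count);        // BOOL QueryPerformanceCounter(LARGE_INTEGER*)
int QueryPerformanceFrequency(long &frequency);  // BOOL QueryPerformanceFrequency(LARGE_INTEGER*)
#import

// Seconds elapsed since the first call, with the resolution of the
// performance counter (1/frequency seconds).
double PerfCounterSeconds()
{
   static bool anchored = false;
   static long freq = 0, start = 0;

   if(!anchored)
   {
      QueryPerformanceFrequency(freq);
      QueryPerformanceCounter(start);
      anchored = true;
   }
   long now = 0;
   QueryPerformanceCounter(now);
   return (double)(now - start) / (double)freq;
}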