Features of the mql5 language, subtleties and tricks - page 89

 
Nikolai Semko:

  1. see my previous post.
  2. I have plenty of examples with practical benefits of microseconds.

So far I see only one real disadvantage of GetTickCount mentioned: it is limited by the resolution of the system timer, and that is a serious problem. The rest is of little practical use. No one runs ultra-short tests lasting 15 milliseconds; such results are unstable. You need at least half a second for a test, then we can talk about something.

 
Alexey Navoykov:

No one runs ultra-short tests lasting 15 milliseconds; such results are unstable. You need at least half a second for a test, then we can talk about something.

You are wrong. Within those 15 milliseconds, the GetTickCount() function is called over 6 000 000 times.

My calculation is correct. The GetTickCount() value changes every 1/2^6 = 1/64 of a second, i.e. every 15625 microseconds.
Developers, please confirm.
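
If the developers do not confirm, the claim is easy to check empirically. A minimal sketch (my own illustration, not code from this thread): log the local microsecond counter each time GetTickCount() actually advances; with the default Windows timer the steps should come out near 15625 microseconds.

void OnStart()
  {
   uint  last=GetTickCount();
   ulong mark=GetMicrosecondCount();
   int   steps=0;
   while(steps<10 && !IsStopped())
     {
      uint now=GetTickCount();
      if(now!=last) // the tick counter has just advanced
        {
         ulong mcs=GetMicrosecondCount();
         PrintFormat("step %d: GetTickCount advanced by %u ms after %I64u mcs",
                     steps,now-last,mcs-mark);
         last=now;
         mark=mcs;
         steps++;
        }
     }
  }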

 
Nikolai Semko:

You are wrong. Within those 15 milliseconds, the GetTickCount() function is called over 6 000 000 times.

When you measure the performance of real working code (not a spherical horse in a vacuum), you run it for a sufficient period of time (hundreds of milliseconds or more), because the system's performance and load are constantly changing: they are one thing at one moment and something different the next. Over short intervals the results vary greatly from test to test. That is why you need a longer interval, and at that scale microseconds no longer matter.

Besides, our quotes come in milliseconds, and pings are also measured in milliseconds, so I do not see where microseconds would be applied. Anyway, that is not the point.

Now we need to think how to get around the disadvantages of both functions. There is not much hope for the developers.

 
Alexey Navoykov:

Now we need to think how to get around the disadvantages of both functions. There is not much hope for the developers.

As a first approximation, I think it is possible with a formula like this:

ulong RealGetMicrosecondCount=(GetTickCount()-StartGetTickCount)*1000+x+GetMicrosecondCount()%15625;

So far this is just thinking out loud.
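
To make the idea concrete, here is a compilable sketch of that first approximation (my illustration; StartGetTickCount is a snapshot taken at program start, and x stands for the calibration offset that is still to be determined):

uint  StartGetTickCount;   // snapshot of GetTickCount() at start (hypothetical)
ulong x=0;                 // calibration offset, still to be determined (hypothetical)

int OnInit()
  {
   StartGetTickCount=GetTickCount();
   return(INIT_SUCCEEDED);
  }

ulong RealGetMicrosecondCount()
  {
   // elapsed system-timer milliseconds scaled to microseconds,
   // plus the sub-tick phase of the microsecond counter
   return((ulong)(GetTickCount()-StartGetTickCount)*1000
          +x
          +GetMicrosecondCount()%15625);
  }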

 

Some places where I use microseconds

  • Custom TimeCurrent with approximate millisecond accuracy (see the sketch after this list).
  • Calculation of trade order execution time.
  • Calculation of the synchronization time of the trading history in the Terminal.
  • Terminal's lag (~5 ms): by how much a tick is already out of date at the moment of its OnTick/OnCalculate.
  • Correction of OnTimer so that the distance between any (not only neighboring) Timer events is a multiple of the specified time.
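
For the first item, a minimal sketch of how such a custom TimeCurrent might look (my illustration; the TimeCurrentMsc helper is hypothetical): anchor the server time_msc of the most recent tick to the local microsecond counter and extrapolate between ticks.

// Approximate current server time in milliseconds since 1970.01.01 (hypothetical helper)
long TimeCurrentMsc(const string symbol)
  {
   static long  anchorMsc=0;   // server time (ms) of the last seen tick
   static ulong anchorMcs=0;   // local microsecond counter at that moment
   MqlTick tick;
   if(SymbolInfoTick(symbol,tick) && tick.time_msc>anchorMsc)
     {
      anchorMsc=tick.time_msc;
      anchorMcs=GetMicrosecondCount();
     }
   if(anchorMsc==0)
      return((long)TimeCurrent()*1000);  // no tick yet: fall back to one-second accuracy
   return(anchorMsc+(long)((GetMicrosecondCount()-anchorMcs)/1000));
  }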
 
fxsaber:

Some places where I use microseconds

  • Correction of OnTimer so that the distance between any (not only neighboring) Timer events is a multiple of the specified time.

Well, firstly, the timer is millisecond-based, and secondly, its error is the same as that of GetTickCount(), i.e. about 15 milliseconds, so the point of using microseconds here is not very clear. Suppose you have calculated an interval with microsecond accuracy; in reality the event will arrive several MILLIseconds later or earlier.

 
Alexey Navoykov:

Well, firstly, the timer is millisecond-based, and secondly, its error is the same as that of GetTickCount(), i.e. about 15 milliseconds, so the point of using microseconds here is not very clear. Suppose you have calculated an interval with microsecond accuracy; in reality the event will arrive several MILLIseconds later or earlier.


And besides, commands get queued up, and execution may only happen 5 seconds later...

 
Alexey Navoykov:

Now we need to think how to get around the disadvantages of both functions. There is not much hope for the developers.


Alas.
I can only offer this version of the function:

ulong RealMicrosecondCount()
  {
   static bool  first=true;
   static ulong sum=0;          // accumulated corrections for GetTickCount() 32-bit overflow
   static long  delta;          // initial difference between the two counters
   static long  shift=0;        // current correction applied to GetMicrosecondCount()
   static ulong lasttickcount;
   ulong i=GetTickCount()+sum;
   ulong t=GetMicrosecondCount();
   if(first) // on the first call, compute the difference between GetMicrosecondCount and GetTickCount
     {
      lasttickcount=i;
      delta=((long)i*1000-long(t));
      first=false;
     }
   long curdelta=((long)i*1000-long(t));
   long d=curdelta-delta;
   if(fabs(d-shift)>20000) shift=d;       // resynchronize if the drift exceeds 20 ms
   if(i<lasttickcount) sum+=0x100000000;  // GetTickCount() wrapped around its 32-bit range
   lasttickcount=i;
   return (t+shift);
  }

Why "alas"?
Because if the local time is changed, or the program simply hangs, the RealMicrosecondCount function may accumulate an error of up to 16 milliseconds. There is no way to avoid that.
But there will be no fatal consequences when switching to summer time, changing the time zone, or updating the time via the Internet.
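
A quick way to see the function at work (my usage sketch, nothing more): print the raw counters next to the corrected value and watch them stay in step.

void OnStart()
  {
   for(int i=0; i<5 && !IsStopped(); i++)
     {
      PrintFormat("tick=%u  mcs=%I64u  real=%I64u",
                  GetTickCount(),GetMicrosecondCount(),RealMicrosecondCount());
      Sleep(100);
     }
  }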

 
Nikolai Semko:


Alas.
I can only offer this version of the function:

Why "alas"?
Because if the local time is changed, or the program simply hangs, the RealMicrosecondCount function may accumulate an error of up to 16 milliseconds. There is no way to avoid that.
But there will be no fatal consequences when switching to summer time, changing the time zone, or updating the time via the Internet.

I have not checked yet, but I am not so sure about the 16 ms. When I googled the subject, the system timer error is usually given as about 10 ms, or 10-16 ms.

 

Here is a variant using the high-resolution WinAPI timer, giving an accuracy of 3.8e-07 seconds.

#import "Kernel32.dll"
  int QueryPerformanceCounter(ulong &lpPerformanceCount);
  int QueryPerformanceFrequency(ulong &lpFrequency);
#import


ulong QueryPerfomanceCounter() { ulong value;  if (QueryPerformanceCounter(value)) return value;  return 0; } 

ulong QueryPerformanceFrequency() { ulong freq;  if (QueryPerformanceFrequency(freq)) return freq;  return 0; }  


long GetPerfomanceCount_mcs()
{ 
  static long freq= QueryPerformanceFrequency();
  return freq ? QueryPerfomanceCounter()*1000000/freq : 0;
}


void OnStart()
{
  Print("Resolution of perfomance counter:  ",1.0/QueryPerformanceFrequency()," s");
  ulong  perfcount= GetPerfomanceCount_mcs(); 
  
  while(!IsStopped())
  {
    Comment((GetPerfomanceCount_mcs()-perfcount)/1000000.0);
    Sleep(10);
  }
}