Testing GetTickCount64() vs GetMicrosecondCount() - page 2

 
Well, I can think of multiple use cases; some of them are: displaying the time on the chart, synced with the system time, so the display gets updated when the clock changes. Needless to say, a resource-saving timer with a one-second interval will render unreliable updates.

Another use case is real-time neural networks, which depend on continuous updates driven by a reliable timer, giving e.g. an LSTM a consistent timeline to read.

Another use case is the execution of certain tasks at a given point in time, like freeing up computational capacity just before the market minute is about to change, to make sure all ticks around higher-timeframe bar closes (e.g. 15m, 5m and 1m) or around news events are captured. If the code doing the "usual" analysis is quite heavy, it might make sense to pause that processing and only react to open positions and changes in price.

All these need precise timer functionality.

BTW, it is untrue that the OS does not provide precise timers. On Windows, a timer with 100 ns resolution is provided, giving a very reliable source to synchronize milliseconds, and to some extent even microseconds.

Anyway, I sat down all day yesterday and worked it out to a satisfying solution, giving a reliable GetTickCount64() and a TimeLocalMilliseconds() which work great, without the use of DLLs.
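To illustrate the general idea (a simplified sketch only, not necessarily the exact implementation; SyncLocalClock(), TimeLocalMs() and the g_anchor_* names are made up for this example): anchor the microsecond counter to the moment TimeLocal() rolls over to a new second, then derive milliseconds from the offset.

// Illustration only: derive a millisecond-resolution local time without DLLs by
// anchoring the microsecond counter to a TimeLocal() second rollover.
ulong    g_anchor_usec = 0;   // GetMicrosecondCount() at the last observed second rollover
datetime g_anchor_sec  = 0;   // TimeLocal() value at that rollover

void SyncLocalClock()
{
   datetime start = TimeLocal();
   while(TimeLocal() == start) { /* spin until the next second boundary */ }
   g_anchor_sec  = TimeLocal();
   g_anchor_usec = GetMicrosecondCount();
}

long TimeLocalMs()
{
   ulong elapsed_ms = (GetMicrosecondCount() - g_anchor_usec) / 1000;
   return((long)g_anchor_sec * 1000 + (long)elapsed_ms);
}

Call SyncLocalClock() once (and occasionally again to correct drift); TimeLocalMs() then returns local time in milliseconds.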

So, yes, I do have use cases for it, and I have solved that issue.

Maybe you could think of other scenarios where such a timer would be good? Like measuring the real latency between trading server, terminal and the EA/indicator.

Maybe it seems over-engineered, but I like it when things are clear and well defined.

In the end I also managed to synchronize OnTimer() just as I need it, even with varying workloads. That was certainly not fun, since the terminal itself tries to adjust for that, but it does so in a poor manner and has significant issues with workloads that vary from call to call.
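Roughly sketched (again simplified; the re-arming approach shown here is just one possible way, not necessarily what I ended up with, and TARGET_MS / g_next_usec are example names): after each call, measure the drift against the intended schedule and re-arm the millisecond timer with a corrected interval.

#define TARGET_MS 1000          // intended period

ulong g_next_usec = 0;          // ideal deadline of the next call, in microsecond-counter units

int OnInit()
{
   g_next_usec = GetMicrosecondCount() + (ulong)TARGET_MS * 1000;
   EventSetMillisecondTimer(TARGET_MS);
   return(INIT_SUCCEEDED);
}

void OnTimer()
{
   // ... do the actual work here ...

   g_next_usec += (ulong)TARGET_MS * 1000;                   // advance to the next ideal deadline
   long remain_ms = (long)(g_next_usec - GetMicrosecondCount()) / 1000;
   if(remain_ms < 10) remain_ms = 10;                        // the millisecond timer cannot go arbitrarily low
   EventKillTimer();
   EventSetMillisecondTimer((int)remain_ms);                 // re-arm with the corrected interval
}

void OnDeinit(const int reason)
{
   EventKillTimer();
}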

Well, I hope this gives some insight into why I have put in the effort to solve this for myself.
 

Windows is not a real-time OS.

 
If an MQ trading system of a normal client (server delay of 20-30 ms...) depends on microseconds, it's not worth working on it. Just my 2 cents.
 
Yes, Windows, Linux, macOS and most other OSes out there are preemptive operating systems.

Just for a simple comparison:

If you were to count to 1 billion, saying one number after another, starting at your birth and never sleeping, you would spend most of a lifetime on it: at one number per second that is already more than 30 years, and the larger numbers each take several seconds to say.

A CPU, or a computer for that matter, running at 4 GHz (which can be considered normal nowadays) and taking, say, 4 clock cycles per integer addition, still performs a billion additions per second. So a computer needs only one second on a single core to do a job you can barely finish within a lifetime.

Just to break this down, let's say, for simplicity, that 100 years is equivalent to 1 second.
Then 1 millisecond would be equivalent to 0.1 years, or about 5.2 weeks, which comes to around 37 days.

Now take this down to microseconds: 1 microsecond would then be equivalent to about 0.037 days, roughly 3,200 seconds, which is comparable to 53 minutes.
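(Worked through: 1 s ↔ 100 years means a scale factor of roughly 3.15 billion, since 100 years ≈ 3.15e9 seconds. So 1 ms ↔ ≈ 36.5 days, 1 µs ↔ ≈ 3,150 s ≈ 53 minutes, and the familiar ~16 ms timer granularity ↔ about 1.6 years.)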

So the "fluctuation" of roughly 16ms is easily taken for granted, but in reality a microsecond can already be seen as long and imprecise.

That is my understanding, at least.

The argument that a strategy relying on microseconds makes no sense "for a normal user" is valid for a normal user.

But if you are talking about more state-of-the-art tech around trading, and about gaining an edge, it becomes necessary to extend the available tools to what is required beyond the scope of "normal".

As the saying goes: whoever stands still falls behind, because everyone else keeps walking.

Isn't that so?

As a side note, I gave examples of what are, to my understanding, very reasonable use cases. Nevertheless, I simply noticed the inconvenience of the provided functions and created my own solutions for it.

Maybe someone will come across a similar issue and find this thread; at least the reader will learn there is a way to solve it, without the use of any special system calls, just by using what is provided.

As you can see, the Sleep() function has quite some blur to it, and 50 ms is not irrelevant. Pro gamers can easily "feel" the difference between 180 ms and 120 ms. So why should a neural network not be able to pick up on such blur?
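To see that blur for yourself, here is a tiny measurement sketch (run it as a script or EA, since Sleep() is not available in indicators; the 50 ms request and 10 iterations are arbitrary choices):

// Measure how long Sleep(50) really takes, using the microsecond counter.
void OnStart()
{
   for(int i = 0; i < 10; i++)
   {
      ulong before = GetMicrosecondCount();
      Sleep(50);                                          // request 50 ms
      ulong elapsed_us = GetMicrosecondCount() - before;
      PrintFormat("requested 50 ms, got %.3f ms", elapsed_us / 1000.0);
   }
}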

Or, how would you feel if I gave your body a latency of multiple hours, with a fluctuation of 45 minutes on top of that?

50 ms corresponds to a frame rate of 20 fps. Would you watch a movie at that rate?

50 ms plus a fluctuation of an additional 50 ms gives you a consistent 10 fps.

Well, good. If that's what you want.
 

A point to pay attention to is that those 15-16 ms are not about precision but about granularity. That means nothing gets lost along the way, so there are no serious issues with precision - but with a "catch": the timer values (for either OnTimer or Sleep) one can use in Windows are integers, e.g. 50 or 150 ms, but the real timer values are of binary format - 1/64, 1/32 and so on of a second for the sub-second values.
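(For reference: 1/64 s = 15.625 ms, which is where the familiar 15-16 ms figure comes from, and 1/32 s = 31.25 ms.)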

greets

 
To be honest, just from reading that, I highly doubt it.

The nanosecond timer from Windows is integer based.

The microsecond counter from MQL is integer based.

Both are precise.

The assembly instruction "rdtsc" returns its result in EDX:EAX and is an integer representation of the current CPU cycle counter.

So, down to the CPU cycles, all timers are precise, and are integer values.

And, no, I don't see a reason for a preemptive OS to be imprecise. In the end, what matters is not when I poll a counter, but how much time has gone by.

And if I need to sync an operation to the RTC, I will have to wait for that time to arrive in order to execute the command in question.

So, yes, there might be resources wasted on the waiting, but it should absolutely be possible (and as a matter of fact it is) to execute a statement or function at a given time, down to a granularity of microseconds.
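As a rough sketch of what I mean (an illustration only; ExecuteAt() is a made-up name and the action is just a Print() here), one can simply spin on the microsecond counter until the target moment:

// Spin on the microsecond counter until a target moment, then act.
void ExecuteAt(ulong target_usec)
{
   while(GetMicrosecondCount() < target_usec)
   {
      // spin; far away from the target one could Sleep(1) to save CPU
   }
   Print("fired at ", GetMicrosecondCount(), " us");   // the time-critical call goes here
}

// usage: fire in roughly 250 ms from now
// ExecuteAt(GetMicrosecondCount() + 250000);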


An illustration of this is the well-known "Fritz!" ISDN adapter from the era of the first Pentium processors. Due to the timing requirements of the ISDN protocol, a polling OS had not been fast enough before then to allow "passive" cards. In fact, for high-load servers you were required to have active ISDN cards with a processor on board to handle the protocol and its crucial timings.

But for modern PCs, being in sync to the microsecond - which wasn't even my goal, milliseconds were - should not be very difficult.

Since MT5 and the MQL language aim to be modern and as close to C/C++ as possible, I would argue a timer function with an uncertainty of over 15 ms is simply insufficient.

But, as said, that's my personal point of view, and everyone is entitled to their own opinion on that matter.

From my experience and perspective, there are lots of "holes", especially in the API and the terminal. I have put in a lot of work to fix and enhance most of the functions.

As an example, register all events for a chart and print them from OnChartEvent to the log. Then detach the chart and see what results you get.
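For anyone who wants to try it, a minimal sketch of that experiment (which optional events you enable is up to you; mouse move and object delete are just examples here):

// Enable optional chart events and log everything arriving in OnChartEvent().
int OnInit()
{
   ChartSetInteger(0, CHART_EVENT_MOUSE_MOVE,    true);   // mouse move events
   ChartSetInteger(0, CHART_EVENT_OBJECT_DELETE, true);   // object delete events
   return(INIT_SUCCEEDED);
}

void OnChartEvent(const int id, const long &lparam, const double &dparam, const string &sparam)
{
   PrintFormat("event id=%d lparam=%I64d dparam=%G sparam=%s", id, lparam, dparam, sparam);
}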

But that's just one of multiple shortcomings of the terminal.

In my opinion, if the API were implemented cleanly, most questions about coding with it would disappear.

The compiler itself is not bad; it lacks some features and shows some inconsistencies, but overall it does its job.

Example: I posted code that produces inconsistent compiler errors. That should not be the case.

I came across a few more, but instead of waiting for a fix, I created workarounds.

Example: namespaces.

When you #import an ex5 file, the filename is used as an implicit namespace name. Try to declare that namespace before the import statement, put a function or variable inside, and check whether you can access it after the import statement. Spoiler: you can't.
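A sketch of that experiment (MyLib.ex5, Helper() and SomeExportedFunction() are made-up names; the behaviour is as described above):

namespace MyLib                      // declared before the import
{
   int Helper() { return(1); }
}

#import "MyLib.ex5"                  // the file name "MyLib" becomes the implicit namespace
   int SomeExportedFunction();
#import

// After the import, per the description above, MyLib::Helper() can no longer be
// resolved; only the imported functions remain reachable through MyLib::.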

Also, why can the compiler declare "using namespace" implicitly, while I cannot declare it explicitly? That doesn't make sense. I don't do it anyway, due to namespace pollution, but it's an inconsistency.

Inconsistent documentation

Example:
The documentation states that the order in which global variables are initialized (variables in the global namespace of the program) is undefined, but in fact it is defined. The order is given by the order of the files as seen by the compiler, and then of course by the order of declaration inside the individual files.

Have you ever noticed that TimeTradeServer() lags behind... sometimes?
Check it against SymbolInfoInteger(). You will see they are not in sync.
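A quick way to check this (a sketch that compares the terminal's computed server time against the time of the last quote of the current symbol):

// Compare TimeTradeServer() with the last quote time of the chart symbol.
void OnTick()
{
   datetime computed = TimeTradeServer();                                   // terminal's estimate of server time
   datetime quoted   = (datetime)SymbolInfoInteger(_Symbol, SYMBOL_TIME);   // time of the last quote
   if(computed < quoted)
      PrintFormat("TimeTradeServer lags by %d s", (int)(quoted - computed));
}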

Did you notice that the timestamps in the journal do not suffer from the imprecision that OnTimer, GetTickCount(64) or Sleep give you?

So the accurate time is available inside the terminal; why not through the API?

None of this really makes sense, and yet people ignore the fact that they are working with broken tools; and if someone actually tries to fix these issues, the response is mostly: What? Why? Nobody needs that...

I find that somehow strange.

If I drive my car and I want it to go straight, then I want to hold the steering wheel straight, not tilted, and without hysteresis or even a delay when I turn it...

But since EAs only play with money, I guess it's fine for people.

Has anyone ever noticed how easily people make decisions about big investments, like buying real estate? But when it comes to chips, everything needs to be inspected. They know their chips better than their car, their TV, their stove or their AC.

Well, if anyone is interested, I am willing to share my code on request.
 

@ Dominik Christian Egert #

You seem very passionate about this and I respect that; however, with every technical task one has to ask: does the effort exceed the benefit?

The goal is to profit, but it is hard to see how having a hyper-precise micro/nanosecond timer will take one closer to that goal.

Testing GetTickCount64() vs GetMicrosecondCount() (www.mql5.com, 2022.05.25)
 
Yes. That's a point to consider.

Here is one. Some M1 candles have high volume, above 300 ticks, maybe 500 or more. Let's say 300. That is 5 ticks per second on average, which gives you a window of 200 ms per tick. But sometimes they come in so fast you don't even have 50 ms between them.

Now let's say you have a complex model behind this to analyze incoming prices. For example, let's say you utilize a neural network and its execution time is around 70 ms or more. Or let's say your analysis is based on ticks and you hold a dataset of 300 to 500 MB in memory. Just processing all this data takes your CPU quite some time.

To be able to handle these scenarios, you definitely need something a little more precise.

As described in one of my previous posts, there are lots of scenarios in which it is beneficial to have precise time.

I do not think this is uncommon. There are surely massive numbers of algos out there with huge complexity and long calculation cycles. And I am not referring to bad implementations; some of them are very well coded.

But besides managing the execution environment, it is also worth noting that a real-time neural network relies on a constant frame rate for its input.

So in conclusion, it's not only the time at which the price changes that matters, but also the time during which it does not change. At least in the former scenario.

Otherwise a prediction, and an anticipated action, could not be made in such moments.

When analyzing a news candle with a huge spike, you will see there are certain gaps in time, and these gaps are quite telling, at least to my understanding.

Example: let's say you receive a tick with a jump in price of (for simplicity) 10 * ATR, your latency to the server is 30 ms, and you begin to receive hundreds of ticks with gradually sinking prices.

So the market is trying to compensate for that move. Lots of sell orders are coming in.

Let's further say that as this first tick comes in, you want to wait for maybe a second jump, and if it does not come within a given time, you want to assume that was the top and place a sell order. Now we are talking milliseconds here, somewhere in a range of maybe 75 to 100 ms.

Using functions with a blur of around 15 to over 50 ms, it turns out to be very hard to take precisely timed actions, given that you already lag behind the server by the latency of the underlying network.
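Sketched in code, the idea looks roughly like this (an illustration only; the spike threshold, the 100 ms window and the ATR handling are placeholders, and in practice the timeout would also have to be checked from a millisecond timer, since no further tick may arrive at all):

double g_atr        = 0.0;    // assumed to be updated elsewhere, e.g. from an ATR indicator
double g_last_bid   = 0.0;
double g_spike_bid  = 0.0;
ulong  g_spike_usec = 0;      // microsecond stamp of the spike tick, 0 = no spike pending

void OnTick()
{
   MqlTick tick;
   if(!SymbolInfoTick(_Symbol, tick))
      return;
   if(g_atr <= 0) { g_last_bid = tick.bid; return; }        // no valid ATR yet

   double jump = 10 * g_atr;                                // "spike" threshold from the example

   if(g_spike_usec == 0)
   {
      if(g_last_bid > 0 && MathAbs(tick.bid - g_last_bid) >= jump)
      {
         g_spike_usec = GetMicrosecondCount();              // first jump seen, start the window
         g_spike_bid  = tick.bid;
      }
   }
   else if(MathAbs(tick.bid - g_spike_bid) >= jump)
   {
      g_spike_usec = 0;                                     // second jump arrived in time, keep watching
   }
   else if((GetMicrosecondCount() - g_spike_usec) / 1000 > 100)
   {
      Print("no follow-up within ~100 ms, treating the spike as the top");   // sell logic would go here
      g_spike_usec = 0;
   }

   g_last_bid = tick.bid;
}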

As you may now be able to see, timing in such situations is crucial, and the milliseconds at least should work at this precision.

But this could be taken even further: imagine a neural network predicting the next ticks. Let's say you want to do a forecast, and it predicts the next tick to arrive in x ms, with a price change of y. Now you feed this prediction back in to get the following one, and so on.

So let's say you are "waiting" for the next tick and need to know how much time you can spend feeding predictions back through your NN.

Now let's say you want to place an order to catch one of those predicted ticks; you would need to be able to get that order to the trading server at the right time, or else you might miss the opportunity, or be late to the party.

As we all know, the position in the order book on the server is crucial to the executed price an order receives.

And so on....

Well, I could probably write hours about the concept of time in ticks and the overall importance of such in the markets.

But to round this off, I personally think the whole platform, and all the other tools I know, are to some extent "soft". What I mean by that is: why are prices not integers, but double values?

To me it does not make sense to have price values represented in an imprecise manner. The same goes for time. Why does the terminal have precise times while the EA does not?

This goes for a lot of tools around trading. I would go so far as to say they are designed to limit your chances of being successful.



 

Imo you think too deeply instead of clearly. If your model takes 70 ms, you will always be late no matter what. That's not even taking into account that it is not possible to make a profit by knowing what the next tick will be, even with 100% accuracy.

 
Dominik Christian Egert #:
...But to round this off, I personally think the whole platform, and all the other tools I know, are to some extent "soft". What I mean by that is: why are prices not integers, but double values?

To me it does not make sense to have price values represented in an imprecise manner. The same goes for time. Why does the terminal have precise times while the EA does not?

This goes for a lot of tools around trading. I would go so far as to say they are designed to limit your chances of being successful.



I doubt there is a single piece of tech/software in existence which is without some flaw or quirk - and I daresay that extends to space shuttles, the Hubble telescope, fighter planes and surgical robots... whatever...


Professionally I have worked on some enormous software projects (costs approaching $500M+) and everywhere there is some imperfection, even from the most reputable, well-known vendors - the trick is to work around the flaws and focus on achieving the business goal.

Here you have a free product which is probably the most widely used in (non-institutional) FX trading, and frankly, in my experience, it's damn good - much better than many expensive software tools I have used.

We are lucky to have it