About the MT5 code profiler - page 7

 

What is this?!


 
fxsaber #:

The profiler shows function calls that didn't actually happen. I've even come across something like this:

Some ArrayCopy that is not in the source of the mqh file! I even marked the declaration of a static array with a red line.

I still can't use the profiler, unfortunately.

Think about how and by what means arrays are copied when objects are constructed, assigned and moved (and you do have an object here).

Do you really think the program consists only of your own lines of code?

The examples are not complete.
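
To make the point concrete, here is a minimal sketch of my own (not code from the thread, with an invented workload): a struct with a dynamic array member is assigned in a loop, and every assignment silently copies the array as well. Hidden copies of this kind are exactly the implicitly generated code that can surface in the profile as ArrayCopy even though ArrayCopy is never written in the source.

// My own illustration, not code from the thread: the object owns a dynamic
// array, so every assignment of the object also copies that array behind
// the scenes - implicitly generated work that can show up in the profile
// as ArrayCopy although it never appears in the source.
struct SBuffer
  {
   double data[];                    // dynamic array member
  };

void OnStart()
  {
   SBuffer src;
   ArrayResize(src.data, 1000000);
   ArrayInitialize(src.data, 1.0);

   SBuffer dst;
   for(int i = 0; i < 100; i++)
      dst = src;                     // implicit member-wise copy, array included
   Print(ArraySize(dst.data));
  }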

 
Renat Fatkhullin #:

Think about how and by what means arrays are copied when objects are constructed, assigned and moved (and you do have an object here).

Do you really think the program consists only of your own lines of code?

The examples are not complete.

Give me clear instructions on what to do on my part so that you will take up this topic without putting it off.

 
fxsaber #:

Give me clear instructions on what to do on my part so that you will take up this topic without putting it off.

You are making statements on a topic (compilers and their innards) that you do not understand.

Instructions won't help here: you are not going to take a compiler developer's course just to understand the vast world of implicitly generated code in object-oriented languages. High-level languages rely on a lot of library and inline code. Build an average WinAPI project and look at the *.map file: there are thousands of auxiliary functions in it, and any of them can show up in the profile.

My words, repeated dozens of times, that "the resulting code has nothing to do with your source: it is optimized, inlined and shuffled by the optimizing compiler" are not getting through either. The compiler's main job is to make the code as fast as possible, not as readable as possible. The profiler's job is to show the real bottlenecks in the optimized (real) code, not to cheat by mapping results neatly back to source lines.
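
A small sketch of my own to illustrate this (assuming, as stated above, that the optimizer inlines trivial functions): after inlining, the helper has no frame of its own in the call stack, so its samples are attributed to the caller and the profile no longer maps line-by-line onto the source.

// My own sketch, assuming the trivial helper gets inlined ("optimized,
// inlined and shuffled"). In the optimized code there is no Scale() call
// left, so a sampling profiler attributes all the time to Sum() - the
// report does not match the source line by line.
double Scale(const double value)
  {
   return(value * 0.5);              // trivial: a prime candidate for inlining
  }

double Sum(const double &data[])
  {
   double total = 0.0;
   for(int i = 0; i < ArraySize(data); i++)
      total += Scale(data[i]);       // the call disappears after inlining
   return(total);
  }

void OnStart()
  {
   double data[];
   ArrayResize(data, 20000000);
   ArrayInitialize(data, 1.0);
   Print(Sum(data));
  }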

For comparison: profiling C++ code is currently often a very difficult task, because the optimizer reshuffles everything heavily. And yes, Microsoft Visual C++ is no benchmark here - the code it generates is rather weak, 20-30% worse than its LLVM/Clang competitors.


Once again: this is a profiler that does not modify the code under investigation and introduces almost no delays into it. Execution time increases during profiling, but the code is not degraded by embedded counters - that would kill code optimization.

The profiling method used is "sampling". The profiler pauses the MQL program (about 1000 times per second) and collects statistics on how often the pause landed in each code fragment, including an analysis of the call stacks, in order to determine the "contribution" of each function to the total running time of the code. At the end of profiling you know how many times the pause landed directly in each function and how many times each function was present in the call stack:

  • Total CPU activity [unit, %] - the total number of times the function appeared anywhere in the call stack.
  • Own CPU activity [unit, %] - the number of pauses that landed directly inside the function itself. This counter is the most important one for finding bottlenecks, because statistically the pauses land more often in the parts of the program that consume more CPU time (a sketch of how the two counters read in practice follows below).
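
A deliberately simplified sketch of my own (not an official example) showing how the two counters read in practice: the thin wrapper appears in almost every call stack, so it gets a high Total CPU, while nearly all pauses land inside the hot loop, so the loop gets the high Own CPU.

// My own simplified illustration of Total CPU vs Own CPU, not an official
// example. Most pauses land inside HeavyLoop() (high Own CPU); Process()
// is in almost every call stack but does no work of its own (high Total
// CPU, low Own CPU). In a real optimized build the wrapper may itself be
// inlined, as discussed above.
double HeavyLoop(const int iterations)
  {
   double sum = 0.0;
   for(int i = 0; i < iterations; i++)
      sum += MathSqrt((double)i);    // the hot spot
   return(sum);
  }

double Process(const int iterations)
  {
   return(HeavyLoop(iterations));    // thin wrapper
  }

void OnStart()
  {
   Print(Process(50000000));
  }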



Without reproducible examples that can be run in one step, we do not look into issues. Simplified synthetic tests of a couple of calls on micro-tasks also cannot be assessed in terms of percentage of time taken or contribution to total time.
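
As I read this requirement, a one-step reproducible example is a self-contained script that compiles and shows the questionable profile on its own, with no external dependencies. A hypothetical skeleton of my own follows (names and workload are invented; the real workload should be the code actually in question, not a micro-benchmark); GetMicrosecondCount() gives an absolute time reference against which the profiler's percentages can be sanity-checked.

// A hypothetical skeleton of a self-contained repro case (my own names and
// workload): compile it, profile it, attach the report. The wall-clock
// measurement via GetMicrosecondCount() provides an absolute reference for
// the profiler's relative figures.
void SuspectedBottleneck()
  {
   double acc = 0.0;
   for(int i = 0; i < 10000000; i++)
      acc += MathSin((double)i);
   Print(acc);                       // keep the result alive so the loop is not removed
  }

void OnStart()
  {
   ulong start = GetMicrosecondCount();
   SuspectedBottleneck();
   PrintFormat("elapsed: %.1f ms", (GetMicrosecondCount() - start) / 1000.0);
  }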

 
Renat Fatkhullin #:

You are making statements on a topic (compilers and their innards) that you don't understand.

I am a complete layman in the topic you have named. The profiler shows data that I cannot interpret in any way.

Once again: this is a profiler that does not modify the code under investigation and introduces almost no delays into it. Execution time increases during profiling, but the code is not degraded by embedded counters - that would kill code optimization.

I'm trying to see the bottlenecks with the new profiler. No luck, although I'm trying very hard.

Without reproducible examples that can be run in one step, we do not look into issues. Simplified synthetic tests of a couple of calls on micro-tasks also cannot be assessed in terms of percentage of time taken or contribution to total time.

Whom do I send the data to so it can be reproduced? Experience with private messages shows that they may go unread for a long time.

Two green ticks indicate that the message has been read, one indicates that it has not been read.

 
fxsaber #:

I am a complete layman in the topic you have named. The profiler shows data that I cannot interpret in any way.

I am trying to see the bottlenecks with the new profiler. No luck, although I am trying.

Whom do I send the data to so it can be reproduced? Experience with private messages shows that they may go unread for a long time.

Two green ticks - message read, one - not read.

it's simple: the more often the hammer strikes in one place or another, the more costly that function is

what is the probability of the counter landing on cheap operations? almost zero

there are functions the counter will obviously hit; skip those and look at the custom ones that follow

 
Fast235 #:

it's simple: the more often the hammer strikes in one place or another, the more costly that function is

what is the probability of the counter landing on cheap operations? almost zero

there are functions the counter will obviously hit; skip those and look at the custom ones that follow

I'm talking about practical application, not beautiful theory that is clear at first glance.

 
fxsaber #:

I'm talking about practical application, not beautiful theory that is clear at first glance.

the practical thing is what there used to be: how many times is the function called?

that's purely a perfectionist's interest,

I agree, extra calls need to be seen, even if they are cheap.
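
For anyone who wants exact call counts right now, a workaround sketch of my own (not a profiler feature): a global counter incremented on each call. Renat's caveat above applies - embedding counters like this is instrumentation, it changes the code under test and can block optimizations such as inlining.

// My own workaround for exact call counts, not a feature of the profiler:
// a global counter bumped on every call. Note that such instrumentation
// modifies the code under test and can prevent inlining.
long g_calls = 0;

double Helper(const double x)
  {
   g_calls++;                        // manual instrumentation
   return(x * x);
  }

void OnStart()
  {
   double sum = 0.0;
   for(int i = 0; i < 1000000; i++)
      sum += Helper((double)i);
   PrintFormat("Helper() was called %I64d times, sum=%.1f", g_calls, sum);
  }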

 
Fast235 #:

the practical thing is what there used to be: how many times is the function called?

The previous profiler made it possible to find bottlenecks, but here we are talking about the new one, whose data does not let me understand what is going on, even though in theory everything has been studied several times over.

 
fxsaber #:

The previous profiler made it possible to find bottlenecks, but here we are talking about the new one, whose data does not let me understand what is going on, even though in theory everything has been studied several times over.

Renat should not present the new profiler in general phrases, but explain it so that it is clear even to the convinced, like the topic starter) (I'm not belittling anyone.)