MT5 and speed in action - page 75

 
Valeriy Yastremskiy:

I am not an expert in graphs. Importance is determined by how strongly the start of other tasks depends on the end of the current one; other criteria are secondary, although task execution time also matters. In general it is difficult, and the saddest part is that the prioritisation algorithm cannot be changed on the fly. Ideally, I would like some clarification from the developers before any questions even arise. It's complicated, but it's the right goal in the development of the environment.

Excerpt from a description of how this works in real-time systems.

Typically, priorities are dynamic, meaning they can be changed at runtime both by the processes themselves and by the OS.
Response to interrupts is separated from CPU-intensive computations.
As soon as an event or interrupt occurs, its handler is immediately placed in the queue of ready processes.
Interrupt handlers are usually compact, because they must provide a fast response,
for example accepting new input data, and then transfer control to more complex CPU-intensive processes that run at a lower priority.
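
As an illustration of the same division of labour in MQL5 terms (an analogy only, not a claim about how the terminal itself is built; the mailbox g_pending and the 100 ms worker period are hypothetical choices): the event handler stays compact and merely records the event, while the CPU-intensive part runs later from a timer.

// Sketch: compact "handler" plus deferred heavy work, mirroring the excerpt above.
MqlTick g_pending[];          // hypothetical mailbox filled by the fast handler
int     g_count = 0;

int OnInit()
{
   EventSetMillisecondTimer(100);   // the "lower-priority" worker wakes up here
   return(INIT_SUCCEEDED);
}

void OnTick()                 // fast part: just record the event and return
{
   MqlTick t;
   if(SymbolInfoTick(_Symbol, t))
   {
      ArrayResize(g_pending, g_count + 1);
      g_pending[g_count++] = t;
   }
}

void OnTimer()                // slow part: CPU-intensive processing, done later
{
   for(int i = 0; i < g_count; i++)
   {
      // ...heavy computation on g_pending[i] goes here...
   }
   g_count = 0;
}

void OnDeinit(const int reason)
{
   EventKillTimer();
}

Since all handlers of one MQL5 program run sequentially on a single thread, the mailbox needs no locking; the only point is that OnTick returns quickly.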

 
Roman:

Hello Nikolai. That's true.
But won't there be the same problem as with synchronisation, which Slava talks about, i.e. unjustified slowdowns?
Or maybe there is no problem? )) Maybe it is easier not to use an asynchronous model at all than to synchronise it with priorities? ))

Hi.
I'm not an expert in asynchrony and interrupts, although I have some knowledge and experience.
There shouldn't be any problem with the timer at all: the sequence is not important there, only the periodicity. It doesn't really matter how it is handled by the scheduler responsible for resource allocation.
Moreover, as I understand it, the timer is based on hardware system interrupts. I think the whole asynchrony control system is implemented using hardware interrupts, including those from the timer.
I still wonder how resource-intensive the interrupts themselves are.
For example, a system interrupt arrives from the CPU timer to increment one global variable. The increment itself will take the system about 1 nanosecond. But:

  • How long does it take to save all the parameters of running processes and/or threads needed to resume operation?
  • Is this saving done by hardware or software?
  • How long does it take to restore the process?
  • Can we measure this resource-intensiveness? Probably not: how do you catch the moment of the interrupt?
  • What is the order of magnitude of these costs: tens or hundreds of nanoseconds, microseconds, or tens and hundreds of microseconds? It would be interesting to get such information (an indirect measurement is sketched after this list).
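
The moment of the interrupt itself indeed cannot be caught from MQL5, but the combined scheduling overhead can be bounded from above indirectly, for example by watching how far timer callbacks drift from the requested period. A minimal sketch under that assumption (the 1 ms request and the reporting step are arbitrary; the terminal's real timer granularity is coarser, which is exactly what the numbers will show):

// Sketch: measure how far OnTimer callbacks deviate from the requested 1 ms period.
ulong g_prev = 0, g_maxJitter = 0, g_calls = 0;

int OnInit()
{
   EventSetMillisecondTimer(1);      // ask for a 1 ms period
   return(INIT_SUCCEEDED);
}

void OnTimer()
{
   ulong now = GetMicrosecondCount();
   if(g_prev > 0)
   {
      ulong delta  = now - g_prev;   // actual period, microseconds
      ulong jitter = (delta > 1000) ? delta - 1000 : 1000 - delta;
      if(jitter > g_maxJitter)
         g_maxJitter = jitter;
   }
   g_prev = now;
   if(++g_calls % 1000 == 0)
      PrintFormat("max timer jitter over %I64u calls: %I64u mcs", g_calls, g_maxJitter);
}

void OnDeinit(const int reason)
{
   EventKillTimer();
}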

In general, I realise that I lack knowledge and experience. That's why I try not to press the developers with questions about asynchrony priorities. I understand that there are a lot of nuances, pitfalls and obstacles when trying to create a perfect system, especially when it comes to trade orders and getting trade information.
Although I have to admit that I still don't understand why some functions were made asynchronous in MT5, which is a great inconvenience. I mean ChartGet..., ChartTimePriceToXY, ChartXYToTimePrice.
After all, it is logical to assume that filling the chart state table should be asynchronous, while these functions should only read data from that table. And if the data is a few milliseconds out of date at the moment of reading, that is not a problem.
The problem is that, in the pursuit of imaginary data freshness, lags of tens of milliseconds occur, during which the retrieved data becomes far more stale than it would be if these functions were not asynchronous at all and simply read the last known data from the chart state table.
And judging by the execution time, these functions were not asynchronous in MT4.
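
The terminal's internals cannot be changed from the outside, but the scheme proposed above can at least be imitated at the program level: refresh a local copy of the chart state only on CHARTEVENT_CHART_CHANGE, and let the rest of the code read cached, possibly slightly stale, values instead of calling the synchronous functions on every use. A minimal sketch (the cached properties and names such as RefreshChartCache are illustrative):

// Sketch: a local "chart state table" refreshed only when the chart actually changes.
int g_chartWidth  = 0;
int g_chartHeight = 0;

void RefreshChartCache()
{
   g_chartWidth  = (int)ChartGetInteger(0, CHART_WIDTH_IN_PIXELS);
   g_chartHeight = (int)ChartGetInteger(0, CHART_HEIGHT_IN_PIXELS);
}

int OnInit()
{
   RefreshChartCache();
   return(INIT_SUCCEEDED);
}

void OnChartEvent(const int id, const long &lparam,
                  const double &dparam, const string &sparam)
{
   if(id == CHARTEVENT_CHART_CHANGE)   // chart resized, scrolled or rescaled
      RefreshChartCache();             // the only place that pays the call cost
}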

 
Roman:

Excerpt from a description of how this works in real-time systems.

Typically, priorities are dynamic, meaning they can be changed at runtime both by the processes themselves and by the OS.
Response to interrupts is separated from CPU-intensive computations.
As soon as an event or interrupt occurs, its handler is immediately placed in the queue of ready processes.
Interrupt handlers are usually compact, because they must provide a fast response,
for example accepting new input data, and then transfer control to more complex CPU-intensive processes that run at a lower priority.

It's just as I described )))) Of course the priority logic is dynamic, and that is the difficulty in setting the level: by setting a priority level we cannot determine the execution time under the dynamic prioritisation logic of the environment below. The terminal always sits above the Windows or Linux environment and cannot affect the prioritisation logic of the layer beneath it.

 
Nikolai Semko:


Not all of the questions posed are assumed to have answers.

How resource-intensive is the interrupt itself?
Most likely it depends on the processor frequency.

How long does it take to save a process for later resumption?
In quantum-based scheduling algorithms, the active process is changed if:

  • the process has terminated and left the system;
  • an error has occurred;
  • the process has switched to the WAITING state;
  • the process has exhausted its quantum of processor time.

How to catch interrupts?
A prescaler is a clock-frequency divider built as one or more T flip-flops connected in series.
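
For illustration, such a divider chain is easy to model in code: each T flip-flop toggles on the falling edge of the previous stage, so three stages divide the input clock by 8. A toy simulation (purely illustrative, not tied to any real hardware):

// Sketch: three cascaded T flip-flops acting as a divide-by-8 prescaler.
void OnStart()
{
   bool q0 = false, q1 = false, q2 = false;
   for(int clk = 1; clk <= 16; clk++)   // 16 input clock pulses
   {
      bool prev0 = q0;
      q0 = !q0;                         // stage 0 toggles on every input pulse
      if(prev0 && !q0)                  // falling edge of stage 0
      {
         bool prev1 = q1;
         q1 = !q1;                      // stage 1: half the frequency again
         if(prev1 && !q1)
            q2 = !q2;                   // stage 2: q2 runs at 1/8 of the input frequency
      }
      PrintFormat("pulse %2d: q0=%d q1=%d q2=%d", clk, q0, q1, q2);
   }
}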

 
Roman:

Not all of the questions posed are assumed to have answers.

Go and study the subject (for ten years at least) and please don't litter this thread.

Questions here are discussed with a different level of training and are of a different class.

 
Nikolai Semko:
  • How long does it take to save all the parameters of running processes and/or threads needed to resume operation?
  • Is this saving done by hardware or software?

Hasn't it been this way since the 286 processor? No idea; I don't remember and never dug into it, but it has definitely been so since the Pentium-1 (I read a book about it, a long time ago though).

The processor works in protected mode. Every process has virtual memory allocated to it; physical addresses of memory banks (RAM cells) are translated into virtual addresses (or rather vice versa?) by the processor itself (I don't remember exactly, but it seems there is a special register holding a pointer to the address-translation table). It all happens in hardware and is not measurable; this is the so-called processor core that distinguishes each Intel processor line, and it is not the cache!

Nikolai Semko:
  • How long does it take to recover the process?

Any program on Windows must register itself as a process and create at least one thread.

Then the Windows task scheduler will allocate resources to the process and queue messages to it. How the scheduler works is of no interest to me; it is enough that the process priority can be raised, and you can see that, with some effort, the PC starts scheduling in favour of my application, i.e. Microsoft gives it resources. That is all I need from the OS.
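
That "the process priority can be raised" is easy to check directly from MQL5 through a WinAPI import. A sketch under the usual caveats (requires "Allow DLL imports" in the terminal; ABOVE_NORMAL_PRIORITY_CLASS is the standard WinAPI constant; this raises the priority of the whole terminal process):

// Sketch: raise the priority class of the terminal process via WinAPI.
#import "kernel32.dll"
long GetCurrentProcess();                                  // pseudo-handle of own process
int  SetPriorityClass(long hProcess, uint dwPriorityClass);
#import

#define ABOVE_NORMAL_PRIORITY_CLASS 0x00008000             // standard WinAPI value

void OnStart()
{
   if(SetPriorityClass(GetCurrentProcess(), ABOVE_NORMAL_PRIORITY_CLASS) != 0)
      Print("Terminal process priority raised to ABOVE_NORMAL");
   else
      Print("SetPriorityClass failed");
}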

Nikolai Semko:
  • Can we measure this resource-intensiveness? Probably not: how do you catch the moment of the interrupt?
  • What is the order of magnitude of these costs: tens or hundreds of nanoseconds, microseconds, or tens and hundreds of microseconds? It would be interesting to get such information.

Eh, measure what? Interrupts are hardware; they are handled by the OS, with the help of drivers of course.

The timer? If I'm not mistaken, a timer message cannot get into the message queue while the process is still busy processing the previous one; some kind of OS foolproofing. Google WM_TIMER; it should be described in detail.

The order of magnitude? You can only measure processor clock ticks and then convert them using the processor's clock rate; it was discussed at https://www.mql5.com/ru/forum/352454#comment_18588098 . Google has tons of information on performance measurement.

 
Renat Fatkhullin:

Go and study the subject (for ten years at least) and please don't litter this thread.

Questions here are discussed with a different level of training and are of a different class.

Then everyone should be sent to study it, not selectively )) But, as always, it is the one who asks adequate questions who gets it in the neck.
It was not for nothing that I raised this topic after finding out that handlers are executed in blocking mode.
I touched the real crux of the problem, and you don't like it. OK, I'll drop the subject.
But I still do not see how timely events can be achieved with synchronous processing.
Slava, Nikolay, Valery, thank you for the constructive dialogue.

 
Igor Makanu:

Hasn't it been this way since the 286 processor? No idea; I don't remember and never dug into it, but it has definitely been so since the Pentium-1 (I read a book about it, a long time ago though).
It all happens in hardware and is not measurable; this is the so-called processor core that distinguishes each Intel processor line, and it is not the cache!

Good if that is the case.
I think it is: almost everything happens at the hardware level, otherwise multithreading would not be as efficient.

 
Nikolai Semko:

Good if that is the case.
I think it is: almost everything happens at the hardware level, otherwise multithreading would not be as efficient.

Only like this:

google: processor protected mode

If I'm not mistaken, protected mode gives the OS kernel a separate privilege level, and because each process has its own virtual memory, it is impossible to get at the RAM data of a running program... well, unless you run it under a debugger as a separate process... but that's another area of expertise ))))

But, unequivocally, everything works at the hardware level and cannot be measured except with OS tools; switching virtual memory between processes is instantaneous, and the processor itself runs at its internal frequency (the CPU multiplier)... and if you start thinking about the cache as well... why? If there is a problem, look for a solution! Want to write a driver? )))

PS: you can write a driver. I remember using a TCP logger that was installed as a driver, logged all traffic and then displayed it in a table broken down by process... the only question is how writing drivers would help develop a profitable trading system ))))



UPD: Habr, "What is Protected Mode and what does it do": https://habr.com/ru/post/118881/

UPD: hardware-level (CPU) privileges for code execution: Protection rings (Wikipedia)

 
Renat Fatkhullin:

You are always guaranteed to get outliers on random single samples of any instruction, including the simplest assembler instruction like inc eax. This is architectural, due to the physical limitations of "honestly allocating the time quanta of thousands of threads across a small number of cores".

Stop being obtuse and chasing single outliers per million requests.

I noticed that CopyTicks lags, if only rarely. I wrote a test script

#include <fxsaber\Benchmark\Benchmark.mqh> // https://www.mql5.com/ru/code/31279

void OnTick()
{
  Sleep(1000); // throttle: roughly one measurement per second
  
  MqlTick Tick[1];
  
  // _B(expression, limit) from Benchmark.mqh runs the expression and
  // alerts when it takes longer than the limit (in microseconds).
  _B(CopyTicks(_Symbol, Tick, COPY_TICKS_ALL, 0, 1), 100);
  _B(SymbolInfoTick(_Symbol, Tick[0]), 100);
}

and ran it under stress. SymbolInfoTick produces noticeably more alerts than CopyTicks.


No complaints. I would just like to understand: what in the implementations of these functions accounts for their different sensitivity to stress load?