The only reasonable explanation that comes to my mind is manual control over the number of threads running in the pool (if we don't trust the default async|deferred policy): for example, if we see that the system is heavily loaded, instead of sending jobs with the async flag, send them deferred.
In general, I'm somewhat disappointed by async()'s sluggishness; I'll write my own lightweight thread pool, which I expect will be much faster.
The default async|deferred mode is described in the book as follows: if there are not enough physical resources, the task does not create a new thread but executes in the calling thread, blocking it.
And you should always keep this possible blocking of the calling thread in mind in the default mode, because most of the time you need solutions that do not block the main thread.
That is, the default mode automatically chooses where to run the task depending on the load on the physical resource, i.e. the processor.
For this reason, if we are sure that the physical resource will not be overloaded, we'd better specify the std::launch::async flag explicitly.
But to overload a modern processor, you'd have to go looking for computations heavy enough to use its full potential ))
I can't say anything about speed, because I'm still studying the theory ))
The processor can bear even a large number of threads, but the operating system's capabilities are more likely to become a bottleneck. It cannot multiply threads endlessly; sooner or later async(std::launch::async, ...) will run into a thrown exception.
Yes, there is always a physical limit, but our tasks for mt5 are unlikely to exceed it.
Also, async and future deliver exceptions through their return value if they occur, no matter how we obtain that value: via a lambda function, ref(), or .get().
And std::thread cannot return exceptions through its return value.
I don't think you should get too excited about async. It seems to have been made for convenience, but all this machinery really hurts performance, and not by a little.
And std::thread cannot return exceptions through its return value.
But if you have to, it's a dozen extra lines (and it would work faster, without all the heap allocation).
So as not to make unsubstantiated claims:
It didn't even take a dozen. Yes, you can go even more low-level, without all the packaged_task and future stuff, but the point is that throwing exceptions across threads isn't some exclusive async feature, while thread has no such channel built in at all.
Why complicate things when there are no complex tasks for MT4|MT5 that require delayed calculations, pools, etc.?
Actually, the tasks are there. It's MT that lacks the capabilities.
Guys, I'm sharing my research.
I hacked together my own thread pool. I should note that it's a fully functional version: you can pass any functor with any parameters, and a future is returned in response, i.e. all the goodies such as exception catching and waiting for completion are available. And I used it the same way as here: https://www.mql5.com/ru/forum/318593/page34#comment_12700601.
I don't know who wrote std::async and in what state, but my homegrown thing is 4 times faster than the standard one (with 10 worker threads). Increasing the number of threads beyond the number of cores only slows it down. With pool size == number of cores (2), async loses by about 30 times. So that's how it is.
If I ever need to pool threads, it definitely won't be with standard async )).
Thank you for the research. It's a good example, something to think about and learn from.
But from our general discussion, most of us have come to the conclusion that a thread pool is not really needed.
In my case that's for sure: since I've realized the pool is static in its number of threads, it doesn't work for me.
But yes, when I do need a pool, your example will come in handy. Thanks for showing the examples.
I'm still getting the hang of it ))