Asynchronous and multi-threaded programming in MQL - page 36

 
Vict:

The only reasonable explanation that comes to my mind is manual control over the number of threads running in the pool (if we don't trust the default async|deferred) - for example, depending on how heavily the system is loaded, send jobs either with the async flag or deferred.

In general, I'm somewhat disappointed by async()'s sluggishness; I'll write my own lightweight thread pool, which I suspect will be much faster.

The default async|deferred mode is described in the book: if there are not enough physical resources, the task does not create a new thread but executes in the main thread, blocking it.
You should always keep this possible blocking of the main thread in mind in default mode, because most of the time you need solutions that do not block the main thread.
In other words, the default mode automatically decides where to run the task depending on the load on the physical resource, i.e. the processor.
For this reason, if we are sure that the physical resource will not be overloaded, it is better to specify the std::launch::async flag explicitly.
And to overload a physical resource on modern processors, you would have to go looking for calculations heavy enough to use its full potential ))
I can't say anything about speed, because I'm still studying the theory ))

 
Roman:

So if we are sure that the physical resource will not be overloaded, it is better to specify the std::launch::async flag explicitly.
And to overload a physical resource on modern processors, you would have to go looking for calculations heavy enough to use its full potential ))

The processor can cope with even a large number of threads, but the operating system's capabilities are more likely to become the bottleneck. It cannot spawn threads endlessly; sooner or later async(launch::async, ...) will run into a thrown exception.

 
Vict:

The processor can cope with even a large number of threads, but the operating system's capabilities are more likely to become the bottleneck. It cannot spawn threads endlessly; sooner or later async(launch::async, ...) will run into a thrown exception.

Yes, there is always a physical limit, but our tasks for mt5 are unlikely to exceed it.
Also, async and future propagate exceptions along with the return value if they occur, no matter how we obtain that value - via a lambda function, ref() or .get().
std::thread, on the other hand, cannot return exceptions through its return value.

 
Roman:

Yes, there is always a physical limit, but our tasks for mt5 are unlikely to exceed it.
Also, async and future propagate exceptions along with the return value if they occur, no matter how we obtain that value - via a lambda function, ref() or .get().
std::thread, on the other hand, cannot return exceptions through its return value.

I don't think you should get too excited about async. It seems to have been made for convenience, but all that machinery really seems to hurt performance, and not by a small margin.

std::thread, on the other hand, cannot return exceptions through its return value.

It's not always necessary. But if you do need it, it's a dozen extra lines (and it will even work faster - without all the heap allocation).
 
Vict:
But if you do need it, it's a dozen extra lines (and it will even work faster - without all the heap allocation).

To back this up with code:

#include <thread>
#include <future>
#include <iostream>
#include <chrono>
using namespace std;

template <typename T>
void thread_fn(T &&task) {task(0);}

int main()
{
   // note: 3s needs the C++14 chrono literals, pulled in here by "using namespace std"
   packaged_task<int(int)> task{ [](int i){ this_thread::sleep_for(3s); return i==0 ? throw 0 : i; } };
   auto f = task.get_future();
   thread t{thread_fn<decltype(task)>, move(task)};
   t.detach();

   try {
      cout << f.get() << endl;
   }catch(...) {
      cout << "exception caught" << endl;
   }

   return 0;
}

It didn't even take a dozen. Yes, you can go even more low-level without all the packaged_task and future machinery, but the point is that exception propagation isn't some exclusive async feature - plain std::thread just has none of it built in.

 
Perhaps, from a practical point of view, it would sometimes be worth stepping away from all these comforts and remembering the Windows API - CreateThread, synchronisation primitives, interlocked functions. They are all still there. Only when you write for Windows, of course. Why complicate things when MT4|MT5 have no tasks complicated enough to require deferred calculations, pools and so on?
 
Andrei Novichkov:
Why complicate things when MT4|MT5 have no complex tasks requiring deferred calculations, pools etc.?
Actually, there are such tasks; it's MT that lacks the capabilities.
All sorts of pools are really unnecessary here. Standard multi-threading solutions, applied correctly, are enough.
 
Yuriy Asaulenko:
Actually, there are such tasks; it's MT that lacks the capabilities.
All sorts of pools are really unnecessary here. Standard multi-threading solutions, applied correctly, are enough.
That's what I mean.
 

Guys, I'm sharing my research.

I knocked together my own thread pool. I should note it's a fully functional version: you can pass any functor with any parameters, a future is returned in response, i.e. all the perks in the form of exception catching and waiting for completion are available. I used it in the same way as here: https://www.mql5.com/ru/forum/318593/page34#comment_12700601.

// NOTE: Thread_pool is the author's own class; its implementation is not shown in the post
#include <future>
#include <iostream>
#include <vector>
#include <mutex>
#include <set>
#include <thread>   // thread::id, this_thread
#include <atomic>   // atomic<unsigned>
using namespace std;

mutex mtx;
set<thread::id> id;       // ids of the worker threads that actually ran tasks
atomic<unsigned> atm{0};  // count of executed tasks

int main()
{
   Thread_pool p{10};
   for (int i = 0;  i < 10000;  ++ i) {
      vector<future<void>> futures;
      for (int i = 0; i < 10; ++i) {
         auto fut = p.push([]{
                              ++ atm;
                              lock_guard<mutex> lck{mtx};
                              id.insert( this_thread::get_id() );
                           });
         futures.push_back(move(fut));
      }
   }
   cout << "executed " << atm << " tasks, by " << id.size() << " threads\n";
}

I don't know who wrote std::async or in what state, but my knee-built thing is 4 times faster than the standard one (with 10 worker threads). Increasing the number of threads beyond the number of cores only slows things down. With pool size == number of cores (2), async loses by about 30 times. So there you have it.

If I ever want to pool threads, it certainly won't be with standard async )).

 
Vict:

Guys, I'm sharing my research.

I knocked together my own thread pool. I should note it's a fully functional version: you can pass any functor with any parameters, a future is returned in response, i.e. all the perks in the form of exception catching and waiting for completion are available. I used it in the same way as here: https://www.mql5.com/ru/forum/318593/page34#comment_12700601.

I don't know who wrote std::async or in what state, but my knee-built thing is 4 times faster than the standard one (with 10 worker threads). Increasing the number of threads beyond the number of cores only slows things down. With pool size == number of cores (2), async loses by about 30 times. So there you have it.

If I ever want to pool threads, it certainly won't be with standard async )).

Thank you for the research. It's a good example, something to think about and learn from.
But in our general discussion most of us came to the conclusion that a thread pool isn't really needed.
In my case that's certainly true: since I realised the pool is static in its number of threads, it doesn't work for me.
But yes, when I do need a pool, your example will come in handy. Thanks for showing the examples.
I'm still getting the hang of it ))