Question for developers - using all computational cores during optimisation - page 10
3. The "one core, one job" problem has not been solved and probably will not be.
This is inefficient for the cloud: jobs have to be handed out in packets, so they won't be given out one at a time. The situation can be partially improved by distributing only a portion of the jobs and then tossing mini-packets to agents as they free up. But local agents and farms should each be given one job; there the master is the boss, and traffic doesn't matter.
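The hybrid scheme described above (hand out only part of the jobs in packets up front, then feed mini-packets to whichever agent frees up first) could be sketched roughly like this. This is an illustration, not the actual tester implementation; the fraction, packet size, and agent speeds are all made-up parameters:

```python
import heapq

def dispatch(jobs, agent_speeds, initial_fraction=0.5, mini_packet=4):
    """Hand out a fraction of the jobs up front, then feed small
    mini-packets to whichever agent frees up first.
    agent_speeds: jobs per unit time for each agent (illustrative)."""
    n_agents = len(agent_speeds)
    # jobs per agent in the initial hand-out
    upfront = int(len(jobs) * initial_fraction) // n_agents
    queue = list(jobs)
    done = {i: 0 for i in range(n_agents)}
    # (finish_time, agent_id) min-heap: earliest-finishing agent gets work next
    heap = []
    for i, speed in enumerate(agent_speeds):
        batch = [queue.pop() for _ in range(min(upfront, len(queue)))]
        done[i] += len(batch)
        heapq.heappush(heap, (len(batch) / speed, i))
    # feed mini-packets to freed agents until the queue is empty
    while queue:
        t, i = heapq.heappop(heap)
        batch = [queue.pop() for _ in range(min(mini_packet, len(queue)))]
        done[i] += len(batch)
        heapq.heappush(heap, (t + len(batch) / agent_speeds[i], i))
    return done, max(t for t, _ in heap)
```

With one agent four times faster than the others, the fast agent ends up absorbing most of the mini-packets instead of idling after its initial batch.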
You are absolutely right.
That's how it works now: packets are added; if some agents are free while others are busy, the freed agents are loaded, and if they finish faster, the previously issued tasks are cancelled. That's when there are many jobs; with only a few, this probably doesn't happen.
I guess what I'm saying is that such a batch scheme is inadequate for local agents....
i.e. 1-5 cores stay busy all the time while dozens of other cores sit idle.... moreover, those 1-5 cores still have many tasks left to compute....
Plus, listen to me:
the faster cores would compute 3-4 times as many jobs in the same amount of time... which is why the calculation would finish ten times faster.
And that is why you shouldn't give more than one job to a core at a time.
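The effect described above, where one-job-at-a-time dispatch beats equal up-front batches when cores differ in speed, can be checked with a small simulation. The speeds here are made up for illustration; this is not how the tester actually schedules:

```python
import heapq

def makespan_batches(n_jobs, speeds):
    """Up-front batching: each core gets an equal share immediately,
    so total time is set by the slowest core's share."""
    share = n_jobs / len(speeds)
    return max(share / s for s in speeds)

def makespan_one_by_one(n_jobs, speeds):
    """One job per core at a time: a freed core immediately takes the
    next job, so fast cores end up computing more jobs."""
    heap = [(0.0, s) for s in speeds]   # (time when core is free, core speed)
    heapq.heapify(heap)
    for _ in range(n_jobs):
        t, s = heapq.heappop(heap)
        heapq.heappush(heap, (t + 1.0 / s, s))
    return max(t for t, _ in heap)
```

For example, with three slow cores and one core that is 4x faster, equal batches finish only when the slow cores do, while per-job dispatch lets the fast core keep pulling work and finishes much sooner.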
Do you have cores idle for any type of optimization or only for genetics?
The packets were handed out almost all at once; only the remainder from dividing the total number of jobs by the number of agents was kept in reserve for later distribution (as I recall). That was done, I think, in the spring. Has anything changed since then? I haven't followed it closely; my EA optimises with an even load, I don't use the cloud yet, and the idle local agent situation is less relevant for me now.
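The split described above (an even share per agent, with only the division remainder held in reserve) is plain integer division. A minimal sketch, assuming that is indeed how the packets were sized:

```python
def split_jobs(total_jobs, n_agents):
    """Each agent gets total_jobs // n_agents up front;
    total_jobs % n_agents jobs stay in reserve for later distribution."""
    per_agent = total_jobs // n_agents
    reserve = total_jobs % n_agents
    return per_agent, reserve

# e.g. 1000 jobs over 12 agents: 83 jobs each, 4 held in reserve
```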
My recent observations don't reveal such a problem.
I use full (exhaustive) optimisation.
Rebuilding the tester is our priority right now. We're rewriting a lot of things.
I confirm the problem with idle cores. Question to the developers: when will there be an update, and is there any temporary workaround for this PROBLEM? A solution was promised; I've been looking at posts from early 2020... it's already 2021!
I use the genetic algorithm for optimisation.
As a result, the optimisation takes a couple of hours at first and then drags on for more than a day...
Any luck finding a solution to this issue? I tried disconnecting cores and then re-running, but the last of the running ones won't disconnect; everything ends up waiting for one...
I only use the cores of my PC's CPU, without networking.
Out of 12 cores, most are idle...