Question for developers - using all computational cores during optimisation - page 6

 
Aleksey Vyazmikin:

This is not the right approach - instead of giving out tasks one at a time, you should reallocate capacity when free resources appear, i.e. cancel tasks that have already been issued and reassign them to other agents for execution. At the same time, each agent's performance needs to be analyzed in order to give each core the right number of new jobs.

this is nonsense, sorry

> you have to cancel tasks that have already been issued and give them to others to execute

I think this is not realistic, and why bother, when it is easier to create a batch of jobs, give one job to the first available thread, wait until it is executed, then give the next job to the first available thread (note the word thread, not processor core; the restriction to physical cores should be removed - that is the user's right, not the programmers'. Recall that network agents currently use only real cores, not logical threads, which artificially halves performance)

>it is necessary to analyze the performance of each agent in order to give each core the required number of new jobs for execution.

and this is not needed at all, because cores of the same processor, with the same nominal performance, compute at different speeds depending on the task; there is no need to calculate anything when you can get by without calculating at all
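The "one job to the first available thread" policy described above can be sketched as a shared work queue; this is an illustrative model only, not the tester's actual code (job durations and thread counts are invented):

```python
import queue
import threading
import time

def dispatch_one_at_a_time(jobs, n_threads):
    """Give each free thread exactly one job; a finished thread
    immediately pulls the next job from the shared queue."""
    q = queue.Queue()
    for job in jobs:
        q.put(job)
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                job = q.get_nowait()  # first free thread takes the next job
            except queue.Empty:
                return
            time.sleep(job)           # stand-in for running one tester pass
            with lock:
                results.append(job)

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

# Uneven job durations: no thread is ever stuck with a pre-assigned batch.
done = dispatch_one_at_a_time([0.01, 0.05, 0.01, 0.02], n_threads=2)
print(len(done))  # 4
```

Because jobs are pulled on demand, a slow pass on one thread never blocks the others from draining the queue.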

 
Boris Egorov:

this is nonsense, sorry

> cancel tasks that have already been issued and reassign them to others for execution

I think this is not realistic, and why bother, when it is easier to create a batch of jobs, give one job to the first available thread, wait until it is executed, then give the next job to the first available thread (note the word thread, not processor core; the restriction to physical cores should be removed - that is the user's right, not the programmers'. Recall that network agents currently use only real cores, not logical threads, which artificially halves performance)

>it is necessary to analyze the performance of each agent in order to give each core the required number of new jobs for execution.

and you don't need it at all, because cores of the same processor, with the same nominal performance, compute at different speeds depending on the task; why calculate anything there when you can get by without calculating at all?

You seem to have little experience with the optimizer, and don't understand that information about completed passes arrives late, and that after a job is done the agent sends a frame, which can be very heavy; all this leads to communication delays and slows down the optimization. Therefore, tasks need to be issued in batches and their progress monitored, issuing new tasks to those agents that are close to completing their work.
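The batching-with-monitoring scheme argued for here could look roughly like the following simulation; agent names, speeds, batch size and refill threshold are all invented for illustration:

```python
from collections import deque

def batch_scheduler(total_jobs, agents, batch=8, refill_at=2):
    """Issue jobs in batches and top up an agent as soon as its
    remaining work drops to `refill_at` jobs (simulation only;
    `agents` maps agent name -> jobs finished per tick)."""
    pending = deque(range(total_jobs))
    queues = {a: deque() for a in agents}
    done = 0
    ticks = 0
    while done < total_jobs:
        ticks += 1
        for a, speed in agents.items():
            # top up agents that are close to finishing their batch
            if len(queues[a]) <= refill_at:
                for _ in range(batch):
                    if pending:
                        queues[a].append(pending.popleft())
            # the agent completes up to `speed` jobs this tick
            for _ in range(min(speed, len(queues[a]))):
                queues[a].popleft()
                done += 1
    return ticks

# A fast and a slow agent; batches are replenished before either idles.
print(batch_scheduler(100, {"fast": 4, "slow": 1}))
```

The point of the refill threshold is that new batches go out before an agent drains its queue, so the heavy frame transfer never leaves it without work.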

 
Aleksey Vyazmikin:

You seem to have little experience with the optimizer, and don't understand that information about completed passes arrives late, and that after a job is done the agent sends a frame, which can be very heavy; all this leads to communication delays and slows down the optimization. Therefore, tasks need to be issued in batches and their progress monitored, issuing new tasks to those agents that are close to completing their work.

>It sounds like you have little experience with the optimizer,

are you kidding? 6 years uninterruptedly

>Information about completed passes arrives late, and after a job is done the agent sends a frame, which can be very heavy; all this leads to communication delays and slows down the optimization. Therefore, tasks need to be issued in batches and their progress monitored, issuing new tasks to those agents that are close to completing their work.

>this will lead to communication delays and slow down optimisation.

and it doesn't matter - networks are fast these days

but having cores sit idle while one poor core finishes off a pile of jobs from its batch slows down the optimization, because all the rest (dozens of cores) just stand idle; the cores need to keep computing continuously, without stopping

it looks like you have never optimized over a large number of parameters ... and don't argue, you have no practical experience
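The idle-core effect being described can be quantified with a toy makespan model; the job times and core count below are made up, but they show why pre-assigned batches lose to handing out one job at a time when pass durations are uneven:

```python
def makespan_prebatched(job_times, n_cores):
    """Split jobs into fixed per-core batches up front: a slow batch
    leaves every other core idle at the end."""
    chunk = (len(job_times) + n_cores - 1) // n_cores
    batches = [job_times[i:i + chunk] for i in range(0, len(job_times), chunk)]
    return max(sum(b) for b in batches)

def makespan_greedy(job_times, n_cores):
    """Hand the next job to whichever core frees up first
    (the one-job-at-a-time policy)."""
    finish = [0.0] * n_cores
    for t in job_times:
        i = finish.index(min(finish))
        finish[i] += t
    return max(finish)

# One heavy pass among many light ones, 4 cores:
jobs = [10.0] + [1.0] * 11
print(makespan_prebatched(jobs, 4))  # 12.0 (one core got the heavy job plus two light ones)
print(makespan_greedy(jobs, 4))      # 10.0 (the other cores absorb all the light jobs)
```

The gap widens as batches get larger and pass times get more unequal, which is exactly the "dozens of idle cores" complaint above.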

 
Boris Egorov:

>It sounds like you have little experience with the optimiser,

are you kidding? 6 years continuously

>Information about completed passes arrives late, and after a job is done the agent sends a frame, which can be very heavy; all this leads to communication delays and slows down the optimization. Therefore, tasks need to be issued in batches and their progress monitored, issuing new tasks to those agents that are close to completing their work.

>this will lead to communication delays and slow down optimisation.

and it doesn't matter - networks are fast these days

but having cores sit idle while one poor core finishes off a pile of jobs from its batch slows down the optimization, because all the rest (dozens of cores) just stand idle; the cores need to keep computing continuously, without stopping

it looks like you have never optimized over a large number of parameters ... and don't argue, you have no practical experience ...

You shouldn't be such a self-righteous egomaniac - "networks are fast", how egocentric. On the contrary, networks are not fast when it comes to tens and hundreds of megabytes.

Primitive EA optimization is not all it's cracked up to be - broaden your horizons and use mathematical calculation.

Yes, and keep in mind that this is primarily a for-profit project, not one for the enjoyment of users, and as such the mechanism must take into account the random distribution of tasks and correct financial accounting of their execution...

 
Aleksey Vyazmikin:

You shouldn't be such a self-righteous egomaniac - "networks are fast", how egocentric. On the contrary, networks are not fast when it comes to tens and hundreds of megabytes.

Primitive EA optimization is not all it's cracked up to be - broaden your horizons and use mathematical calculation.

Oh, and keep in mind that this is primarily a for-profit project, not one for pleasing users, and as such the mechanism must account for the random distribution of jobs and correct financial accounting of their execution...

tens and hundreds of megabytes is nothing, the time spent is minimal, and by the way it has nothing to do with this; you should think before you write - whether the jobs go out in a batch or one by one, that traffic will have to pass anyway
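For scale, here is a back-of-the-envelope estimate of how long the frames discussed here would take to transfer; the link speeds are assumptions, and, as the post says, the total volume is the same whether the jobs go out in a batch or one by one:

```python
def transfer_seconds(size_mb, mbit_per_s):
    """Time to move a payload of size_mb megabytes over a link of
    mbit_per_s megabits per second (ideal link, no protocol overhead)."""
    return size_mb * 8 / mbit_per_s

# A 300 MB frame (the "several hundred MB" mentioned in the thread)
# on two common link speeds:
for rate in (100, 1000):  # Mbit/s: fast Ethernet vs gigabit
    print(f"{rate} Mbit/s: {transfer_seconds(300, rate):.0f} s")
# 100 Mbit/s: 24 s
# 1000 Mbit/s: 2 s
```

So whether the delay "matters" depends on the link: on gigabit LAN it is seconds, on a slow uplink it can dominate a short pass.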

>Primitive EA optimization is not all it's cracked up to be - broaden your horizons and use mathematical calculation.

I wish you the same about horizons

About selfishness, too.

I'm not primitive, so what is the point of it all? Enlighten us ignoramuses.


I find your initiative completely absurd in terms of time consumption and optimization speed.

How to quickly develop and debug a trading strategy in MetaTrader 5
  • www.mql5.com
Scalping automated systems are rightly considered the pinnacle of algorithmic trading, but they are also the hardest to write code for. In this article we show how to build strategies based on the analysis of incoming ticks, using the built-in debugging and visual testing tools. To develop entry rules and...
 
Boris Egorov:

You are just looking at your particular case of optimisation, while Alexey is looking at his (his EA is several hundred MB and takes a long time to transfer).

And MQ looks at overall optimizer usage and adjusts it to fit the majority, not you and Alexey.

Tasks are redistributed, at least for me on local cores. If somewhere they are not redistributed, give me an example to reproduce, so developers can take it into account as well.

 
Andrey Khatimlianskii:

You are just looking at your particular case of optimisation, while Alexey is looking at his (his EA is several hundred MB and takes a long time to transfer).

And MQ looks at overall optimizer usage and adjusts it to fit the majority, not you and Alexey.

Tasks are redistributed, at least for me on local cores. If somewhere they are not redistributed, give me an example to reproduce, so that the developers can take it into account as well.

I agree that my case is a special one.

There is a problem with task allocation when new remote agents are connected - this happens when resources are freed up from other tasks.
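One way the allocation problem with late-joining agents could be avoided is to keep undispatched jobs in a central queue, so that a newly connected agent starts pulling work immediately; a minimal sketch (the agent names and the pull-based `request_work` call are hypothetical, not the tester's real protocol):

```python
from collections import deque

class Scheduler:
    """Central queue: jobs are only handed out on demand, so an agent
    that connects mid-run can start pulling work at once."""
    def __init__(self, jobs):
        self.pending = deque(jobs)
        self.assigned = {}

    def connect(self, agent):
        self.assigned[agent] = []

    def request_work(self, agent, n=1):
        taken = []
        for _ in range(n):
            if self.pending:
                taken.append(self.pending.popleft())
        self.assigned[agent].extend(taken)
        return taken

s = Scheduler(range(10))
s.connect("local-1")
s.request_work("local-1", 4)
s.connect("remote-1")                 # agent joins mid-run...
late = s.request_work("remote-1", 4)  # ...and still gets jobs
print(late)  # [4, 5, 6, 7]
```

With pre-assigned batches, by contrast, a late agent finds the queue already empty even though other agents still hold unstarted jobs.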

 
Andrey Khatimlianskii:

You are just looking at your particular case of optimisation, while Alexey is looking at his (his EA is several hundred MB and takes a long time to transfer).

And MQ looks at overall optimizer usage and adjusts it to fit the majority, not you and Alexey.

Tasks are redistributed, at least for me on local cores. If somewhere they are not redistributed, give me an example to reproduce, so that the developers can take it into account too.

>maybe my case is a special one too, but really only "probably"

>give an example to reproduce so developers can take it into account too.

not really ... I cannot share my EA, and I am not interested in the standard ones; I can make screenshots to show that the tasks are not redistributed

If they were redistributed, it would be a solution to the problem

 

I want to ask the developers: why has the optimizer distributed a bunch of tasks to only certain cores, rather than one task to each, and thereby tripled the calculation time in this case?

.... the calculation time has tripled. Will they ever get the optimiser to work properly???? A lot of free cores are sitting idle ...

 

For the second day it hasn't computed anything; all cores - 12 local and about 30 network ones - are idle, and I deliberately don't touch it ... I don't know what it's thinking; probably looking for the meaning of life or a cure for coronavirus :-)

I think the optimizer should be abandoned because of its inoperability and sluggishness

and the recent MT decisions to limit agents to physical cores only and to persistently and stupidly hand a bunch of jobs to only certain cores, instead of one job per core, demonstrate a total lack of understanding of high-performance computing by the developers