I brought two old PCs back to my home yesterday to extend my local farm and accelerate EA optimization. I was expecting each PC to add about 10% more processing power and to shorten the optimization time accordingly.
However, the old computers, being old quad-cores with DDR2 memory, actually slowed the optimization down tremendously. During genetic optimization, the master PC shares a batch of 512 runs across all computers/agents with no regard for their speed. The result was that the fastest computers would run for a while, then sit idle for roughly 50% of the batch time waiting for the older computers to finish. On the next batch, I would have expected the master computer to send fewer tasks to the slower agents so that everyone finishes at the same time, but no! The old computers got the same amount of work, and the faster computers spent a lot of time idling.
Genetic EA optimization needs a better load-balancing algorithm - for example, a simple function that monitors each agent's average run time and dispatches tasks accordingly. Even better, if MT5 kept a cache with the performance history of local farm agents, it could remember the results of past optimizations and spread the load correctly right from the start.
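To illustrate the kind of monitor-and-dispatch function I mean, here is a minimal sketch. This is hypothetical Python, not the MT5 tester API - the function name, agent names, and timings are all made up; it just shows how a batch could be split in proportion to each agent's measured throughput so everyone finishes at about the same time:

```python
def balance_batch(batch_size, avg_pass_seconds):
    """Split batch_size passes across agents so all finish together.

    avg_pass_seconds: {agent_name: mean seconds per pass, measured
    from previous batches or a cached performance history}.
    """
    # Throughput is the inverse of the average pass time.
    speed = {a: 1.0 / t for a, t in avg_pass_seconds.items()}
    total = sum(speed.values())
    # Ideal (fractional) share of the batch per agent.
    shares = {a: batch_size * s / total for a, s in speed.items()}
    # Round down, then hand the leftover passes to the agents
    # with the largest fractional remainders.
    alloc = {a: int(n) for a, n in shares.items()}
    leftover = batch_size - sum(alloc.values())
    for a in sorted(shares, key=lambda a: shares[a] - alloc[a], reverse=True):
        if leftover == 0:
            break
        alloc[a] += 1
        leftover -= 1
    return alloc
```

With a fast agent averaging 2 s per pass and an old quad-core averaging 6 s, a batch of 512 would be split 384/128 instead of 256/256, and neither machine would idle.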
I hope this is the right place to share ideas like these. Cheers,
Marc
I have not done this in the MetaTrader scenario, but I have done a lot of load balancing and optimization in enterprise systems. So I was wondering whether it would be possible to install an external load balancer in front of your servers?
Worth checking if that is an option.
Not exactly - application-level load balancing is what I am talking about.
As I said, I have not looked into how MetaTrader sends requests out to its distributed servers, but if you have information on how it does, please share it, as it would be interesting to see whether this is feasible.
Actually, I did search but did not find anything, so I put forward a question (rather than advice) in case someone with more knowledge could help explore the area. From your response I thought you might have some useful material on this, so I asked if you could share it - please do if you have.
Secondly, if you read my original post, I did not give advice; I posed a question. Is that not the purpose of a discussion forum?
Thirdly, from your statement that "application level load balancing still takes place on a network based approach", I really wonder what experience you have in scaling large, complex systems - please share that too, in case it helps the conversation.
Please, before suggesting anything complicated, zoom out and look at it sensibly (instead of deeply). There is no way for the GA to know beforehand how long the passes will take, at least for the first generation, in order to balance the load TIME-wise.
Example: among the optimized parameters there is a Timeframe parameter, with the range set from the daily down to the 1-minute timeframe, and the data set is, say, 2 years. It is obvious to us that the minute timeframe takes longer to compute than the daily timeframe, but there is no way for the GA to know this. Therefore, when dealing out the tasks to various agents, even on identical hardware, there will be a situation where one core is done and waiting for the rest of the generation to complete.
There is no fix.
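To put rough numbers on the example above, here is a back-of-envelope sketch (assumptions: a round-the-clock, 5-day-per-week FX session and ~52 weeks per year) counting the bars a 2-year pass must process at each timeframe:

```python
# Assumed round-the-clock FX session, 5 trading days per week.
MINUTES_PER_TRADING_DAY = 24 * 60
TRADING_DAYS = 2 * 52 * 5  # ~2 years of weekdays

bars = {
    "D1": TRADING_DAYS,                           # one bar per day
    "H1": TRADING_DAYS * 24,                      # 24 bars per day
    "M1": TRADING_DAYS * MINUTES_PER_TRADING_DAY, # 1440 bars per day
}
# An M1 pass touches ~1440x more bars than a D1 pass over the same
# window - a cost spread no scheduler can know before the first run.
```

So two passes dealt to identical agents can differ in cost by three orders of magnitude, which is exactly why the first generation cannot be balanced in advance.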
The problem is that my view of these systems is the "zoom out" view, having spent decades tuning much larger systems with far more options and far more throughput, and comparatively much less time working with MetaTrader, which I am still learning about. So please understand that if I ask a question (and it was a question, rather than a suggestion or advice), it really is to learn more - which I have now done, having finally found the MetaTrader application options and related documentation.
I can see what you mean: it is pretty limited in terms of options and nowhere near as sophisticated as other scalable distributed processing stacks. But hey, nothing to get so excited about, as Mr Egert seems to like to do in his responses...