I have 24 threads and 52 GB of RAM on my server motherboard. MT5 maxes out both the CPU and RAM at 100%. As I understand it, it's possible to add more, but a friend of mine runs such a setup on Chinese motherboards and it throttles the clock speed. And overheating the computer is dangerous.
Well, I'm leaning more and more towards the idea that there's no point in buying any of that hardware from AliExpress; it's better to get the parts from DHS or somewhere else and assemble it myself. At least then I have somewhere to complain if something goes wrong.
AMD Ryzen 9 3900X
This 2673 v3 CPU is better in terms of performance and benchmarks
You can run the optimisation yourself in several stages: https://www.mql5.com/ru/code/26132
But there's really no point in going through that many combinations.
That's the first thought that comes to mind, but several stages are not the same thing. Sometimes a slight change in a parameter changes the results significantly, so important parameter values will simply be missed. There is no dependency that lets you find the limits with large steps and then search with smaller steps within those limits. For this reason, to test some hypotheses, the limit on the number of tester passes should be removed. Whether there is a point or not, only experience can judge; research sometimes requires an unconventional approach.
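The risk described above can be shown with a toy example. The objective function below is hypothetical (not from the thread): a strategy whose profit has a narrow peak, so a coarse pass steps right over it and a subsequent fine search around the coarse "best" region would never recover it.

```python
# Toy illustration (hypothetical objective): a narrow optimum that a
# coarse-then-fine search misses, while exhaustive enumeration finds it.

def profit(x):
    # Sharp peak of width 3 around x = 17, flat (zero) everywhere else.
    return 100 if 16 <= x <= 18 else 0

full = max(range(0, 101), key=profit)        # exhaustive search, step 1
coarse = max(range(0, 101, 10), key=profit)  # coarse pass, step 10

print(profit(full))    # 100 — the exhaustive search hits the peak
print(profit(coarse))  # 0 — the coarse grid (0, 10, 20, ...) skips it entirely
```

Since the coarse pass scores zero everywhere, it gives no hint of where to place the fine search, which is exactly the objection raised in the post.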
And who will control what happens on Windows? Cool hardware won't help? Too bad, then what's the point of it? Security will only get worse; there are a lot of things the average user doesn't know about. Security is, first of all, a matter of user competence.
Well, I'm leaning more and more towards the idea that there's no point in buying any of that hardware from AliExpress; it's better to get the parts from DHS or somewhere else and assemble it myself. At least then I have somewhere to complain if something goes wrong.
AMD Ryzen 9 3900X
By the way, while researching Ryzen I found a pretty good deal on the Ryzen 7 2700X... For the price of the Ryzen 9 you can build a complete PC (without a discrete graphics card); yes, there will be fewer cores/threads, but the cost per core/thread is lower.
Of course, if you need a number cruncher for MT, then a dual-Xeon setup will probably still beat the competition on price/performance, but its future resale value and usefulness for other tasks, e.g. video processing, are highly questionable given the limited instruction set and the low clock speed (relative to modern CPUs)...
PS: I was also approaching this topic from the EA-optimisation side, but after a while working on an old office computer the topic somehow faded, and the need for a lot of cores/threads is gone (or is it, for now?).
That's the first thought that comes to mind, but several stages are not the same thing. Sometimes a slight change in a parameter changes the results significantly, so important parameter values will simply be missed. There is no dependency that lets you find the limits with large steps and then search with smaller steps within those limits. For this reason, to test some hypotheses, the limit on the number of tester passes should be removed. Whether there is a point or not, only experience can judge; research sometimes requires an unconventional approach.
Divide your full table of over 100,000,000 passes into 100 runs of 1,000,000 passes each. You will end up with the same complete table of results (the partial tables can be glued together programmatically).
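The splitting idea above can be sketched as follows. This is an illustrative partitioning of a pass-index space, not MT5's actual job format: each (start, stop) range would be run as a separate tester job, and the resulting partial tables concatenated afterwards.

```python
# Sketch (illustrative, not MT5's API): split a full optimisation grid of
# pass indices into fixed-size chunks, so each chunk can be run as a
# separate tester job and the partial result tables glued back together.

def chunk_bounds(total_passes, chunk_size):
    """Yield (start, stop) index pairs covering [0, total_passes)."""
    for start in range(0, total_passes, chunk_size):
        yield start, min(start + chunk_size, total_passes)

total = 100_000_000      # the full table of passes, as in the post
per_chunk = 1_000_000    # passes per partial run

chunks = list(chunk_bounds(total, per_chunk))
print(len(chunks))       # 100 partial runs covering the same complete table
```

Because the chunks tile the index space exactly, concatenating their result tables reproduces the single 100,000,000-pass table.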
Divide your full table of over 100,000,000 passes into 100 runs of 1,000,000 passes each. You will end up with the same complete table of results (the partial tables can be glued together programmatically).
How do you divide it? Finding first one parameter, then another, will not work, because the parameters are correlated with each other, and the correlation may take the form of a complex function whose very search is likely the wrong approach. So splitting the optimisation that way is a mistake.

A more correct way is described in my post: try to find the limits of good parameters using huge steps. But since results can change in leaps even for insignificant parameter changes, this approach is not suitable either. One could of course try to derive the parameter correlation functions, but why bother with such a complicated approach when you can do it more directly, simply by running a slow optimisation with a large number of passes. That, it would seem, requires writing your own tester.
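The correlation objection can be illustrated with another toy example (the objective is hypothetical): when two parameters only pay off together, optimising them one at a time converges to a mediocre result, while the full grid over all combinations finds the joint optimum.

```python
# Toy example (hypothetical objective) of correlated parameters: optimising
# one parameter at a time misses the joint optimum that a full grid finds.

def score(x, y):
    # Best result only when x and y move together (x == y == 7);
    # a weak secondary ridge along x + y == 5 traps the one-at-a-time search.
    return 100 if (x, y) == (7, 7) else (10 if x + y == 5 else 0)

grid = range(0, 11)

# One-at-a-time: fix y = 0 and pick the best x, then fix that x and pick y.
best_x = max(grid, key=lambda x: score(x, 0))
best_y = max(grid, key=lambda y: score(best_x, y))

# Full grid over all combinations.
joint = max(((x, y) for x in grid for y in grid), key=lambda p: score(*p))

print(score(best_x, best_y))  # 10 — stuck on the weak ridge
print(score(*joint))          # 100 — full enumeration finds (7, 7)
```

This is why the post argues for removing the pass limit and running the slow, exhaustive optimisation rather than decomposing it per parameter.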