Specific test agent doesn't stay connected to the cloud

 

Hello all,

I have at home around 10 computers, on which I run the cloud agent.

For 9 of them, I have no issues. They are always connected to the cloud, and I can see them in the status page.

However, for my newest PC (a Ryzen 7900X), I can't get it to stay connected to the cloud for long.

After restarting the agents, sometimes they appear as connected, but then they go to a "disconnected" state and stay there. They almost never appear in the status page; sometimes they appear and then disappear quickly.

I have already reinstalled Windows, and even when running the agents on Linux under Wine, the behavior is exactly the same.

Looking at the logs, it seems that the agent is killed right after receiving a job from the server.

CS 0 21:34:02.602 Network 4412 bytes of account info loaded

CS 0 21:34:02.602 Network 1478 bytes of tester parameters loaded

CS 0 21:34:02.602 Network 38084 bytes of input parameters loaded

CS 0 21:34:02.605 Network 17554 bytes of symbols list loaded (2260 symbols)

CS 0 21:34:02.605 Tester job 7191314681452057261 received from Cloud Server

CS 0 21:34:02.605 Tester file added, 103871 bytes loaded

CS 0 21:34:02.605 Network 46596 bytes of optimized inputs info loaded

CS 0 21:34:02.605 Tester successfully initialized

CS 0 21:34:02.605 Network 247 Kb of total initialization data received

CS 0 21:34:02.605 Tester AMD Ryzen 9 7900X 12-Core, 31864 MB

CS 0 21:34:02.605 Tester optimization pass 17944448 started

CS 0 21:34:02.607 Symbols NZDJPY: symbol to be synchronized

CS 0 21:34:02.662 Tester tester agent shutdown started

CS 0 21:34:02.662 Tester shutdown tester machine

CS 2 21:34:02.662 Symbols symbol NZDJPY synchronization canceled


When I try to use these agents from my local network, they work perfectly.

So it seems that there's some issue with the cloud agent for this type of hardware. One thing worth noting is that the PR for this hardware is much higher than the others (250+).

All my other hardware has a PR of at most 210. I wonder if the cloud could be incorrectly limiting high-PR agents to prevent fraud or something like that (maybe PR is stored in a single byte, so capped at 255?).


Does anyone have similar issues with this hardware? How can I contact the developers to look into this case?

 
Hello, good evening.

Your post is very interesting, as I think there are many of us who are interested in solving this problem of disconnections. We do not know whether it is because the platform is not being used much these days, or whether these disconnections are deliberate on the part of the MQL people.

I currently have seven computers, and every day I find machines on which all the agents are completely disconnected; even if you restart the agents, they do not reconnect until many hours later. Let's see if there is someone who can give us an explanation.

By the way, is the processor that gives you a PR of 250 the 7900X?

Best regards.
 

I notice disconnections on my other machines every now and then, maybe twice a week or so, but they do stay connected most of the time.

But specifically for the Ryzen 7900X, it's very rare for it to connect and stay connected for any length of time.

Yes, the PR 250 comes from this Ryzen 7900X.

 
Emerson Gomes #:

I notice disconnections on my other machines every now and then, maybe twice a week or so, but they do stay connected most of the time.

But specifically for the Ryzen 7900X, it's very rare for it to connect and stay connected for any length of time.

Yes, the PR 250 comes from this Ryzen 7900X.

All the computers I have are Intel, specifically 10900Ks, and on all of them I suffer disconnections many days of the week. I also know some colleagues who suffer the same disconnections, and we can't find an explanation for this.

How long have you had the 7900X connected to the network? I have never used an AMD processor for MQL. Thanks for your answer.

 
Robertomcat #:

All the computers I have are Intel, specifically 10900Ks, and on all of them I suffer disconnections many days of the week. I also know some colleagues who suffer the same disconnections, and we can't find an explanation for this.

How long have you had the 7900X connected to the network? I have never used an AMD processor for MQL. Thanks for your answer.


I have some Intel and some AMD mixed. They behave more or less the same, meaning they stay connected most of the time but still disconnect every day or two. Sometimes they reconnect automatically; sometimes they need to be restarted.

For the 7900X I have underclocked the CPU and now it's getting connected to the network, with PR 247.

Honestly I am not sure if this is really what fixed it or just some coincidence.


Another annoying thing is that sometimes the agent appears as connected but it's not.

I noticed this when running optimizations locally and some agents did not respond. They appear as "in use by another terminal", but checking the machine I can see they are completely idle. After a restart they work again.


So, yes, the agents are very unstable. Maybe a dirty workaround would be to have a watchdog that restarts the agents when they disconnect or become unresponsive.
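A minimal sketch of such a watchdog, in Python for illustration. The failure strings come from the logs quoted in this thread; the idea of tailing the agent log and the service name are assumptions about a particular setup, not documented MetaTester behavior:

```python
import subprocess

# Log markers that, per the log excerpts in this thread, appear when an
# agent kills a job or cannot stop its tester process.
FAILURE_MARKERS = (
    "tester agent shutdown started",
    "cannot stop tester process",
)

def needs_restart(recent_log_lines) -> bool:
    """True if any recent agent log line contains a known failure marker."""
    return any(marker in line
               for line in recent_log_lines
               for marker in FAILURE_MARKERS)

def restart_agent(service_name: str) -> None:
    """Restart an agent installed as a Windows service via sc.exe.

    The service name is machine-specific; the placeholder used in the
    usage comment below is hypothetical.
    """
    subprocess.run(["sc", "stop", service_name], check=False)
    subprocess.run(["sc", "start", service_name], check=False)

# Example usage (not executed here): tail the agent's log periodically and
# restart when a failure marker shows up, e.g.
#   if needs_restart(tail_of_log):
#       restart_agent("MetaTester5-1")   # placeholder service name
```

This only papers over the problem, of course; the agent would still forfeit the job that triggered the restart.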

Apparently there's no way to contact the developers to report this issue.

 

I had the same abort issue with my i9-13900K PC at stock settings. After installation, it gets plenty of jobs, but all of them abort immediately after getting the job. On a fresh install, it wouldn't even create the temp or history folders, although it would look for them and try to access them, resulting in IO errors. However, having run it successfully for a while, I can confirm that their pre-existence does not resolve the issue. Within a few hours of aborting countless tasks, the cloud network seems to really slow down and send it jobs rarely, probably due to the repeated failures. The only solution I have found thus far is to limit the maximum CPU speed to 5.7 GHz (it can easily do 6.0 GHz). As soon as I bump it up to 5.8 GHz (the default speed), it goes back to aborting every incoming task at the synchronization phase. I have run many stress/error tests (including single-core only, to test at maximum clock speed) to verify that the CPU is stable and not making any computational mistakes, and it has passed each time. It seems that this failure starts to happen around a PR of 250 or above. Interestingly, now that my eyes are opened to this mode of failure, I have noticed it randomly in logs even after limiting the maximum CPU speed to 5.7 GHz, and have even seen it on occasion in the logs of slower machines in the low 200 PR range. The key is the last four lines of your log: history to be synchronized, tester shutdown started, tester shutdown, history synchronization canceled, all occurring within a second of receiving the job from the server.

My guess is that they're running a dead loop with say 100,000 iterations, and at the start of a job, they are measuring tick time at the beginning and end of the loop.  When a CPU is so fast it can complete the loop in a shorter time than the resolution of the system timer, the elapsed time of the loop measures zero and the task aborts.  This method may have worked fine in the days of 3rd-generation Intel processors, but processing speed has increased substantially since that time!  If this is in fact what they are doing, they need to either call the QueryPerformanceCounter API instead of the system timer for increased resolution, or increase the number of iterations in the loop so that the time can once again be successfully measured on a very fast CPU.  Or even better, reconsider the use of such a loop at the beginning of each task to begin with, since there is an existing PR number that has been more carefully calculated.  However, the issue doesn't seem connected to the PR number itself; I say this because this failure occurs randomly from time to time (which would be expected with evaluations of the system timer, depending on where the loop randomly caught it), and when bumping the CPU speed up by a mere 100 MHz, it immediately starts aborting every job, despite the PR number not being updated right away to reflect the increased processing speed.  Or perhaps this problem is occurring due to a multi-threading race issue, where again, a dead loop is used but expires before the other thread is given a slice of time.  Threads are supposed to give up their time by calling a Wait API, or SleepEx 1, 1—not by running dead loops that waste unnecessary power and that could get dangerously short on a very fast CPU.
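To make the hypothesized failure concrete (this is an illustration, not MetaTester's actual code), here is a small simulation of a coarse system timer. The 16 ms granularity is an assumption, roughly matching the default Windows timer tick; the 2 ms and 40 ms loop durations are hypothetical fast-CPU and slow-CPU cases:

```python
TIMER_RES_MS = 16  # assumed coarse-timer granularity (~ default Windows tick of 15.6 ms)

def coarse_read(true_time_ms: int) -> int:
    """What a low-resolution timer reports: time rounded down to the last tick."""
    return true_time_ms - (true_time_ms % TIMER_RES_MS)

def measured_elapsed(start_ms: int, loop_duration_ms: int) -> int:
    """Elapsed time of a calibration loop as seen through the coarse timer."""
    return coarse_read(start_ms + loop_duration_ms) - coarse_read(start_ms)

# A fast CPU finishing the loop in 2 ms can measure an elapsed time of zero,
# the condition hypothesized to make the agent abort the job...
fast_cpu = measured_elapsed(start_ms=1000, loop_duration_ms=2)   # 0

# ...while a CPU taking 40 ms (longer than one tick) always measures nonzero.
slow_cpu = measured_elapsed(start_ms=1000, loop_duration_ms=40)  # 48
```

With a high-resolution counter instead (QueryPerformanceCounter on Windows, `time.perf_counter` in Python), the same 2 ms loop would measure a nonzero elapsed time, which is why switching APIs or lengthening the loop would both mask the problem.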

@MetaQuotes @Renat Fatkhullin this is not a one-off issue affecting one user with a defective system, but a repeatable bug in the MetaTester 5 Agent when using a PC with a modern, fast processor having a PR around or above 250. Could you guys please look into this, get the right developers made aware of this issue, and provide an update? Below is a log from a couple of days ago on one of my agents demonstrating this issue, in its random form, with my CPU limited to 5.7 GHz in the BIOS. Over the last three days of logs, these two events in a row are the only failures of this type on this agent. But if I return the CPU to its factory default speed of 5.8 GHz, every single job over every agent will fail in this way:

  • At 5:21:19.723, a job comes in.  At 5:21:20.498, the agent spontaneously shuts down, and an error (2) is reported as synchronization is canceled and the job forfeited.
  • From 5:22:24.709 to 5:25:28.857, the agent does some needless network fidgeting.
  • At 6:13:57.312, a job comes in.  At 6:14:05.525, the agent spontaneously shuts down, and an error (2) is reported as synchronization is canceled and the job forfeited.
  • At 6:22:25.549, a job comes in.  At 6:22:59.336, synchronization successfully completes.  At 6:23:07.418, a pass is completed, with more successful passes following.  This is the normal sequence.
CS      0       04:46:03.942    Network MQL5 Cloud Network server agent4.mql5.net selected after rescan (ping 24 ms)
CS      0       04:46:03.971    Network connected to agent4.mql5.net
CS      0       04:46:04.033    Network authorized on agent4.mql5.net for shalem2014
CS      0       05:21:19.723    Network 4412 bytes of account info loaded
CS      0       05:21:19.723    Network 1478 bytes of tester parameters loaded
CS      0       05:21:19.723    Network 91332 bytes of input parameters loaded
CS      0       05:21:19.723    Network 961 bytes of symbols list loaded (201 symbols)
CS      0       05:21:19.723    Tester  job 7195501617435642734 received from Cloud Server
CS      0       05:21:19.723    Tester  file added, 362285 bytes loaded
CS      0       05:21:19.723    Network 121932 bytes of optimized inputs info loaded
CS      0       05:21:19.724    Tester  successfully initialized
CS      0       05:21:19.724    Network 581 Kb of total initialization data received
CS      0       05:21:19.724    Tester  13th Gen Intel Core i9-13900K, 130812 MB
CS      0       05:21:19.724    Tester  optimization pass 557 started
CS      0       05:21:19.728    Symbols XAUUSD: symbol to be synchronized
CS      0       05:21:19.898    Symbols XAUUSD: symbol synchronized, 3720 bytes of symbol info received
CS      0       05:21:20.498    Tester  tester agent shutdown started
CS      0       05:21:20.498    Tester  shutdown tester machine
CS      2       05:21:20.498    History history XAUUSD synchronization canceled
CS      2       05:22:21.499    Tester  cannot stop tester process [0, 0]
CS      0       05:22:24.709    Network connected to agent4.mql5.net
CS      0       05:22:24.773    Network authorized on agent4.mql5.net for shalem2014
CS      0       05:23:26.167    Network connected to agent4.mql5.net
CS      0       05:23:26.229    Network authorized on agent4.mql5.net for shalem2014
CS      0       05:24:27.324    Network connected to agent4.mql5.net
CS      0       05:24:27.385    Network authorized on agent4.mql5.net for shalem2014
CS      0       05:25:28.793    Network connected to agent4.mql5.net
CS      0       05:25:28.857    Network authorized on agent4.mql5.net for shalem2014
CS      0       06:13:57.312    Network 4412 bytes of account info loaded
CS      0       06:13:57.312    Network 1478 bytes of tester parameters loaded
CS      0       06:13:57.312    Network 74948 bytes of input parameters loaded
CS      0       06:13:57.321    Network 18600 bytes of symbols list loaded (3154 symbols)
CS      0       06:13:57.322    Tester  job 7195515086453087051 received from Cloud Server
CS      0       06:13:57.322    Tester  file added, 740991 bytes loaded
CS      0       06:13:57.322    Network 106892 bytes of optimized inputs info loaded
CS      0       06:13:57.322    Tester  successfully initialized
CS      0       06:13:57.322    Network 956 Kb of total initialization data received
CS      0       06:13:57.322    Tester  13th Gen Intel Core i9-13900K, 130812 MB
CS      0       06:13:57.322    Tester  optimization pass 1423 started
CS      0       06:13:57.395    Symbols WIN$N: symbol to be synchronized
CS      0       06:13:57.747    Symbols WIN$N: symbol synchronized, 3720 bytes of symbol info received
CS      0       06:14:05.525    Tester  tester agent shutdown started
CS      0       06:14:05.525    Tester  shutdown tester machine
CS      2       06:14:05.559    History history WIN$N synchronization canceled
CS      2       06:15:06.526    Tester  cannot stop tester process [0, 0]
CS      0       06:15:08.735    Network connected to agent4.mql5.net
CS      0       06:15:08.808    Network authorized on agent4.mql5.net for shalem2014
CS      0       06:16:10.220    Network connected to agent4.mql5.net
CS      0       06:16:10.284    Network authorized on agent4.mql5.net for shalem2014
CS      0       06:22:25.549    Network 4412 bytes of account info loaded
CS      0       06:22:25.549    Network 1478 bytes of tester parameters loaded
CS      0       06:22:25.549    Network 8900 bytes of input parameters loaded
CS      0       06:22:25.549    Network 933 bytes of symbols list loaded (127 symbols)
CS      0       06:22:25.549    Tester  job 7195516933289033033 received from Cloud Server
CS      0       06:22:25.549    Tester  file added, 197742 bytes loaded
CS      0       06:22:25.549    Network 6924 bytes of optimized inputs info loaded
CS      0       06:22:25.549    Tester  successfully initialized
CS      0       06:22:25.549    Network 223 Kb of total initialization data received
CS      0       06:22:25.549    Tester  13th Gen Intel Core i9-13900K, 130812 MB
CS      0       06:22:25.549    Tester  optimization pass 60064634307 started
CS      0       06:22:25.551    Symbols GBPAUD: symbol to be synchronized
CS      0       06:22:25.597    Symbols GBPAUD: symbol synchronized, 3880 bytes of symbol info received
CS      0       06:22:37.528    History GBPAUD: load 27 bytes of history data to synchronize in 0:00:00.180
CS      0       06:22:37.528    History GBPAUD: history synchronized from 2015.01.02 to 2022.12.30
CS      0       06:22:37.561    History GBPAUD,H4: history cache allocated for 12509 bars and contains 1554 bars from 2015.01.02 08:00 to 2015.12.31 16:00
CS      0       06:22:37.561    History GBPAUD,H4: history begins from 2015.01.02 08:00
CS      0       06:22:37.601    History GBPAUD,Monthly: history cache allocated for 104 bars and contains 12 bars from 2015.01.01 00:00 to 2015.12.01 00:00
CS      0       06:22:37.601    History GBPAUD,Monthly: history begins from 2015.01.01 00:00
CS      0       06:22:37.635    History GBPAUD,Weekly: history cache allocated for 419 bars and contains 53 bars from 2014.12.28 00:00 to 2015.12.27 00:00
CS      0       06:22:37.635    History GBPAUD,Weekly: history begins from 2014.12.28 00:00
CS      0       06:22:37.668    History GBPAUD,Daily: history cache allocated for 2089 bars and contains 263 bars from 2015.01.02 00:00 to 2015.12.31 00:00
CS      0       06:22:37.668    History GBPAUD,Daily: history begins from 2015.01.02 00:00
CS      0       06:22:37.702    History GBPAUD,H1: history cache allocated for 50020 bars and contains 6200 bars from 2015.01.02 08:00 to 2015.12.31 19:00
CS      0       06:22:37.702    History GBPAUD,H1: history begins from 2015.01.02 08:00
CS      0       06:22:37.737    History GBPAUD,M15: history cache allocated for 200080 bars and contains 24800 bars from 2015.01.02 08:00 to 2015.12.31 19:45
CS      0       06:22:37.737    History GBPAUD,M15: history begins from 2015.01.02 08:00
CS      0       06:22:42.193    History GBPAUD,M5: history cache allocated for 600235 bars and contains 74395 bars from 2015.01.02 08:05 to 2015.12.31 19:55
CS      0       06:22:42.311    History GBPAUD,M5: history begins from 2015.01.02 08:05
CS      0       06:22:42.479    Symbols GBPJPY: symbol to be synchronized
CS      0       06:22:42.510    Symbols GBPJPY: symbol synchronized, 3880 bytes of symbol info received
CS      0       06:22:58.131    History GBPJPY: load 27 bytes of history data to synchronize in 0:00:00.586
CS      0       06:22:58.131    History GBPJPY: history synchronized from 2015.01.02 to 2023.01.18
CS      0       06:22:58.132    Symbols AUDJPY: symbol to be synchronized
CS      0       06:22:59.133    Symbols AUDJPY: symbol synchronized, 3880 bytes of symbol info received
CS      0       06:22:59.336    History AUDJPY: load 27 bytes of history data to synchronize in 0:00:00.043
CS      0       06:22:59.336    History AUDJPY: history synchronized from 2012.01.02 to 2023.01.19
CS      0       06:23:07.418    Tester  60064634307 : passed in 0:00:29.937 (history synchronized in 0:00:15.820)
CS      0       06:23:07.418    Tester  optimization finished
CS      0       06:28:04.520    Tester  optimization pass 166011277236 started
CS      0       06:28:17.565    Tester  166011277236 : passed in 0:00:13.041
CS      0       06:28:17.565    Tester  optimization finished
CS      0       06:28:17.603    Tester  optimization pass 124875273818 started
CS      0       06:28:29.461    Tester  124875273818 : passed in 0:00:11.857
CS      0       06:28:29.461    Tester  optimization finished
CS      0       06:29:20.657    Tester  optimization pass 118243728157 started
CS      0       06:29:32.780    Tester  118243728157 : passed in 0:00:12.120
CS      0       06:29:32.780    Tester  optimization finished
 
Shalem Loritsch #:

I had the same abort issue with my i9-13900K PC at stock settings. At the beginning, it gets lots of jobs, but every last one of them aborts immediately after getting the job. On a fresh install, it wouldn't even create the temp or history folders, although it would look for them and try to access them, resulting in IO errors. However, having run it successfully for a while, I can confirm that their pre-existence does not resolve the issue. Within a few hours of aborting countless tasks, the cloud network seems to really slow down and send it jobs rarely. The only solution I have found thus far was to limit the maximum CPU speed to 5.7 GHz (it can easily do 6.0 GHz). As soon as I bump it up to 5.8 GHz (the default speed), it goes back to aborting every incoming task at the synchronization phase. I have run many stress/error tests (including single-core only, to test at maximum clock speed) to verify that the CPU is stable and not making any computational mistakes, and it has passed each time. It seems that this failure starts to happen around a PR of 250 or above. Interestingly, now that my eyes are opened to this mode of failure, I have noticed it randomly in logs even after limiting the maximum CPU speed to 5.7 GHz, and have even seen it on occasion in the logs of slower machines in the low 200 PR range. The key is the last four lines of your log: history to be synchronized, tester shutdown started, tester shutdown, history synchronization canceled, all occurring within a second of receiving the job from the server.

My guess is that they're running a dead loop with say 100,000 iterations, and measuring tick time at the beginning and end of the loop immediately at the start of a job.  When a CPU is so fast it can complete the loop in a shorter time than the resolution of the system timer, the elapsed time of the loop is zero and the task aborts.  This method may have worked fine in the days of 3rd-generation Intel processors, but processing speed has increased substantially since that time!  If this is in fact what they are doing, they need to either call the QueryPerformanceCounter API instead of the system timer for increased resolution, or increase the number of iterations in the loop.  Or even better, reconsider the use of such a loop at the beginning of each task to begin with, since there is an existing PR number that has been more carefully calculated.  The issue doesn't seem connected to the PR number.  I say this because this failure occurs randomly from time to time (which would be expected with evaluations of the system timer, depending on where the loop randomly caught it), and when bumping the CPU speed up by a mere 100 MHz, it immediately starts aborting every job, despite the PR number not being updated right away to reflect the increased processing speed.

@MetaQuotes, this is not a one-off issue, but a repeatable bug in the MetaTester 5 Agent when using a PC with a modern, fast processor with a PR around or above 250.  Could you guys please look into this, get the right developers made aware of this issue, and provide an update?

That's a very good hypothesis; it would make complete sense, and I can confirm that bumping the CPU clock by just a little bit is enough to make the agent refuse to run any jobs right away.

 
But the 13900K processor is a hybrid of P-cores and E-cores. Could it be that the MetaTester software is not mature enough to work with that architecture? And if you are overclocking the processor, it is also possible that, although no errors can be detected with the naked eye, errors exist inside the processor.

I have the 10900K, which gives between 225 and 230 PR, and the disconnection of the agents selling their processing power still seems random and inexplicable. Unless there is an oversupply of agents around the world and the servers can't handle that many. Anyway, MetaTester is updated every week, so they should be optimizing it for the new processors and their new architectures. But because this world is so opaque, there is no information anywhere.

It seems that the two processors you have, the 7900X and the 13900K, are very powerful, but really expensive if you want to invest in selling processing power in the future. Although, looking at the PR that both processors give, you're probably assured of getting work out of the platform for a long time. AMD has always been behind Intel on MetaTester, but that seems to be changing.
 

I haven't had any serious issues with the split architecture in any software, although I have heard it can affect protections in some games. Anyway, one of the reasons I hadn't said anything until now about this speed-related aborting issue was that I was only aware of it happening on one processor, the i9-13900K. But now with this thread, we have a report of the same thing happening on the new flagship AMD processors too, and they do not use the split architecture. So it is not an architecture problem; rather, this is the first time that either Intel or AMD have managed to get single-thread execution this fast, and it appears to have revealed a bug in MetaTester 5. Regarding overclocking, I mentioned that this issue is occurring at default/stock/factory speeds, and that I actually have to underclock the CPU to make MetaTester 5 work correctly. Boosting the CPU voltage (to improve stability if it's on the edge of making mistakes) makes no difference here; MetaTester 5 simply needs the CPU to go slower. Additionally, there are several software tools available for stability testing and system validation, which run the CPU through all of its paces and functions in complicated calculations, where the results are compared against known correct answers to make sure the CPU is both stable and also not making calculation errors. As I noted earlier, my CPU is stable to 6.0 GHz, and this issue is happening down at 5.8 GHz, the factory default speed, regardless of voltage. The only solution I have found thus far is to limit the maximum speed of the CPU to 5.7 GHz, and even still, this issue manifests randomly from time to time (as noted in my earlier post). This is a particularly annoying solution as it reduces the speed of single-threaded software on my PC, part of what I bought this expensive CPU to maximize!

The long-term disconnection of agents is a separate topic/server-side issue that should be discussed in its own thread so as not to distract from the local issue at hand here: Agents immediately aborting all incoming cloud network jobs on PCs that are "too fast" (around or above a PR of 250).

This is my personal computer, so return on investment via MetaTester 5 really isn't my focus here, although it is nice to be able to sell unused processing power. AMD has always had poor single-core performance and low IPC compared to Intel. This generation is the first time they have made major strides in these areas instead of just doing what they do best: adding more and more cores to make a slow CPU seem faster. They still haven't managed to get idle power consumption down to Intel's level though; while AMD CPUs have been more efficient than Intel CPUs at full load for some time now, they waste 10x as much power at idle. For example, the 13900K idles down to about 3 W, while the comparable 7950X wastes about 30 W doing nothing.

 

Well, this isn't really the goal of this topic, but I do have issues with Intel's big.LITTLE-style CPU design.

I have an i9-12900K. When running backtests, MetaTrader will allocate 100% CPU on all E-cores, leaving all P-cores idle.

However, if I set the task affinity to allow only the P-cores, it then puts 100% usage on all P-cores, and, of course, the process speeds up several times over.

But if I allow all P-cores plus a single E-core, it allocates 100% of the load only to that E-core while all P-cores go idle again, which is bizarre.

It's not very clear how Windows (or... whoever) decides that a given load should either run on P or E cores. Clearly, it's not doing a good job in this case.

My super dirty workaround for this issue was to install two instances of the tester in different folders and, using Process Lasso, pin one instance to the P-cores and the other to the E-cores.
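For anyone without Process Lasso, the same pinning can be done with cmd.exe's `start /affinity`, which takes a hex bitmask with one bit per logical CPU. A small helper to build the mask; the core numbering below is an assumption (on a 12900K the hyperthreaded P-cores typically expose logical CPUs 0-15 and the E-cores 16-23, but verify the layout on your own machine, e.g. in Task Manager):

```python
def affinity_mask(cores) -> str:
    """Hex bitmask for `start /affinity`: bit N set = logical CPU N allowed."""
    mask = 0
    for core in cores:
        mask |= 1 << core
    return format(mask, "X")

# Assumed 12900K layout: logical CPUs 0-15 = P-cores (with HT), 16-23 = E-cores.
p_core_mask = affinity_mask(range(0, 16))    # "FFFF"
e_core_mask = affinity_mask(range(16, 24))   # "FF0000"
```

One tester instance could then be launched with `start /affinity FFFF ...` and the other with `start /affinity FF0000 ...` (executable paths omitted; the mask values only hold if the assumed core numbering matches your system).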

 

I am having a similar problem on my Ryzen 7950X computer. I previously had to underclock it to make it work on the MQL5 Cloud Network at all, but several days ago its PR floated up to 251 and it stopped working again. It stays connected just fine and receives plenty of jobs throughout the day, but none of them ever even starts, over a period of several days. The agent logs all look like this:

CS      0       00:35:18.999    Network connected to agent2.mql5.net
CS      0       00:35:19.122    Network authorized on agent2.mql5.net for raMegaCPU
CS      0       01:15:18.857    Network 4412 bytes of account info loaded
CS      0       01:15:18.857    Network 1478 bytes of tester parameters loaded
CS      0       01:15:18.857    Network 20164 bytes of input parameters loaded
CS      0       01:15:18.857    Network 9071 bytes of symbols list loaded (850 symbols)
CS      0       01:15:18.857    Tester  job 7200633424749538813 received from Cloud Server
CS      0       01:15:18.857    Tester  file added, 56749 bytes loaded
CS      0       01:15:18.857    Network 9444 bytes of optimized inputs info loaded
CS      0       01:15:18.857    Tester  successfully initialized
CS      0       01:15:18.857    Network 175 Kb of total initialization data received
CS      0       01:15:18.858    Tester  AMD Ryzen 9 7950X 16-Core, 64675 MB
CS      0       01:25:53.281    Network connected to agent2.mql5.net
CS      0       01:25:53.379    Network authorized on agent2.mql5.net for raMegaCPU
CS      0       02:38:21.794    Tester  account info found with currency USD
CS      0       02:38:25.972    Network 1478 bytes of tester parameters loaded
CS      0       02:38:25.972    Network 22724 bytes of input parameters loaded
CS      0       02:38:25.973    Network 9071 bytes of symbols list loaded (850 symbols)
CS      0       02:38:25.973    Tester  job 7200650278201214304 received from Cloud Server
CS      0       02:38:25.973    Tester  file added, 58623 bytes loaded
CS      0       02:38:25.973    Network 24732 bytes of optimized inputs info loaded
CS      0       02:38:25.973    Tester  successfully initialized
CS      0       02:38:25.973    Network 125 Kb of total initialization data received
CS      0       02:38:25.973    Tester  AMD Ryzen 9 7950X 16-Core, 64675 MB
CS      0       03:21:38.351            rescan needed
CS      0       03:21:39.719    Network connected to agent1.mql5.net
CS      0       03:21:41.529    Network connected to agent2.mql5.net
CS      0       03:21:42.956    Network connected to agent3.mql5.net
CS      0       03:21:44.775    Network connected to agent4.mql5.net
CS      0       03:21:46.556    Network connected to agent1.mql5.net
CS      0       03:21:48.329    Network connected to agent2.mql5.net
CS      0       03:21:50.028    Network connected to agent3.mql5.net
CS      0       03:21:51.848    Network connected to agent4.mql5.net
CS      0       03:21:53.875    Network connected to agent1.mql5.net
CS      0       03:21:55.689    Network connected to agent2.mql5.net
CS      0       03:21:57.404    Network connected to agent3.mql5.net
CS      0       03:21:59.238    Network connected to agent4.mql5.net
CS      0       03:22:00.461    Network MQL5 Cloud Network server agent2.mql5.net selected after rescan (ping 29 ms)
CS      0       03:22:00.558    Network connected to agent2.mql5.net
CS      0       03:22:00.749    Network authorized on agent2.mql5.net for raMegaCPU
CS      0       04:28:50.815    Network 4412 bytes of account info loaded
CS      0       04:28:50.815    Network 1478 bytes of tester parameters loaded
CS      0       04:28:50.815    Network 130244 bytes of input parameters loaded
CS      0       04:28:50.819    Network 29003 bytes of symbols list loaded (5143 symbols)
CS      0       04:28:50.819    Tester  job 7200683276434948265 received from Cloud Server
CS      0       04:28:50.819    Tester  file added, 388808 bytes loaded
CS      0       04:28:50.819    Network 172860 bytes of optimized inputs info loaded
CS      0       04:28:50.820    Tester  successfully initialized
CS      0       04:28:50.820    Network 843 Kb of total initialization data received
CS      0       04:28:50.820    Tester  AMD Ryzen 9 7950X 16-Core, 64675 MB
CS      0       04:28:50.820    Tester  optimization pass 45320392586 started
CS      0       04:28:50.829    Symbols @YM: symbol to be synchronized
CS      0       04:28:50.883    Tester  tester agent shutdown started
CS      0       04:28:50.883    Tester  shutdown tester machine
CS      2       04:28:50.883    Symbols symbol @YM synchronization canceled
CS      2       04:29:51.884    Tester  cannot stop tester process [0, 0]
CS      0       04:34:12.760    Network connected to agent1.mql5.net
CS      0       04:34:15.224    Network connected to agent2.mql5.net
CS      0       04:34:16.921    Network connected to agent3.mql5.net
CS      0       04:34:18.793    Network connected to agent4.mql5.net
CS      0       04:34:20.135    Network connected to agent1.mql5.net
CS      0       04:34:21.655    Network connected to agent2.mql5.net
CS      0       04:34:23.080    Network connected to agent3.mql5.net
CS      0       04:34:24.896    Network connected to agent4.mql5.net
CS      0       04:34:26.617    Network connected to agent1.mql5.net
CS      0       04:34:28.473    Network connected to agent2.mql5.net
CS      0       04:34:30.472    Network connected to agent3.mql5.net
CS      0       04:34:32.288    Network connected to agent4.mql5.net
CS      0       04:34:33.828    Network MQL5 Cloud Network server agent2.mql5.net selected after rescan (ping 29 ms)
CS      0       04:34:33.882    Network connected to agent2.mql5.net
CS      0       04:34:34.033    Network authorized on agent2.mql5.net for raMegaCPU
CS      0       07:40:01.969    Network connected to agent2.mql5.net
CS      0       07:40:02.088    Network authorized on agent2.mql5.net for raMegaCPU
CS      0       09:37:53.687    Network 4412 bytes of account info loaded
CS      0       09:37:53.687    Network 1478 bytes of tester parameters loaded
CS      0       09:37:53.687    Network 6340 bytes of input parameters loaded
CS      0       09:37:53.688    Network 2977 bytes of symbols list loaded (738 symbols)
CS      0       09:37:53.688    Tester  job 7200762862178931947 received from Cloud Server
CS      0       09:37:53.688    Tester  file added, 252085 bytes loaded
CS      0       09:37:53.688    Network 7872 bytes of optimized inputs info loaded
CS      0       09:37:53.689    Tester  successfully initialized
CS      0       09:37:53.689    Network 285 Kb of total initialization data received
CS      0       09:37:53.689    Tester  AMD Ryzen 9 7950X 16-Core, 64675 MB
CS      0       10:34:37.319            rescan needed
CS      0       10:34:38.795    Network connected to agent1.mql5.net
CS      0       10:34:40.603    Network connected to agent2.mql5.net
CS      0       10:34:42.034    Network connected to agent3.mql5.net
CS      0       10:34:43.825    Network connected to agent4.mql5.net
CS      0       10:34:45.258    Network connected to agent1.mql5.net
CS      0       10:34:46.745    Network connected to agent2.mql5.net
CS      0       10:34:48.194    Network connected to agent3.mql5.net
CS      0       10:34:49.987    Network connected to agent4.mql5.net
CS      0       10:34:51.409    Network connected to agent1.mql5.net
CS      0       10:34:53.290    Network connected to agent2.mql5.net
CS      0       10:34:54.965    Network connected to agent3.mql5.net
CS      0       10:34:57.735    Network connected to agent4.mql5.net

Unlike the logs shown earlier in this thread, on mine the job often simply disappears with no cancellation or error noted. To work around this, I started mining Monero on three cores to load the CPU down, keeping my PR from climbing too high, and I uninstalled MetaTester and started over just to get it working again. MetaTester needs to be updated to handle the faster, newer CPUs coming onto the market; higher PR numbers need to be accepted.