How is the Cloud cost REALLY calculated?

 

I ran some experiments to check whether the Cloud can be used for real-world optimization tasks. I was very surprised by the results from a cost point of view.

Here is the site data:

I will detail the MACD Sample optimization.

Cost calculation from here: https://www.metatrader5.com/en/terminal/help/mql5cloud/mql5cloud_calculation

Cost = QuantPrice * PR * WorkTime (in ms).
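To make the formula concrete, here is a minimal sketch in Python. The quant price, PR rating, and work time below are made-up placeholder values for illustration only, not real MQL5 prices:

```python
# Hypothetical illustration of the documented formula:
#   Cost = QuantPrice * PR * WorkTime   (WorkTime in ms)
# All numbers below are placeholders, not actual MQL5 Cloud rates.

def cloud_cost(quant_price: float, pr: int, work_time_ms: float) -> float:
    """Cost of one pass, given the per-quant price, the agent's PR
    rating, and the measured work time in milliseconds."""
    return quant_price * pr * work_time_ms

# Example: an agent with PR 180 working for 2500 ms at a
# placeholder quant price of 1e-9.
cost = cloud_cost(1e-9, 180, 2500)
print(f"{cost:.6f}")
```

Note that this only models the execution-time part of the cost; it does not account for any traffic-related component, which is exactly the part the documentation leaves unexplained.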

I will not give the details for the TaurusTester optimization, but here is the total:

So the cost seems to be overestimated by a factor of 2 to 3.


The tasks page is talking about "task execution cost considering the traffic generated during the process".

But I couldn't find anywhere what the cost of this "traffic" could be. And mainly, I am wondering how the OUT traffic can be so huge?!

These optimizations do not use any special data processing (Frame), nor do they create data files, nothing. So how come an EA as simple as MACD Sample generates 416 MB of out traffic for 50 passes, which is more than 8 MB per pass? Is it this out traffic that is paid for?
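For reference, the per-pass figure is simply the reported total divided by the pass count, using the numbers quoted above:

```python
# Sanity check on the out-traffic figure reported by the tasks page:
# 416 MB of out traffic spread over 50 optimization passes.
total_out_mb = 416
passes = 50
per_pass_mb = total_out_mb / passes
print(f"{per_pass_mb} MB per pass")  # well over 8 MB each
```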

Thanks to MetaQuotes for clarifying this.

 
The MQL5 Cloud Network is quite data-hungry. The majority of this data usage comes from synchronizing market history database files on a per-broker basis, and it is often the primary source of delays. Based on my observations, MetaTester does a terrible job of managing its cache, to the point that it is almost as good as not having one. For example, it will redownload historical tick data for markets and brokers almost every time it starts a new job, regardless of already having the needed data in its Bases folder, and regardless of the data being for past years (which should be unchanging); the network traffic for a system whose Bases folder has just been purged is almost identical to that of one that has been running for years and has exceeded 400 GB. Consider that the cloud network consists mostly of individual desktop PCs, so every 4-16 agents is a different PC that has to do its own synchronization. As you can see, from an aggregate point of view (as you have here, requesting the job), this can add up quite quickly! And as an agent provider, this can be a bit of an issue too; my agents regularly pull hundreds of gigabytes of data each month.
 
Shalem Loritsch #:
The MQL5 Cloud Network is quite data-hungry. [...]

What is the relevance of this remark to the topic?

Agents downloading data is certainly not "out traffic", and hopefully not a paid task.

 
Alain Verleyen #:

What is the relevance of this remark to the topic?

Agents downloading data is certainly not "out traffic", and hopefully not a paid task.

Everything: since you are the one commissioning the job, it would be "out" traffic from your perspective and "in" traffic for the agent providers. And no, we are only paid for the individual optimization passes (they are listed in our log files, down to the millisecond), not for history synchronization (although whether MQL5 is charging YOU for brokering these large amounts of data between us is unknown to me). In fact, time after time I have seen in the logs that an agent gets a job, spends many seconds synchronizing history, and then never gets any paying passes. We make nothing when this happens. Furthermore, I have seen times where a PC gets a big job, loading all the agents and running for hours (18+ hours is the longest I have personally observed), only to have it cancelled at the end before completing the assigned passes. Despite consuming several kWh per PC, we get paid exactly nothing for those jobs too. Days like that are the lowest paying, because the agents were tied up with non-paying work for hours when they could have been free, taking small random jobs throughout the day.

"What's the relevance of all this?" You were speculating on what is paid for and why so much data is involved. I am providing experiential insight from the agent provider's end.