Public discussion of the formula for calculating the cost of resources in the MQL5 Cloud Network - page 4

 
Renat:

We focus on the following categories of users:

  1. those who need to make calculations as quickly as possible
  2. those who are willing to accumulate resources when not in use, so that they can use what they have accumulated quickly later
  3. those who are willing to simply sell their own (or available) resources for money and then withdraw it (users outside trading)

And there is a feeling that in a year's time the third category of users will prevail: those who will use the schedule function to sell resources during idle time.

4. those who act as categories 3 and 2 until they need category 1, and will use category 1 much less often (traders mastering MQL5 for themselves).

It's hard to give percentages, of course.

//////////////////////////////

TIME    - the time spent calculating a task (or a batch of tasks), in milliseconds

must necessarily be present in the calculation.

Example: my machine crunches numbers 24 hours a day, so why shouldn't I collect money for every hour of it?

Question: will "slow" agents (e.g. those with high latency/ping) be cut off somehow?

//////////////////////////////

On the other hand, most users don't have the capacity of the developers' servers, which will handle the lion's share of computations at first.

What will be left for the sellers of their cores?

//////////////////////////////

Frankly speaking, I don't see any adequate figures to base the calculation on.

On the one hand, we must at least know the cost of running the cloud and start from what is profitable for MetaQuotes (the lowest possible price tag). Suppose I, as a potential seller and/or buyer, would be satisfied with $1/day per core. Would the developers be satisfied with 10 cents of that?

As for pricing from the user's side, based on the cost of the computer... well, I don't know. Even in this thread, figures of $500 and $2,000 have been quoted. That's a pretty wide spread.

I guess we still need to start with how much the buyer is willing to pay and how many such buyers there will be.

Maybe take the average price of an Expert Advisor (depending on complexity) as a baseline? Programmers working on commission could estimate it. Then adjust the coefficients, for example based on the total load on the cloud.

 
radioamator:

I propose quoting the cost of an hour of 100 PR units on a chart available on the MQL5 website. Buyers and sellers of CPU time place their bids to buy or sell an hour of 100 PR units through the website. All bids over some period, for example the last 120 days (roughly a quarter), are accumulated, and the equilibrium price PRice120 is calculated from them. This equilibrium price becomes the price of an hour of 100 PR units. If a seller's bid price is below PRice120, his processor time is sold; if it is above, it is not. For buyers, the opposite applies.

The period over which bids are accumulated is chosen by each buyer and seller individually from several options: 30 days, 60 days, etc. The deviation of one's bid price from the equilibrium price at which the bid triggers is also chosen individually.
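The proposed mechanism could be sketched roughly like this (Python; the bid structure, the 120-day window constant, and the use of a median as the aggregation rule are my assumptions — the proposal does not say how the equilibrium is computed from accumulated bids):

```python
from dataclasses import dataclass
from statistics import median

@dataclass
class Bid:
    side: str      # "buy" or "sell"
    price: float   # offered price for one hour of 100 PR units
    day: int       # day the bid was placed

def equilibrium_price(bids, today, window_days=120):
    """PRice120-style equilibrium over the last `window_days` days.

    The aggregation rule is unspecified in the proposal; a median of
    all accumulated bid prices is used here as one plausible choice.
    """
    recent = [b.price for b in bids if today - b.day < window_days]
    return median(recent) if recent else None

def executes(bid, price120):
    """A seller trades when asking at or below the equilibrium price;
    a buyer trades when offering at or above it (the 'opposite' rule)."""
    if bid.side == "sell":
        return bid.price <= price120
    return bid.price >= price120
```

Whether a bid at exactly the equilibrium price executes is also left open in the proposal; the sketch includes it on both sides.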

Too complicated, in my opinion. The price should be set centrally, based on statistics and additional information. The full statistics are visible only to the developers, so they are the ones holding all the cards.

The price could then be adjusted periodically (say, once a quarter) based on supply and demand. Suppose you set a price of 1 cent, but far more people want to use the cloud than there are resources to serve them: then you have to raise the price to make providing capacity more attractive.

If the price is too high, buyers will leave (for private networks or elsewhere), and the price will have to be reduced.

The only question is which price to base it on.

 
Mischek:
You have corporate clients who have bought TeamWox. Presumably someone at your company maintains relations with them. Maybe you should try offering them the idea of putting the 80-90% of their computing power that sits idle to use. They already trust your company, and the price question could be settled quickly - it might end up close to fair and optimal.

Another idea is that we will try to involve huge communities of enthusiasts who have been involved for decades (almost always for free) in various distributed computing projects (SETI@home and similar ones on the BOINC platform).

We have a good incentive to pay for resources.

 

Renat:

We will run several synthetic tools on the MetaQuotes-Demo server, where the number of sellers, buyers and price can be monitored. The formula for calculating/adjusting the price will be publicly available so that everything is transparent.

If we need to explicitly change the base price or adjust the calculation formula, we can do so with a public discussion.

Good idea. Then we will have to allow trading by CPU time, and all brokerage companies will die of envy... :)
Renat:

Another idea is that we will try to involve huge communities of enthusiasts who have been participating (almost always for free) in various distributed computing projects(SETI@home and similar ones on the BOINC platform) for decades.

Will we be able to catch Martians in the charts? :)
 
Silent:

Question: will the "slow" agents (e.g. those with high latency/ping) be cut off somehow, or will they be admitted according to priority/performance?

They are identified on the cloud server and "performance and time" are adjusted accordingly. Mainly this is to combat cheating.


On the other hand, most users don't have the capacity of the developers' servers, where the lion's share of computation will be done at first.

What will be left for the sellers of their cores?

We do not plan to sell our own resources; our goal is to build a huge distributed network around the world.

Of course, some of our resources will be distributed for free at least at the initial stage.

On the one hand, we should at least know the cost of the cloud and start from what is profitable for MetaQuotes (the minimum possible price tag). Suppose I, as a potential seller and/or buyer, would be satisfied with $1/day per core. Would the developers be satisfied with 10 cents of that?

We are only operators of a distributed network, the aim is to create a cloud for tens and hundreds of thousands of calculation agents.

Look at the geographically distributed list of cloud servers - there will be more when the load increases.

Perhaps we should still consider how much the customer is willing to pay and how many customers there will be.

A simple variant: 1 Unit Price = Base Price * Func(Sellers, Buyers, Time)

As a result, the price will automatically be adjusted every hour depending on supply/demand.
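A minimal sketch of what such an hourly adjustment might look like (Python; the thread leaves Func unspecified, so the buyers/sellers ratio and the 10% clamp below are purely illustrative assumptions):

```python
def unit_price(base_price, sellers, buyers, max_step=0.10):
    """Recompute the unit price from current supply and demand.

    Func(Sellers, Buyers, Time) is modeled here as the buyers/sellers
    ratio, with the hourly move clamped to +/- max_step (10%) so the
    price adjusts smoothly instead of jumping. Both the ratio and the
    clamp are assumptions; the actual formula is not given in the thread.
    """
    if sellers == 0:
        factor = 1.0 + max_step        # no supply at all: push price up
    else:
        factor = buyers / sellers      # demand exceeding supply raises price
    factor = max(1.0 - max_step, min(1.0 + max_step, factor))
    return base_price * factor
```

With a base price of 1 cent, five times more buyers than sellers would raise the hourly quote by the maximum step to 1.1 cents, and the reverse imbalance would lower it to 0.9 cents.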

 
And one could take the crude "average across the board" cost of existing cloud services as a base.
 
Silent:
And one could take the crude "average across the board" cost of existing cloud services as a base.
Yes, that was a thought too. But there the price includes a whole computer with disks, memory and CPU and the rest of the security and backup infrastructure.
 
Renat:
Thanks for the clarification, that makes more sense.
 
Renat:
Yes, that was a thought too. But there the price includes the whole computer with disks, memory and CPU and the rest of the security and backup infrastructure.
Essentially the same as coming from the cost of the user's computer...
 
Renat:

The scheme is too complicated: no one is going to lift a finger (and here there is a whole manual bidding process) for such paltry sums. The system has to work in near-automatic mode.


I don't really mean trading. A buyer of CPU time wants to buy N hours of 100 PR units. He needs some way to tell the server: I, Ivan Ivanov, want to buy N hours and am willing to pay M cents for them. The buyer places a buy order in his personal cabinet; if his price is higher than or equal to some reference price (the base price, the 120-day equilibrium, or whatever), then the buyer gets the processor time. The point is that the buy/sell offers placed through the site are both commands to the server to buy/sell and statistical data for determining the price. The price chart is just for information.
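That dual role of an order, as both an execution command and a data point for the price statistics, might be sketched like this (Python; the class name, the fields, and the fixed reference price are placeholders for whichever reference the server would actually use):

```python
class TimeMarket:
    """Orders placed through the site serve two purposes at once:
    they feed the price statistics, and they execute automatically
    against the current reference price (no manual trading step)."""

    def __init__(self, reference_price):
        self.reference_price = reference_price  # e.g. the 120-day equilibrium
        self.history = []                       # every order is a statistic

    def place_order(self, side, hours, price):
        """Record the order, then fill it if it crosses the reference price."""
        self.history.append((side, hours, price))
        if side == "buy":
            return price >= self.reference_price
        return price <= self.reference_price   # "sell"
```

Note that an order is appended to `history` whether or not it fills, matching the idea that unfilled bids still feed the equilibrium calculation.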