Crazy cache of testing agents - page 3

 
Renat Fatkhullin:
I forgot to tell you a secret - people will find a hundred reasons why they won't use a paid service, even if it costs $1.

This is the real underlying reason why selling is hard. But instead people come up with supposedly rational explanations :)

So if they don't try it, how can they like it? How about giving people the opportunity to use the network for free once a month for an hour?

 
Renat Fatkhullin:

And look at your own setup instead of operating with statements from the forum.

Perhaps you are right.

Hopefully, the serious slowdowns on the MQ-Demo side will finally be resolved soon.

 
fxsaber:

Perhaps you are right.

Hopefully the serious slowdowns on the MQ-Demo side will finally be resolved soon.

Try the new Friday build of MT5, 1545 - some things have been improved.

Also note that we have over 230,000 constantly active accounts on MetaQuotes-Demo MT5, and a real trading bacchanalia is going on there.
 
Renat Fatkhullin:

Having to read several gigabytes of data from a drive is "disgusting organisation"? Even just reading 1 GB of data from an SSD at an average speed of 200 MB/s takes 5 seconds. And what if there are 4-32 agents doing that?
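A minimal sketch of that back-of-the-envelope arithmetic, assuming the ~1 GB per agent and ~200 MB/s figures from above and a single shared drive:

```python
# Rough read-time estimate: time = data volume / drive throughput.
# The figures are the assumptions quoted above, not measurements.

GB = 1024 ** 3
data_per_agent = 1 * GB          # each agent reads ~1 GB of history/cache
throughput = 200 * 1024 ** 2     # ~200 MB/s average SSD read speed

for agents in (1, 4, 8, 16, 32):
    # Worst case: all agents share one drive, so reads are effectively serialized.
    seconds = agents * data_per_agent / throughput
    print(f"{agents:2d} agents -> ~{seconds:.0f} s just to read the data")
```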

Just think about the technical side of the task. Nothing is free, and nobody multiplies the technical requirements by zero.

The technical solution and the level of agent optimization are impressive - we put a huge amount of work into it and scraped milliseconds out of every process. Don't forget the data volumes: add more RAM, put in bigger SSDs, put in RAM disks, and everything will speed up.

Prices for all this are already reasonable, but the class and volume of the tasks being solved require a serious approach.

I have 64 GB of RAM in my system, and the tester with 32 agents uses at most 40 GB and then dumps about 1.1-2 GB of data to disk - yes, I agree, it could not all fit into RAM. But if I disable half of the agents, the remaining agents behave exactly the same, even though there is still plenty of free RAM.

Dear Admin:

1) Do you really think that such a load on the disk (hundreds of gigabytes of rewrites per day) is normal? How long do you think an ordinary SSD will last in this mode? (A rough lifespan estimate is sketched just after these questions.)

2) Do you really think it is right that the tester offers no way to adjust its resource usage at all?
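For a sense of scale, a minimal lifespan estimate under assumed figures - the endurance rating and the exact daily write volume below are illustrative, not numbers from this thread:

```python
# SSD wear estimate: rated write endurance (TBW) divided by daily writes.
# Both inputs are assumptions for illustration only.

rated_endurance_tb = 300   # typical rating for a mid-range consumer SSD
daily_writes_gb = 300      # "hundreds of gigabytes of rewrites per day"

days = rated_endurance_tb * 1024 / daily_writes_gb
print(f"~{days:.0f} days (~{days / 365:.1f} years) to exhaust the rated endurance")
```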

Once again: I solved my problem using the same methods you suggest. But it took me quite a while, and I can't say the solution always works correctly (for example, with several copies of the tester installed on different disks it sometimes leads to spontaneous removal of agents; I wrote to the service desk about this to no avail and, again, solved the problem myself). And not every user will be able to do that.

Renat Fatkhullin:

The topic starter opened the thread in "how much longer?!" mode and made unsubstantiated statements. If he had provided properly collected data, 50% of the questions would have fallen away at the data-collection stage.

What proof do you need? Don't you know how much RAM an agent consumes, how much cache it keeps on disk, how much data can be rewritten per day? Multiply it out and you get the result.
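Spelled out, that "multiply and get the result" estimate might look like this; the per-agent cache size and pass count are assumed figures, not numbers from the thread:

```python
# Daily rewrite volume = agents x cache dumped per pass x passes per day.
# The per-agent figures below are assumptions for illustration.

agents = 32                # local agents running in the tester
cache_per_pass_gb = 1.5    # cache flushed to disk per optimization pass (assumed)
passes_per_day = 10        # passes each agent completes per day (assumed)

total_gb_per_day = agents * cache_per_pass_gb * passes_per_day
print(f"~{total_gb_per_day:.0f} GB rewritten per day")   # ~480 GB/day with these inputs
```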

And anyway, why should I have to prove anything - am I accusing you of something? The point of the thread is: here is a problem, how can it be solved? Yet you, just like the service desk, for some reason start stating the obvious and praising the tester's optimization, when any sane programmer knows that no program is perfect.
 
About the cloud. The idea is great. And this is what it looks like in reality:



32 cents/day is a good day for my agents)) The cloud is fed by 2 machines with 32 agents each and 2 with 8 agents - 80 agents in total, running practically around the clock. The daily record was around $2.00, and often there is no profit at all. The red bar is the cost of 5 minutes of testing in the cloud - about 40,000 runs in 3 months (if I remember correctly). Earnings for 287,781 runs: just over 20 quid, and it took six months))))
It's a shame, but at the moment the cloud makes no sense for me.
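For the seller side, the arithmetic implied by those figures (all of the inputs are taken from the post above):

```python
# Seller-side economics of the cloud, using the figures quoted above.

total_runs = 287_781
total_earned_gbp = 20      # "just over 20 quid"
agents = 80
months = 6

print(f"~{total_earned_gbp / total_runs * 100:.4f} pence per run")
print(f"~{total_earned_gbp / (agents * months) * 100:.1f} pence per agent per month")
```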

It would be great if it worked like a torrent tracker: do 100 passes for others and get 80 back for yourself (20% going to the developer).

Tying your agents into a network is no easy task either. That is why all traders' associations died out a few years ago.

Or it could have been done as a paid network. You're wrong that no one will use paid services - it is only a question of adequate pricing.
 
alrane:
The red bar is the cost of 5 minutes of testing in the cloud - about 40,000 runs in 3 months (if I remember correctly). Earnings for 287,781 runs: just over 20 quid, and it took half a year))))

And if you were testing on your PC, how much time would it take?

 
alrane:
I have 64 GB of RAM in my system, and the tester with 32 agents uses at most 40 GB and then dumps about 1.1-2 GB of data to disk - yes, I agree, it could not all fit into RAM. But if I disable half of the agents, the remaining agents behave exactly the same, even though there is still plenty of free RAM.

Dear Admin:

1) Do you really think that such a load on the disk (hundreds of gigabytes of rewrites per day) is normal? How long do you think an ordinary SSD will last in this mode?

2) Do you really think it is right that the tester offers no way to adjust its resource usage at all?

Once again: I solved my problem using the same methods you suggest. But it took me quite a while, and I can't say the solution always works correctly (for example, with several copies of the tester installed on different disks it sometimes leads to spontaneous removal of agents; I wrote to the service desk about this to no avail and, again, solved the problem myself). And not every user will be able to do that.

What proof do you need? Don't you know how much RAM an agent consumes, how much cache it keeps on disk, how much data can be rewritten per day? Multiply it out and you get the result.

And anyway, why should I have to prove anything - am I accusing you of something? The point of the thread is: here is a problem, how can it be solved? Yet you, just like the service desk, for some reason start stating the obvious and praising the tester's optimization, when any sane programmer knows that no program is perfect.

You went to war and then ask, "why does it take so much ammo?"

My opinion: you categorically do not understand what tasks you are solving, and the mythical longevity of your SSD matters more to you than the tasks themselves. Yes, such a load is absolutely normal and expected.

Also, you don't want to admit that the tester runs YOUR programs, whose resource needs are completely unknown in advance. And it is you who are responsible for the volume of resources consumed.

There is no problem - except the one Aleks demonstrated: people will think of anything to avoid paying the bill.

 
alrane:
About the cloud. The idea is great. And here's what it looks like in reality:



32 cents/day is a good day for my agents)) The cloud is fed by 2 machines with 32 agents each and 2 with 8 agents - 80 agents in total, running practically around the clock. The daily record was around $2.00, and often there is no profit at all. The red bar is the cost of 5 minutes of testing in the cloud - about 40,000 runs in 3 months (if I remember correctly). Earnings for 287,781 runs: just over 20 quid, and it took six months))))

Let me get this straight:

  • Although you put your agents online, they hardly ever worked, because they did not receive enough orders - they were busy less than 0.1% of the time. So you cannot talk about "80 agents around the clock" at all.
  • You spent about $10 on a task that was distributed across 8,800 agents and cumulatively completed almost 8 days' worth of work. That is a very good price for that kind of acceleration (a rough per-agent-hour figure is sketched after this list).
  • Your own agents did not do a comparable amount of work for the network, so revenue and expenses cannot be compared.
  • You cannot compare runs - they are all different for different tasks. You can only compare equivalent computing power, which is what the quanta are calculated for.
  • The figures show that the network is very advantageous for consumers of computing power.
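A rough buyer-side version of the same arithmetic; reading "almost 8 days' worth of work" as cumulative single-agent compute time is an assumption, the other figures are from the list above:

```python
# Buyer-side cost of the task described above.
# Assumption: "almost 8 days' worth of work" means ~8 days of cumulative
# single-agent compute time spread across the cloud agents.

cost_usd = 10
cumulative_agent_hours = 8 * 24    # ~192 agent-hours in total

print(f"~${cost_usd / cumulative_agent_hours:.3f} per agent-hour of cloud compute")
```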

 
-Aleks-:

And if you were testing on your PC, how much time would it take?

On my bundle of 4 computers with 80 agents - about 6-10 hours.
Renat Fatkhullin:

You went to war and then ask, "why does it take so much ammo?"

My opinion: you categorically do not understand what tasks you are solving, and the mythical longevity of your SSD matters more to you than the tasks themselves. Yes, such a load is absolutely normal and expected.

Also, you don't want to admit that the tester runs YOUR programs, whose resource needs are completely unknown in advance. And it is you who are responsible for the volume of resources consumed.

There is no problem - except the one Aleks demonstrated: people will think of anything to avoid paying the bill.

What kind of war? Reread the first post:
alrane:

Has anyone encountered such a problem? How do you deal with it? What can cause such cache volumes?

If SSD longevity were more important to me, I probably wouldn't be using them at all. The point is that for the tester the bottleneck in the system is the hard disk! Fortunately I have SSDs, while the vast majority of users have regular HDDs, which makes the situation even worse.
Give me an example of software whose performance is limited by the hard disk rather than by the processor, memory, or graphics card. I personally have not come across such a situation, so for me this is not normal.
And it is impossible to widen this bottleneck (by using the system's free resources) through the tester itself. And you think that's normal? Do you really think it's so great? Then you don't have to answer.
 

Stop making a fuss.

Want a real showdown over tester efficiency and performance? Take one agent instance and one simple task, and log all the resources it uses in single, re-pass, and optimisation modes. Then I'll quickly bring you back down to earth - if you don't retract your words yourself after actually evaluating the task.
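One way to do that kind of logging, sketched with psutil; the process name metatester64.exe is an assumption about how the local test agents show up in the process list, so adjust it to whatever you actually see there:

```python
# Samples disk-write and memory counters of local tester agent processes.
# Assumes the agents appear in the process list as "metatester64.exe".
import time
import psutil

AGENT_NAME = "metatester64.exe"
INTERVAL_S = 5

previous = {}  # pid -> write_bytes at the last sample

while True:
    for proc in psutil.process_iter(["pid", "name"]):
        if (proc.info["name"] or "").lower() != AGENT_NAME:
            continue
        try:
            io = proc.io_counters()
            rss_mb = proc.memory_info().rss / 1024 ** 2
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue
        written_mb = (io.write_bytes - previous.get(proc.pid, io.write_bytes)) / 1024 ** 2
        previous[proc.pid] = io.write_bytes
        print(f"pid {proc.pid}: +{written_mb:.1f} MB written in the last "
              f"{INTERVAL_S}s, RSS {rss_mb:.0f} MB")
    time.sleep(INTERVAL_S)
```

Summing the per-interval deltas over a working day gives the total rewrite volume being argued about in this thread.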

And the SSD really is more important to you - you didn't spend so much time describing it for nothing. And you haven't even deigned to think about my explanations. You just pressed a button and, suddenly, resources are being consumed. And you can't be bothered to assess what is really going on under the bonnet and how much data is actually there.