[Archive!] Any rookie question, so as not to clutter up the forum. Professionals, don't pass it by. Couldn't go anywhere without you - 2. - page 119

 
Dimka-novitsek:
Really, pro, advise!!! What can it be?


Evaluate what you get in each mode. In visualization you watch a single run of the EA with the given parameters; when you tick "Optimize", the tester instead gives you several variants of the EA's result as the parameters involved in it are varied. In that case nothing is shown on the screen, but the EA is run internally with different parameter values.

In the tester's "optimisation" table you can enter values for the parameters used, e.g. for "timeframe": "From" and "To", i.e. the initial and final value of the parameter, plus the step by which it changes. As a result the tester will show several lines of results: the EA run on 5 minutes, then on 15 minutes, on one hour, and so on. I had trouble understanding this at the time.

 
drknn:

In both cases the first parameter is the name of the array. Only in the first case the parameter is described as "object array[]" and in the second as "object&array[]". A logical question arises: what is the difference between these two entries? I mean, if the parameters are specified in the same way, why the hell do we need an ampersand "&" when specifying a parameter in ArrayResize()? Does the ampersand play some role here or is it absolutely irrelevant and this is a developer's mistake?

You don't need to put the & sign when you call this function.
The sign is just there to tell you that your array will be changed inside ArrayResize(); that's why it is passed by reference.

 
sergeev:

You don't need to put the & sign when you call this function.
The sign is just there to tell you that your array will be changed inside ArrayResize(); that's why it is passed by reference.


That the ampersand isn't needed at the call site was already clear from the examples. I just needed to decide in what form the parameter should be written in the tooltip. Anyway, I got it, thanks. So I'll leave the tooltips for autocomplete functions the way they are written in MetaEditor's help. The result will look like this:

 
Help me please!!! When I test with visualisation, it works, but when I tick optimisation, something is wrong!!!
 
Dimka-novitsek:
Help me please!!! When I test with visualisation, it works, but when I tick optimisation, something is wrong!!!

Show a screenshot of which checkboxes you set in the optimisation parameters and what the values are.
 
Dimka-novitsek:
Help me please!!! When I test with visualisation, it works, but when I tick optimisation, something is wrong!!!

A lot is unclear from the question: what exactly does "something is wrong" look like? What did you expect when you checked "optimization"? Do you know what it is for? Did you look at "Expert properties" -> "Inputs"?
 
Vekker:


Evaluate what you get in each mode. In visualization you watch a single run of the Expert Advisor with the given parameters, while the "Optimization" checkbox gives you several variants of the EA's result as the parameters involved in it are varied. In that case nothing is shown on the screen, but the EA is run internally with different parameter values.

In the tester's "optimisation" table you can enter values for the parameters used, e.g. for "timeframe": "From" and "To", i.e. the initial and final value of the parameter, plus the step by which it changes. As a result the tester will show several lines of results: the EA run on 5 minutes, then on 15 minutes, on one hour, and so on. I had trouble understanding this at the time.


I've been watching for more than an hour and can't see anything. And above the progress line there should be some numbers separated by a slash; I've noticed they are always there, but now I can't see them!

When I look at it, there are 5 or so variants, but the visualization alone takes more than half a minute...

 
drknn:

Show a screenshot of which checkboxes you set in the optimisation parameters and what the values are.

Sure!!!
 
 
Dimka-novitsek:


Take-profit optimisation. Value = 150, so the Start should (in theory) also be 150. But if we assume the tester ignores that 150 and starts from 10 with a step of 10, then by the time it reaches 200 it will have done 20 tests.

Stop-loss optimisation. The same thing: it should start with the specified 50. But if it ignores that and starts from 15 with a step of 10, it will perform 15 more tests. That brings the total to 35.

Trailing. The same again: another 6 tests. In total, the tester has to run your Expert Advisor over the chart 41 times.

Wouldn't it be better to try to optimize parameters one by one, and specify the values in the "Value" and "Start" columns equal?

Try it.