I think this is a good reference for the flags:
http://osinavi.ru/asm/4.php
and for the unnecessary operators/comparisons...
I return to the difference I still don't understand in the execution time of two loops that are almost 100% identical in logic and in the number of checks and additions:
I already wrote above - I got fewer comparison operations in my code...
-----
From out-of-school studies, somewhere from the 90's:
1) If conditions A, B, C are independent, order them in an AND expression by increasing probability of being true, and in an OR expression by decreasing probability.
If they are dependent, they must not be combined into a single expression (i.e., do not write if (A && B)).
2) If there are nested loops A and B, the outer one should be the shorter one.
3) If there is a condition inside a loop, split the loop and move the condition outside it, if possible (see the sketch after this list).
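A minimal sketch of rule 3 in C++ (the function names and the doubling/negating work are made up for illustration; MQL is C-like, so the same shape applies there):

#include <vector>

// Before: the loop-invariant flag is re-tested on every iteration.
void scale(std::vector<double> &v, bool invert)
{
    for(std::size_t i = 0; i < v.size(); i++)
    {
        if(invert)                 // checked n times, though it never changes
            v[i] = -v[i];
        else
            v[i] = 2.0 * v[i];
    }
}

// After: the condition is checked once, and each branch gets its own loop.
void scaleSplit(std::vector<double> &v, bool invert)
{
    if(invert)
        for(std::size_t i = 0; i < v.size(); i++) v[i] = -v[i];
    else
        for(std::size_t i = 0; i < v.size(); i++) v[i] = 2.0 * v[i];
}

Whether this actually helps depends on the compiler; modern optimizers often perform this "loop unswitching" themselves.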
1) It's the other way around, which is exactly what you have done. If the first condition in a compound AND expression is false, why check all the others? So the condition most likely to be false goes first.
Although, yes - that is the same thing: sorting the statements in a compound condition by ascending probability of being true.
These are peculiarities of how conditional operators are executed in C.
Maybe I worded it badly, but it's easier to explain for everyone at once...
A && B && C
An AND chain should fail as early as possible, to avoid evaluating all the remaining conditions. The optimizer will not do this for the programmer (*), since it does not know what data the program will run on.
A || B || C
Here it is the other way around :-) because of Boolean arithmetic: A || B || C == !((!A) && (!B) && (!C)) (if you don't mix up the brackets and signs again).
(*) Optimizing compilers may rely on the conditions being ordered exactly this way, and build their internal machinery on these assumptions.
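A small hypothetical sketch of this ordering rule (the function names and probabilities are invented for illustration):

#include <cstdio>

bool rareFlag(int x)   { return x % 1000 == 0; } // true ~0.1% of the time
bool commonFlag(int x) { return x % 2 == 0;    } // true ~50% of the time

// &&: put the condition most likely to be FALSE first, so the chain
// short-circuits early and the remaining operands are never evaluated.
bool andOrdered(int x) { return rareFlag(x) && commonFlag(x); }

// ||: put the condition most likely to be TRUE first, for the same reason.
bool orOrdered(int x)  { return commonFlag(x) || rareFlag(x); }

int main()
{
    std::printf("%d %d\n", andOrdered(2000), orOrdered(3));
    return 0;
}

This relies on the guaranteed left-to-right short-circuit evaluation of && and || in C-like languages, and only applies when the conditions have no side effects the program depends on.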
I completely agree.
I return to the difference I still don't understand in the execution time of two loops that are almost 100% identical in logic and in the number of checks and additions:
So, once again: why does this variant from Kuznetsov's code:
run more than twice as fast as this one, which does exactly the same thing:
What compiler wonders are these?
Is it really possible that for a construction like this:
the compiler finds some special assembler search instruction for the processor? But there is an additional i < j check inside, isn't there?
Because the same thing written with for runs much slower:
The code of the demonstration script is attached.
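The original snippets are in the attached script and not reproduced above; purely as a hypothetical illustration of the kind of comparison being discussed (a while loop with an extra i < j guard versus a logically identical for loop), something like:

#include <cstddef>

// Hypothetical stand-ins: both compact an array in place by dropping
// zero elements and return the new length.

// while-style variant with the additional i < j check mentioned above.
std::size_t compactWhile(int arr[], std::size_t n)
{
    std::size_t i = 0, j = 0;
    while(j < n)
    {
        if(arr[j] != 0)
        {
            if(i < j)              // skip the copy when source == destination
                arr[i] = arr[j];
            i++;
        }
        j++;
    }
    return i;
}

// for-style variant doing exactly the same work.
std::size_t compactFor(int arr[], std::size_t n)
{
    std::size_t i = 0;
    for(std::size_t j = 0; j < n; j++)
        if(arr[j] != 0)
            arr[i++] = arr[j];
    return i;
}

Logically the two do the same work, yet the generated machine code can differ: a compiler may recognize one loop shape and vectorize it or emit specialized scan instructions, while compiling the other naively.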
This is how it often happens: you busy yourself with some unnecessary rubbish and discover something rather interesting.
Developers, could you take a look at the executable code and see what causes this difference?
One needs to understand the compiler's logic in order to create more optimal algorithms in the future.
What a strange situation in general. I think a lot depends on the compiler and on the machine you test on. Here is my result for your example on a 32-bit machine:
as you can see, there is no particular advantage. And here is a test of the array-deletion functions from the last variant:
Sometimes during testing the results differ considerably from previous runs, and it is unclear why; maybe it has something to do with some specifics of how MQL code execution gets interrupted.
When testing algorithms, one should:
* either use pre-prepared data sets (roughly similar to those actually required),
* or make a very large number of passes over RNG-generated data,
and in the tests:
* alternate/shuffle the order of the test runs, and
* observe pauses and flush all sorts of caches
(a sketch of such a harness follows this list).
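A minimal C++ sketch of such a harness (the variants, data size, and pass count are made up; the timing uses the standard <chrono> clock):

#include <algorithm>
#include <chrono>
#include <cstdio>
#include <random>
#include <utility>
#include <vector>

// Two hypothetical variants under test, stand-ins for the compared loops.
long sumA(const std::vector<int> &v) { long s = 0; for(int x : v) s += x; return s; }
long sumB(const std::vector<int> &v) { long s = 0; for(std::size_t i = 0; i < v.size(); i++) s += v[i]; return s; }

int main()
{
    std::mt19937 rng(42);
    std::vector<int> data(1000000);
    for(int &x : data) x = (int)(rng() & 0xFF);      // one pre-prepared data set shared by all variants

    typedef long (*Fn)(const std::vector<int> &);
    std::vector<std::pair<const char*, Fn> > tests;
    tests.push_back(std::make_pair("A", sumA));
    tests.push_back(std::make_pair("B", sumB));

    for(int pass = 0; pass < 5; pass++)              // several passes, not a single run
    {
        std::shuffle(tests.begin(), tests.end(), rng);  // shuffle the run order each pass
        for(std::size_t t = 0; t < tests.size(); t++)
        {
            auto t0 = std::chrono::steady_clock::now();
            volatile long sink = tests[t].second(data); // volatile keeps the call from being optimized away
            auto t1 = std::chrono::steady_clock::now();
            (void)sink;
            long long us = std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count();
            std::printf("pass %d, variant %s: %lld us\n", pass, tests[t].first, us);
        }
    }
    return 0;
}

Fully flushing CPU caches from user code is not really possible; touching a large dummy buffer between runs only approximates it, which is why shuffling the run order and averaging many passes is usually the more practical defence.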
Well, we do use pre-prepared data: the same data is used for testing all the functions. As for shuffling/alternating, yes, I have noticed a difference if you change the order of the tests. But how do you reset the caches?
Don't make fun of people