Errors, bugs, questions - page 2822

 
Nikolai Semko:

Only the rounding is not done with the standard round(), ceil(), floor(), because those also return double.

But do it through these instead; besides, they work faster than the standard ones:
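The functions being referred to are the int-returning replacements that appear verbatim in the scripts later in this thread:

int Ceil (double x) {return (x-(int)x>0)?(int)x+1:(int)x;}
int Round(double x) {return (x>0)?(int)(x+0.5):(int)(x-0.5);}
int Floor(double x) {return (x>0)?(int)x:((int)x-x>0)?(int)x-1:(int)x;}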

It may be faster, but it's simply wrong.
Pass something like 12345.0000000000001 into your ceil (similar to your example), and you can get 12346 in the output.
 
Alexey Navoykov:
It might be faster, but it's just wrong.
Pass something like 12345.0000000000001 (similar to your example) into your ceil, and you can get 12346 in the output.

Have you tried it yourself?
Try it:

Print(ceil( 12345.0000000000001));
Print(Ceil( 12345.0000000000001));
Print(ceil( 12345.000000000001));
Print(Ceil( 12345.000000000001));

Output:

2020.08.10 12:03:23.856 ErrorNormalizeDouble (EURUSD,M1)        12345.0
2020.08.10 12:03:23.856 ErrorNormalizeDouble (EURUSD,M1)        12345
2020.08.10 12:03:23.856 ErrorNormalizeDouble (EURUSD,M1)        12346.0
2020.08.10 12:03:23.856 ErrorNormalizeDouble (EURUSD,M1)        12346
It should be 12346, because it is a ceil ("Returns the closest integer numeric value from above").
The first case gives 12345 because a double holds only 17 significant digits, while your literal has 18.
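A minimal sketch of that representability point, using the same literals as above; it only demonstrates that the 18-digit literal collapses to exactly 12345.0 at compile time, while the shorter one survives as a value slightly above 12345:

void OnStart()
  {
   double x=12345.0000000000001;   // 18 significant digits: not representable, stored as exactly 12345.0
   double y=12345.000000000001;    // fits into a double as a value slightly above 12345
   Print(DoubleToString(x,16));    // prints 12345.0000000000000000
   Print(DoubleToString(y,16));    // prints a value just above 12345
   Print(x==12345.0);              // true: the extra digit was lost when the literal was parsed
  }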
 
Nikolai Semko:

Really, you can't compare doubles. It's just a hard rule.

Of course, it is possible and sometimes even necessary to compare doubles directly with each other.

For example, during optimization OnTick is sometimes called a trillion times. To decide whether or not to execute a pending limit order, the built-in tester compares the current price of the corresponding symbol with the limit price. It does this for every pending order before every OnTick call, i.e. these checks are performed tens and hundreds of billions of times.

And each time this is done through normalization, which is a horrible waste of computing resources, since the prices of pending orders and of the symbol are already normalized beforehand. Therefore, they can and should be compared with each other directly.

A custom tester written in MQL easily outperforms the native built-in one.
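For illustration only, a minimal sketch (with made-up values, nothing taken from the tester) of why identically normalized doubles can be compared with ==, while raw arithmetic results cannot:

void OnStart()
  {
   double raw=0.1+0.2;             // stored as 0.30000000000000004...
   Print(raw==0.3);                // false: raw arithmetic results cannot be trusted with ==

   // but once both sides have passed through the same NormalizeDouble() call,
   // they are bit-identical and a direct comparison is safe
   double a=NormalizeDouble(raw,5);
   double b=NormalizeDouble(0.3,5);
   Print(a==b);                    // true
  }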

 

fxsaber:

Of course, it is possible and sometimes even necessary to compare doubles directly with each other.

For example, during optimization OnTick is sometimes called a trillion times. To decide whether or not to execute a pending limit order, the built-in tester compares the current price of the corresponding symbol with the limit price. It does this for every pending order before every OnTick call, i.e. these checks are performed tens and hundreds of billions of times.

And each time this is done through normalization, which is a horrible waste of computing resources, since the prices of pending orders and of the symbol are already normalized beforehand. Therefore, they can and should be compared with each other directly.

A custom tester written in MQL easily outperforms the native built-in one.

NormalizeDouble() is a very expensive function. Therefore, you'd better forget about it.

Here's a script that demonstrates the difference between NormalizeDouble() and normalize with int:

#define   SIZE 1000000

// integer-returning counterparts of ceil()/round()/floor()
int Ceil (double x) {return (x-(int)x>0)?(int)x+1:(int)x;}
int Round(double x) {return (x>0)?(int)(x+0.5):(int)(x-0.5);}
int Floor(double x) {return (x>0)?(int)x:((int)x-x>0)?(int)x-1:(int)x;}
//+------------------------------------------------------------------+
//| Script program start function                                    |
//+------------------------------------------------------------------+
void OnStart()
  {
   double a[SIZE];
   double s1=0,s2=0, s3=0;
   for (int i=0;i<SIZE;i++)  a[i]=(rand()-16384)/M_PI;
   
   ulong t1=GetMicrosecondCount();
   for (int i=0;i<SIZE;i++) s1+=a[i];
   t1=GetMicrosecondCount()-t1;  
   
   ulong t2=GetMicrosecondCount();
   for (int i=0;i<SIZE;i++) s2+=NormalizeDouble(a[i],5);
   t2=GetMicrosecondCount()-t2; 
   
   ulong t3=GetMicrosecondCount();
   for (int i=0;i<SIZE;i++) s3+=Round(a[i]*100000);
   s3/=100000;
   t3=GetMicrosecondCount()-t3; 
   
   Print("простая сумма                            - " + string(t1)+ " микросекунд, сумма = "+ DoubleToString(s1,18));
   Print("сумма с NormalizeDouble                  - " + string(t2)+ " микросекунд, сумма = "+ DoubleToString(s2,18));
   Print("сумма, нормализированная через int       - " + string(t3)+ " микросекунд, сумма = "+ DoubleToString(s3,18));
  }

result:

2020.08.10 12:55:30.766 TestSpeedNormalizeDouble (USDCAD,H4)    plain sum                        - 1394 microseconds, sum = 626010.5038610587362201
2020.08.10 12:55:30.766 TestSpeedNormalizeDouble (USDCAD,H4)    sum with NormalizeDouble         - 5363 microseconds, sum = 626010.5046099795727060
2020.08.10 12:55:30.766 TestSpeedNormalizeDouble (USDCAD,H4)    sum normalized via int           - 1733 microseconds, sum = 626010.5046099999453873
P.S. The normalization via int also turns out to be more accurate (you can see it from the run of nines after the last normalized digit).
 
Nikolai Semko:

NormalizeDouble() is a very expensive function. That's why it's better to forget about it.

Here is a script that demonstrates the difference between NormalizeDouble() and normalize with int:

result:

P.S. The normalization via int is even more accurate (you can see it from the run of nines after the last normalized digit).

And if the summation is done not via double but via long, the result is even more impressive, since summing via int (multiply and round each value, then divide the final sum once) is even faster than the plain double sum.

#define   SIZE 1000000

int Ceil (double x) {return (x-(int)x>0)?(int)x+1:(int)x;}
int Round(double x) {return (x>0)?(int)(x+0.5):(int)(x-0.5);}
int Floor(double x) {return (x>0)?(int)x:((int)x-x>0)?(int)x-1:(int)x;}
//+------------------------------------------------------------------+
//| Script program start function                                    |
//+------------------------------------------------------------------+
void OnStart()
  {
   double a[SIZE];
   double s1=0,s2=0, s3=0;
   long s=0;
   for (int i=0;i<SIZE;i++)  a[i]=(rand()-16384)/M_PI;
   
   ulong t1=GetMicrosecondCount();
   for (int i=0;i<SIZE;i++) s1+=a[i];
   t1=GetMicrosecondCount()-t1;  
   
   ulong t2=GetMicrosecondCount();
   for (int i=0;i<SIZE;i++) s2+=NormalizeDouble(a[i],5);
   t2=GetMicrosecondCount()-t2; 
   
   ulong t3=GetMicrosecondCount();
   for (int i=0;i<SIZE;i++) s+=Round(a[i]*100000);
   s3=s/100000.0;
   t3=GetMicrosecondCount()-t3; 
   
   Print("простая сумма                            - " + string(t1)+ " микросекунд, сумма = "+ DoubleToString(s1,18));
   Print("сумма с NormalizeDouble                  - " + string(t2)+ " микросекунд, сумма = "+ DoubleToString(s2,18));
   Print("сумма, нормализированная через int       - " + string(t3)+ " микросекунд, сумма = "+ DoubleToString(s3,18));  
  }

result:

2020.08.10 13:15:58.982 TestSpeedNormalizeDouble (USDCAD,H4)    plain sum                        - 1408 microseconds, sum = 460384.3207830497995019
2020.08.10 13:15:58.982 TestSpeedNormalizeDouble (USDCAD,H4)    sum with NormalizeDouble         - 6277 microseconds, sum = 460384.3162300114054233
2020.08.10 13:15:58.982 TestSpeedNormalizeDouble (USDCAD,H4)    sum normalized via int           - 964 microseconds, sum = 460384.3162299999967218
 
Nikolai Semko:

And if the summation is done not via double but via long, the result is even more impressive, since summing via int (multiply and round each value, then divide the final sum once) is even faster than the plain double sum.

result:

Add Decimal to the comparison.

Wrong link, it's not a complete implementation.

 
fxsaber:

And each time this is done through normalization. Well, this is a terrible waste of computing resources.

How do you know that? After all, even if prices are not normalized, the check is simply done without any normalization:

 if (fabs(price-limitprice) < ticksize/2)

Given that prices are multiples of the tick size.
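A minimal sketch of that check with hypothetical values (the price, limit price and tick size below are made up): since both prices are assumed to sit on the tick grid, any genuine mismatch is at least one full tick, so half a tick is a safe tolerance.

// hypothetical helper: true if two grid-aligned prices refer to the same tick
bool SamePrice(const double price, const double limitprice, const double ticksize)
  {
   return(fabs(price-limitprice)<ticksize/2);
  }

void OnStart()
  {
   double ticksize  =0.00001;           // assumed 5-digit symbol
   double limitprice=1.10000;
   double price     =1.09999+0.00001;   // may differ from the literal 1.10000 by rounding error
   Print(SamePrice(price,limitprice,ticksize));  // true despite any sub-tick rounding noise
  }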

 
Nikolai Semko:
Moreover, normalization via int also turns out to be more accurate (you can see it from the run of nines after the last normalized digit).

The test is incorrect. Why do you divide by 100000.0 only once, at the end? The division should be performed at each iteration, and only then summed; that would be a fair comparison. As it stands, this is not normalization at all: you have simply optimized your test algorithm. Naturally it comes out faster and more accurate (because the accumulated error is smaller).
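A minimal sketch of the variant being described, rebuilt from the script above (same SIZE, a[] and Round); here the scaling back to the original magnitude is performed on every element rather than once on the total, so the per-value rounding cost is actually measured:

#define  SIZE 1000000

int Round(double x) {return (x>0)?(int)(x+0.5):(int)(x-0.5);}

void OnStart()
  {
   double a[SIZE];
   for (int i=0;i<SIZE;i++)  a[i]=(rand()-16384)/M_PI;

   double s3=0;
   ulong t3=GetMicrosecondCount();
   for (int i=0;i<SIZE;i++) s3+=Round(a[i]*100000)/100000.0;   // scale, round and scale back on every element
   t3=GetMicrosecondCount()-t3;

   Print("per-iteration int normalization - " + string(t3)+ " microseconds, sum = "+ DoubleToString(s3,18));
  }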

 
Alexey Navoykov:

How do you know this?

Because you can feed non-normalized prices into the Tester, and it handles them identically.

After all, even if the prices are not normalized, the check is easily done without any normalization.

By normalization I meant, in this case, a single standard algorithm; once it has been applied, the resulting doubles can be compared directly.

So the tester does not compare doubles directly. It does it through NormalizeDouble, ticksize or something else, but certainly not through a direct comparison of doubles. And that is not rational at all.

 
fxsaber:

Of course, it is possible and sometimes even necessary to compare doubles directly with each other.

For example, during optimization OnTick is sometimes called a trillion times. To decide whether or not to execute a pending limit order, the built-in tester compares the current price of the corresponding symbol with the limit price. It does this for every pending order before every OnTick call, i.e. these checks are performed tens and hundreds of billions of times.

And each time this is done through normalization, which is a horrible waste of computing resources, since the prices of pending orders and of the symbol are already normalized beforehand. Therefore, they can and should be compared with each other directly.

A custom tester written in MQL easily outperforms the native built-in one.

So I decided to put this performance claim to the test.
And the result surprised me.
Even a comparison of pre-normalized doubles is, on average, slower than comparing doubles through an epsilon or through a conversion to int.

#define  SIZE 1000000

int Ceil (double x) {return (x-(int)x>0)?(int)x+1:(int)x;}
int Round(double x) {return (x>0)?(int)(x+0.5):(int)(x-0.5);}
int Floor(double x) {return (x>0)?(int)x:((int)x-x>0)?(int)x-1:(int)x;}

bool is_equal(double d1, double d2, double e=0.000000001) {return fabs(d1-d2)<e;}

void OnStart()
  {
   double a[SIZE], a_norm[SIZE];
   int s1=0,s2=0, s3=0;
   for (int i=0;i<SIZE;i++)  {
     a[i]=(rand()-16384)/1641.1452;
     a_norm[i]=NormalizeDouble(a[i],2);
   }
   double test = 1.11;
   
   ulong t1=GetMicrosecondCount();
   for (int i=0;i<SIZE;i++) if (a_norm[i]==test) s1++;
   t1=GetMicrosecondCount()-t1;  
   
   ulong t2=GetMicrosecondCount();
   for (int i=0;i<SIZE;i++) if (is_equal(a[i],test,0.005)) s2++;
   t2=GetMicrosecondCount()-t2; 
   
   ulong t3=GetMicrosecondCount();
   int test_int = Round(test*100);   // round explicitly instead of relying on implicit truncation
   for (int i=0;i<SIZE;i++) if (Round(a[i]*100)==test_int) s3++;
   t3=GetMicrosecondCount()-t3; 
   
   
   Print("простое сравнение предварительно нормализированых double - " + string(t1)+ " микросекунд, всего совпадений = "+ string(s1));
   Print("сравнение double через эпсилон                           - " + string(t2)+ " микросекунд, всего совпадений = "+ string(s2));
   Print("сравнение double через преобразование в int              - " + string(t3)+ " микросекунд, всего совпадений = "+ string(s3));  
  }

The result:

2020.08.10 14:31:39.620 TestCompareDouble (USDCAD,H4)   plain comparison of pre-normalized doubles  - 900  microseconds, total matches = 486
2020.08.10 14:31:39.620 TestCompareDouble (USDCAD,H4)   comparison of doubles via epsilon           - 723  microseconds, total matches = 486
2020.08.10 14:31:39.620 TestCompareDouble (USDCAD,H4)   comparison of doubles via conversion to int - 805  microseconds, total matches = 486
2020.08.10 14:31:42.607 TestCompareDouble (USDCAD,H4)   plain comparison of pre-normalized doubles  - 1533 microseconds, total matches = 488
2020.08.10 14:31:42.607 TestCompareDouble (USDCAD,H4)   comparison of doubles via epsilon           - 758  microseconds, total matches = 488
2020.08.10 14:31:42.607 TestCompareDouble (USDCAD,H4)   comparison of doubles via conversion to int - 790  microseconds, total matches = 488
2020.08.10 14:31:44.638 TestCompareDouble (USDCAD,H4)   plain comparison of pre-normalized doubles  - 986  microseconds, total matches = 472
2020.08.10 14:31:44.638 TestCompareDouble (USDCAD,H4)   comparison of doubles via epsilon           - 722  microseconds, total matches = 472
2020.08.10 14:31:44.638 TestCompareDouble (USDCAD,H4)   comparison of doubles via conversion to int - 834  microseconds, total matches = 472

I don't rule out that much depends on how recent the processor is and on its architecture, so for someone else the result may be different.

To tell the truth, I don't even understand why this happens.
It seems there is nothing for the compiler to optimize away in these loops over random numbers; the rounding cannot be factored out of the loop.
A comparison of two doubles should be a single processor instruction.
When comparing through an epsilon (the fastest variant) we still have a comparison of two doubles, plus a function call with three parameters and one subtraction on top.
Does the performance of comparing two doubles depend on the values of the variables themselves? I doubt it.
Honestly, I don't get it. Please help me: what have I failed to take into account, or where did I go wrong?