Features of the mql5 language, subtleties and tricks - page 136
So you're willing to light a $100 note to find a dime rolled under the bed?
It may come up, or it may not.
The probability of getting a ten is twice as high. And most importantly, the claim about get_rand() being costly is pulled out of thin air. So why get random numbers through the back door with a skewed probability (while expecting a uniform distribution) when you can have a proper one? You're not saving a $100 note, you're saving matches.
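The "twice as high" claim is easy to verify. A minimal sketch in plain C++ (mirroring MQL5's 15-bit rand(), 0..32767): count how many of the 32768 equally likely raw outputs map onto each result of x % 20000. The helper name is mine, not from the thread.

```cpp
#include <cassert>
#include <vector>

// For a 15-bit generator, x % 20000 maps two raw values onto each
// result in 0..12767 (x and x+20000) but only one onto 12768..19999,
// so the low results are exactly twice as likely.
std::vector<int> modulo_bias_counts(int range) {
    std::vector<int> counts(range, 0);
    for (int x = 0; x < 32768; ++x)
        ++counts[x % range];
    return counts;
}
```

With range = 20000 the counts are 2 for every result below 12768 and 1 above it, which is the skew visible in the point-plot script discussed below.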
Yes, I was wrong about your function being very slow. I misunderstood the algorithm. Sorry.
But my algorithm is still the fastest of all the proposed ones, even though it is more universal and not limited to 32767 like yours.
Here is the code to prove it.
This script generates an array of points with random colours and random coordinates. The size of the array equals the number of pixels on the chart. This is repeated 5 times.
I picked the numbers specifically to show the essence of the problem that arises when we apply rand()%20000
Here is how it should be done properly:
But 99.9% of the time this function will serve just as well, and it will even run faster. It generates a random number from 0 to 1073741824, which is greater than the number of ticks for any instrument over its entire history. The "unfairness" of such a function is microscopic for 99.9% of tasks.
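The post does not show the wide-range function itself; one plausible sketch (my assumption, not the poster's code) combines two 15-bit rand() draws into a single 30-bit value, covering 0..2^30 - 1 = 1073741823, essentially the range mentioned above. In plain C++, masking to 15 bits to match MQL5's rand():

```cpp
#include <cstdlib>

// Hypothetical sketch: glue two 15-bit rand() results (0..32767 each)
// into one 30-bit value in 0..1073741823. Taking this value % N is
// "unfair" in the same way as rand() % 20000, but the relative bias
// shrinks to roughly N / 2^30, negligible for most practical ranges.
int rand30() {
    int hi = std::rand() & 0x7FFF;  // high 15 bits
    int lo = std::rand() & 0x7FFF;  // low 15 bits
    return (hi << 15) | lo;
}
```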
Thanks for the work, really interesting results. It turns out that the rand() function is so fast that it outpaces arithmetic operations.
No, it isn't faster. It takes about a nanosecond, the same as taking the square root of a double. The +, -, *, / operations take fractions of a nanosecond.
But, just like the square root, rand() is executed by modern processors at the hardware level, not in software.
Why no, when it is clearly yes? Your version differs from mine in that it always calls rand() 5 times, while mine averages 1.64 calls for a range of 20 000 and 1 call for a range of 256. In total, your code calls rand() 25 times per iteration, while mine makes 1.64*2+3 = 5.3 calls. The situation is strange, of course; we need to find out what exactly the reason is. On top of that, you have a lot of bitwise operations being performed in there...
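The 1.64 average is consistent with rejection sampling on a 15-bit generator: raw draws are retried unless they fall below the largest multiple of N not exceeding 32768, so the expected number of rand() calls per accepted value is 32768 divided by that multiple. A small sketch of the arithmetic (the helper name is mine):

```cpp
#include <cassert>
#include <cmath>

// Expected rand() calls per accepted draw under rejection sampling on a
// 15-bit generator: accept x only when x < floor(32768/n)*n, else retry.
// Expected calls = 32768 / (floor(32768/n) * n).
double expected_calls(int n) {
    int limit = (32768 / n) * n;  // largest multiple of n not exceeding 32768
    return 32768.0 / limit;
}
```

For n = 20000 this gives 32768/20000 = 1.6384 calls on average, and for n = 256 (which divides 32768 exactly) it gives exactly 1.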
1. We do realise that your functions do not solve the problem, they only mask it: "I won't lose weight, I'll just tighten my belt."
2. For our and Alexey's variants this is the worst case; in many other situations the speed will be almost at the level of rand() itself, whereas yours takes constant time.
3. Have you ever wondered why rand() generates numbers in such a narrow range? It is a pseudo-random generator and therefore periodic, so generating a pile of random numbers where they are not needed, and then discarding them, degrades its quality (it will start repeating sooner).
4. Some people obtain random data in more elaborate ways. I, for example, pull it from the net; someone may even buy it. Why would I waste hard-won data only to dumbly discard it (generating a ulong and writing the right algorithm is not our way, apparently)?
Bitwise operations are the cheapest there are, almost free.
But on the whole I agree. I don't understand why either... Maybe it's the wonders of optimisation. Although what is there to optimise...
Well, now that's just pedantry.
To reproduce a situation where this problem becomes noticeable by at least 0.1%, you need to operate with ranges above the following values:
Have you ever used such ranges? Then why did you put in those checks and loops?
The best is the enemy of the good.
Personally, my random-number ranges in practice are limited to 2000, or 4000 at most. rand() works quite well for that.
Insert this variant into my code:
and you will not notice the "unfairness" of the rand() function (as you did with rand()%20000); the points will look visually uniform, so it is both workable and the fastest.
It's not for nothing that processor developers limited rand() to 2^15 = 32768. They are not stupid people. This covers 99% of practical problems.
And for lovers of "extreme" ranges there is a more than adequate option:
You're welcome to use it, I don't mind. All the more so when the right-hand side of % divides RAND_MAX+1 evenly (256 or 1024).
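The divisibility point can be checked directly: when N divides 32768 (as 256 and 1024 do), every result of x % N receives exactly 32768/N of the generator's equally likely outputs, so the modulo introduces no bias at all. A quick C++ sketch (the helper name is mine):

```cpp
#include <cassert>
#include <vector>

// Returns true when x % n over all 32768 raw outputs of a 15-bit
// generator hits every result equally often, i.e. the modulo is fair.
bool modulo_is_fair(int n) {
    std::vector<int> counts(n, 0);
    for (int x = 0; x < 32768; ++x)
        ++counts[x % n];
    for (int c : counts)
        if (c != 32768 / n) return false;
    return true;
}
```

This confirms 256 and 1024 are fair, while 20000 is not.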
What do processor developers have to do with it? The generator is implemented in software. The only requirements are RAND_MAX >= 32767 and a period of at least 2^32. So MQL has a very modest generator, at the "minimums".
And the most far-sighted will write themselves a fair rand() (for when there is no exact divisibility); this is even recommended in the reference books.
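The "fair rand()" from the reference books is the classic rejection method: discard raw draws that fall into the incomplete final bucket, so every result in 0..n-1 ends up equally likely. A self-contained C++ sketch, assuming a 15-bit generator as in MQL5 (the function name is mine):

```cpp
#include <cstdlib>

// Fair bounded random number via rejection: retry raw draws >= the
// largest multiple of n not exceeding 32768, then reduce modulo n.
// Every result in 0..n-1 is hit by exactly limit/n raw values.
int fair_rand(int n) {
    int limit = 32768 - 32768 % n;  // largest multiple of n <= 32768
    int x;
    do {
        x = std::rand() & 0x7FFF;   // 15-bit draw, as in MQL5's rand()
    } while (x >= limit);
    return x % n;
}
```

For n = 20000 this retries about 39% of draws (hence the ~1.64 average calls discussed earlier); for n = 256 or 1024 it never retries.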