"New Neural" is an Open Source neural network engine project for the MetaTrader 5 platform.
PRNG class in the specified range.
Advantages: you can have several generators with different initializations; evenly distributed values over any range up to 10 million (unfortunately, the array is not big enough for more).
Over 1 million calls, Rand() beats the standard rand() by about 50 µs (time rnd.Rand()=344, time rand()=391).
Disadvantages: Srand() takes a long time to initialize: the standard range of 32768 takes 766 µs, a range of 100,000 almost 2 minutes.
I haven't checked the 1,000,000 limit, but it is certainly extremely long, because the time grows progressively.
Constructive criticism is welcome.
Can you be more specific?
For what purposes might these functions be needed?
uint control(){return(gcnt-1);};// get the counter value from the last automatic Rand call
uint Rand(uint i){return(res[i]);};// Rand lookup at the given i (the counter is not changed)
Or a small example of how to use them.
What does the standard one not do?
These functions may be needed simply to copy part or all of a random sequence, or to retrieve the values that were issued on particular calls to Rand().
The standard rand() is unsatisfactory in many ways:
1) If you want an evenly distributed sequence from 0 to 100, you cannot get it directly from the standard rand() (you have to build an algorithm on top of it).
2) It is impossible to use two sequences simultaneously: calling srand() again destroys the previous initialization.
3) The range of the standard rand() allows you to split it into 32768 parts only, and in no other way. Never mind splitting it by 100000 - you can't even split it by any multiple of 10. For example: you have the range [-1;1] and need to divide it into steps to the 3rd decimal place. Initialize the class with range 2000, then map each draw (e.g. value*0.001 - 1.0)
and get a PRNG over the range [-1;1] with step 0.001.
You can't do that with the standard one.
1) Agreed.
2) Not clear. What purposes might you need that for?
3) I didn't dig into the code too much; it seems to be related to 1).
That's why I have a question about these (above) two functions.
Such a class (function) is definitely needed. But nobody needs unnecessary slowdown.
If point 2 is hard to understand, maybe you should first decide why you need a PRNG at all?
Then it will become clear why you would want two uncorrelated PRNG sequences at the same time.
Strange to hear - uncorrelated PRNG sequences.
I can't understand what neural networks are for?
It's just a fancy thing nowadays...
Advantages: you can have several generators with different initializations,
So each object maintains its own sequence?
Disadvantages: Srand() takes a long time to initialize: the standard range of 32768 takes 766 µs, a range of 100,000 almost 2 minutes.
This is where it's really, really scary - unrealistic. It shouldn't be like this.
I'll see when there's even a little bit of light.
Let me remind you, in case you forgot: after 32768 calls to rand() (without re-initialization) the sequence repeats.
Accordingly, if one initialization is used to generate two sequences in parallel, some parts of them may be correlated (not necessarily, but the possibility is there).
Networks use different algorithms, e.g. Monte Carlo or GA; and again, each network, whatever the algorithm, needs initial values for its weights.
1) So each object maintains its own sequence?
2) This is where it's really, really scary, unrealistic. It shouldn't be like that.
1) Yes, every object memorizes its sequence in Srand() and then cycles through it, just like the standard rand().
2) I have already optimized it as much as I could; it was worse before.
In brief, to make it easier to understand what I've done:
After preparation, we write an increasing sequence (the counter values) into a temporary array; then the generator outputs an index from the range (this index is used when assigning from temp to res). As soon as a value has been moved to the resulting buffer, we write range into its slot in temp (a value the generator can never output). Values are passed to res through a check: if temp holds range at that index, we have already assigned that value to res; in that case we do a quick sort (which moves all the range markers to the end of the array) and truncate the working range to the remaining untouched sequence. We continue until all of res is full.
This is all speculation; there is no proof.
In many experiments with GA I once suspected that the PRNG was looping (repeating). It turned out the problem was elsewhere... (that one is on me).
Bottom line: the PRNG does not affect GA operation in any way.
About initialization:
I would like to be able to initialize as you choose: with zeros, with a PRNG, or from a file.
Oh man, so it really doesn't repeat - mine used to repeat; good to know (now I'll cut out everything unnecessary).
About the initialization: I don't understand what you mean?
Re-uploaded the source code.
It's a little slower now, but simpler: before it took 114 s, after the simplification 120 s.