http://www.ixbt.com/video3/rad2.shtml
It is best to use optimized libraries for large data sets rather than getting "creative" with writing programs in OpenCL (though I'm not ruling that out). You could use a hybrid system, where small amounts of data are handled using OpenCL and large amounts using optimized libraries. You may need to port the library to a specific programming language and create the conditions for including it. If this could be implemented, it would give an impressive result and, consequently, accelerate the operation many times over. Note to .....
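The hybrid idea above can be sketched as size-based dispatch. This is a minimal illustration in Python, not the poster's design: the SMALL_LIMIT threshold is a hypothetical placeholder (the real crossover point would have to be measured), and both backends here are stand-ins that do the same math, since only the routing logic is the point.

```python
# Hypothetical crossover size; in practice it would be found by benchmarking.
SMALL_LIMIT = 256

def opencl_multiply(a, b):
    # Placeholder for the hand-written OpenCL kernel path
    # (plain Python matrix multiplication keeps the sketch runnable).
    n, k, m = len(a), len(b), len(b[0])
    return [[sum(a[i][t] * b[t][j] for t in range(k)) for j in range(m)]
            for i in range(n)]

def library_multiply(a, b):
    # Placeholder for the optimized-library path (e.g. a BLAS binding).
    return opencl_multiply(a, b)  # same math; only the dispatch matters here

def multiply(a, b):
    """Route small problems to the custom kernel, large ones to the library."""
    if max(len(a), len(b[0])) <= SMALL_LIMIT:
        return opencl_multiply(a, b)
    return library_multiply(a, b)
```

The dispatch is the whole trick: each backend wins in its own size regime, and the caller never needs to know which one ran.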
P.S. This might deserve a new thread on the forum.
It is not practical for developers to tweak the compiler for an extremely specific, albeit unique, product.
And so far I do not see any trading tasks that require multiplying matrices of such huge size.
Announcement of MetaTrader 5 update
An update of the MetaTrader 5 platform will be published in the next few days. Once the update is published, there will be an additional news release containing the final list of changes and build numbers. The following changes are planned:
MetaTrader 5 Client Terminal build 648
MetaTester: Added support for using OpenCL programs in testing agents.
Once I've got the hang of OpenCL, I'll prepare a test task for Cloud+OpenCL. Very interesting mathematical prospects.
That's more a question for MetaDriver.
Recently updated the video driver (NVIDIA 301.42).
Out of curiosity I reran one of the old tests (ParallelTester_00-01x) and could not believe my eyes.
On page 24 of this thread I ran the test and got 29; then I set the memory to dual-channel mode and it rose to 39.
Now it is ~306.
Amazing. It seems that NVIDIA has finally got the drivers right.
But in general it's great, I understand you. I was just as happy when I bought my HD 4870 on the cheap and saw its power.
One small recommendation: choose parameters so that the GPU execution time is on the order of 1 second; the time ratio will then be more accurate. The average error of the GetTickCount() function is no less than tens of milliseconds, so you could easily get 120 ms or 170 ms on the GPU, and the computed acceleration depends heavily on this.
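The effect of this timer error on the computed acceleration is easy to quantify. A small sketch (Python rather than MQL5; the 20 s CPU time, 140 ms GPU time, and 16 ms error are illustrative numbers, roughly one timer quantum):

```python
def speedup_range(cpu_ms, gpu_ms, timer_err_ms=16):
    """Best- and worst-case measured speedup when the GPU timing
    may be off by +/- timer_err_ms."""
    return cpu_ms / (gpu_ms + timer_err_ms), cpu_ms / (gpu_ms - timer_err_ms)

# ~140 ms GPU run: the measured speedup can swing between ~128x and ~161x.
lo_short, hi_short = speedup_range(20_000, 140)

# ~1 s GPU run: the same timer error moves the result by well under 2x.
lo_long, hi_long = speedup_range(20_000, 1_000)
```

With a short GPU run the same timer error produces a spread of more than 30x in the reported acceleration; with a ~1 s run the spread all but disappears, which is exactly why the longer run is recommended.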
I have fine-tuned this script a bit so that it runs on all available devices (reading from the bottom up: 1) CPU on the Intel platform, 2) HD 4870 on the AMD platform, and 3) CPU on the AMD platform):
The script results are from the bottom up!
With the latter parameter, which is 10x smaller, my card is not as fast as yours. It probably doesn't have time to get up to speed properly :)
fyords, how did you make earlier events appear higher in the log?
In the log, right-click and choose "View"; in the new window press the "Query" button. The log is then built in correct time order, which is more convenient to read (for me).
As for the script, thank you, I will try it tomorrow; it takes a long time to complete, especially with Count pass = 12800.
For now here is an old script with Count pass = 12800
The gain has become even greater.
The error isn't actually much smaller. Yes, it is close to that, but there are outliers from the average, clustering around 32, 48 and even more. They are rare, I agree, and can be ignored.
But when a person runs a script, it does not mean they are doing nothing else on the computer. And the system can also run its own tasks, which can slow down execution.
Technically, the standard deviation is indeed small, around 6-7 ms, and weakly dependent on the execution time itself. But it poorly reflects the true variation. Here is a histogram of the times recorded while performing the same calculations:
The distance between adjacent bars is 16ms. Smaller columns are quite likely, and they differ from each other by as much as 32ms. If the middle column ("true execution time") is 140 milliseconds, then the left one is 124 ms and the right one is 156 ms.
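This bar structure can be reproduced with a toy simulation (Python, not the script from this thread): a GetTickCount-style clock is modeled as real time rounded down to an assumed 16 ms quantum, and a 140 ms job is measured at random phases relative to the timer, with a little stray system activity added. The exact bar positions depend on how the quantum aligns with real time; the point is that the observed durations land on values exactly 16 ms apart.

```python
import random
from collections import Counter

random.seed(1)

QUANTUM_MS = 16   # assumed GetTickCount-style timer quantum
TRUE_MS = 140     # assumed "true" execution time of the measured code

def tick(t_ms):
    # A quantized clock reading: real time rounded down to the quantum.
    return (t_ms // QUANTUM_MS) * QUANTUM_MS

def measure_once():
    start = random.uniform(0, 1_000)        # random phase vs. the timer
    jitter = random.uniform(0, QUANTUM_MS)  # stray system activity
    return tick(start + TRUE_MS + jitter) - tick(start)

# Histogram of 10,000 simulated measurements: every observed duration is a
# multiple of QUANTUM_MS, so the bars sit exactly 16 ms apart.
counts = Counter(measure_once() for _ in range(10_000))
```

The simulation produces two or three adjacent bars around the true time, just as in the histogram described above, even though the job itself always takes the same 140 ms.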
So, the real variation when divided by the low GPU execution time can be quite large:
20 seconds/124 ms ~ 161
20 seconds/156 ms ~ 128.
The "true ratio" of execution times roughly corresponds to the largest bar:
20 sec/140ms ~ 143.
If we take a longer execution time on the GPU, the impact of this error will be much smaller. Let it be at least 500 ms.
Script for the simulation: