Machine learning in trading: theory, models, practice and algo-trading - page 3259
Here is what ChatGPT offers for R.
This R variant is almost 6 times slower than NumPy.
As I understand it, Python can work with integer matrices, and there the speeds are of a different order.
If the code is correct, the result is as follows.
The accuracy/comparability of the calculated results is itself a question that should be checked.
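For reference, a minimal sketch of what the NumPy side of such a timing could look like; the 15000x100 shape and float32 dtype are my assumptions, taken from the sizes quoted later in the thread, not the poster's actual code:

```python
import time
import numpy as np

# Assumed setup: 15000 series of 100 observations each, float32 (4 bytes).
rng = np.random.default_rng(0)
X = rng.standard_normal((15000, 100)).astype(np.float32)

t0 = time.perf_counter()
C = np.corrcoef(X)              # pairwise Pearson correlation of the rows
t1 = time.perf_counter()

print(C.shape)                  # (15000, 15000)
print(C.nbytes / 2**30)         # ~1.68 GB: the result is promoted to float64
print(t1 - t0)                  # elapsed seconds
```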
Judging by
Array size: 0.0762939453125 MB
the calculated matrix is 100*100, not 15000*15000 (that figure is exactly 100*100*8 bytes of doubles). And with memory it gets even worse.
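A quick check of that figure (my arithmetic, not the poster's code):

```python
# 0.0762939453125 MB is exactly a 100x100 matrix of 8-byte doubles:
print(100 * 100 * 8 / 2**20)      # 0.0762939453125
# whereas the full 15000x15000 result in doubles would be:
print(15000 * 15000 * 8 / 2**20)  # ~1716.6 MB
```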
Before launch:
While Alglib's PearsonCorrM is running, memory keeps growing the whole time: 5 GB was seen at one point, and 4.6 GB is what made it into the screenshot.
And here it is while the standard Matrix.CorrCoef is at work:
Apparently, the standard one is optimised for minimal memory usage, while the Alglib one is optimised for speed.
Perhaps an array resize happens somewhere, and that is very slow. If you find it and set the final size once up front, it may be faster.
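To illustrate the resize point (a Python sketch of the general pattern, not the Alglib internals):

```python
import numpy as np

# Anti-pattern: the buffer is reallocated and fully copied on every step.
def grow_incrementally(n_rows=100, n_cols=15000):
    out = np.empty((0, n_cols))
    for _ in range(n_rows):
        out = np.vstack([out, np.zeros((1, n_cols))])  # copies everything so far
    return out

# Setting the final size once avoids all of that copying.
def preallocate(n_rows=100, n_cols=15000):
    out = np.empty((n_rows, n_cols))
    for i in range(n_rows):
        out[i] = 0.0                                   # fill rows in place
    return out
```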
You have a marvellous knack for turning any idea into g... counting all sorts of uninteresting results :)
Alexei is a particular enthusiast of that.
Save both matrices to files to reconcile the results.
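For the reconciliation itself, a minimal Python sketch (the file names are placeholders I made up); with floating point, compare within a tolerance rather than bit for bit:

```python
import numpy as np

# Hypothetical file names; use whatever the two implementations actually wrote.
a = np.load("corr_alglib.npy")
b = np.load("corr_standard.npy")

print(np.allclose(a, b, rtol=1e-10, atol=1e-12))  # agree within tolerance?
print(np.abs(a - b).max())                        # largest absolute discrepancy
```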
https://drive.google.com/file/d/1ATJkHwUY8jzeRp-rdTsYBeYHor-68EPB/view?usp=share_link
According to this,
a 100*100 matrix is being calculated, not 15000*15000. You need a tool that can compute the matrix out of core, without holding it all in memory.
So far I see no technical obstacle to computing even a million-by-million matrix on an ordinary home machine. But the NumPy vs MQL5 comparison is very important to me.
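One way such an out-of-core computation might look in Python (my sketch of the standard blockwise trick, not a tool the thread names): standardise the rows once, then write the result block by block into a memory-mapped file, so only one block is ever held in RAM:

```python
import numpy as np

def corr_out_of_core(X, out_path, block=1000):
    """Pearson correlation of the rows of X, written block by block to disk."""
    n = X.shape[0]
    # Centre each row and scale it to unit norm; then corr(X) == Z @ Z.T.
    Z = X - X.mean(axis=1, keepdims=True)
    Z /= np.sqrt((Z * Z).sum(axis=1, keepdims=True))
    C = np.lib.format.open_memmap(out_path, mode="w+",
                                  dtype=np.float64, shape=(n, n))
    for i in range(0, n, block):
        C[i:i + block] = Z[i:i + block] @ Z.T  # one block of rows at a time
    C.flush()

# A 15000x100 input needs ~12 MB in RAM; the ~1.7 GB result lives on disk.
X = np.random.default_rng(0).standard_normal((15000, 100))
corr_out_of_core(X, "corr.npy")
```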
15000 * 100 * 4 bytes / 1024 / 1024 ≈ 5.72 MB
This is the input matrix.
The output correlates each of the 15000 rows with each of the other 15000 rows: 15000 * 15000 * 8 bytes ≈ 1.7 GB (with 8-byte doubles), as in all the other examples.