Machine learning in trading: theory, models, practice and algo-trading - page 3270
Right, thanks! I don't understand why the wrong option worked with inCols < 100.
Apparently too much. With random data the average correlation there is about 0, probably.
It is not the average error that is measured there, but the maximum difference between the corresponding elements.
fxsaber, 2023.10.01 09:38 pm
That's why it's not clear how the wrong code gets a match.
And PearsonCorrM2 can be sped up by a factor of 2 if you compute by triangle, i.e. go from the end: compute row 100 against all rows, then row 99 against rows 0-99 (99 vs 100 is already counted, you can just copy it), ... row 50 against rows up to 50, and so on. And don't compute a row against itself, since that is always = 1.
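The triangle trick above can be sketched in Python/NumPy (the function name and layout are mine, not from AlgLib): since the correlation matrix is symmetric with a unit diagonal, it is enough to compute the upper triangle and mirror it, roughly halving the pairwise work.

```python
import numpy as np

def corr_upper_triangle(X):
    """Correlation matrix computed over the upper triangle only.

    X has shape (n_series, n_obs); each row is one series.
    """
    n = X.shape[0]
    # Standardize each row once: zero mean, unit Euclidean norm.
    Z = X - X.mean(axis=1, keepdims=True)
    Z /= np.linalg.norm(Z, axis=1, keepdims=True)
    C = np.eye(n)  # diagonal is 1 by definition, no need to compute it
    for i in range(n):
        for j in range(i + 1, n):
            # One dot product fills both symmetric cells.
            C[i, j] = C[j, i] = Z[i] @ Z[j]
    return C
```

The inner loop runs n*(n-1)/2 times instead of n*n, which is where the "2 times" saving comes from.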
That's why it's not clear how the wrong code gets a match.
I figured it out: if you swap the order of the matrix calculations, you get a mismatch.
I.e. garbage left in memory from the calculation of the first matrix got into the new matrix and by some miracle coincided with the desired result.
I think it is a head-on (brute-force) calculation of the correlation matrix.
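For reference, a "head-on" calculation is just the Pearson formula applied to every pair of rows independently, recomputing means and variances each time. A minimal pure-Python sketch (function names are mine) of what such an O(n² · m) version looks like:

```python
import math

def pearson(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma = sum(a) / n
    mb = sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

def corr_matrix_naive(rows):
    """Full n x n matrix, every pair computed from scratch."""
    n = len(rows)
    return [[pearson(rows[i], rows[j]) for j in range(n)] for i in range(n)]
```

Every mean and variance here is recomputed n times, which is exactly the redundant work a good algorithm eliminates.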
500 seconds versus 14 - that's why I remember it being the fastest, thanks to the algorithm.
Well, yes, and if you redo it on the new MQL matrices, won't it be even faster?
I think all 9 functions used in PearsonCorrM and PearsonCorrM2 can be rewritten to matrices and compared. In principle, it would take an hour of work to rewrite the matrix declarations and references. At the same time we would find out whether matrices are better than dynamic arrays.
The function names:
IsFiniteMatrix(
IsFiniteVector(
AblasInternalSplitLength(
AblasSplitLength(
RMatrixGemmK(
RMatrixGemm(
RMatrixSyrk2(
RMatrixSyrk(
RankX(
I think all 9 functions used in PearsonCorrM and PearsonCorrM2 can be rewritten to matrices and compared. In principle, it would take an hour of work to rewrite the matrix declarations and references. At the same time we would find out whether matrices are better than dynamic arrays.
Everything is already done: MQ have rewritten them for their matrices.
500 sec vs 14 - that's why I remembered it as the fastest, because of the algorithm.
I didn't dig into the algorithm there. NumPy does not lag far behind only because it avoids repeated calculations.
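"Avoiding repeated calculations" here plausibly means standardizing each series once and then getting all pairwise correlations from a single matrix product, instead of recomputing means and variances per pair. A minimal NumPy sketch of that idea (the function name is mine):

```python
import numpy as np

def corr_via_matmul(X):
    """All pairwise correlations from one matrix product.

    X has shape (n_series, n_obs). Means and norms are computed
    exactly once per row, then Z @ Z.T yields the whole matrix.
    """
    Z = X - X.mean(axis=1, keepdims=True)   # center each row once
    Z /= np.linalg.norm(Z, axis=1, keepdims=True)  # unit norm once
    return Z @ Z.T
```

The heavy lifting reduces to one GEMM call, which is also why BLAS-based code (NumPy, AlgLib's RMatrixGemm path) dominates per-pair loops.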
NumPy seems to have a different algorithm from AlgLib, since the performance is very different. But clearly, somewhere in the huge Python community there was a very strong algorithmist who devoted a decent amount of time to studying this question.
Everything is already done: MQ have rewritten them for their matrices.
And it got slower?)
Compared to the old version of AlgLib. I have no data showing that it has become slower.
NumPy seems to have a different algorithm from ALglib.
Maxim's CPU is twice as fast as mine. I don't remember if he gave timings for AlgLib, I think not.