Machine learning in trading: theory, models, practice and algo-trading - page 3265
In MQL5, the elements of such a matrix are calculated in ~55 hours on my old machine. Memory consumption is minimal. The row length is 100.
A million by a million, or 100? Is that the input matrix?
And the output is 1000000* 1000000 ? That's a terabyte. Did you read it line by line and dump it to disc?
What function did you use? PearsonCorrM, PearsonCorrM2, PearsonCorr2 or standard?
It makes no difference if the logic is the same as the original EA's: see a pattern, open a trade.
It's different there, the EA generates signals by itself.
And pattern sets should be tied to the logic. I tried directional trades and reversals at minima; there are relatively good patterns for both.
Is it a million times a million or is it 100? Is that the input matrix?
The input is 100x1000000.
And the output is 1000000*1000000 ? That's a terabyte. You counted it line by line and dumped it to disc?
Line by line. I didn't dump anything. In the context of discussing the search for patterns, roughly speaking, in each row we need to find only situations where Abs(Corr[i]) > 0.9. To do this, we don't need to write the matrix, just count its rows.
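For scale, the full matrix really cannot be stored, which is why only its rows are counted. A quick back-of-envelope check (Python, purely illustrative) of what a dense 1,000,000 x 1,000,000 matrix of doubles would take:

```python
# Storage for a dense 1,000,000 x 1,000,000 correlation matrix of doubles.
rows = 1_000_000
entries = rows * rows                 # 10^12 elements
size_bytes = entries * 8              # 8 bytes per double
print(round(size_bytes / 2**40, 2))   # ~7.28 TiB -- several terabytes, not one
```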
What function did you use to do the counting? PearsonCorrM, PearsonCorrM2, PearsonCorr2 or the standard one?
I couldn't find an in-house function for line-by-line calculation. Alglib seemed slow. I'm trying my own version.
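The row-by-row search described above can be sketched like this. It is a NumPy illustration of the idea only, not the poster's MQL5 code; the shapes, names, and planted test data are my assumptions:

```python
import numpy as np

def correlated_rows(data: np.ndarray, i: int, threshold: float = 0.9) -> np.ndarray:
    """Indices j where |corr(row i, row j)| > threshold, computed one row
    at a time -- the full row-by-row matrix is never materialised."""
    x = data[i]
    xc = x - x.mean()
    dc = data - data.mean(axis=1, keepdims=True)
    # Pearson correlation of row i with every row, in one vectorised pass.
    corr = dc @ xc / (np.linalg.norm(dc, axis=1) * np.linalg.norm(xc))
    hits = np.flatnonzero(np.abs(corr) > threshold)
    return hits[hits != i]                       # drop the trivial self-match

rng = np.random.default_rng(0)
data = rng.standard_normal((1000, 100))          # 1000 patterns, row length 100
data[7] = 2.0 * data[3] + 1.0                    # plant a perfectly correlated pair
print(correlated_rows(data, 3))                  # -> [7]
```

With a million rows this would simply be looped over i, keeping only the few thousand hits per row, exactly as described above.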
It's different there, the neural network generates its own signals.
And pattern sets should be tied to the logic. I tried directional trades and reversals at minima; there are relatively good patterns for both.
Sounds good.
I've put it aside for now; the results are no better than with machine learning, although the ML models are also weak in terms of balance-curve smoothness.
5-minute timeframe, half the data for training.
Line by line. I didn't dump anything. In the context of discussing the search for patterns, roughly speaking, in each row we need to find only situations where Abs(Corr[i]) > 0.9. To do this, we don't need to write the matrix, just count its rows.
Exactly. For each row there will probably be only 1-5 thousand correlated rows; those can be saved if necessary.
I think PearsonCorrM2 will work fast. Feed the full matrix as the first argument and, as the second, a one-row matrix with the row to be checked. And if we go from the end, we can pass the current row's index as the size of the first matrix, so that we don't repeatedly recalculate the correlation for rows below the one being tested.
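The "go from the end" idea — correlating each row only against the rows above it, so every pair is computed exactly once — can be sketched as follows. This is a NumPy illustration; `cross_corr` merely stands in for an Alglib-style PearsonCorrM2 call, and all names and test data here are mine, not the thread's:

```python
import numpy as np

def cross_corr(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Pearson correlations between the rows of a and the rows of b
    (a stand-in for an Alglib-style PearsonCorrM2 call)."""
    ac = a - a.mean(axis=1, keepdims=True)
    bc = b - b.mean(axis=1, keepdims=True)
    num = ac @ bc.T
    den = np.outer(np.linalg.norm(ac, axis=1), np.linalg.norm(bc, axis=1))
    return num / den

def upper_triangle_pairs(data: np.ndarray, threshold: float = 0.9):
    """Walk rows from the end; correlate each row only against the rows
    above it, so each pair (i, j) with i < j is computed exactly once."""
    pairs = []
    for j in range(data.shape[0] - 1, 0, -1):
        corr = cross_corr(data[:j], data[j:j + 1])[:, 0]   # rows 0..j-1 vs row j
        for i in np.flatnonzero(np.abs(corr) > threshold):
            pairs.append((int(i), j))
    return pairs

rng = np.random.default_rng(1)
data = rng.standard_normal((50, 100))
data[40] = -3.0 * data[5]            # plant an anti-correlated pair
print(upper_triangle_pairs(data))    # -> [(5, 40)]
```

Shrinking the first matrix on each step roughly halves the work compared with correlating every row against the full matrix.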
I think PearsonCorrM2 will work fast.
I doubted at first that it would be fast.
I tried one of them, timing only the calculation itself. It was slow, so I'm writing my own.
Probably everyone but me already knew this.
Pearson correlation is invariant under multiplication and addition (scaling and shifting).
I hadn't noticed the invariance to addition, despite the simple formula. And Wikipedia says so explicitly.
A key mathematical property of the Pearson correlation coefficient is that it is invariant under separate changes in location and scale of the two variables. That is, we can transform X to a + bX and transform Y to c + dY, where a, b, c and d are constants with b, d > 0, without changing the correlation coefficient.
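The quoted property is easy to verify numerically — a quick NumPy check with arbitrary constants a, b, c, d of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.standard_normal(500)
y = 0.5 * x + rng.standard_normal(500)

base = np.corrcoef(x, y)[0, 1]
# Transform X -> a + b*X and Y -> c + d*Y with b, d > 0:
shifted = np.corrcoef(3.0 + 2.0 * x, -1.0 + 0.25 * y)[0, 1]
print(np.isclose(base, shifted))   # True: the coefficient is unchanged
```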
has an identity correlation matrix (rows by columns).
Pearson correlation is invariant under multiplication and addition (scaling and shifting).
It is probably not very good for price data.
Maxim, I thought you wanted to try normalisation. Were there any improvements? Or did it get worse?