Machine learning in trading: theory, models, practice and algo-trading - page 2630
It's a kind of guesswork; you don't know it in advance.
Anything can be approximated, but a trading system is explicit logic in code, with no approximations.
We don't know the exact logic, after all... this isn't decompilation. That leaves a fuzzy copy, "in the image and likeness". Like "abibas" trainers.
So, take the crossover strategy of two MAs and don't give the modeler a direct crossover feature.
It works pretty well, I'm even surprised, though it's a primitive algorithm...
Blue is the original signal, red is the prediction.
And what if you don't normalize it...
You cannot know in advance which MAs the Expert Advisor uses and with what periods, or whether any other indicators are used.
Try to train the model not on the MAs (X) but on raw quotes (x), for example on 100 bars (you don't know the MA periods inside the black box; you can only guess how many bars may have been used).
And Y is whatever your examiner gives you.
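A minimal sketch of that setup in Python/NumPy, assuming the black box is a simple two-MA crossover. The periods (20, 50), the 100-bar window and the random-walk quotes are all placeholders, not anything stated in the thread:

```python
import numpy as np

def black_box_signal(close, fast=20, slow=50):
    """Hidden EA logic (assumed): 1 while the fast MA is above the slow MA."""
    f = np.convolve(close, np.ones(fast) / fast, mode='valid')
    s = np.convolve(close, np.ones(slow) / slow, mode='valid')
    f = f[len(f) - len(s):]              # align both MA series to the same bars
    return (f > s).astype(float)

close = np.cumsum(np.random.randn(10_000))   # random walk stands in for quotes
y_all = black_box_signal(close)
close = close[-len(y_all):]                  # align quotes with the signals

win = 100                                    # guessed window of raw bars
X = np.stack([close[i - win:i] for i in range(win, len(close))])
y = y_all[win:]
# The model must now rediscover the crossover logic from raw bars alone.
```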
You cannot know in advance which MAs the expert uses and with what periods, or whether any other indicators are used.
Don't tell me what I can and can't do, say "I don't know how you can do it". That's more honest.
Try to train the model on raw quotes (x) instead of MAs (X)
raw isn't bad either.
not bad on raw either
Not bad on the raw ones either.
Does this really need ML?
My results. Whoever can decipher them, well done; I've forgotten what's what.
Another test example: the crossing of an MA and price. The input is the increments of the last several bars, the output is the trade direction (1 = buy, 0 = sell). Parameters of the underlying network: one Dense layer with tanh, 1 epoch, batch=32. win is the number of inputs, per is the MA period, total is the training sample size. The network is trained for a single epoch so that no sample is repeated during training. Validation is done on the training sample inverted vertically (*-1). The test runs on a separate independent sample. All sample sizes are equal to total. At per <= win the network shows high accuracy, which is what had to be proved: the network is able to find hidden patterns.
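A minimal sketch of how this setup might look in Keras, based on the description above. Only win, per, total, the single Dense(tanh) layer, 1 epoch, batch=32 and the inverted-sample validation come from the post; the data generation is my own stand-in (labels are taken as "price above its MA" and remapped to ±1 to match the tanh output):

```python
import numpy as np
from tensorflow import keras

def make_dataset(total, win, per, seed):
    """Synthetic stand-in: label +1 (buy) if the last close is above its MA."""
    rng = np.random.default_rng(seed)
    price = np.cumsum(rng.standard_normal(total + win + per))
    ma = np.convolve(price, np.ones(per) / per, mode='valid')
    X = np.empty((total, win), dtype=np.float32)
    y = np.empty(total, dtype=np.float32)
    for i in range(total):
        end = per + win + i                          # index of the "current" bar
        X[i] = np.diff(price[end - win : end + 1])   # increments of last win bars
        y[i] = 1.0 if price[end] > ma[end - per + 1] else -1.0
    return X, y

win, per, total = 100, 100, 100_000
X, y = make_dataset(total, win, per, seed=0)
X_test, y_test = make_dataset(total, win, per, seed=1)   # independent test set

model = keras.Sequential([
    keras.layers.Input(shape=(win,)),
    keras.layers.Dense(1, activation='tanh'),            # 1 Dense layer with tanh
])
model.compile(optimizer='adam', loss='mse')

# One epoch, batch=32; validation on the vertically inverted training sample.
model.fit(X, y, epochs=1, batch_size=32, validation_data=(-X, -y), verbose=0)

pred = np.sign(model.predict(X_test, verbose=0).ravel())
print('test accuracy:', (pred == y_test).mean())
```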
For small networks (<1000 neurons) computation on the CPU is faster than on the GPU. At batch=8192 the computation takes the same time on both. This test case computes in the same time with 1 and with 100 hidden neurons. On the CPU, double and single precision compute in the same time, with comparable results. Different activation types take about the same time and gave comparable results. The win size does not affect the time much. total=10^6 at batch=1 computes in 18 minutes. The relationship between batch size and time is linear.
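For reference, one way to pin the same training run to a specific device in TensorFlow and time it (a sketch, reusing the model shape from the snippet above):

```python
import time
import tensorflow as tf

def time_fit(device, X, y, batch_size):
    """Train one epoch on the given device and return the wall time."""
    with tf.device(device):
        model = tf.keras.Sequential([
            tf.keras.layers.Input(shape=(X.shape[1],)),
            tf.keras.layers.Dense(1, activation='tanh'),
        ])
        model.compile(optimizer='adam', loss='mse')
        t0 = time.time()
        model.fit(X, y, epochs=1, batch_size=batch_size, verbose=0)
    return time.time() - t0

# e.g.: time_fit('/CPU:0', X, y, 32) vs time_fit('/GPU:0', X, y, 8192)
```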
Accuracy vs. sample size. batch=1, per=100, win=100.

total | time (min.sec) | test acc. | train acc. | validation acc.
1M    | 18.49          | 99.0      | 98.7       | 99.0
100k  | 1.54           | 98.5      | 97.3       | 98.6
10k   | 0.11           | 97.8      | 88.4       | 98.1
1k    | 0.01           | 71.2      | 62.1       | 66.5
Adding noise to the input. total=10^6, batch=32, per=10, win=10.

noise fraction | test acc. | train acc. | validation acc.
0.001          | 99.8      | 98.1       | 99.8
0.01           | 99.6      | 98.2       | 99.6
0.1            | 96.8      | 96.1       | 96.8
1              | 74.9      | 74.2       | 75.1
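The post doesn't say exactly how the noise was mixed in. One plausible reading, sketched below, is additive Gaussian noise scaled to a fraction of the input's standard deviation (an assumption, not the author's code):

```python
import numpy as np

def add_noise(X, fraction, seed=0):
    """Add Gaussian noise whose std is `fraction` of the inputs' std."""
    rng = np.random.default_rng(seed)
    return X + fraction * X.std() * rng.standard_normal(X.shape)

# X_noisy = add_noise(X, 0.1)  # then train/evaluate as before
```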
Number of inputs vs. error. total=10^6, batch=32, per=100.

win | test acc. | train acc. | validation acc.
150 | 99.5      | 98.7       | 99.5
100 | 99.6      | 98.8       | 99.6
90  | 98.9      | 98.2       | 98.9
80  | 97.2      | 96.6       | 97.2
70  | 94.8      | 94.3       | 94.8
60  | 92.0      | 91.6       | 91.9
50  | 88.6      | 88.2       | 88.6
20  | 74.7      | 74.4       | 74.7
[Figure: weight graphs of the single neuron. Left: MA(100), 100 inputs; right: MA(50), 100 inputs.]