Machine learning in trading: theory, models, practice and algo-trading - page 493
Walk-forward testing is necessary; you cannot optimize like that. The forward will always be bad (or random) in this case, depending on which phase of the market you land in. I already have a bunch of versions of such systems that are billionaires on the backtest but work like a coin flip on the forward. This is called overfitting.
I've got a dozen optimizations with monthly shifts; in each month the best input parameters differ from the parameters of the other months. Which of them should I choose to trade with?
Is there an algorithm for selecting system parameters when walking forward?
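For illustration, here is a minimal sketch (Python; `backtest`, the monthly data chunks and the parameter grid are hypothetical placeholders, not anything posted in this thread) of the usual walk-forward loop the question is about: optimize on an in-sample window, apply the winner to the next out-of-sample month, slide the window forward, and judge only the stitched-together out-of-sample results.

```python
def walk_forward(monthly_data, param_grid, backtest, train_months=6, test_months=1):
    """Rolling walk-forward sketch.

    monthly_data : list of per-month data chunks
    param_grid   : list of candidate parameter sets
    backtest     : placeholder; backtest(chunks, params) -> fitness number
    """
    oos = []                                             # out-of-sample results
    last_start = len(monthly_data) - train_months - test_months
    for start in range(0, last_start + 1, test_months):
        train = monthly_data[start:start + train_months]
        test = monthly_data[start + train_months:start + train_months + test_months]
        # in-sample optimization: plain grid search over the candidates
        best = max(param_grid, key=lambda p: backtest(train, p))
        # the chosen parameters are then evaluated only on data they never saw
        oos.append((best, backtest(test, best)))
    return oos   # judge the stitched-together OOS results, not any single month
```

On this reading, the answer to "which month's parameters to choose" is none of them in particular: what gets evaluated is the re-optimization procedure itself, through the concatenated out-of-sample performance.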
I did not express myself correctly; I meant "something like it", i.e. a self-optimizing system with some optimization criterion, and the same NN can be used as the optimizer.
Speaking of optimization and learning: it takes me 23 hours, not counting intermediate manipulations. After each pass (which is several epochs) I change the training sample. No, I don't shuffle it, I replace it, i.e. I don't show the same examples again. There is no repeated sampling during training.
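As an aside, the "replace the sample, don't shuffle it" schedule described above can be sketched roughly like this (Python; `train_epochs` is a hypothetical placeholder for the actual training call, and drawing fresh samples as disjoint chunks is only an assumption):

```python
def disjoint_chunks(dataset, chunk_size):
    """Yield non-overlapping slices, so no example is ever shown twice."""
    for start in range(0, len(dataset) - chunk_size + 1, chunk_size):
        yield dataset[start:start + chunk_size]

def train_in_passes(model, dataset, train_epochs, chunk_size=10_000, epochs_per_pass=5):
    """After each pass (a few epochs) the training sample is *replaced*,
    not reshuffled: every pass sees a brand-new chunk of data."""
    for sample in disjoint_chunks(dataset, chunk_size):
        train_epochs(model, sample, epochs=epochs_per_pass)   # placeholder call
    return model
```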
And what exactly is the optimization algorithm? Look for the same thing, but with the L-BFGS algorithm: it will be many times faster, and your NN will train, say, 100 times faster, e.g. not 23 hours but 10 minutes (like all normal people :)), if what you have now is simple gradient descent with a fixed step.
Here is a comparison:
http://docplayer.ru/42578435-Issledovanie-algoritmov-obucheniya-iskusstvennoy-neyronnoy-seti-dlya-zadach-klassifikacii.html
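To make the comparison concrete, here is a toy sketch (Python with NumPy/SciPy, assuming they are available; the ill-conditioned quadratic merely stands in for a network's loss surface) of fixed-step gradient descent against L-BFGS on the same objective:

```python
import numpy as np
from scipy.optimize import minimize

# Toy objective: a badly scaled quadratic, standing in for a network's loss.
A = np.diag(np.linspace(1.0, 100.0, 20))
def f(w):      return 0.5 * w @ A @ w
def grad_f(w): return A @ w

w0 = np.ones(20)

# Fixed-step gradient descent (what the slow training loop amounts to).
w, steps = w0.copy(), 0
lr = 1.0 / 100.0                        # step must stay below 2/L, L = largest eigenvalue
while np.linalg.norm(grad_f(w)) > 1e-6 and steps < 100_000:
    w -= lr * grad_f(w)
    steps += 1
print("gradient descent iterations:", steps)

# Quasi-Newton L-BFGS on the same problem.
res = minimize(f, w0, jac=grad_f, method="L-BFGS-B", tol=1e-12)
print("L-BFGS iterations:", res.nit)
```

On a problem like this the quasi-Newton method typically needs orders of magnitude fewer iterations, which is the point being made about the 23-hour training run.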
Thanks, I'll read it.
It's more like learning than optimizing, and it's not simple. As I already wrote: standard BP with simulated annealing done manually.
Perhaps some algorithms are better, but I only use what is in the development environment; external ones are problematic.
In general, speed is not critical: if I retrain once every 3 months, 23 hours is no big deal. And over a 3-month test no deterioration was noticed; it probably keeps working even longer.
Whatever you call it, training is the optimization of the target function.
Right, you wrote about annealing; I'm not familiar with it, I'll read up on it.
Yes, annealing is imitated manually by changing the learning parameters after every N epochs. Besides, the training sequence is completely replaced (not mixed, exactly replaced).
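A minimal sketch of what that manual "annealing" could look like (Python; `train_one_epoch` and `draw_fresh_sample` are hypothetical placeholders, and the step-wise learning-rate cut is only one way to read "changing learning parameters after N epochs"):

```python
def annealed_backprop(model, draw_fresh_sample, train_one_epoch,
                      passes=20, epochs_per_pass=5, lr_start=0.1, lr_decay=0.7):
    """Standard BP with a hand-made 'annealing' schedule: after every
    `epochs_per_pass` epochs the learning rate is cooled down and the
    training sequence is completely replaced (not reshuffled)."""
    lr = lr_start
    for p in range(passes):
        sample = draw_fresh_sample(p)          # brand-new training sequence
        for _ in range(epochs_per_pass):
            train_one_epoch(model, sample, lr)
        lr *= lr_decay                         # lower the 'temperature'
    return model
```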
That's cool. Where can I read more about this kind of NN? I.e. it's like learning without a teacher, but you still feed something to the output?
Read the theory: Haykin's "Neural Networks", and Bishop in English (there is no translation, but one seems to be in preparation).
It's very simple: the input is random trades, and the output is the result. This is called the Monte Carlo method, and it is not very fast. And the systematization is the NN's task.
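For what it's worth, "random trades in, result out" can be read as a plain Monte Carlo experiment like the following sketch (Python; the synthetic random-walk prices and the trade rules are assumptions for illustration only):

```python
import random

def random_trade_result(prices, hold_min=1, hold_max=50):
    """One Monte Carlo trial: enter at a random bar, in a random direction,
    hold for a random number of bars, and return the signed profit."""
    entry = random.randrange(0, len(prices) - hold_max - 1)
    hold = random.randint(hold_min, hold_max)
    direction = random.choice((1, -1))               # long or short
    return direction * (prices[entry + hold] - prices[entry])

def monte_carlo(prices, n_trials=100_000):
    """Distribution of outcomes of purely random trades on `prices`."""
    results = [random_trade_result(prices) for _ in range(n_trials)]
    return sum(results) / n_trials, results

if __name__ == "__main__":
    # Synthetic random-walk 'prices', just for illustration.
    prices, p = [], 100.0
    for _ in range(10_000):
        p += random.gauss(0, 0.1)
        prices.append(p)
    mean, _ = monte_carlo(prices)
    print("average result of a random trade:", mean)
```

The slowness is simply that many trials are needed before the distribution of outcomes stabilizes; systematizing that distribution is then left to the network.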