Machine learning in trading: theory, models, practice and algo-trading - page 376
I found the early-stopping training with a validation section in ALGLIB:
Something seems wrong to me, because in real trading the bars will arrive in their own order, not mixed with bars from an hour or a day ago.
And if the "nature" of the market changes, it means we have to retrain or look for new NS models.
Does your network have more than 500 connections? They write that L-BFGS is less efficient than L-M when there are few neurons.
Fewer for now, to save time: this is still the development stage. Once everything is finished, I will have to work hard searching for predictors and network layouts.
Maybe you will write an article when you have it completely figured out? :) There are no good articles on the ALGLIB neural network; there is only one translated, hard-to-understand one.
An article describing the NS (I could not even find proper help for ALGLIB) with an example of training/retraining and auto-optimization in a bot would help. I have simply noticed there is not enough material to study; people even pay for this kind of thing, so your time would not be wasted.
I took https://www.mql5.com/ru/articles/2279 as a basis. It took me about 8 hours to make it work; I think most programmers would not need more.
But it has been a week of rework since then: adding more options, tests, etc.
I don't think so - I definitely cannot find time for an article... Besides, I am only starting to understand NS myself and have nothing clever or new to say.
I am still looking towards a Bayesian classifier + genetics; the results are not bad. With networks things are still murky in my head - too many nuances.
Yes, I mean the same article; it did not seem very interesting to me, though I am more of a trader than a programmer )
Early-stop training on unshuffled data:
It feels like there was a fit to the validation section. The test section looks good, but it did not take part in training and was never compared against, so it is probably just a coincidence.
With shuffling, the error evened out on the training and validation sections. This is the same trick ensembles use: a 2/3 split with everything shuffled between both sections. I will try to do the same...
I did it:
But it got worse on the test section.
It seems wrong to me to shuffle the data and then divide it into training and validation sections, because in real trading the bars will arrive in their own order, not mixed with bars from an hour, a day or a week ago. The same goes for cross-validation algorithms, where the validation section is first at the beginning, then in the middle, then at the end.
And if the "nature" of the market changes, it means we have to retrain or look for new NS models.
But if you do not shuffle and you validate on the last section, how do you avoid fitting to that section?
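The shuffled-versus-chronological dilemma above can be sketched with a toy NumPy example. This is only an illustration under assumptions, not the ALGLIB setup: a logistic model stands in for the NS, and the drifting series, sizes and hyperparameters are all invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy drifting series: the predictor-target relationship changes over time,
# mimicking a market whose "nature" changes.
n = 600
X = rng.normal(size=(n, 4))
y = (X[:, 0] + np.linspace(0.0, 2.0, n) * X[:, 1] > 0).astype(float)

def split(shuffle):
    idx = np.arange(n)
    if shuffle:
        idx = rng.permutation(idx)            # mixes old and recent bars together
    cut = int(0.7 * n)
    return idx[:cut], idx[cut:]               # first 70% train, last 30% validation

def train_early_stopping(shuffle, epochs=200, patience=10):
    tr, va = split(shuffle)
    w = np.zeros(X.shape[1])
    best_err, best_w, bad = np.inf, w.copy(), 0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X[tr] @ w))  # logistic model stands in for the NS
        w -= 0.1 * X[tr].T @ (p - y[tr]) / len(tr)
        err = np.mean(((1.0 / (1.0 + np.exp(-X[va] @ w))) > 0.5) != y[va])
        if err < best_err:
            best_err, best_w, bad = err, w.copy(), 0
        else:
            bad += 1
            if bad >= patience:               # validation stopped improving: early stop
                break
    return best_w, best_err

for shuffle in (False, True):
    _, err = train_early_stopping(shuffle)
    print(f"shuffle={shuffle}: best validation error {err:.3f}")
```

With shuffling, the validation section contains bars from all periods, so its error tracks the training error; with a chronological split, the validation section is the most recent (drifted) data, which is closer to what live trading will see.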
So we end up with 4 sections? Training/validation/test1/test2?
How many training/validation cycles should be done? I have not seen any information about that anywhere... One cycle in total, after which we either approve the model or change something in the predictor set or the network layout? More precisely, out of N training cycles we will be shown the single best one.
Test section 2 is the verdict: if it does not match, we start all over again, preferably with a new set of predictors.
PS.
By the way, there is also the tester - the final verdict on the trading system.
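The "N cycles, keep the best" procedure described above might be sketched like this. Everything here is a hedged toy example: the logistic model, the section sizes and the restart count are assumptions, not the actual workflow being discussed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical chronological split into four sections: train / valid / test1 / test2.
n = 800
X = rng.normal(size=(n, 3))
y = (X[:, 0] - X[:, 2] > 0).astype(float)
tr, va, t1, t2 = np.split(np.arange(n), [400, 550, 700])

def fit_once(seed):
    """One training 'cycle': random init, then simple gradient steps on the train section."""
    r = np.random.default_rng(seed)
    w = r.normal(scale=0.1, size=X.shape[1])
    for _ in range(300):
        p = 1.0 / (1.0 + np.exp(-X[tr] @ w))
        w -= 0.1 * X[tr].T @ (p - y[tr]) / len(tr)
    return w

def err(w, idx):
    return np.mean(((1.0 / (1.0 + np.exp(-X[idx] @ w))) > 0.5) != y[idx])

# N restarts; keep only the one with the best validation error.
best = min((fit_once(s) for s in range(5)), key=lambda w: err(w, va))

# test1/test2 never influenced the choice; test2 delivers the final verdict.
print("valid:", err(best, va), "test1:", err(best, t1), "test2:", err(best, t2))
```

Because test2 plays no part in either training or model selection, a large gap between the validation error and the test2 error is the signal to start over with a new predictor set.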
I still do not understand the situation with shuffling:
1. My understanding is that you are not training anything at all - it is just a random result on predictors that have nothing to do with the target variable.
2. Shuffling.
I do not know NS.
But in many other MO algorithms, learning is done row by row: ONE value of each predictor is taken and the target variable is matched to it, so shuffling is irrelevant. There are, however, MO algorithms that take neighbouring rows into account.
In any case our points of view coincide, and initially I always test on test2 without shuffling.
PS.
Once again: if the error on two different samples differs as much as yours, the system is hopeless and fit only to be thrown away.
Wandering around the bottomless cesspool called the Internet, I came across this paper.
Artificial Neural Networks architectures for stock price prediction: comparisons and applications
In other words: NS architectures for stock price prediction - a comparison and applications.
The situation with shuffling remains unclear:
After splitting into train/test/valid, shuffle only train. Do not shuffle the other sets.
This is valid for classification with neural networks. Moreover, when training deep neural networks, reshuffle before each minibatch is fed to the network.
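A minimal sketch of that advice, assuming arbitrary illustrative sizes: split chronologically first, shuffle only the training indices, and reshuffle per epoch when drawing minibatches.

```python
import numpy as np

rng = np.random.default_rng(42)

# Chronological split first; order is preserved for valid and test.
n = 1000
idx = np.arange(n)
train, valid, test = idx[:700], idx[700:850], idx[850:]
train = rng.permutation(train)          # shuffle ONLY the training rows

def minibatches(indices, batch_size=32):
    """Fresh shuffle per epoch, then yield minibatch index slices."""
    order = rng.permutation(indices)
    for start in range(0, len(order), batch_size):
        yield order[start:start + batch_size]

for epoch in range(2):
    for batch in minibatches(train):
        pass                            # feed X[batch], y[batch] to the network here
```

This keeps the validation and test sections as untouched chronological blocks (matching live trading), while the per-epoch shuffling gives the gradient steps the decorrelated minibatches that deep-network training expects.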
Good luck