Machine learning in trading: theory, models, practice and algo-trading - page 2706
You can't compare methods of feature generation, as I haven't created a system in code yet. You can compare your system with my set of predictors and the system/method of their selection.
Anyone can get data from the historical interval of the MQL server; what you want is a continuous history. The final sample used for training, however, will contain an order of magnitude fewer rows of examples, but with additional predictors.
The Expert Advisor I propose to use will save the predictor values, and at the end of the CSV file there will be columns with the financial result and the target. The trigger time of the "initial rule"/activation function can be taken from there, so there is no need to reproduce the algorithm in R.
I suggest this time split: 2010 to 2020 for training, and the rest of the time for checking results outside the training interval.
When you create your predictors, you can save the result to CSV, and I will do the same. You can either merge the columns and train on different ranges, or keep them separate; the merge is needed to verify that the data are correctly synchronised.
I can send just the markup if you don't want to get into any of this.
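The merge-and-check workflow described above can be sketched roughly like this (Python/pandas; the file contents, column names, and dates here are made-up placeholders, and in practice both tables would come from pd.read_csv):

```python
import pandas as pd

# Synthetic stand-ins for the two CSV files (in practice: pd.read_csv(...)).
# Each side has a 'time' column plus its own predictor columns; one side
# also carries the target and financial result.
ea = pd.DataFrame({
    "time": pd.date_range("2019-12-30", periods=6, freq="D"),
    "pred_ea": [0.1, 0.2, 0.3, 0.4, 0.5, 0.6],
    "target": [0, 1, 0, 1, 1, 0],
})
mine = pd.DataFrame({
    "time": pd.date_range("2019-12-31", periods=6, freq="D"),  # shifted by a day
    "pred_mine": [1.0, 1.1, 1.2, 1.3, 1.4, 1.5],
})

# Inner join on the timestamp: only synchronised rows survive, so a large
# drop in row count immediately flags a synchronisation problem.
merged = ea.merge(mine, on="time", how="inner")
print(len(ea), len(mine), len(merged))  # the overlap is smaller than either side

# Split by date: up to the end of 2020 for training, the rest out-of-sample.
train = merged[merged["time"] < "2021-01-01"]
test = merged[merged["time"] >= "2021-01-01"]
```

Comparing `len(merged)` against the lengths of the two inputs is the cheapest possible synchronisation check; a timestamp offset or missing bars shows up as lost rows before any training is done.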
Let's get back to the logic.
There can be many different features, and they may or may not be informative; it depends on their relationship to the target.
What is the difference between a relationship and a fit? The degree of informative dependence, expressed through some measure.
The lower the information dependence of the labels on each individual feature, the more features are required for training.
Increasing the number of features leads to overfitting, because the system ends up with too many free parameters.
What is the only correct approach in this case? Minimising the number of features while increasing their informative relationship to the target.
That's why it is necessary to search over not only the features, but also the targets, according to some information criterion linking the two.
If someone works in this direction, I will help with the code.
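One way to make such an information criterion concrete is mutual information between each candidate feature and the labels. Below is a minimal sketch with a plug-in estimator for discrete features; the feature names and the synthetic data are illustrations only, not the author's actual predictors:

```python
import numpy as np

def mutual_information(x, y):
    """Empirical mutual information (in nats) between two discrete arrays."""
    x, y = np.asarray(x), np.asarray(y)
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))   # joint frequency
            px = np.mean(x == xv)                  # marginal frequencies
            py = np.mean(y == yv)
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * py))
    return mi

rng = np.random.default_rng(0)
target = rng.integers(0, 2, 2000)
features = {
    "informative": target ^ (rng.random(2000) < 0.1),  # target with 10% label noise
    "noise": rng.integers(0, 2, 2000),                 # independent of the target
}

# Rank features by their informative dependence on the labels; keeping only
# the top of this ranking is the "minimise features" step from the post above.
ranking = sorted(features,
                 key=lambda k: mutual_information(features[k], target),
                 reverse=True)
print(ranking)  # the informative feature ranks first
```

The same ranking can be recomputed for different candidate targets, which is exactly the "search over both features and targets" idea: pick the feature/target pairing that maximises the measured dependence.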
Of course I agree with the logic; that's why I suggested earlier that we identify predictors at random and then use them for markup.
For me, these points with predictive ability are Events, which I am generally thinking of training on separately, or of splitting off leaves from them, and then carrying out some cumulative training procedure.
Such an Event can be considered as a separate trading system and the behaviour/efficiency of these systems can be analysed.
For me, the open problem now on a netting account is independent accounting of these Events, i.e. virtual position tracking that would keep working correctly on a real account despite connection losses and other such delights.
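A toy sketch of what such independent virtual accounting could look like (in Python rather than MQL5, with made-up class and method names): each Event keeps its own virtual position and entry price, while the broker's netting account only ever sees the aggregate volume.

```python
from collections import defaultdict

class VirtualBook:
    """Track each 'Event' (subsystem) as its own virtual position, even
    though the netting account only holds the net of all of them."""

    def __init__(self):
        self.positions = defaultdict(float)  # event id -> signed volume
        self.entry = {}                      # event id -> entry price

    def open(self, event, volume, price):
        self.positions[event] += volume
        self.entry[event] = price

    def close(self, event, price):
        # Realise the PnL of one subsystem independently of the others.
        vol = self.positions.pop(event, 0.0)
        return vol * (price - self.entry.pop(event, price))

    def net_position(self):
        # What the real netting account actually holds.
        return sum(self.positions.values())

book = VirtualBook()
book.open("event_A", +1.0, 100.0)    # one subsystem goes long
book.open("event_B", -1.0, 100.0)    # another goes short
print(book.net_position())           # net 0.0 on the real account
print(book.close("event_A", 105.0))  # +5.0 attributed to subsystem A alone
```

Because each Event's result is booked separately, the behaviour and efficiency of every such "separate trading system" can be analysed on its own, even when the net account position is flat. Surviving disconnects would additionally require persisting this book to disk and reconciling it against the real account on restart, which the sketch does not attempt.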
The feature generation methods cannot be compared as I have not yet created the system in code.
So compare human and machine predictor generation methods :)
What are you doing in this whole business, then?
Just to clarify: specifically my generated predictors.
You really think there's any value in that?
Of course there is. You can see what kind of gain your method gives you. Maybe it is so insignificant that there is no point in implementing it, or vice versa.
First of all, you need to remove the unnecessary words and terms that get in the way of thinking; otherwise cooperation is simply impossible. There are general approaches to feature selection, but they need to be adapted to time series and to the specifics of trading. That is, take a ready-made approach and figure out how to apply it better when marking up a chart. All the tools are available.
New terms are introduced just to organise the accumulated information and to be able to generalise it and move on.
As for me coming up with various things: yes, some of it works and some of it doesn't. You won't deny that you repeated in your article one of the ideas I stated here, will you?
A couple of years ago you shouted that I was a fool for picking out leaves, and now mytarmailS is doing the same thing.
I may take something from the publicly shared reasoning in this thread, but I don't speak disparagingly of it until I understand or verify it.