Machine learning in trading: theory, models, practice and algo-trading - page 2750
Well, it makes some sense, because models often don't live long. But I would like to find options that work without constant retraining, at least over a horizon of a year or more, with slow degradation of the model that is easy to track.
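One way to make "slow degradation that is easy to track" operational is a rolling out-of-sample metric with an alarm threshold. A minimal sketch, assuming direction-of-move predictions; the window size and the 0.5 floor are illustrative choices, not anything from this thread:

```python
import numpy as np

def rolling_hit_rate(y_true, y_pred, window=50):
    """Share of correctly predicted signs over a rolling window of trades."""
    hits = (np.sign(y_true) == np.sign(y_pred)).astype(float)
    return np.convolve(hits, np.ones(window) / window, mode="valid")

def degradation_alarm(y_true, y_pred, window=50, floor=0.5):
    """Indices where the rolling hit rate drops below `floor` (coin-toss level)."""
    curve = rolling_hit_rate(np.asarray(y_true), np.asarray(y_pred), window)
    return np.where(curve < floor)[0]
```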
I can't agree with this.
The market changes, and the time intervals of those changes are different and independent of each other.
I used to write Expert Advisors that lived from 3 to 6 months. I optimised them at weekends. Then they died - quickly enough to drain the deposit, but with no time left for re-optimisation. In the end the situation became even worse: after a while it turned out there was a boundary beyond which no parameters could be selected at all.
There are also longer periods of market change: 5-7 years. But the result is the same as for the monthly periods - the bot dies for good. I will send a specific bot from the Market in a private message; posting it here is not allowed.
So this whole "out of sample" idea is rubbish. A bot still has a lifetime, and we don't know how long it is: 3 months or 7 years. When the bot dies, we mistake it for just another drawdown and lose the deposit.
Ideally, the model should be retrained on every new candle: on the next tick if we work on ticks, on the arrival of the next hour if we work on H1.
I don't argue with training on every bar, and maybe even on every (non-stationary) tick. But I don't fully understand the training structure. Is the EA logic a separate training, or part of the training on each bar? Is it like tails of the first training - and if so, how many tails, or stages of training, are there?
On each bar everything is new
There are also tree-based models for causal inference; I haven't had time to figure them out yet.
From practice, there are many interpretations:
- the environment affects the subject (everything correlates under laboratory conditions, but in natural conditions other unaccounted dependencies appear - the most trivial being the human factor or the crowding effect); RL suits this better than plain ML, but it still has to be modelled, and not everything can be taken into account...
- with two correlated values you can only draw an inference in one direction - which depends on which (result on factor, or factor on result), not both at once;
- mediation, moderation, interaction - dependencies interfering in the process (often ones that cannot even be traced experimentally);
- in general, it is important to plan the experiment (it is useful to be able to draw dependency graphs - logical, theoretical ones) so that its results can be processed by ML, or by something even simpler...
i.e. decide in what sequence and which factors to fix in order to obtain the conditional distribution of the investigated factor (or the joint influence of two investigated factors), compare the result with the unconditional distribution, put forward a hypothesis ("better - not better", "influences - does not influence"), confirm or refute it statistically, and transfer it to tests in field conditions... and you get a new causal inference. A sketch of such a conditional-vs-unconditional comparison is given below.
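To make that procedure concrete, a minimal sketch on synthetic data; the factor names, effect sizes and the choice of a t-test are all illustrative assumptions, not anything from the post:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic data: result y driven by factor_a plus an interfering factor_b.
factor_a = rng.normal(size=2000)
factor_b = rng.normal(size=2000)
y = 0.7 * factor_a + 0.3 * factor_b + rng.normal(scale=0.5, size=2000)

# "Fix" factor_a near one level: the conditional distribution y | factor_a ~ 1.
fixed = np.abs(factor_a - 1.0) < 0.2
# The remaining observations stand in for the unconditional background.
t_stat, p_value = stats.ttest_ind(y[fixed], y[~fixed], equal_var=False)

print(f"t = {t_stat:.2f}, p = {p_value:.4g}")
# A small p-value supports "influences"; a large one supports "does not influence".
```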
And in ML with RF, I don't know how they do this by processing correlation matrices (point 2 in particular is questionable).
Many people criticise probabilistic models precisely because of point 2 and start extolling causal inference, claiming it accounts for the influence of other factors... but how that question (inference is just another word for reasoning) is actually solved algorithmically is unknown, at least to me - I would say it isn't.
For me, causal inference is essentially reasoning, while the study of mediation, moderation and interaction is a separate big topic and a matter of taste (i.e. which logically constructed graph to sketch) - also a kind of experiment design.
Having just a single time series from the market, you can't test a hypothesis about dependencies at all... whereas with a sensibly designed experiment one OLS or ANOVA would be enough (and you certainly wouldn't have to mine for features).
So I don't know of any algorithm for attributing a feature to a cause or to an effect (whether by tree models or anything else), apart from logic and theoretical knowledge... but nowadays all sorts of things are advertised under new words - I don't know in what context you encountered causal inference.
I saw some Uber library that claimed they had improved their processes with it.
And the general interpretation is that correlation != causation, with attempts to deal with it in different ways, starting with A/B tests - but I don't know much about that.
Their definitions are strange - you can't make sense of them without a bottle, and you have to stuff your head with unnecessary words.
By the way, I wonder how other models would cope - whether they could recreate the mach() function without error.
I briefly trained different models without any hyperparameter tuning.
Conclusion: the models cannot create the function, only approximate it with some accuracy, so feature creation and feature selection remain relevant (a small illustration of this approximation gap is sketched below).
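To illustrate the "approximate, not create" point, here is a minimal sketch; max(x1, x2) is used as a hypothetical stand-in for the exact function under discussion, and all model settings are assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Target: an exact deterministic function (max of two inputs as a stand-in).
X_train = rng.uniform(-1, 1, size=(5000, 2))
y_train = X_train.max(axis=1)
X_test = rng.uniform(-1, 1, size=(1000, 2))
y_test = X_test.max(axis=1)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
err = mean_absolute_error(y_test, model.predict(X_test))
print(f"MAE on held-out data: {err:.4f}")  # nonzero: approximated, not recreated

# Hand the model the answer as an engineered feature and the error collapses -
# which is exactly why feature creation stays relevant.
f_train = np.column_stack([X_train, X_train.max(axis=1)])
f_test = np.column_stack([X_test, X_test.max(axis=1)])
model2 = RandomForestRegressor(n_estimators=200, random_state=0).fit(f_train, y_train)
print(f"MAE with the engineered feature: "
      f"{mean_absolute_error(y_test, model2.predict(f_test)):.4f}")
```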
This is the error on the next 300 bars. On each bar, predictors were formed and then filtered, the model was retrained, and the next bar was predicted; a walk-forward sketch of this procedure follows below.
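The per-bar retraining loop described here can be sketched roughly as follows; the sliding window length, the SelectKBest filter and the random forest are my assumptions, not the poster's actual setup:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif

def walk_forward(features, labels, train_window=500, horizon=300, k_best=10):
    """Retrain on every bar over a sliding window and predict the next bar.

    features: (n_bars, n_features) array with n_features >= k_best;
    labels: (n_bars,) array of next-bar class targets.
    """
    preds, truth = [], []
    for t in range(train_window, train_window + horizon):
        X_tr, y_tr = features[t - train_window:t], labels[t - train_window:t]
        selector = SelectKBest(f_classif, k=k_best).fit(X_tr, y_tr)  # filter predictors
        model = RandomForestClassifier(n_estimators=100).fit(
            selector.transform(X_tr), y_tr)                          # retrain on this bar
        preds.append(model.predict(selector.transform(features[[t]]))[0])
        truth.append(labels[t])
    return np.mean(np.array(preds) == np.array(truth))               # hit rate over the horizon
```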
I will try to do something similar in the evening, but I have built quite a few such retraining bots, and I can hardly believe they would give such a score...
More likely there is confusion in the concepts/understanding of what a test sample is, and because of that we talk about different things while calling them the same.
The task queue has cleared a bit, so it became possible to run the script. I run it and get an error.
Do I understand correctly that the program wants the old version, R 4.0?
Well, I searched for the old version and didn't find it. Such terrible incompatibility is off-putting, of course.
Wrong. If a package was built for a different version, there will only be a warning. What incompatibility are we talking about?
randomForest v.4.7-1.1 hasn't gone anywhere and is still on CRAN. R 4.1.3.
Please note that starting from build 3440 we are distributing AVX versions of the software: https://www.mql5.com/ru/forum/432624/page5#comment_42117241.
The next step is to rewrite the mathematical apparatus to vector and OpenCL functions, which gives tenfold accelerations without the need to install additional libraries like CUDA.
This is a really big step forward. Will indicators and Expert Advisors need to be rewritten?
Have you compared your algorithm with KNN (or some modification of it)? It would be interesting to see how significant the gain is.
KNN is not the same thing at all.
I am interested in "predictive ability", not classification - let alone unsupervised methods, which are useless in our business.