Machine learning in trading: theory, models, practice and algo-trading - page 3475

 

The probability of building the model described here, tracked iteration by iteration, is 7.7% at the end point.

However, I have doubts that this is the correct way to calculate it: I multiplied the probability of choosing the correct answer at the current iteration by the accumulated probability from the previous iterations. What do those knowledgeable in probability calculation have to say?

I think this parameter should be maximised to find the optimal method of selecting quantum segments/leaves, and eventually the method of building the model.
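The multiplication described above can be written as a running product; a minimal sketch, assuming the per-iteration probabilities are known and the choices are independent (the numbers are illustrative, not from the post):

```python
# Hypothetical sketch: probability that every choice of quantum segment/leaf
# up to step i was correct, computed as a running product of per-step
# probabilities. Valid only if the choices are independent.

def cumulative_success_probability(p_correct):
    """Return the running product P(all choices up to step i are correct)."""
    probs = []
    acc = 1.0
    for p in p_correct:
        acc *= p          # multiply current-step probability into the accumulator
        probs.append(acc)
    return probs

per_step = [0.9, 0.8, 0.75, 0.7, 0.6]   # illustrative per-iteration probabilities
curve = cumulative_success_probability(per_step)
# The end point of `curve` is the probability that *every* choice was correct.
```

Note that the product assumes independence between iterations; if later choices depend on earlier ones, conditional probabilities would be needed instead.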

 
cemal #:

Sounds quite interesting: CRPS-based online learning for nonlinear combination of probabilistic forecasts https://www.sciencedirect.com/science/article/abs/pii/S0169207023001371

https://cran.rstudio.com/web/packages/profoc/index.html https://profoc.berrisch.biz/

It's not really clear how it can be applied... As experts over the model's features?

How did you apply it?
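For reference, the CRPS that the linked paper optimises can be computed for an ensemble forecast with the standard empirical formula CRPS = E|X - y| - 0.5 * E|X - X'|; a hedged numpy sketch (the ensemble values are illustrative, not from the paper):

```python
import numpy as np

def crps_ensemble(members, y):
    """Empirical CRPS of an ensemble forecast: E|X - y| - 0.5 * E|X - X'|."""
    members = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(members - y))                             # fit to the outcome
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))  # ensemble spread
    return float(term1 - term2)

# Lower is better; a degenerate ensemble exactly at the outcome scores 0.
score = crps_ensemble([0.0, 1.0], 0.5)
```

One could score each model's probabilistic output this way before weighting them in a combination, which is roughly the role the paper's online learner plays; how to map that onto model features here remains the open question above.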

 
Aleksey Vyazmikin #:
The probability of building the model described here, tracked iteration by iteration, is 7.7% at the end point.

I calculated the wrong model by mistake; here is the corrected graph, with a final probability of 2.1%.


 

I think I asked this before, but I don't remember. I get pictures like this.

On the abscissa is the number of trades (no more than one open at a time). The Sample period is four times longer in time than OOS, and the number of trades is eight times greater than on OOS.

Is this skewness an indication that the pattern is the result of luck?

Or does it indicate that there are at least twice as many random (fitting) trades on Sample?
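The arithmetic behind that reading can be made explicit: 4x the time but 8x the trades means the in-sample trade rate is double the OOS rate, so up to half the in-sample trades have no OOS counterpart. A toy calculation with illustrative numbers:

```python
# Back-of-envelope check of the skew described above. All numbers are
# illustrative; only the 4x-time / 8x-trades ratios come from the post.

oos_days, oos_trades = 250, 100          # hypothetical OOS statistics
is_days = 4 * oos_days                   # Sample is 4x longer in time
is_trades = 8 * oos_trades               # ...but has 8x the trades

is_rate = is_trades / is_days            # trades per day, in-sample
oos_rate = oos_trades / oos_days         # trades per day, out-of-sample
ratio = is_rate / oos_rate               # 2.0: Sample trades twice as often

# If the OOS rate is taken as the "real" rate, the excess half of the
# in-sample trades is the candidate pool of fitted (random) trades.
excess_trades = is_trades - is_days * oos_rate
```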
 
fxsaber #:
On the abscissa is the number of trades (no more than one open at a time). The Sample period is four times longer in time than OOS, and the number of trades is eight times greater than on OOS.

Is this skewness an indication that the pattern is the result of luck?

Or does it indicate that there are at least twice as many random (fitting) trades on Sample?
Statistically speaking, IS and OOS are from similar but different distributions; stationarity does not hold. This can be either because the distributions really differ or because of over-optimisation (a false maximum was found).
In machine learning this is called bias. To determine what causes it, additional analysis is needed.
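One hedged way to probe whether the IS and OOS trades come from the same distribution is a two-sample Kolmogorov–Smirnov test on the per-trade returns; a minimal numpy sketch (the synthetic arrays stand in for real trade P/L):

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample KS statistic: maximum gap between the empirical CDFs."""
    a = np.sort(np.asarray(a, dtype=float))
    b = np.sort(np.asarray(b, dtype=float))
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))

# Illustrative use with synthetic data standing in for per-trade returns.
rng = np.random.default_rng(0)
is_returns = rng.normal(0.2, 1.0, size=800)    # in-sample trade P/L
oos_returns = rng.normal(0.0, 1.0, size=100)   # out-of-sample trade P/L

d = ks_statistic(is_returns, oos_returns)
# Asymptotic 5% critical value for rejecting "same distribution":
n, m = len(is_returns), len(oos_returns)
d_crit = 1.358 * np.sqrt((n + m) / (n * m))
# d > d_crit suggests the samples differ: either a real regime change
# or over-optimisation on the in-sample period.
```

The test alone cannot say *why* the distributions differ, which matches the point above: distinguishing a regime change from over-optimisation needs additional analysis.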
 
fxsaber #:
On the abscissa is the number of trades (no more than one open at a time). The Sample period is four times longer in time than OOS, and the number of trades is eight times greater than on OOS.

Is this skewness an indication that the pattern is the result of luck?

Or does it indicate that there are at least twice as many random (fitting) trades on Sample?
Given that the average profit is less than the average loss, that's a good sign.

If you tested it on different periods and on different pairs, and then counted how many similar pictures came out, that would be interesting.
 
fxsaber #:
On the abscissa is the number of trades (no more than one open at a time). The Sample period is four times longer in time than OOS, and the number of trades is eight times greater than on OOS.

Is this skewness an indication that the pattern is the result of luck?

Or does it indicate that there are at least twice as many random (fitting) trades on Sample?

Falling Recall is a common occurrence. There are likely a lot of sparse rules in the model, if a tree-based approach with many rules is used rather than a neural network.

I can't see the number of trades, but visually the two plots are comparable, given that OOS is a quarter of the length.

What percentage of the selected variants in the training (optimisation) period pass the independent testing section?
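A sketch of the metric being asked about, under the assumption that "passing" means clearing some acceptance criterion on the independent test section (all names and numbers here are hypothetical):

```python
# Hypothetical sketch: of the variants selected on the training
# (optimisation) period, what share also passes an acceptance criterion
# on an independent testing section?

def pass_rate(selected_variants, passes_test):
    """Percentage of selected variants that also pass the independent test.

    selected_variants: iterable of variant ids chosen on the training data.
    passes_test: function variant -> bool, the OOS acceptance criterion.
    """
    selected = list(selected_variants)
    if not selected:
        return 0.0
    passed = sum(1 for v in selected if passes_test(v))
    return 100.0 * passed / len(selected)

# Illustrative use: 3 of 4 selected variants show positive OOS profit.
oos_profit = {"v1": 120.0, "v2": -15.0, "v3": 40.0, "v4": 5.0}
rate = pass_rate(oos_profit, lambda v: oos_profit[v] > 0)
```

A high pass rate would suggest the selection method generalises; a low one, that the training-period selection mostly picks up noise.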

 
Ivan Butko #:
If you tested on different periods and on different pairs, and then counted how many similar pictures came out, that would be interesting.

Different symbols, different pictures.

 
Aleksey Vyazmikin #:

What percentage of the selected options in the training (optimisation) period pass the independent testing section?

I will be able to answer if you explain the terminology.

 

You can evaluate the result of your optimisation in a simple way.

  • Optimise the TS on different periods (take the best ones according to your subjective criteria).
  • Average the parameters; the TS will become unbiased.
  • Run on the whole dataset + new data.
This has come up before.
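The steps above can be sketched as follows, assuming the TS parameters are numeric and comparable across periods (the parameter names are hypothetical):

```python
# Sketch of the procedure: optimise per period, keep the best parameter
# set from each, then average element-wise before the final full-data run.

def average_parameters(best_per_period):
    """Element-wise mean of the best parameter sets from each period."""
    n = len(best_per_period)
    keys = best_per_period[0].keys()
    return {k: sum(p[k] for p in best_per_period) / n for k in keys}

best = [
    {"period_fast": 10, "period_slow": 50},   # best on period 1
    {"period_fast": 14, "period_slow": 60},   # best on period 2
    {"period_fast": 12, "period_slow": 40},   # best on period 3
]
averaged = average_parameters(best)
# `averaged` would then be run once on the whole dataset plus new data.
```

Averaging only makes sense when the per-period optima cluster around one region of the parameter space; if they sit in different local maxima, the mean can land in a region none of them tested.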


P.S. There will be only one optimal TS (in terms of parameters); there will be nothing to choose from and no need to puzzle other people's brains over it.
