Bayesian regression - Has anyone made an EA using this algorithm?

 
Vizard_:
That's how, little by little, we came to the fascinating subject of transformations)))) because if there is no normal distribution, you can make one.
It will take a while, though, because you also need the retransformation... and Box-Cox doesn't much like that)))) It's just a pity that if you don't have proper predictors, it won't have much effect on the end result...

First I would like to see a glimmer of understanding in the eyes of the 'faithful'. And then, yes, transform if necessary. Whether fat tails can be transformed is the question. They can have a big impact on quality.
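
A minimal sketch in R of the Box-Cox transform and its retransformation discussed above, assuming the forecast package; the exponential sample is only a stand-in for a skewed, non-normal series.

library(forecast)

set.seed(42)
x <- rexp(1000)                # skewed, clearly non-normal data
lambda <- BoxCox.lambda(x)     # estimate the transform parameter
z <- BoxCox(x, lambda)         # transformed series, closer to normal
x_back <- InvBoxCox(z, lambda) # retransform back to the original scale
all.equal(x, x_back)           # TRUE: the inverse recovers the data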

 
Alexey Burnakov:

First I would like to see a glimmer of understanding in the eyes of the 'faithful'. And then, yes, transform if necessary. Whether fat tails can be transformed is the question. They can have a big impact on quality.

There are regressions for fat tails; from memory, FARIMA.
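
A minimal sketch of fitting a fractionally integrated (FARIMA/ARFIMA) model, assuming the fracdiff package; the simulated long-memory series is only an illustration, not market data.

library(fracdiff)

set.seed(1)
x <- fracdiff.sim(2000, d = 0.3)$series # simulate a long-memory series
fit <- fracdiff(x)                      # estimate the fractional order d
summary(fit)                            # the d estimate should be near 0.3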

But back to the magnitude of the increment.

What are we trading? An increment of 7 pips on the hourly bar relative to the previous bar? I don't understand this very well. Could someone enlighten me?

You can trade the increment, or more precisely the volatility, but only relative to some stationary series - that is called cointegration.
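
A minimal sketch of an Engle-Granger cointegration check with the tseries package; the two simulated "prices" are made-up assumptions that merely illustrate a shared stochastic trend.

library(tseries)

set.seed(7)
trend <- cumsum(rnorm(1000))  # common random-walk component
x <- trend + rnorm(1000)      # two non-stationary series that
y <- 2 * trend + rnorm(1000)  # move together in the long run

fit <- lm(y ~ x)              # step 1: estimate the long-run relation
adf.test(residuals(fit))      # step 2: stationary residuals => cointegrated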

 
I wish someone would seriously consider the input data )
 
СанСаныч Фоменко:

There are regressions for fat tails; from memory, FARIMA.

But back to the magnitude of the increment.

What are we trading? An increment of 7 pips on the hourly bar relative to the previous bar? I don't understand this very well. Could someone enlighten me?

You can trade the increment, or more precisely the volatility, but only relative to some stationary series - that is called cointegration.

And what do you trade if not increments?
 
Комбинатор:
I wish someone would seriously consider the input data )

I have thought about it. Seriously )

First I generate as many inputs as I can think of. Then I select the ones most relevant to a particular target variable and discard the rest. It seems to help, but it depends on the training method.

In the experiment I ran, I did the following. First I thought about what information the system would need to see; that part is, of course, subjective. I also pre-selected informative predictors before training, and it worked:

library(gbm)

# build train/test sets: all candidate inputs plus the chosen target
train_set <- dat_train_final[, c(eval(inputs), eval(outputs[targets]))]
test_set  <- dat_test_final[, c(eval(inputs), eval(outputs[targets]))]

# weak, deliberately under-fitted model over all 108 predictors,
# used only to rank variable importance (column 109 is the target)
input_eval <- gbm(train_set[, 109] ~ .
                  , data = train_set[, 1:108]
                  , distribution = "laplace"
                  , n.trees = 100
                  , interaction.depth = 10
                  , n.minobsinnode = 100
                  , bag.fraction = 0.9
                  , shrinkage = 0.01
                  , verbose = TRUE
                  , n.cores = 4)

# summary.gbm returns the predictors ordered by relative influence;
# keep the names of the top 10
best_inputs <- as.character(summary(input_eval)[[1]][1:10])

# rebuild the data sets with only the selected predictors
train_set <- dat_train_final[, c(eval(best_inputs), eval(outputs[targets]))]
test_set  <- dat_test_final[, c(eval(best_inputs), eval(outputs[targets]))]

Let me comment. First I trained a weak model, one that does not overfit, on all the available predictors. It's important that the model doesn't get a chance to overfit. Then I took the 10 most important predictors.

Not only did this not reduce the results to noise, it also sped up training by a factor of 10.

That's one way of looking at it.
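
The post stops after rebuilding the data sets; a hypothetical continuation (not shown in the original) would retrain a stronger model on the 10 surviving predictors with the gbm session above, where column 11 is now the target (10 inputs + 1 output):

final_model <- gbm(train_set[, 11] ~ .
                   , data = train_set[, 1:10]
                   , distribution = "laplace"
                   , n.trees = 1000  # more trees, now that the inputs are few
                   , interaction.depth = 10
                   , shrinkage = 0.01
                   , n.cores = 4)
pred <- predict(final_model, test_set[, 1:10], n.trees = 1000)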

 
Alexey Burnakov:
What are you trading if not increments?

The trend, in which long and short are of interest.

Orders in the terminal: BUY, SELL.

 
Комбинатор:
I wish someone would seriously consider the input data )

I have thought about it; I even offer a paid service for cleaning noise predictors out of input predictor sets for classification models. What remains is a set that does not produce overfitted models. Though, to be precise: if anything remains at all. There is a paradoxical thing: for trend trading, all the many varieties of moving averages are hopeless.

Among those sets that I have processed:

  • The original list of predictors is reduced by a factor of 3 to 5. This leaves 20-25 predictors that can be worked with further.
  • From this set of predictors, on every bar, I choose some subset by standard means of R.
  • That leaves 10-15 predictors, on which the model is trained.
  • The final selection of predictors does not have to be redone on every bar; it can be redone roughly once per window-length of bars, with the window within 100 bars (a sketch of this step follows the result below).

Result: the model does not overfit, i.e. the classification error in training, out-of-bag (OOB) and out of sample is approximately the same.
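
A sketch of the per-window re-selection step described above, using randomForest variable importance as one possible "standard means of R"; the data frame dat, the target y, the window size and the top-15 cut-off are all assumptions for illustration.

library(randomForest)

window <- 100  # re-select roughly once per `window` bars, not on every bar

reselect <- function(dat, y) {
  rf <- randomForest(x = dat, y = y, ntree = 200, importance = TRUE)
  imp <- importance(rf, type = 1)  # permutation importance
  rownames(imp)[order(imp, decreasing = TRUE)][1:15]  # keep the top 15 names
}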

 
Man, some people here are like the children of the corn when it comes to normality/non-normality.
 
two parallel threads discussing the same thing - selecting predictors for the model
 
СанСаныч Фоменко:

The trend, in which long and short are of interest.

Orders in the terminal: BUY, SELL.

It's the same thing! Increments turned into + or - signs. And you can take that sign for the increment one hour ahead.

What is the question?
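
A minimal sketch of turning increments into + / - class labels one hour ahead, as described above; close is an assumed vector of hourly closing prices, simulated here only for illustration.

set.seed(3)
close <- 100 + cumsum(rnorm(500, sd = 0.1)) # stand-in for hourly closes
ret <- diff(log(close))                     # hourly log-increments
next_ret <- c(ret[-1], NA)                  # the increment one hour ahead
label <- ifelse(next_ret > 0, "BUY", "SELL")# sign as the class / order side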