Machine learning in trading: theory, models, practice and algo-trading - page 3070

 
Aleksey Vyazmikin #:

My features aren't computed in a single line - you'd spend more time reproducing them in Python. It's more logical to test the effectiveness of the approach on my data first, and then decide whether it's worth implementing the predictor-calculation code.

If I had predictors that were very "good" relative to the others, I wouldn't be in a hurry to make them public :) You could do this: take a model of mine with an acceptable result and pull out the top 20 predictors by importance (by one of the ways of measuring it) in that model.

Besides, I'm also interested in the effectiveness of the proposed method on binary predictors - quantized segments of the predictors. That technique isn't quick to reproduce, so an array would be preferable - and here I'm interested in the result with a large number of predictors.

If something looks interesting, then we can spend the time and effort on digging into the logic of calculating the predictors and implementing them.

Too tedious. Give 10-20 examples of your feature calculations. They can be the same feature with different periods. The input should be the formulas for calculating the features.

I won't consider a large volume of binary features.


a few top results from those 3k models:

it feels like the same "patterns" are found even with different label samplings. All the graphs look alike. On other features, of course, the pictures will be different.



 
Aleksey Vyazmikin #:

Besides, I'm also interested in the effectiveness of the proposed method on binary predictors - quantized segments of the predictors,

Is this splitting a feature into 16 quanta (for example) and then turning it into 16 binary features with 0 and 1?
Where 1 means the values of the original feature fall into the given quantum, and 0 means any other quantum?

 
Forester #:

Is this splitting a feature into 16 quanta (for example) and then turning it into 16 binary features with 0 and 1?
Where 1 means the values of the original feature fall into the given quantum, and 0 means any other quantum?

The idea is to select a couple of segments out of the 16 that have potential. As for the encoding, yes, that's how it works.
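To make the encoding being discussed concrete, here is a minimal sketch: a feature is split into 16 quantile bins ("quanta"), each quantum becomes a 0/1 column, and a couple of promising segments are kept. The selection criterion used here (deviation of the mean label from the global mean) is a stand-in of my own, not the author's actual method, and the data is synthetic:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
x = pd.Series(rng.normal(size=1000), name="feature")
y = pd.Series((rng.random(1000) < 0.5).astype(int), name="label")

# Split the feature into 16 quantile bins ("quanta").
bins = pd.qcut(x, q=16, labels=False, duplicates="drop")

# One-hot encode: one binary column per quantum
# (1 if the value falls into that quantum, 0 otherwise).
onehot = pd.get_dummies(bins, prefix="q").astype(int)

# Keep only "promising" segments: here, the quanta whose mean label
# deviates most from the global mean (a hypothetical criterion).
global_mean = y.mean()
deviation = onehot.apply(lambda col: abs(y[col == 1].mean() - global_mean))
selected = deviation.nlargest(2).index.tolist()
binary_predictors = onehot[selected]
```

The resulting `binary_predictors` columns are what would be fed to the model alongside (or instead of) the raw feature.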

 

Maxim Dmitrievsky #:

OOS TO THE LEFT OF THE DOTTED LINE

How was the OOS itself (the input data) formed?
 
Maxim Dmitrievsky #:

Too tedious. Give 10-20 examples of your feature calculations. They can be the same feature with different periods. The input should be the formulas for calculating the features.

I won't consider a large volume of binary features.


a few top results from those 3k models:

it feels like the same "patterns" are found even with different label samplings. All the graphs look alike. On other features, of course, the pictures will be different.



Try indicators - there is ta library for python.

GitHub - bukosabino/ta: Technical Analysis Library using Pandas and Numpy
It is a Technical Analysis library useful to do feature engineering from financial time series datasets (Open, Close, High, Low, Volume). It is built on Pandas and Numpy. The library has implemented 42 indicators: Volume Money Flow Index (MFI) Accumulation/Distribution Index (ADI) On-Balance Volume (OBV) Chaikin Money Flow (CMF) Force Index...
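Two of the indicators listed there can be sketched in plain pandas. The formulas below follow the standard textbook definitions of On-Balance Volume and the Money Flow Index; the ta library's exact implementations may differ in details such as NaN handling and smoothing:

```python
import numpy as np
import pandas as pd

def on_balance_volume(close: pd.Series, volume: pd.Series) -> pd.Series:
    """OBV: cumulative volume signed by the close-to-close direction."""
    direction = np.sign(close.diff()).fillna(0)
    return (direction * volume).cumsum()

def money_flow_index(high: pd.Series, low: pd.Series, close: pd.Series,
                     volume: pd.Series, window: int = 14) -> pd.Series:
    """MFI: volume-weighted RSI analogue built on the typical price."""
    tp = (high + low + close) / 3          # typical price
    raw_mf = tp * volume                   # raw money flow
    delta = tp.diff()
    pos = raw_mf.where(delta > 0, 0.0).rolling(window).sum()
    neg = raw_mf.where(delta < 0, 0.0).rolling(window).sum()
    return 100 * pos / (pos + neg)         # NaN on a fully flat window
```

In practice one would just call the library (`ta.volume.OnBalanceVolumeIndicator`, `ta.volume.MFIIndicator`) rather than reimplement the formulas.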
 
fxsaber #:
How was the OOS (input data) formed?

in the classical way - a set of features computed on closing prices

 
Aleksey Vyazmikin #:

Try indicators - there is a ta library for python.

which ones exactly? that's just a waste of time )

 
Aleksey Vyazmikin #:

The idea is to select a couple of segments out of the 16 that have potential. As for the encoding, yes, that's how it works.

Then you could split a feature into 16 quanta, number them, and mark the result as categorical. The tree will similarly test each category/quantum (==0 or ==1 or ==2 ...). You can also lump the uninteresting quanta together into one category.

The result will be one-to-one. Or almost: because of that merged uninteresting category, the tree may happen to pick it at a split as the best one.

On the plus side: only 1 feature, so faster computation. File sizes and memory consumption will be significantly reduced.
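A minimal sketch of this categorical encoding, on synthetic data and with hypothetical "uninteresting" quantum indices (4 through 11) merged into one shared category; a boosting library such as CatBoost or LightGBM would then consume this single column through its categorical-feature mechanism instead of 16 binary columns:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
x = pd.Series(rng.normal(size=500), name="feature")

# Split the feature into 16 quanta and keep the bin index itself
# as a single categorical feature instead of 16 one-hot columns.
quanta = pd.qcut(x, q=16, labels=False, duplicates="drop")

# Merge the "uninteresting" quanta (hypothetically, indices 4..11)
# into one shared category (-1), keeping the promising ones distinct.
uninteresting = set(range(4, 12))
merged = quanta.where(~quanta.isin(uninteresting), other=-1)
categorical = merged.astype("category")
```

The tree can still test `== 0`, `== 1`, ... on the category codes, but the dataset carries one column instead of sixteen.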

 

15 years of OOS

The approach turned out to be curious, but still sensitive to the features. It just doesn't work that way on raw returns.


 
Forester #:

Then you could split a feature into 16 quanta, number them, and mark the result as categorical. The tree will similarly test each category/quantum (==0 or ==1 or ==2 ...). You can also lump the uninteresting quanta together into one category.

The result will be one-to-one. Or almost: because of that merged uninteresting category, the tree may happen to pick it at a split as the best one.

On the plus side: only 1 feature, so faster computation. File sizes and memory consumption will be significantly reduced.

That's almost where I started. But no, it won't work like that, because the model-building method has its own notion of what is "good" and will consider rubbish to be exactly that - we've been through this before, we know.....