Machine learning in trading: theory, models, practice and algo-trading - page 1015

 
Alexander_K2:

Strictly speaking, on a sliding window of returns we should compute the ACF estimate for this discrete series. If it is periodic, then by Kolmogorov the next return is 100% predictable. But I do not know of a criterion for assessing the periodicity of the ACF. Surely we are not meant to judge it "by eye"...
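One statistical criterion that could replace the "by eye" judgement is Fisher's g-test: a periodic component in the returns produces a sharp peak in the periodogram, and the test compares the largest periodogram ordinate against a white-noise null. A minimal sketch (the function name and the 512-point example are illustrative, and the p-value uses only the first term of Fisher's exact series):

```python
import numpy as np

def fisher_g_test(x):
    """Fisher's g-test for a hidden periodicity in a series.

    g = max periodogram ordinate / sum of ordinates; under the
    white-noise null a large g signals a significant periodic component.
    """
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    # periodogram at the Fourier frequencies
    spec = np.abs(np.fft.rfft(x)) ** 2 / n
    spec = spec[1:(n // 2) + (n % 2)]   # drop DC; drop Nyquist for even n
    m = len(spec)
    g = spec.max() / spec.sum()
    # first-order approximation to Fisher's exact p-value
    p_value = min(1.0, m * (1.0 - g) ** (m - 1))
    return g, p_value

rng = np.random.default_rng(0)
t = np.arange(512)
noise = rng.normal(size=512)
periodic = noise + 0.8 * np.sin(2 * np.pi * t / 16)

g_noise, p_noise = fisher_g_test(noise)     # no periodicity expected
g_per, p_per = fisher_g_test(periodic)      # strong peak expected
```

On the pure-noise series the test should not reject, while the series with an embedded 16-bar cycle should give a tiny p-value.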

Warmer.

Let us expand the set of NOT-so-ordinary predictors for forecasting models.



From here

Meta-learning how to forecast time series

Thiyanga S Talagala, Rob J Hyndman and George Athanasopoulos
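As a rough illustration of the kind of "not-so-ordinary" predictors meant here, one can compute summary features of a window of the series and feed those to the model instead of raw prices. The particular features below (lag-1/lag-2 autocorrelation, a trend-strength proxy) are an assumed subset for illustration, not the paper's exact feature set:

```python
import numpy as np

def series_features(x):
    """A small, illustrative subset of summary features usable as
    meta-learning predictors for a window of a time series."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    denom = (xc ** 2).sum()
    # lag-1 and lag-2 autocorrelations
    acf1 = (xc[1:] * xc[:-1]).sum() / denom
    acf2 = (xc[2:] * xc[:-2]).sum() / denom
    # "trend strength" proxy: R^2 of a linear fit against time
    t = np.arange(n)
    slope, intercept = np.polyfit(t, x, 1)
    resid = x - (slope * t + intercept)
    trend = 1.0 - resid.var() / x.var()
    return {"acf1": acf1, "acf2": acf2, "trend": trend}

rng = np.random.default_rng(1)
feats = series_features(np.cumsum(rng.normal(size=300)))  # random walk
```

For a random walk the lag-1 autocorrelation comes out close to 1 and the trend measure lands in [0, 1]; on real returns these features would be recomputed on each sliding window.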

 
Gianni:

Only the divine guru and a couple of his padawans add people there. Throw me your Skype and some details about yourself and I will ask, but I promise nothing, since I am no authority there, only a disembodied spirit, dirt on their slippers. It is the grey cardinals, puppet and company, who have been flagged for near-market activity and branded with shame for life; that shame can be washed away only for tens of billions of greenbacks.

Thanks. I am not so much interested in membership, which I understand involves considerable difficulty, as in seeing the level there, which is probably just as telling.

You wrote that this group is looking into a unified representation of ML models; those are the models I would like to see.

I am ready to show my own modest developments for comparison as well: I serialize trained models in binary or text format and as source code.

 
SanSanych Fomenko:

Warmer.

Let us expand the set of NOT-so-ordinary predictors for forecasting models.



From here

Meta-learning how to forecast time series

Thiyanga S Talagala, Rob J Hyndman and George Athanasopoulos

A good review article. Only, in my opinion, the set of time series it considers is too broad for us. I would like to see a similar review of methods, but for the type of series we are interested in.

I would also like to see some newer methods and models. There is, for example, anomalous diffusion (an increasingly popular topic).
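A standard first diagnostic for anomalous diffusion is the mean-squared-displacement exponent: MSD(tau) ~ tau**alpha, with alpha = 1 for ordinary diffusion, alpha < 1 for subdiffusion and alpha > 1 for superdiffusion. A sketch of the log-log estimate on a price path (the helper name and lag range are illustrative):

```python
import numpy as np

def diffusion_exponent(path, max_lag=50):
    """Estimate alpha in MSD(tau) ~ tau**alpha by a log-log fit
    of the mean squared displacement over lags 1..max_lag."""
    path = np.asarray(path, dtype=float)
    lags = np.arange(1, max_lag + 1)
    msd = np.array([np.mean((path[lag:] - path[:-lag]) ** 2)
                    for lag in lags])
    alpha, _ = np.polyfit(np.log(lags), np.log(msd), 1)
    return alpha

rng = np.random.default_rng(2)
bm = np.cumsum(rng.normal(size=10_000))  # ordinary Brownian-type walk
alpha_bm = diffusion_exponent(bm)        # expect alpha near 1
```

Applied to a real price series, alpha measurably above or below 1 would be the first sign that an anomalous-diffusion model is worth pursuing.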

 
Aleksey Nikolayev:

A good review article. Only, in my opinion, the set of time series it considers is too broad for us. I would like to see a similar review of methods, but for the type of series we are interested in.

I would also like to see some newer methods and models. There is, for example, anomalous diffusion (an increasingly popular topic).

I cited the link because of the table: it is a fresh look at predictors and a development of Alexander's idea about the ACF.

 
SanSanych Fomenko:

I cited the link because of the table: it is a fresh look at predictors and a development of Alexander's idea about the ACF.

He would do better, instead of torturing himself and us with 70-year-old models, to study this very anomalous diffusion and its application to the market. That would be a useful application of his enormous enthusiasm and physics education.

 
Vladimir Perervenko:

The ZZ parameter is different for each instrument and timeframe; for EURUSD M15, for example, a good initial value is 15 points (4-digit quotes). It also depends on the predictors you use, so it is a good idea to optimize the predictor parameters and the ZZ parameter together. That is why non-parametric predictors are desirable: they make life much easier. Digital filters, for instance, show good results here. Using ensembles and cascade combining I obtained an average Accuracy = 0.83, which is a very good result. Tomorrow I will submit for review an article that describes the process.

Good luck

And how do you find the ZZ settings? Purely by trying different models, where the one that gives the best result with those settings wins?

Why do you prefer points for ZZ rather than time (bars)?
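For reference, ZigZag labelling with a point threshold can be sketched as below. This is a toy version, not the terminal's ZigZag indicator: `threshold` plays the role of the 15-point parameter mentioned above, and the labels mark the direction of the current leg as seen bar by bar:

```python
import numpy as np

def zigzag_labels(close, threshold):
    """Label each bar +1/-1 by the direction of the current ZigZag leg.

    threshold is the minimum reversal in price units; simplified sketch,
    labels before the first move of at least `threshold` stay 0."""
    labels = np.zeros(len(close), dtype=int)
    direction = 0            # 0 = no leg established yet
    extreme = close[0]       # running extreme of the current leg
    for i, p in enumerate(close):
        if direction >= 0 and p > extreme:
            extreme, direction = p, 1          # up leg extends
        elif direction <= 0 and p < extreme:
            extreme, direction = p, -1         # down leg extends
        elif abs(p - extreme) >= threshold:
            direction = 1 if p > extreme else -1   # reversal confirmed
            extreme = p
        labels[i] = direction
    return labels

close = np.array([0., 1., 2., 3., 2.5, 1.5, 0.5, 1.0, 2.0])
labels = zigzag_labels(close, threshold=1.0)
# → [0, 1, 1, 1, 1, -1, -1, -1, 1]
```

Note that for training targets one would label only bars whose pivot is already confirmed; labelling the still-open leg, as here, leaks future information into the target.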

 
Aleksey Vyazmikin:

And how do you find the ZZ settings? Purely by trying different models, where the one that gives the best result with those settings wins?

Why do you prefer points for ZZ rather than time (bars)?

1. There are different methods of optimization.

2. From experience.

Good luck

 

Yesterday it occurred to me: why do we search for decision trees, i.e. a model describing the whole entity? Why describe the entire entity at all? Maybe we should just look for the pieces of it that are most understandable and predictable. Since I am already collecting leaves from trees, perhaps I should use a method that finds such leaves directly, without constructing a complete decision tree, which, as I understand it, should give a gain in quality for the same amount of computation.

I looked on the Internet and do not see such a method anywhere. Maybe someone knows of such developments?

While I work out the algorithm, I think that first of all I need to select predictors that isolate the predictive ability of one of the classes. The predictors must be made binary (should I form a separate sample for each predictor, or form exclusion ranges from the general sample? which is more reasonable?). Then the selected predictors (and their combinations) are used to build stumps for a particular class (in my case there are 3 classes), and the remaining predictors are then built on top of these stumps, again checking them for preference of a certain class. In this way we should find the areas most amenable to classification for specific targets, while the remaining area stays a zone of inaction/waiting.

Of course, you can then see where the leaves are layered on top of each other and average the result in those cases. Something similar to a tree can be built this way too, but with elements of voting based on the density of the different targets in different areas.
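The idea of mining "leaves" directly, without growing a full tree, can be sketched as an exhaustive search over single-condition rules on binary predictors, scored by support and class precision. The function and threshold names below are illustrative, and a real search would also try combinations of conditions:

```python
import numpy as np

def find_class_rules(X, y, target_class, min_support=0.05, min_precision=0.6):
    """Search binary predictors for single-condition 'leaves' that
    isolate one class, without building a full decision tree.

    X: (n_samples, n_features) array of 0/1 predictors; y: class labels.
    Returns (feature_index, value, support, precision) tuples,
    best precision first."""
    rules = []
    for j in range(X.shape[1]):
        for value in (0, 1):
            mask = X[:, j] == value
            support = mask.mean()
            if support < min_support:
                continue                      # too few samples covered
            precision = (y[mask] == target_class).mean()
            if precision >= min_precision:
                rules.append((j, value, support, precision))
    rules.sort(key=lambda r: r[3], reverse=True)
    return rules

rng = np.random.default_rng(3)
X = rng.integers(0, 2, size=(1000, 5))
# synthetic data: feature 2 == 1 deterministically gives class 1
y = np.where(X[:, 2] == 1, 1, rng.integers(0, 3, size=1000))
rules = find_class_rules(X, y, target_class=1)
```

On this synthetic sample the top rule recovered should be (feature 2, value 1) with precision 1.0; everything below the precision floor is left as the "zone of inaction" described above.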

What do you think of this idea?

 
Vladimir Perervenko:

1. There are various methods of optimization

2. From experience.

Good luck

1. That is exactly what I would like to know: which methods? Otherwise I am reinventing my own bicycle again (I have already sketched out the ideology), when perhaps it has all been done before us...

2. I see. But that is not very systematic.

 
Vladimir Perervenko:

Vladimir, could you suggest some methods of "feature selection" (or something like that), but for time series? When the algorithm analyzes the series, perhaps it excludes or adds something to make the forecast better; Google could not help me :(
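One standard way to do feature selection on time series is to score candidate feature sets with walk-forward (expanding-window) validation, so that no future data leaks into the fit, and eliminate features greedily. A numpy-only sketch with a linear model (all function and variable names are illustrative):

```python
import numpy as np

def walk_forward_score(X, y, n_splits=5):
    """Mean out-of-sample MSE of a linear fit over expanding-window
    splits; chronological splits are the key point for time series."""
    n = len(y)
    fold = n // (n_splits + 1)
    errors = []
    for k in range(1, n_splits + 1):
        train, test = slice(0, k * fold), slice(k * fold, (k + 1) * fold)
        A = np.c_[X[train], np.ones(k * fold)]          # add intercept
        coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)
        pred = np.c_[X[test], np.ones(fold)] @ coef
        errors.append(np.mean((y[test] - pred) ** 2))
    return np.mean(errors)

def backward_elimination(X, y):
    """Drop features one at a time while the walk-forward score improves."""
    keep = list(range(X.shape[1]))
    best = walk_forward_score(X[:, keep], y)
    improved = True
    while improved and len(keep) > 1:
        improved = False
        for j in list(keep):
            trial = [f for f in keep if f != j]
            score = walk_forward_score(X[:, trial], y)
            if score < best:
                best, keep, improved = score, trial, True
                break
    return keep, best

rng = np.random.default_rng(4)
X = rng.normal(size=(600, 4))                 # features 2, 3 are noise
y = 2 * X[:, 0] - 3 * X[:, 1] + 0.1 * rng.normal(size=600)
keep, best = backward_elimination(X, y)       # 0 and 1 should survive
```

The same loop works with any model in place of the linear fit; only the chronological splitting is essential.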
