Machine learning in trading: theory, models, practice and algo-trading - page 1250

 
Farkhat Guzairov:

Jeez, everything you do manually a neural net should be doing. A waste of time, and if the result is negative, then a sea of frustration and casting about for other methods.

Well, as with manual trading, everything is automated; there are just stages that require manual control, and that is because of uncertainty about exactly how to behave at them.

And as for someone owing me something, I very much doubt it...

The result - who can know it until the future arrives...

 
Vizard_:

Most likely, lags of 1 and -1 (or their predictor descriptions) will improve (or already improve) class 0, the "do not enter" class.
For myself, I rejected this approach right away and started using only binary
classification (flip). In your case I would look at the dependencies under different volatility: effect, no effect, etc.
And I would make three binary classifiers. You could, for example, look only at class 1, "buy". Too complicated...

I have a trend system on minute bars, so pure binary classification cannot be used directly: during flat periods you need to sit on the fence, not just flip position. In theory one could use separate buy/no-trade and sell/no-trade classifiers, but that needs a large sample, and things are not great there. In CatBoost I have a signal for entry, and the direction (buy/sell) is selected separately; something seems to work, but the experiments are still running, so it is too early to say for sure. Of course, it is possible to make three binary classifiers, but combining them into one design will not be simple...
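One way to combine separate buy/no-trade and sell/no-trade classifiers into a single design is a simple arbitration rule over their predicted probabilities. A minimal sketch, assuming hypothetical probability outputs and an illustrative threshold (neither is from the thread):

```python
def combine_signals(p_buy: float, p_sell: float, threshold: float = 0.6) -> str:
    """Arbitrate between a buy/no-trade and a sell/no-trade classifier.

    Stay flat unless exactly one side is confidently above the threshold;
    if both fire at once, treat it as a conflict and stay on the fence.
    The 0.6 threshold is a placeholder, not a tuned value.
    """
    buy = p_buy >= threshold
    sell = p_sell >= threshold
    if buy and not sell:
        return "buy"
    if sell and not buy:
        return "sell"
    return "flat"  # flat market or conflicting signals
```

The point of the rule is that the "flat" class never needs its own classifier: it falls out of the disagreement or low confidence of the two directional ones.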

Volatility of course affects the result; you can even earn just on a spike - the question is the scale.
 
Vizard_:

To understand it, do the research and apply the conclusions to the inverse signal.
It is not the signal that affects the result so much as the placement (selection) of predictors...

I won't pretend I understood you - please expand on your thought.

 
SanSanych Fomenko:

No.

For a long time I used a JMA modified for period adaptation on mql4, but it is of little use: it fails just like everything else. From time to time I had to intervene manually.

If we are talking about filters, there is a curious package called smooth. Inside smooth sits a Kalman filter in state-space form. It gives very good quality moving averages, with extrapolation (a forecast) several steps ahead.

Jurik is complete rubbish.
But Kalman, apparently, needs hand-tuning. And I think that Kalman, in our case, will be no better than ordinary MAs.
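For reference, the state-space smoothing mentioned above can be sketched as a one-dimensional local-level Kalman filter. The noise variances below are illustrative placeholders, not tuned values:

```python
def kalman_local_level(series, q=1e-4, r=1e-2):
    """One-dimensional local-level Kalman filter.

    q: process-noise variance (how fast the underlying level may drift)
    r: observation-noise variance (how noisy the observed prices are)
    Returns the filtered level estimates; under this model the last level
    is also the naive h-step-ahead forecast.
    """
    x = series[0]   # initial level estimate
    p = 1.0         # initial estimate variance
    out = []
    for z in series:
        # predict: the level is assumed constant, uncertainty grows by q
        p += q
        # update with observation z
        k = p / (p + r)      # Kalman gain
        x += k * (z - x)
        p *= (1.0 - k)
        out.append(x)
    return out
```

The q/r ratio is the "hand-tuning" knob: a small q relative to r gives heavy smoothing (close to a long MA), a large q tracks the price almost tick for tick.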
 
Yuriy Asaulenko:
Jurik is complete rubbish.
But Kalman, apparently, needs hand-tuning. And I think that Kalman, in our case, will be no better than ordinary MAs.

What is rubbish and what is not, nobody knows.

We should look at the predictive power of a particular predictor for a particular target variable. Better still, watch how that predictive power varies as the window moves.

 
Aleksey Vyazmikin:

Well, as with manual trading, everything is automated; there are just stages that require manual control, and that is because of uncertainty about exactly how to behave at them.

And as for someone owing me something, I very much doubt it...

Well, the result - who can know it until the future arrives...

Based on the above, you have a clear algorithm of actions under certain conditions. In this case, when you have input data and a desired result, a neural network will help you; otherwise you will constantly have to change the code manually every time the market changes again.

It's up to you what to do, but I would use a trained neural network in the process.

 
About known and unknown: if you reason like that, then you also do not know how events will unfold. Any decision here, whether made by you or by anyone/anything else, is probabilistic in nature; I mean that we should assume the decision has a 50/50 outcome, and in that case it does not matter who makes it, you or a neural network.
 
Vizard_:

Does the weight of predictors change depending on volatility? Sounds like a sophisticated fit.
There is also multiclass in CatBoost. Run it with cross-validation, see whether there are errors on flats, etc.
Maybe it will just work... and all this effort isn't really necessary...

How do you propose to measure the weight and the volatility? I'm not against experiments.

Multiclass is there, but there is no model export, except into their binary format, which I have no idea, even in theory, how to hook up and make work.

There is a whole epic with CatBoost. I am experimenting there with sets of predictors (partially removing them - 512 combinations) and with random weights for root predictor selection (200) - that is already 100k models, and I have two such partitions. Yes, there are interesting models in all this, and there are completely losing ones (profitable on the test and training samples, but losing or near zero on the independent sample), but there is also no guarantee that they will keep working. Now (22.12.2018) I have started creating new models, but marked all predictors as categorical, which was my idea originally (since many are already cut into irregular intervals and converted to integer values); the plan is to finish processing in the new year and see whether there is a difference, because models with non-categorical features took 1.5 days to prepare in this volume, and here it will be at least 10...

Overfit or not is hard to say. Yesterday I wrote that I am more inclined to consider overfitted a model that, by its sheer volume (number of leaves), can memorize many variants and combinations at once, and my model does not exceed 100 leaves... Of course, the main problem for me is lack of data: I work on the Si instrument and I am thinking of adding the EURUSD futures, but I need to convert the predictors validly - a question of metrics.
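The 512 combinations from partially removing predictors are consistent with toggling some 9 optional predictors on and off (2^9 = 512). A hypothetical enumeration sketch, with made-up predictor names:

```python
from itertools import combinations

def predictor_subsets(optional, always_keep=()):
    """Yield every feature set obtained by removing optional predictors.

    always_keep: predictors present in every subset.
    optional:    predictors that may be dropped; n optional predictors
                 yield 2**n feature sets in total.
    """
    for k in range(len(optional) + 1):
        for removed in combinations(optional, k):
            yield tuple(always_keep) + tuple(
                p for p in optional if p not in removed)

# 9 optional predictors -> 2**9 = 512 feature sets to train models on
subsets = list(predictor_subsets([f"f{i}" for i in range(9)]))
```

Each yielded tuple would become the feature list for one training run, which is how a few hundred subsets multiplied by random seeds quickly reaches the 100k-model scale described above.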

 
Farkhat Guzairov:

Based on the above, you have a clear algorithm of actions under certain conditions. In this case, when you have input data and a desired result, a neural network will help you; otherwise you will constantly have to change the code manually every time the market trend changes again.

It's up to you what to do, but I would still plug a (trained) neural network into the process.

I have entry points, but whether to enter or not I do not know - that is the ML task.

As I said before, I do not know a fast neural network capable of absorbing a large (300-500) number of input neurons... but one could feed the already selected leaves to a neural network, or to a tree again...

I don't understand what you mean about making changes in the code. Do you think trends haven't changed in 5 years?
 
Vizard_:

I am not suggesting anything; I wrote how I would do it myself. And the three resulting classes I would simply feed into the TS...

How do you plug it in? Then you need to make some kind of bridge to Python or R - for me it is a dark forest.