Machine learning in trading: theory, models, practice and algo-trading - page 2536

 
Aleksey Nikolayev #:

One reputable scientist wrote that prices must necessarily be taken in logarithms, and all the theorists have blindly continued to do so.

John Tukey? Or Box and Cox?

 
transcendreamer #:

John Tukey? or Box and Cox?

Perhaps Eugene Fama in his dissertation, but I'm not sure.

 
Renat Akhtyamov #:

I think this has come up here before.

And the problem of flat versus trending markets will get rolled out again.


...........

The method is as follows:

1) You have some model (e.g. linear regression).

2) You have a set of observations whose accuracy you are not sure of.

Then you generate some random noise, combine it with the set of observations, and repeat this several times.

After that, compare the model's behavior on the different resulting sets and draw some conclusions.

Optionally, the most stable behavior can be selected as the preferred one.

This is not a magic wand, just a tool for analysis and possible slight improvement; it does not turn a wrong model into a right one.
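
A minimal sketch of this noise-injection check, assuming Python with numpy and scikit-learn; the noise scale, the linear model, and the repetition count are illustrative assumptions rather than anything specified in the post:

```python
# Hedged sketch: perturb the observations with random noise several times
# and compare how stable the fitted model's behavior is across the runs.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = np.linspace(0, 1, 200).reshape(-1, 1)
y = 3.0 * X.ravel() + 1.0 + rng.normal(0, 0.1, 200)   # observations of uncertain accuracy

coefs = []
for _ in range(20):                                    # repeat several times
    noise = rng.normal(0, 0.2, size=y.shape)           # some random noise
    model = LinearRegression().fit(X, y + noise)       # combine with the observations
    coefs.append(model.coef_[0])

# Compare behavior across the resulting sets: a small spread in the fitted
# coefficient suggests the model's behavior is stable under the perturbation.
print(np.mean(coefs), np.std(coefs))
```

If the coefficient spread is small relative to its mean, the fit is stable under the perturbation; a large spread is a warning sign, though, as the post says, this does not turn a wrong model into a right one.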

 
LenaTrap #:

...........

The method is as follows:

1) You have some model (e.g. linear regression).

2) You have a set of observations whose accuracy you are not sure of.

Then you generate some random noise, combine it with the set of observations, and repeat this several times.

After that, compare the model's behavior on the different resulting sets and draw some conclusions.

Optionally, the most stable behavior can be selected as the preferred one.

This is not a magic wand, just a tool for analysis and possible slight improvement; it does not turn a wrong model into a right one.

I don't get it. There is a deterministic series and a model that describes it with 100% accuracy. You add noise, and the model's accuracy becomes 52%. What is the point of doing this?
 
Dmytryi Nazarchuk #:
I don't get it. There is a deterministic series and a model that describes it with 100% accuracy. You add noise, and the model's accuracy becomes 52%. What is the point of doing this?

From Morse code and communications, that's where it comes from.

 
LenaTrap #:

...........

The method is as follows:

1) You have some model (e.g. linear regression).

2) You have a set of observations whose accuracy you are not sure of.

Then you generate some random noise, combine it with the set of observations, and repeat this several times.

After that, compare the model's behavior on the different resulting sets and draw some conclusions.

Optionally, the most stable behavior can be selected as the preferred one.

This is not a magic wand, just a tool for analysis and possible slight improvement; it does not turn a wrong model into a right one.

Only in certain situations, and there is also a set of situations where it doesn't work. The only hope is that, logically (which is not always the case), there are fewer situations where it doesn't work.

 
Dmytryi Nazarchuk #:
I don't get it. There is a deterministic series and a model that describes it with 100% accuracy. You add noise, and the model's accuracy becomes 52%. What is the point of doing this?

If you can get exact values from this series, then there is no point. If you can only get approximate values, then the point is simple enough: to check whether what the model finds is just the error of your inexact measurements of the original (ideal) series. There are precise mathematical formulas and definitions for this, but I don't understand them.

 
LenaTrap #:

If you can get exact values from this series, then there is no point. If you can only get approximate values, then the point is simple enough: to check whether what the model finds is just the error of your inexact measurements of the original (ideal) series. There are precise mathematical formulas and definitions for this, but I don't understand them.

The point is the reliability of isolating what you are looking for, not the accuracy of the values. You have the thing you are searching for: mix in 10 percent noise and you recover 99 percent of it; mix in 50 and you recover 80, or maybe 20... it all depends on the algorithm used to select the data you are looking for.

Well, it also depends on the quality of the noise, of course. You can mask any signal if you know the signal, and sometimes it happens by accident.
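
A minimal sketch of the "mix in X percent, recover Y" point above, assuming Python with numpy; the noise fractions and the crude correlation-based "selection" step are illustrative assumptions, since the post does not specify the selection algorithm:

```python
# Hedged sketch: the more noise is mixed into a known signal, the less of it
# a given selection/fit procedure recovers. Fractions here are illustrative.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 4 * np.pi, 500)
signal = np.sin(t)                                  # what we are looking for

for frac in (0.1, 0.5, 0.9):                        # mix in 10%, 50%, 90% noise
    noise = rng.normal(0, signal.std(), t.size)
    mixed = (1 - frac) * signal + frac * noise
    recovered = np.corrcoef(mixed, signal)[0, 1]    # crude "selection" quality
    print(f"noise fraction {frac:.0%}: correlation with signal {recovered:.2f}")
```

With a smarter selection algorithm the recovered fraction at each noise level would differ, which is exactly the post's point.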
 
LenaTrap #:

If you can get exact values from this series, then there is no point. If you can only get approximate values, then the point is simple enough: to check whether what the model finds is just the error of your inexact measurements of the original (ideal) series. There are precise mathematical formulas and definitions for this, but I don't understand them.

It's not for regression.
 
elibrarius #:
I filled the predictors and the output with random values, just to make sure that learning is not possible. I confirmed it came out 50/50.
With real quotes and a target based on TP=SL it is also 50/50.

What if the target is not set randomly?

I experimented with this. Usually my sample is divided into 3 parts, so I combined the sample into one and trained a model with 100 trees, then checked which predictors were not used and excluded them. Then, as usual, I trained the model with early stopping on the second subsample and compared the results on the third subsample against the variant trained without excluding predictors. It turned out that the results were better with the selected predictors, and here I find it hard to draw a firmer conclusion about this effect than something like: "the selection of different predictors happens because the subsamples differ over the interval; by training on the whole sample we automatically select predictors that do not lose their significance over time."

However, does this mean that the larger the sample, the more robust the model over the long horizon? And is it valid to select predictors for training this way, i.e., does it not contribute to overfitting? In general, I have heard the recommendation from the creators of CatBoost that you should find the model's hyperparameters and then simply train on the entire available sample before putting the model to work.
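
A minimal sketch of the predictor-screening procedure described above, assuming Python with the catboost package; the split handling, iteration counts, and early-stopping threshold are illustrative assumptions, not the poster's exact settings:

```python
# Hedged sketch: screen out predictors that a probe model never uses, then
# retrain as usual with early stopping and compare on the held-out subsample.
# Split sizes, iteration counts and thresholds are illustrative assumptions.
import numpy as np
from catboost import CatBoostClassifier

def screen_and_retrain(X_train, y_train, X_valid, y_valid, X_test, y_test):
    # 1) Probe model with 100 trees, trained on the whole combined sample.
    X_all = np.vstack([X_train, X_valid, X_test])
    y_all = np.concatenate([y_train, y_valid, y_test])
    probe = CatBoostClassifier(iterations=100, verbose=False)
    probe.fit(X_all, y_all)

    # 2) Exclude predictors the probe never used (zero importance).
    keep = np.where(probe.get_feature_importance() > 0)[0]

    # 3) Retrain as usual, with early stopping on the second subsample.
    selected = CatBoostClassifier(iterations=1000, verbose=False)
    selected.fit(X_train[:, keep], y_train,
                 eval_set=(X_valid[:, keep], y_valid),
                 early_stopping_rounds=50)

    # 4) Baseline trained the same way but without excluding predictors.
    full = CatBoostClassifier(iterations=1000, verbose=False)
    full.fit(X_train, y_train,
             eval_set=(X_valid, y_valid),
             early_stopping_rounds=50)

    # Compare out-of-sample accuracy on the third subsample.
    return selected.score(X_test[:, keep], y_test), full.score(X_test, y_test)
```

Comparing the two returned accuracies on the third, untouched subsample is the check the post describes; whether screening on the combined sample leaks information into that comparison is exactly the overfitting question being asked.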


elibrarius #:
There was a variant with a 47.5% error; it looked good, but when I connected it to the MT tester, it turned out to be a decline instead of growth. It turned out I hadn't taken the commission into account, and it ate up that 2% edge.

Now I'm thinking about how to account for the commission...
I wanted to add 4 pts to the spread, but that's not right: with an inflated Ask, TP and SL will sometimes be triggered on a different bar than they should be in the tester, and because of this the order of subsequent trades may change.
And the tester uses the minimum spread on the bar, which will also differ from reality.

I have not worked out the best way yet.

If the market has moved 100 points in direction A, there should be hardly any dependence on the spread at all (the financial result depends only slightly on it), so I believe it should not be taken into account in training. Suppose I have market entries, which the model then confirms or rejects; when labeling them I can simply widen the spread virtually, i.e., if the profit is less than a specified number of points, don't enter.

The spread is taken into account when analyzing the model, when the result is computed purely from the sample: I simply subtract a specified number of points from the financial result of each trade.

On Moex, stops are triggered where the price actually was, so of course it is easier there.
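
A minimal sketch of the "virtually widen the spread" labeling idea described above, in Python; the point values, function names, and example numbers are illustrative assumptions, not from the post:

```python
# Hedged sketch: account for spread/commission at labeling time by requiring
# the raw profit of a candidate entry to exceed a virtual cost in points.
# All names and thresholds here are illustrative assumptions.

def label_entries(raw_profits_pts, virtual_spread_pts=4.0):
    """raw_profits_pts: trade outcomes in points, ignoring costs.
    Label 1 (take the entry) only if profit clears the virtual spread."""
    return [1 if p > virtual_spread_pts else 0 for p in raw_profits_pts]

def net_result(raw_profits_pts, cost_pts=4.0):
    """When evaluating the model purely on the sample, subtract a fixed
    number of points (spread + commission) from each trade's result."""
    return sum(p - cost_pts for p in raw_profits_pts)

# Example: three candidate entries with raw outcomes of 10, 3 and -5 points.
labels = label_entries([10.0, 3.0, -5.0])   # -> [1, 0, 0]
net = net_result([10.0, 3.0, -5.0])         # -> -4.0 after costs
```

The fixed per-trade deduction is the simplest approximation; it does not reproduce the tester's bar-by-bar Ask behavior discussed above, only the average cost.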