Machine learning in trading: theory, models, practice and algo-trading - page 128

 
mytarmailS:

I came across this blog and almost cried: the author implemented almost the same idea that I once came up with and decided to implement myself. That very idea is why I started learning programming about a year ago.

https://www.r-bloggers.com/time-series-matching-with-dynamic-time-warping/

And I thought my idea was unique ))) youth, naivety... Of course I didn't use DTW, since I had no idea it existed at the time.

Suppose we take two time series, the target variable and a predictor, and we get some "distance" between them. If the distance is small (whatever "small" means), is the predictive power high? Is such a predictor more valuable than others whose distance is greater?

Is this the case?

 
SanSanych Fomenko:

Suppose we take two time series, the target variable and a predictor, and we get some "distance" between them. If the distance is small (whatever "small" means), is the predictive power high? Is such a predictor more valuable than others whose distance is greater?

Is this the case?

Not like that at all. What predictor, what target? There is nothing like that there. It just dumbly searches the time series for areas that are similar to the latest current situation, that's all...
 
mytarmailS:

I read it three times, I don't understand it ((

1) It seems this is not ML in its pure form; it is more like an improvement of some existing trading system that already has entry signals, and we only enter on those signals and then analyze those entries with ML, right?

2) When there is a profit, we close the trade; when the trade is at a loss, we hold the position. Why would we do that?

3) When to buy, when to sell?

If it's not too much trouble, show me something... I don't understand what you have just said; maybe you can show me a picture or at least a rough diagram...

1. The signal is generated by the network.

2. However you like, it's a free country. If you want to close, close; if you don't want to, don't. There is an SL in the system; whether to use it or not is also up to you.

3. How would I know? Ask the network you are training.

4. Maybe read it a fourth time; maybe it will become clearer.

 
mytarmailS:

I came across this blog and almost cried: the author implemented almost the same idea that I once came up with and decided to implement myself. That very idea is why I started learning programming about a year ago.

https://www.r-bloggers.com/time-series-matching-with-dynamic-time-warping/

And I thought my idea was unique ))) youth, naivety... Of course I didn't use DTW, since I had no idea it existed at the time.

If you know English well, I would appreciate it if you could explain the gist of the article.
 
Andrey Dik:

1. The signal is generated by the network.

2. However you like, it's a free country. If you want to close, close; if you don't want to, don't. There is an SL in the system; whether to use it or not is also up to you.

3. How would I know? Ask the network you are training.

4. Maybe read it a fourth time; maybe it will become clearer.

1) Your whole algorithm amounts to creating some kind of "soft" target function for the neural network, right?

But already at the first step of this algorithm we need to receive some signals from the neural network, signals produced by training the network on the target function, the very target function we have not yet created, because we are only at step 1.

My brain is exploding...

 
mytarmailS:
Not like that at all. What predictor, what target? There is nothing like that there. It just dumbly searches the time series for areas that are similar to the latest current situation, that's all...

Two time series

Here is the reference

dtw(x, y=NULL,
    dist.method="Euclidean",
    step.pattern=symmetric2,
    window.type="none",
    keep.internals=FALSE,
    distance.only=FALSE,
    open.end=FALSE,
    open.begin=FALSE,
    ...)

Details: The function performs Dynamic Time Warping (DTW) and computes the optimal alignment between two time series x and y, given as numeric vectors. The "optimal" alignment minimizes the sum of distances between aligned elements. The lengths of x and y may differ. The local distance between elements of x (query) and y (reference) can be computed in one of several ways.

Note the parameter step.pattern=symmetric2.

This is from the documentation of the package.
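The optimal alignment described in the documentation above can be sketched with the classic DTW dynamic program. This is a minimal illustration in Python rather than the R package itself; the function name `dtw_distance` and the absolute-difference local metric are my choices, and it uses a simple symmetric step pattern, not exactly the `step.pattern=symmetric2` from the signature above:

```python
def dtw_distance(x, y):
    """Minimal DTW: smallest sum of |x[i] - y[j]| over all monotone
    alignments of the two series; the series may differ in length."""
    n, m = len(x), len(y)
    INF = float("inf")
    # cost[i][j] = best alignment cost of x[:i] against y[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(x[i - 1] - y[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch x
                                 cost[i][j - 1],      # stretch y
                                 cost[i - 1][j - 1])  # match both
    return cost[n][m]

# A repeated element is absorbed by the warping, so the distance is 0:
print(dtw_distance([1, 2, 3], [1, 2, 2, 3]))  # 0.0
```

This is what makes DTW attractive for matching price segments of different lengths: point-by-point (Euclidean) comparison would not even be defined here.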

 
mytarmailS:

1) Your whole algorithm amounts to creating some kind of "soft" target function for the neural network, right?

But already at the first step of this algorithm we need to receive some signals from the neural network, signals produced by training the network on the target function, the very target function we have not yet created, because we are only at step 1.

My brain is exploding...

People here are like that: "soft". They don't give any details. And what's the use? Do they actually live off their trading?

I can think of a lot of soft target functions. Off the top of my head:
Predicting the direction of a moving average.
Predicting the next ZigZag leg.
Predicting the slope of a linear trend.

All these signals just don't give any idea of how and when to close a trade. And then comes the shamanism with closing conditions and, as a consequence, curve fitting.
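One of the soft targets listed above, the slope of a linear trend, has a closed-form answer via least squares. A minimal sketch (function name is my own; bar index is used as the x-axis):

```python
def trend_slope(prices):
    """Least-squares slope of a straight line fitted to a price window,
    with bar index 0..n-1 as the independent variable."""
    n = len(prices)
    mean_x = (n - 1) / 2
    mean_y = sum(prices) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(prices))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

print(trend_slope([1, 2, 3, 4]))  # 1.0
```

A regression target like this slope is "soft" in exactly the sense discussed: it is continuous, so the network gets a gradient everywhere, unlike a hard buy/sell label.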
 
SanSanych Fomenko:

Two time series.

Well, yes, two series; to measure the proximity between two series you need two series,

in our case the two series are two parts of the same series (prices),

the only difference is that the series you feed to dtw can be of different lengths, and that is very useful for us.
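The "dumb search for similar areas" described in the thread, matching the latest window of a price series against its own history, can be sketched as follows. This is my own illustration, not anyone's actual system: the function name is hypothetical, and it uses fixed-length Euclidean matching (DTW would additionally allow candidate windows of different lengths):

```python
def find_similar_windows(series, query_len, top_k=3):
    """Rank past segments of the series by Euclidean distance to the
    most recent segment of length query_len.  Candidate windows are
    restricted so they do not overlap the query itself."""
    query = series[-query_len:]
    scored = []
    # last candidate start = len - 2*query_len, so windows end before
    # the query begins
    for start in range(len(series) - 2 * query_len + 1):
        window = series[start:start + query_len]
        dist = sum((a - b) ** 2 for a, b in zip(window, query)) ** 0.5
        scored.append((dist, start))
    scored.sort()
    return scored[:top_k]

# The history [1, 2, 3] at start=0 matches the current window exactly:
print(find_similar_windows([1, 2, 3, 7, 8, 1, 2, 3], 3, top_k=1))
```

Once the closest historical segments are found, the bars that followed them can serve as an empirical forecast of what comes after the current situation.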

 
mytarmailS:

1) Your whole algorithm amounts to creating some kind of "soft" target function for the neural network, right?

But already at the first step of this algorithm we need to receive some signals from the neural network, signals produced by training the network on the target function, the very target function we have not yet created, because we are only at step 1.

My brain is exploding...

Alexey Burnakov:
People here are like that: "soft". They don't give any details. And what's the use? Do they actually live off their trading?

I can think of a lot of soft target functions. Off the top of my head:
Predicting the direction of a moving average.
Predicting the next ZigZag leg.
Predicting the slope of a linear trend.

All these signals just don't give any idea of how and when to close a trade. And then comes the shamanism with closing conditions and, as a consequence, curve fitting.

Here, I have clearly spelled out what I do:

In detail: on the current bar there is a buy signal, so we buy; we count the minimum number of bars ahead into the future and check whether the trade would be profitable. If yes, we close; if no, we count one more bar forward and check again. And so on, until the maximum number of bars is reached, at which point we finally close. This is the learning mechanism.

What is not clear? It is not a fantasy; that is exactly what I do now. The objective function is maximizing profit with minimal drawdown. I train with my own genetic algorithm.
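The walk-forward labeling mechanism described above can be sketched like this. All names are hypothetical, and it assumes a long trade closed at bar-close prices; the original uses a genetic algorithm on top of this, which is not shown:

```python
def label_buy_signal(prices, entry_bar, min_bars, max_bars):
    """Walk forward from a buy at entry_bar: close at the first bar
    (at least min_bars ahead) where the trade is profitable, otherwise
    force-close after max_bars.  Returns (exit_bar, profit)."""
    entry_price = prices[entry_bar]
    for ahead in range(min_bars, max_bars + 1):
        exit_bar = entry_bar + ahead
        if exit_bar >= len(prices):
            break  # ran out of history
        profit = prices[exit_bar] - entry_price
        if profit > 0:
            return exit_bar, profit  # first profitable close
    # never profitable within the horizon: forced close
    exit_bar = min(entry_bar + max_bars, len(prices) - 1)
    return exit_bar, prices[exit_bar] - entry_price

# Price dips, then recovers above entry at bar 3:
print(label_buy_signal([100, 99, 98, 101, 97], 0, 1, 4))  # (3, 1)
```

The resulting (exit_bar, profit) pairs are exactly the kind of labels a learner can then be trained on, which is what makes this a labeling scheme rather than a trading rule.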

 
Alexey Burnakov:
People here are like that: "soft". They don't give any details. And what's the use? Do they actually live off their trading?

I can think of a lot of soft target functions. Off the top of my head:
Predicting the direction of a moving average.
Predicting the next ZigZag leg.
Predicting the slope of a linear trend.

All these signals just don't give any idea of how and when to close a trade. And then comes the shamanism with closing conditions and, as a consequence, curve fitting.
You just don't know how to cook them (neural networks). (c) :)