Machine learning in trading: theory, models, practice and algo-trading - page 118

 
mytarmailS:
...

Personally, my advice is to throw RSI the hell out.

...

Because it is contradictory.

And if we simply take the ordinary price - say, a series of 20 values, with the market now trending upwards - then the trend is up and there is no second reading; everything is unambiguous and non-contradictory, you know what I mean?

To be honest, I do not understand.

Both RSI and the "plain" price (momentum) show yesterday - retrospectively. If the price has been going up over the previous 20 values, that does not mean there is an upward trend now, and in the future there are three mutually exclusive possibilities:

  1. The price will continue to move up.
  2. The price will go flat.
  3. The price will turn around and go down.

The mutual exclusivity of potential future events is essentially contradictory as well, just as in the case of RSI.

The past is a fact (objectively) and therefore unambiguous. And the future is merely an assumption (subjective), and therefore probabilistic.

 
This article is rather an example of creating and using neural networks (MLP, CNN and LSTM) with the Keras (Python) package. The results obtained without hyperparameter tuning cannot be considered indicative. By the way, the author says as much at the end of the article.
 
Yury Reshetov:

To be honest, I don't get it.

Both RSI and the "plain" price (momentum) show yesterday - retrospectively. If the price has been going up over the previous 20 values, that does not mean there is an upward trend now, and in the future there are three mutually exclusive possibilities:

  1. The price will continue to move up.
  2. The price will go flat.
  3. The price will turn around and go down.

Mutual exclusivity of potential future events, in fact, is also contradictory, as in the case of RSI.

The past is a fact (objectively), and therefore unambiguous. And the future is just a supposition (subjective), and therefore probabilistic.

I'm not talking about predicting the future, that's taboo for me; I'm talking about how the current data is represented.

Let's take a series of 20 candles; right now this series shows a strong upward trend, and there is no second reading: the trend is up and that's it. What will happen on the next, 21st candle is unclear - that is a probabilistic forecast of the future with many possible outcomes, and I absolutely agree with you about that.

And now let's take an RSI or a Momentum indicator with a fixed period, say 10, and apply it to our series of 20 candles where the trend is up. We will see that on the 11th candle the indicator starts to bend down from its maximum values. Why on the 11th? The indicator not only cannot describe the future, it cannot even describe the present and the past objectively; it simply lies and confuses, and it confuses the ordinary trader and the neural network alike.
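To make this fixed-window effect concrete, here is a minimal sketch (the price series and the period are made up purely for illustration, not taken from anyone's data): every bar closes higher than the last, yet Momentum with a fixed period of 10 rolls over once the earlier, larger increments leave the window.

```python
import numpy as np

# Hypothetical decelerating uptrend: every bar closes higher than the
# previous one, but the increments get smaller over time.
close = 100 + np.cumsum(np.linspace(2.0, 0.1, 20))

period = 10
# Momentum with a fixed period: close[t] - close[t - period]
momentum = close[period:] - close[:-period]

for bar, (c, m) in enumerate(zip(close[period:], momentum), start=period + 1):
    print(f"bar {bar:2d}  close={c:7.2f}  momentum({period})={m:5.2f}")

# The close rises on every bar, yet momentum(10) keeps falling, because each
# new bar drops a large early increment out of the window and adds a small one.
```

The reading turns down while the visible trend is still up, which is exactly the behaviour the post above is complaining about.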

 
mytarmailS:

I'm not talking about predicting the future, that's taboo for me; I'm talking about how the current data is represented.

Let's take a series of 20 candles; right now this series shows a strong upward trend, and there is no second reading: the trend is up and that's it. What will happen on the next, 21st candle is unclear - that is a probabilistic forecast of the future with many possible outcomes, and I absolutely agree with you about that.

And now let's take an RSI or a Momentum indicator with a fixed period, say 10, and apply it to our series of 20 candles where the trend is up. We will see that on the 11th candle the indicator starts to bend down from its maximum values. Why on the 11th? The indicator not only cannot describe the future, it cannot even describe the present and the past objectively; it simply lies and confuses, and it confuses the ordinary trader and the neural network alike.

TA indicators and oscillators describe the past exactly as far as their algorithms specify. And they lag, so they miss sharp price movements; if you trade by them, you can jump onto "a train that left long ago" or jump out too early of a train that is about to go in the right direction. Another consequence of the lag in TA is divergence, when the price goes in one direction while the oscillator draws the opposite one.

Therefore there is nothing to discuss: the code of most technical analysis tools is open, and it is easy to understand them if you know the mathematics well.

But that is beside the point, because none of the above has anything to do with machine learning. Machine learning, in its predictive part, is about past-to-future relationships, not about trying to "understand" hindsight alone.
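As a rough sketch of what "past-to-future relationships" means in practice (the series, the window and the horizon below are hypothetical; a real study would use actual quotes): the features are built only from returns already known at the current bar, and the target is taken from a later bar.

```python
import numpy as np
import pandas as pd

# Hypothetical close series standing in for real history.
rng = np.random.default_rng(0)
close = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 500))))

window = 20   # how much of the past goes into the features
horizon = 1   # how far into the future the target looks

returns = np.log(close).diff()

# Features: the last `window` log-returns (the past, known at the bar).
X = pd.concat({f"r_lag_{k}": returns.shift(k) for k in range(window)}, axis=1)
# Target: the sign of the *future* return (what we actually want to relate to).
y = np.sign(returns.shift(-horizon))

data = pd.concat([X, y.rename("target")], axis=1).dropna()
print(data.shape)
```

Everything to the left of `target` is hindsight; the model is judged only on how well that hindsight maps onto the not-yet-seen column.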

 
Yury Reshetov:

TA indicators and oscillators describe the past exactly as far as their algorithms specify. And they lag, so they miss sharp price movements; if you trade by them, you can jump onto "a train that left long ago" or jump out too early of a train that is about to go in the right direction. Another consequence of the lag in TA is divergence, when the price goes in one direction while the oscillator draws the opposite one.

Therefore there is nothing to discuss: the code of most technical analysis tools is open, and it is easy to understand them if you know the mathematics well.

But that is beside the point, because none of the above has anything to do with machine learning. Machine learning, in its predictive part, is about past-to-future relationships, not about trying to "understand" hindsight alone.

I don't know; in 8 years of market research I have never managed to jump onto that "train" with indicators. I consider them contradictory and don't recommend them to anyone, and I have explained why I think so. What else to add here, I don't know...
 
Vladimir Perervenko:
This article is rather an example of creating and using neural networks (MLP, CNN and LSTM) with the Keras (Python) package. The results obtained without hyperparameter tuning cannot be considered indicative. By the way, the author says as much at the end of the article.

I personally see an error there. The first MLP network, trained on raw prices (which is methodological nonsense), suddenly shows an almost exact match with the original price series, while the other networks predicting the price show exactly what they are supposed to show: the forecast repeats the previous value and carries zero information. I think his MLP output is shifted back by one bar, coincidentally or not. I made the same mistake myself some years ago through a misunderstanding, and the result is always the same: R^2 <= 0.
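One way to see why a forecast that merely repeats the previous bar looks so convincing on a chart while carrying no information (the random-walk series below is synthetic, purely for illustration): compute R^2 on price levels and then on bar-to-bar changes.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical random-walk close prices standing in for a test set.
close = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 1000)))

actual = close[1:]
persistence = close[:-1]          # "forecast" = previous bar's close

def r2(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1 - ss_res / ss_tot

# On price levels the lagged copy hugs the series and R^2 looks superb...
print("R^2 on levels :", r2(actual, persistence))
# ...but on the bar-to-bar changes the same "forecast" carries no information.
print("R^2 on changes:", r2(np.diff(close), np.zeros(len(close) - 1)))
```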

But the directional classification accuracy of 54% looks plausible. With such accuracy, taking transaction costs on the American market into account, you can make 10-15% profit per year.
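Back of the envelope, with purely hypothetical numbers (the average captured move, round-trip cost and trade count below are assumptions, not figures from the article), a 54% hit rate gives an edge of roughly that order:

```python
# All inputs are illustrative assumptions, not data from the article.
p = 0.54        # probability of calling the bar's direction correctly
move = 0.01     # assumed average absolute move captured per trade (1%)
cost = 0.0002   # assumed round-trip commission + spread (2 bp)
trades = 250    # roughly one trade per trading day

# Expected return per trade: p*move - (1-p)*move - cost = (2p - 1)*move - cost
per_trade = (2 * p - 1) * move - cost
print(f"expected return per trade: {per_trade:.4%}")
print(f"naive annual estimate    : {per_trade * trades:.2%}")
```

Change any one of the assumed inputs and the annual figure moves a lot, which is why such estimates are only indicative.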

And what does this have to do with hyperparameter tuning and with whether the results are indicative? You can get a result without tuning if you know what you are doing. And with tuning you can overfit so badly that it all falls apart on the test.

But on the whole, his experiment is kind of lame.

 
Alexey Burnakov:

I personally see an error there. The first MLP network, trained on raw prices (which is methodological nonsense), suddenly shows an almost exact match with the original price series, while the other networks predicting the price show exactly what they are supposed to show: the forecast repeats the previous value and carries zero information. I think his MLP output is shifted back by one bar, coincidentally or not. I made the same mistake myself some years ago through a misunderstanding, and the result is always the same: R^2 <= 0.

And what does this have to do with hyperparameter tuning and with whether the results are indicative? You can get a result without tuning if you know what you are doing. And with tuning you can overfit so badly that it all falls apart on the test.

This was about the conclusions in the article: 1. the regression problem is solved better; 2. MLP shows the best results.

 
Dmitry:

The last time this thought was expressed to me in such a discussion was by Matemat on 4.

That was a while back; he's probably still looking for them, poor guy....

So I've already written here about these dependencies. They do exist. The problem is that the smart guys put such a spread on the trade that it almost always eats up the edge.

And it takes years to find strong, profitable ones, that's for sure.

 
Alexey Burnakov:

So I've already written here about these dependencies. They do exist. The problem is that the smart guys put such a spread on the trade that it almost always eats up the edge.

And it takes years to find strong, profitable ones, that's for sure.

) No one is arguing that there are dependencies! The argument is about profitable strategies.

A simple tree will give 65-70% correct candlestick colour calls, and you still cannot use it. Even in a binary strategy the advantage is too small.
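For what it's worth, here is a sketch of how such a tree can be checked honestly, with a chronological train/test split (the data below is a synthetic random walk, so it will land near 50% out of sample; on real candles the interesting part is the gap between the two numbers):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(2)
# Hypothetical close series; in practice this would be real candle data.
close = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 2000)))
ret = np.diff(np.log(close))

lags = 5
# Features: the previous `lags` candle "colours"; target: the next colour.
X = np.column_stack([np.sign(ret[i:len(ret) - lags + i]) for i in range(lags)])
y = np.sign(ret[lags:])

split = int(len(y) * 0.7)              # chronological split, no shuffling
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X[:split], y[:split])

print("in-sample accuracy :", accuracy_score(y[:split], tree.predict(X[:split])))
print("out-of-sample      :", accuracy_score(y[split:], tree.predict(X[split:])))
```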

 
mytarmailS:

Regression works best? That's from his conclusions.

Take a close look at 3 of his graphs:

https://cdn-images-1.medium.com/max/800/1*pHoc6M3mpkaLd6IleRZrvQ.png

https://cdn-images-1.medium.com/max/800/1*a_99bupenNcTfPQPQZoiB7wA.png

https://cdn-images-1.medium.com/max/800/1*a_99bupenNcTfPfPQZZoiB7wA.png

His numbers there don't match the graphs. The same RMSE is claimed both for a chart that supposedly shows a close match between the predicted and the real price and for two others where the network learns nothing. The graphs themselves also look the same.

I take it that the classification at least gives something.

He's not doing the price regression right at all. You can't feed raw prices into a neural network (and scaling won't solve the problem): the non-stationarity there is horrible. The output is the familiar "shift", where the forecast is just a slightly modified copy of the last close price.
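A small illustration of the non-stationarity point (synthetic trending series, arbitrary split point): the raw close drifts away from the range seen in training, which no min-max scaling on the training set can fix, while log-returns stay in a comparable range.

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical trending close prices.
close = 100 * np.exp(np.cumsum(rng.normal(0.0005, 0.01, 2000)))
logret = np.diff(np.log(close))

split = 1000
for name, series in [("raw close", close), ("log-returns", logret)]:
    train, test = series[:split], series[split:]
    print(f"{name:12s} train range [{train.min():9.4f}, {train.max():9.4f}]  "
          f"test range [{test.min():9.4f}, {test.max():9.4f}]")

# The raw close in the test part drifts away from the training range, so a
# network fitted on raw prices keeps seeing inputs it was never trained on;
# log-returns remain in a comparable range across both parts.
```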
