What to feed to the input of the neural network? Your ideas... - page 48

 
It turns out Ivan Butko is the author of the topic. He seems to be asking about ideas ("What to feed to the neural network input? Your ideas...") and yet it isn't ideas he needs, but something else. All right. I apologise if I've got something wrong.
Ivan Butko
  • 2024.04.12
  • www.mql5.com
  • Trader's profile
 
Yuriy Asaulenko #:

A normal backtest usually coincides with reality. Otherwise, fuck such backtests).

Your picture shows the MAE metric, not the profit in pips.
You don't understand what you're doing
 

I have the following classification (in descending order of importance):

1. augmentation of the original series

2. original series

3. normalisation of the initial series in a sliding window

4. indicators (including returns)
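For concreteness, items 2 to 4 of the list above could be sketched like this (a minimal numpy sketch; the function names are illustrative, not from the thread):

```python
import numpy as np

def returns(prices: np.ndarray) -> np.ndarray:
    """Simple returns (item 4): no fitted parameters, so no
    per-instrument preprocessing is needed."""
    return np.diff(prices) / prices[:-1]

def sliding_window_zscore(prices: np.ndarray, window: int) -> np.ndarray:
    """Normalise each point by the mean/std of the preceding window
    (item 3). The first `window` points have no history, so they are NaN."""
    out = np.full(prices.shape, np.nan)
    for i in range(window, len(prices)):
        w = prices[i - window:i]
        std = w.std()
        out[i] = (prices[i] - w.mean()) / (std if std > 0 else 1.0)
    return out

# Item 2 is just the raw series itself.
prices = 100.0 + np.cumsum(np.random.default_rng(0).normal(0, 1, 500))
r = returns(prices)
z = sliding_window_zscore(prices, window=50)
```

The sliding-window version keeps the inputs in a bounded range, which is exactly what the raw series (item 2) fails to guarantee on new data.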

 
Yuriy Asaulenko #:

Does the error value mean anything to you? That's a shame. It's already enough for an evaluation.

Just try a backtest and you'll see for yourself.
I did what you're doing now seven years ago.
 
Maxim Dmitrievsky #:

I have the following classification (in descending order of importance):

1. augmentation of the original series

2. original series

3. normalisation of the initial series in a sliding window

4. indicators (including returns)

In the articles I intentionally used returns, because they do not require any preprocessing, which may be different for each trading instrument.
 
Yuriy Asaulenko #:

Okay, I'll post a backtest for you; not necessarily of this particular NS, perhaps of another one.

X is the trade number, Y is the accumulated profit in instrument points.

Do I understand correctly that the test starts at index 150?
 
Yuriy Asaulenko #:

I have no idea. I don't even remember what the instrument is :)

)))
Well, then we're talking about nothing.
 
mytarmailS #:
)))
Well, then we're talking about nothing.

There's nothing to talk about.)

 
Maxim Dmitrievsky #:

Initially, the most desirable thing is to feed the original series. But this does not always work due to prices moving out of the training range on new data.

Discussing augmentation/differencing techniques that lose the least information, for that second case, would be useful.
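On the "least loss of information" point: plain first differencing (or log-returns) removes only the level, the part that drifts out of the training range, and is exactly invertible given one anchor price. A sketch on synthetic data (my own illustration, not anyone's posted code):

```python
import numpy as np

rng = np.random.default_rng(1)
prices = 100.0 + np.cumsum(rng.normal(0, 0.5, 300))

# First differences: drop the absolute level, keep every increment.
diffs = np.diff(prices)
# Given the first price as an anchor, the series is fully recoverable,
# so the only information lost is that single starting level.
restored = prices[0] + np.concatenate(([0.0], np.cumsum(diffs)))

# Log-returns behave the same way, multiplicatively.
logret = np.diff(np.log(prices))
restored_log = prices[0] * np.exp(np.concatenate(([0.0], np.cumsum(logret))))
```

Whatever information a model loses when fed differences therefore comes from the model no longer seeing the level, not from the transform destroying anything.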

Something is missing.

Some kind of intermediate knowledge.

Here I am picking at these normalisations. I add coefficients so that the activation function takes some fancy form (creates new rules). The weights already do this, but still, the output rules (what number comes out) certainly take the fanciest forms. One thing, however, stays invariable: remembering history.

That is logically good in a way, but bad on new data. The data are different, so what do we need memory for? It's like going to university and being told: "Forget everything you were taught at school".


What I'm getting at is this: with or without normalisation (bare data), the NS everywhere runs into the fact that all patterns are 50/50. The only difference is that on bare data the NS is more stable on the forward test... ONLY IN A FLAT! And who promises a flat tomorrow or the day after? Nobody, because in that case the NS averages the weights so heavily that it fits the longest flat in the training history. And if new data goes beyond that flat, the NS opens the opposite trade and sits there till death.

No filters of the "buy / now sell / now buy" flag type work. The NS simply switches off after hitting the set SL. And that is only half the problem: even a flat doesn't guarantee full stability, the NS can drift too. Taken together, these consequences make it hard to call all this a working method of making profit.



The very idea of an NS is... well, it doesn't create new information. It relabels it. There are numbers 0.2, 0.3, 0.4; it labels them 0.3456. Another set of input numbers, and it labels it 0.5367. And so on.

But each such set of numbers is a pattern. The NS essentially takes pattern "a" and calls it pattern "b". It renames it.



And, going back to the beginning of the post, what's missing is knowledge of what learning is. What is it, in general? You open up the definition and it is pure abstraction, nothing tied to an applied task. ChatGPT likewise just memorises textbooks and doesn't understand what I want from it.

You take two numbers and multiply each by two other numbers: is this learning? No. It's tweaking, fitting, optimising. Its result is a "marking" of the input set.

Learning is when pattern A gives us a BUY signal today and a SELL signal tomorrow. How exactly? What's the trick? That's what learning is about: learning context. Adaptation.



But there have been adaptations, and they didn't work. We can assume something was missing. And changing the period of a moving average is a strange sort of adaptation, even though it makes sense.


How do you translate learning into numbers? That's the question.

Without any preprocessing of the data. How would the system know what to preprocess? The very idea of data preprocessing, of "cleansing from noise", already amounts to creating WORKING PATTERNS! That's it: take them and trade them. But no such thing exists.

Market noise is called market noise because it is market noise. Where does that come from? From a physics textbook? From maths? Because clever graphs of amplitudes and discontinuities showed something? You are all good, your articles are professional and academic, but they apply to Forex loosely, if at all.



We freely call a trend a trend, a flat a flat, noise noise, learning learning, but none of it follows from anything. We are speaking different languages.



It's as if we need some radically different work, a radically different approach (to working with numbers).

And accordingly, the strictest interpretation, without any "the system thinks", "black box", "the NS decided", "the training is incorrect", "there was no information, something didn't land there, so it doesn't earn".

 
Ivan Butko #:

Something's missing.

Some kind of intermediate knowledge.

The word "stationarity" has been heard here before, but not in the context I would like.

I am thinking simply, after having also experienced all this thankless fiddling:

There are market states, you can get them, for example, through clustering.

If we combine the quotes (returns) from each individual cluster into one series, and remove the quotes belonging to other clusters, in some cases we get an almost stationary series. That can already be worked with.

After that it does not matter much what you feed the model (preferably raw prices, so that no information is lost).

ML algorithms work fine, you don't need to dig into them. You need to look for stationary series/regularities. Only on those does an ML model steadily predict the future.

Any other ideas on how to obtain a stationary series or a stationary regularity are always the right direction of thought.
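The clustering idea above can be sketched on synthetic regime-switching data. This is a dependency-free illustration, not the poster's actual procedure: the rolling-volatility feature and the tiny k-means stand in for whatever state features and clustering algorithm one would really use.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic regime-switching returns: calm, volatile, calm again.
ret = np.concatenate([rng.normal(0, 0.2, 400),
                      rng.normal(0, 1.5, 400),
                      rng.normal(0, 0.2, 400)])

# Cluster bars by a rolling-volatility feature (a stand-in for
# "market state"; any richer feature set would do).
win = 20
vol = np.array([ret[max(0, i - win):i + 1].std() for i in range(len(ret))])

# Tiny 1-D k-means with k=2, so the sketch needs no external libraries.
centers = np.array([vol.min(), vol.max()])
for _ in range(25):
    labels = np.argmin(np.abs(vol[:, None] - centers[None, :]), axis=1)
    for k in range(2):
        if np.any(labels == k):
            centers[k] = vol[labels == k].mean()

# Stitch together the returns of each cluster: within one cluster the
# variance is far more homogeneous than in the mixed series.
calm = ret[labels == np.argmin(centers)]
wild = ret[labels == np.argmax(centers)]
```

In this toy setup the per-cluster series each have roughly constant variance, while the mixed series does not, which is the sense in which splitting by market state pushes each piece toward stationarity.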