Machine learning in trading: theory, models, practice and algo-trading - page 3172

 
fxsaber #:

Thanks, I'll try MathRand increments.

Is an OOS drawdown supposed to appear on an SB (random walk)?

I don't think so - by the definition of an SB, it shouldn't be.

Take one overfitted "coin" on new data - it will behave like an SB. Add a few more (one per TS parameter), sum their errors, and you will get sharp drawdowns, sometimes an SB, and sometimes the opposite. Some of the coins were tied to a trend, which then changed; some to small fluctuations. The first group started predicting in the wrong direction all the time, while the second was predicting badly anyway, because it was overfitted to noise. The negative effects added up, and there were no compensating coins left.
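The "summed coins" picture can be sketched numerically. A minimal illustration (all names and numbers are invented for the example): model each overfitted coin as an equity of ±1 steps whose win probability has slipped below 0.5 on new data, add a few pure-noise coins, and sum everything:

```python
import random

random.seed(1)
STEPS = 1000

def coin_pnl(win_prob):
    """Equity of one 'coin' on new data: +1 with probability win_prob,
    else -1.  win_prob < 0.5 models a coin whose learned pattern has
    inverted, so it now predicts the wrong way more often than not."""
    eq, path = 0, []
    for _ in range(STEPS):
        eq += 1 if random.random() < win_prob else -1
        path.append(eq)
    return path

# A few coins stuck on a reversed trend (negative edge) plus pure-noise coins.
coins = [coin_pnl(0.4) for _ in range(3)] + [coin_pnl(0.5) for _ in range(5)]
total = [sum(c[i] for c in coins) for i in range(STEPS)]
print("final combined equity:", total[-1])
```

With no compensating coins left, the negative drift of the stuck coins dominates the sum, so the combined equity drains.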
 
Aleksey Nikolayev #:

I usually try to "move" the task a bit - slightly change all possible parameters (and available metaparameters) and see how the result changes. Sometimes it becomes a bit clearer.

Thanks. Usually what stops me is laziness about digging too deep. Superficial "wiggling" I do practice, of course.

 
Maxim Dmitrievsky #:
Take one overfitted "coin" on new data - it will behave like an SB. Add a few more (one per TS parameter), sum their errors, and you will get sharp drawdowns, sometimes an SB, and sometimes the opposite. Some of the coins were tied to a trend, which then changed; some to small fluctuations. The first group started predicting in the wrong direction all the time, while the second was predicting badly anyway, because it was overfitted to noise. The negative effects added up, and there were no compensating coins left.

This statement amounts to saying that if you sum SB series, you will see sharp drawdowns. But an SB has drawdowns on its own - there is no need to sum anything.


I may be wrong, but I see it this way.

  • Any combination (addition, etc.) of several SBs is an SB.
  • Any TS on an SB is an SB.
The original question was not about the presence of sharp drawdowns, but about the fact that a sharp drawdown starts immediately after the Sample.
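The first bullet is easy to check numerically for the simplest case. A sketch (illustrative only): sum two independent ±1 random walks and look at the increments of the sum - they are again iid with zero mean, i.e. the sum is itself a random walk, just with larger steps:

```python
import random

random.seed(0)

def walk(n):
    """Simple random walk: cumulative sum of iid +/-1 increments."""
    x, path = 0, []
    for _ in range(n):
        x += random.choice((-1, 1))
        path.append(x)
    return path

n = 5000
a, b = walk(n), walk(n)
s = [a[i] + b[i] for i in range(n)]           # sum of two independent SBs
inc = [s[i] - s[i - 1] for i in range(1, n)]  # increments of the sum

mean = sum(inc) / len(inc)
var = sum((x - mean) ** 2 for x in inc) / len(inc)
print(round(mean, 3), round(var, 2))  # mean ~ 0, variance ~ 2
```

The increments of the sum take the values -2, 0, +2, are independent across steps, and have zero mean and variance 2 (the two unit-variance steps added) - exactly a random walk with bigger steps.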
 
mytarmailS #:

The OOS on the left is also a fit, just a second-order one, so to speak.


Imagine you have only 1,000 variants of a TS in total.


Your steps 1 and 2:

1) You start to optimise/search for a good TS on the train data (fitting/searching/optimisation).

Let's say you've found 300 variants where the TS makes money...

2) Now, out of these 300 variants, you look for a TS that will also pass the OOS, i.e. the test data. Say you have found 10 TSs that earn both on the train and on the test (OOS).


So what is step 2?

It is a continuation of the same fitting, only your search (fitting/searching/optimisation) has become a little deeper, or more complex, because now you have not one optimisation condition (pass the train) but two (pass the train + pass the test).

I don't practice this kind of self-deception. I do it only this way.

  1. Optimisation on the train.
  2. From what was found I take the top five and watch their behaviour on the OOS. There is no optimisation at this step in any case.
This is how the original pictures were obtained. So the nice OOS on the left is not a fit at all.
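The two-stage selection described above can be simulated on pure noise to show why passing both train and test proves little by itself. A sketch under invented assumptions (1,000 TS variants, ±1 PnL per trade, no real edge anywhere):

```python
import random

random.seed(42)
N_TS, TRAIN, TEST, FWD = 1000, 250, 250, 250

def equity(n):
    """Total PnL of one hypothetical TS segment on pure noise:
    n independent trades of +/-1 each, i.e. zero true edge."""
    return sum(random.choice((-1, 1)) for _ in range(n))

# Step 1: 'optimise' - keep the variants profitable on the train segment.
survivors = [i for i in range(N_TS) if equity(TRAIN) > 0]
# Step 2: of those, keep the ones that also pass the test (OOS) segment.
passed_both = [i for i in survivors if equity(TEST) > 0]
# Forward segment: the double-filtered TSs still have no edge at all.
fwd = [equity(FWD) for _ in passed_both]
win_rate = sum(1 for x in fwd if x > 0) / len(fwd)
print(len(survivors), len(passed_both), round(win_rate, 2))
```

Roughly half the variants survive each filter purely by chance, yet on a genuinely new segment the double-filtered survivors are profitable only about half the time - the two conditions just made the fit deeper, not the edge real.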
 
fxsaber #:

This statement amounts to saying that if you sum SB series, you will see sharp drawdowns. But an SB has drawdowns on its own - there is no need to sum anything.


I may be wrong, but I see it this way.

  • Any combination (addition, etc.) of several SBs is an SB.
  • Any TS on an SB is an SB.
The original question was not about the presence of sharp drawdowns, but about the fact that a sharp drawdown starts immediately after the Sample.
I gave you an explanation. Perhaps it takes time to sink in. Some TS parameters get stuck in a regime of constantly wrong predictions, because of which the overall result is a steady loss. Sometimes it starts immediately, sometimes not. That part is random.
 
Maxim Dmitrievsky #:
I've given you an explanation. Maybe it takes time to understand.

We're probably just talking about different things. Or there is a terminological conflict.

Above, for example, OOS is understood as a forward-testing section. I.e. the term is the same, but the approaches differ.


Forum on trading, automated trading systems and testing trading strategies

Machine Learning in Trading: Theory, Models, Practice and Algorithm Trading

Maxim Dmitrievsky, 2023.08.17 06:33 AM

Take one overfitted "coin" on new data - it will behave like an SB. Add a few more (one per TS parameter), sum their errors, and you will get sharp drawdowns, sometimes an SB, and sometimes the opposite. Some of the coins were tied to a trend, which then changed; some to small fluctuations. The first group started predicting in the wrong direction all the time, while the second was predicting badly anyway, because it was overfitted to noise. The negative effects added up, and there were no compensating coins left.

This explanation gives an example showing that it is possible to find a situation on an SB where the desired result appears on the right-hand section: not necessarily a sharp drawdown - any result at all. For example, a sharp profit.

But this is just "luck" in the choice of the sample interval on the SB.

All this is, of course, pure theory.

I'll have to get to the charts and take a look.

 
fxsaber #:

We're probably just talking about different things. Or there is a terminological conflict.

Above, for example, OOS is understood as a forward-testing section. I.e. the term is the same, but the approaches differ.



This explanation gives an example showing that it is possible to find a situation on an SB where the desired result appears: not necessarily a sharp drawdown - any result at all. For example, a sharp profit.

But this is just "luck" in the choice of the sample interval on the SB.

All this is, of course, pure theory.

I'll have to get to the charts and see.

It's luck and p-hacking, yes. So the results can be whatever you want them to be.

P-hacking is deliberately fitting results to a significant statistical criterion. For example, you see that the statistics on the left-hand OOS look great, so you choose that variant. Likewise on the right. Either way it is fitting.
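As a hedged illustration of this selection effect (all numbers invented): generate many pure-noise PnL series, pick the one with the best Sharpe-like statistic, and compare it with a fresh, unselected series drawn from the same noise:

```python
import random

random.seed(11)

def sharpe(pnl):
    """Simple per-trade Sharpe-like statistic: mean / standard deviation."""
    m = sum(pnl) / len(pnl)
    v = sum((x - m) ** 2 for x in pnl) / len(pnl)
    return m / v ** 0.5 if v else 0.0

# 200 candidate 'TS' results, all pure noise - no real edge anywhere.
candidates = [[random.gauss(0, 1) for _ in range(250)] for _ in range(200)]
best = max(candidates, key=sharpe)

print(round(sharpe(best), 2))   # inflated purely by selection
fresh = [random.gauss(0, 1) for _ in range(250)]
print(round(sharpe(fresh), 2))  # an unselected sample: typically near zero
```

The "best" series has a clearly positive statistic only because it was chosen as the maximum of 200 noise draws - exactly the fitting-to-a-criterion described above.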
 

fxsaber #:

Any combination (addition, etc.) of several SBs is an SB.

Absolutely true when several SBs are summed with fixed weights. Fancier combinations can produce something more complicated, mainly because of volatility fluctuations.

fxsaber #:

Any TS on an SB is an SB.

This is only partially true - when all trades have roughly the same volumes, stops and take-profits.

Mathematically speaking, "any TS on an SB is a martingale" (not to be confused with the martingale money-management scheme). For example, the equity curve produced on an SB by sitting out drawdowns, averaging down, etc. is also a martingale, but not an SB.
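The martingale-but-not-SB point can be illustrated with a toy averaging TS (the strategy rule here is invented for the example): on an SB its equity keeps zero expectation, yet its step sizes depend on the accumulated position, so the equity itself is not an SB:

```python
import random

random.seed(7)

def averaging_equity(steps):
    """Equity of an 'averaging down' TS on an SB: the position grows by
    one unit after every losing step and resets after a win.  The price
    steps are iid +/-1, so the equity is a martingale (zero drift given
    the past), but its increments are not iid."""
    pos, eq, path = 1, 0, []
    for _ in range(steps):
        step = random.choice((-1, 1))
        eq += pos * step
        pos = pos + 1 if step < 0 else 1  # average down after a loss
        path.append(eq)
    return path

runs = [averaging_equity(200) for _ in range(4000)]
# Martingale property: expected final equity stays near 0 ...
mean_final = sum(r[-1] for r in runs) / len(runs)
print(round(mean_final, 2))
# ... but step sizes vary with the position, unlike an SB's fixed steps.
sizes = {abs(r[i] - r[i - 1]) for r in runs[:100] for i in range(1, 200)}
print(sorted(sizes)[:5])
```

The average final equity over many runs stays near zero (no free edge on an SB), while the set of observed step sizes contains many different values - the signature of a martingale that is not a random walk.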

 
Well, this way of analysing things on an SB chart is a dead end from the start, because on an SB you can get absolutely any result :)
Either a soothing one or an annoying one.
 
СанСаныч Фоменко #:

The OOS should always be on the RIGHT.

If the OOS is on the LEFT, it is impossible to guarantee that the TS is NOT overfitted and does NOT look ahead. These are the first major issues that must be resolved when testing a TS BEFORE anything else.


Which one do you have? It makes no difference! Whether it is one of them or both. You need to test correctly, and basta - OOS on the right.

And it is better to forget about the tester and form the files for testing as follows:

Highly categorical statements, without a shadow of doubt. I once wrote a post on the topic of OOS placement.

It's not the first time I've encountered dislike for the tester. I don't know what the number-cruncher did to deserve it.

We have two files.


The first file is randomly divided into three parts: training, testing and validation. Train on the (random) training sample, then check on the random testing and validation samples - these are all DIFFERENT pieces of the first file. Compare the results. If they are approximately equal, then check on the second, "natural sequence" file. If they are approximately equal there too, we get the main conclusion: our TS is NOT overfitted and does NOT look ahead. Only with this conclusion does it make sense to talk about anything else: accuracy, profitability and the rest - all of which are SECONDARY.
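The described check can be sketched as follows (the per-trade numbers and the tolerance are invented for illustration): split the first file's trades randomly into three pieces and compare the mean result on each before moving on to the second, naturally ordered file:

```python
import random

random.seed(3)

# Hypothetical per-trade results of a TS on the first file (illustrative).
trades = [random.gauss(0.1, 1.0) for _ in range(3000)]

# Random split of the first file into train / test / validation pieces.
idx = list(range(len(trades)))
random.shuffle(idx)
cut1, cut2 = len(idx) // 3, 2 * len(idx) // 3
parts = {
    "train": [trades[i] for i in idx[:cut1]],
    "test": [trades[i] for i in idx[cut1:cut2]],
    "valid": [trades[i] for i in idx[cut2:]],
}
means = {k: sum(v) / len(v) for k, v in parts.items()}
print({k: round(m, 2) for k, m in means.items()})

# The check: results on all pieces should be roughly equal (tolerance
# chosen arbitrarily here) before testing on the second file.
spread = max(means.values()) - min(means.values())
print("roughly equal:", spread < 0.3)
```

If the three means diverge strongly, the TS result depends on which random piece it sees - a warning sign of overfitting before any claim about profitability is made.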


I note that there are practically no other ways to test for look-ahead and overfitting.

I don't quite see how looking ahead can arise during optimisation.


On methodology: I don't understand the need for the train/test/exam split. Claiming, even after the most favourable statistical study, that the TS is NOT overfitted seems too presumptuous to me.

The most I can allow myself as a conclusion is: "it is likely that the TS found some pattern that was present for some time before and after the training interval. At the same time, there is no guarantee that this pattern has not already broken down."

"Out-Of-Sample" - where to place it, on the right or on the left?
  • 2019.12.10
  • www.mql5.com
Once, in public, I came across the opinion that the OOS must be located only on the right, i.e. that placing it to the left of the Optimisation interval is a mistake. I categorically disagreed with this, since…