Machine learning in trading: theory, models, practice and algo-trading - page 3215

 
mytarmailS #:

Then you're saying it's a philosophy, not a pattern.

I see a mutual waste of time. I'm sure that if we were talking in person, face to face, the probability of mutual understanding would be close to one.


Overlaying something on top of a trading system's (TS) results is normal practice. The most common example is filters. Less often it is MM (for example, a filter on the balance curve: the stronger the deviation, the stronger the MM adjustment). Rarer still is searching for regularities in the trading results themselves.
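A minimal sketch of the balance-curve MM filter mentioned above (not from the post; the moving-average window, the proportional scaling and the 10% floor are my assumptions): the further the balance falls below its own moving average, the more the lot size is cut.

# Balance-curve filter driving money management (MM).
import numpy as np

def mm_scale(balance: np.ndarray, window: int = 20, base_lot: float = 1.0) -> float:
    """Return a lot size for the next trade based on balance-curve deviation."""
    if len(balance) < window:
        return base_lot                       # not enough history yet
    ma = balance[-window:].mean()             # moving average of the balance curve
    deviation = (balance[-1] - ma) / ma       # relative deviation from the MA
    if deviation >= 0:
        return base_lot                       # curve at or above its MA: trade normally
    # below the MA: the stronger the deviation, the stronger the cut (floored at 10%)
    return base_lot * max(0.1, 1.0 + deviation)

balance_history = np.array([1000, 1010, 1005, 990, 970, 960] * 5, dtype=float)
print(mm_scale(balance_history))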

 
fxsaber #:

I see a mutual waste of time. I'm sure that if we were talking in person, face to face, the probability of mutual understanding would be close to one.

The proposal for a conference in the UAE still stands )

 
fxsaber #:

I see a mutual waste of time. I'm sure that if we were talking in person, face to face, the probability of mutual understanding would be close to one.

I agree

 
Maxim Dmitrievsky #:

He's using the terms incorrectly. OOS is the test set; validation is the second subsample (alongside train) used for model evaluation (validation).

The validation set can coincide with the test set or be kept separate.

This separation arose because ML practitioners often use the second subsample for early stopping of training. In a sense, you could call that fitting to it.

That is why three subsamples are used, one of which is not involved in training at all.
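As a minimal sketch of that three-subsample layout (not the poster's code; the 60/20/20 split and chronological ordering are my assumptions, as is usual for price series):

# The three subsamples discussed above: train, validation (used for
# early stopping / model selection) and test (held out, not involved
# in training at all). Proportions are illustrative.
import numpy as np

data = np.arange(1000)                   # stand-in for a labelled sample series
i_val, i_test = int(0.6 * len(data)), int(0.8 * len(data))

train = data[:i_val]                     # used to fit the model
validation = data[i_val:i_test]          # used to stop training / pick a model
test = data[i_test:]                     # touched only once, for the final estimate

print(len(train), len(validation), len(test))   # 600 200 200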

Validation is confirmation of validity: yes/no. "Evaluation" is a bit of a stretch for a fitness-for-purpose model.)) Fitness-for-purpose evaluation, then.)

The conversation is about the terms and their meanings, I think.)

 
Valeriy Yastremskiy #:

Validation is confirmation of validity: yes/no. "Evaluation" is a bit of a stretch for a fitness-for-purpose model.)) Fitness-for-purpose evaluation, then.)

The conversation is about the terms and their meanings, I think.)

Validation precedes evaluation, or includes it, whichever you prefer. But that's not what I was getting at.

What I was getting at is that the ML crowd mixes up the subsamples :)) yet churns out utopian market theories on an industrial scale.

Since our goal is to find the network with the best performance on new data, the simplest approach to comparing different networks is to evaluate the error function on data independent of that used for training. Networks are trained by minimising an error function defined with respect to a training data set. The performance of the networks is then compared by evaluating the error function on an independent validation set, and the network with the smallest error with respect to the validation set is selected. This approach is called the hold-out method. Since this procedure can itself lead to some overfitting to the validation set, the performance of the selected network should be confirmed by measuring its performance on a third independent set of data called a test set.

An application of this process is early stopping, where the candidate models are successive iterations of the same network, and training stops when the error on the validation set grows; the previous model (the one with the minimum error) is then selected.

https://en.wikipedia.org/wiki/Training,_validation,_and_test_data_sets
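To make the quoted procedure concrete, here is a minimal numpy sketch of early stopping (not from the thread; the linear model, synthetic data and patience value are my assumptions): candidate models are successive iterations of the same model, training stops once the validation error keeps growing, and the weights with the minimum validation error are kept.

# Early stopping on a toy linear-regression problem.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(scale=0.5, size=300)

X_tr, y_tr = X[:200], y[:200]            # training subsample
X_va, y_va = X[200:], y[200:]            # validation subsample

w = np.zeros(5)
best_w, best_err = w.copy(), np.inf
lr, patience, bad_epochs = 0.01, 5, 0

for epoch in range(1000):
    grad = 2 * X_tr.T @ (X_tr @ w - y_tr) / len(y_tr)    # MSE gradient
    w -= lr * grad
    val_err = np.mean((X_va @ w - y_va) ** 2)            # error on the validation set
    if val_err < best_err:
        best_w, best_err, bad_epochs = w.copy(), val_err, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:       # validation error keeps growing: stop
            break

w = best_w                               # keep the model with the minimum validation error
print(epoch, best_err)

The test set from the quoted passage would only come in after this loop, to confirm the selected model on data that never influenced training or stopping.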
 
Maxim Dmitrievsky #: An application of this process is early stopping, where the candidate models are successive iterations of the same network, and training stops when the error on the validation set grows; the previous model (the one with the minimum error) is then selected.

On data with patterns, this will work.
If there are almost none, you get a fit to the segment on which the early stop was done, plus a good result on the train set. You could just extend the train set by that extra segment and get roughly the same model.

 
Forester #:

On data with patterns, this will work.
If there are almost none, you get a fit to the segment on which the early stop was done, plus a good result on the train set. You could just extend the train set by that extra segment and get roughly the same model.

That's a different question.
 
Maxim Dmitrievsky #:
That's a different question.

That's what I was raising.

 
fxsaber #:

That's what I was raising.

Shuffle them at the very least, or bootstrap. If your subsamples come from different distributions, what comparison can you even talk about?
ML doesn't search for patterns; it classifies samples whose patterns are already known.
If searching for patterns via ML is a separate set of techniques (which I do), then searching for patterns via ML != just training on subsamples.
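A minimal sketch of that shuffling / bootstrap idea (not the poster's code; the sizes and the plain index array are illustrative): build the subsamples by shuffling, or by resampling with replacement, so that train and validation are drawn from the same empirical distribution rather than from different stretches of the series.

# Two ways to build same-distribution subsamples.
import numpy as np

rng = np.random.default_rng(42)
samples = np.arange(1000)                      # stand-in for labelled samples

# Option 1: shuffle, then split
shuffled = rng.permutation(samples)
train, validation = shuffled[:800], shuffled[800:]

# Option 2: bootstrap - each subsample is drawn with replacement
boot_train = rng.choice(samples, size=800, replace=True)
boot_validation = rng.choice(samples, size=200, replace=True)

print(train[:5], boot_train[:5])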
 
Maxim Dmitrievsky #:
Shuffle them at the very least, or bootstrap. If your subsamples come from different distributions, what comparison can you even talk about?
ML doesn't search for patterns; it classifies samples whose patterns are already known.
If searching for patterns via ML is a separate set of techniques (which I do), then searching for patterns via ML != just training on subsamples.

I have a terminological misunderstanding, unfortunately.