Machine learning in trading: theory, models, practice and algo-trading - page 3484

 
Aleksey Vyazmikin #:

Is this used in the learning algorithm? I can't figure out where.

No, why would it be?

The more random the permutations, the worse the result will be.

 

Well, okay, while I'm sitting here. In this time (2-3 hours) I managed to reduce the NS training error by about a factor of two and to get stable training results on different random samples.


An error of 0.035 is next to nothing. It is already possible to prepare a profit test.

I won't use it anyway, the interest is purely academic).

 

Launching a Mega AI Project on TensorFlow! Let's get started, shall we?

 
So, I trained on the random sample, and I hasten to report that on the train, test and exam samples the balance came out at zero, i.e. no pattern was found on any of the samples; as I understand it, the activation threshold was never exceeded.
 
Aleksey Vyazmikin #:

No, it's not. Imagine we have ranges in the predictor:

1. 0 to 10 - 55% probability.

2. 10 to 15 - 45% probability.

3. 15 to 25 - 50% probability.

Here I simply renumbered them after ordering by probability, so the new numbering relative to the old is 1=2, 2=3, 3=1. In fact, I have only changed their designations.

The designations matter only for categorical predictors. There a split is evaluated on a single category/quantum at a time, and what you call them is irrelevant. For example colours: red, green and white.

For numerical predictors the quanta are sorted once at the start, and then, when splits are evaluated, the rows are ordered by quantum number. Renumbering the quanta swaps the rows, for example quantum 0 (rows 0-100) and quantum 9 (rows 900-1000), but only in the predictor; the teacher rows stay where they were. As a result, in rows 0-100 and 900-1000 you train the model against different teacher values than originally. I.e. rows 900-1000 of the predictor end up teaching rows 0-100 of the teacher, because 900-1000 have been moved to 0-100.
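As a rough illustration of this point (a minimal sketch, not Forester's or any booster's actual code: the equal-frequency quantisation, the random permutation and the helper function are all invented for the example), here is how renumbering the quanta of a numeric predictor wrecks the ordered-split quality while the teacher column stays in place:

```python
import numpy as np

rng = np.random.default_rng(0)

# A numeric predictor and a teacher (target) that genuinely depends on it.
x = rng.uniform(0.0, 10.0, size=1000)
teacher = (x > 5.0).astype(int)

# Quantise into 10 ordered bins: quantum 0 holds the smallest values.
edges = np.quantile(x, np.linspace(0.0, 1.0, 11))
quantum = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, 9)

# Renumber the quanta by some other criterion (here simply a random permutation).
perm = rng.permutation(10)
renumbered = perm[quantum]

def best_ordered_split_accuracy(q, y):
    # Best accuracy achievable by a single split of the form "quantum <= t",
    # which is roughly what split evaluation searches over for a numeric feature.
    best = 0.0
    for t in range(10):
        pred = (q > t).astype(int)
        best = max(best, (pred == y).mean(), ((1 - pred) == y).mean())
    return best

print("best ordered split, original numbering:", best_ordered_split_accuracy(quantum, teacher))
print("best ordered split, after renumbering: ", best_ordered_split_accuracy(renumbered, teacher))
# The teacher rows never moved, yet the renumbered predictor can no longer be
# separated by an ordered split: the quantum order no longer means anything.
```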

 
Forester #:

The designations matter only for categorical predictors. There a split is evaluated on a single category/quantum at a time, and what you call them is irrelevant. For example colours: red, green and white.

For numerical predictors the quanta are sorted once at the start, and then, when splits are evaluated, the rows are ordered by quantum number. Renumbering the quanta swaps the rows, for example quantum 0 (rows 0-100) and quantum 9 (rows 900-1000), but only in the predictor; the teacher rows stay where they were. As a result, in rows 0-100 and 900-1000 you train the model against different teacher values than originally. I.e. rows 900-1000 of the predictor end up teaching rows 0-100 of the teacher, because 900-1000 have been moved to 0-100.

Agreed.

For the algorithm, the only difference between a categorical predictor and a non-categorical one is that in the former an individual quantised segment is selected for evaluation straight away, while in the latter an ordered group is evaluated.

The rows do not change. We simply rank the quantum segments not by their number, which reflects a nominal indicator (the segment's own value), but by a different criterion.

We only affect the composition of the group of quantum cut-offs.

 

The anomalies continue: I took a random sample and transformed it as described above, and now a strange thing has happened - a couple of models did get trained.

Profit of 100 models, ordered in ascending order on the exam sample.

This is what the model looks like in terms of probability - below is one model on three samples: train, test, exam.

And below is the balance graph on the exam sample.

Well, with that you could go to market - ha-ha-ha :)

Now we have to think about whether we are feeding the model randomness or something useful - how to determine that is not clear.

All that is clear is that my method makes it possible to train the untrainable...

 
Aleksey Vyazmikin #:

The anomalies continue: I took a random sample and transformed it as described above, and now a strange thing has happened - a couple of models did get trained.

All that is clear is that my method makes it possible to train the untrainable...

To fit the untrainable.

It's the well-known fool's test: you train on random data and fit it to the test set, and then you compare that with training on the original data.

If training on the random data fits on average as well as or better, you're in trouble. That's p-hacking.
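A minimal sketch of that sanity check, with synthetic data and a generic scikit-learn classifier standing in for whatever model is actually being trained (none of the names below come from the thread):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n, p = 2000, 20

# "Real" data: the target genuinely depends on two of the features.
X_real = rng.normal(size=(n, p))
y_real = (X_real[:, 0] + 0.5 * X_real[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Random data: same shape, but the target is pure noise.
X_rand = rng.normal(size=(n, p))
y_rand = rng.integers(0, 2, size=n)

def test_accuracy(X, y):
    # Train on one part, score on a held-out part.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
    return model.score(X_te, y_te)

print(f"test accuracy on real data:   {test_accuracy(X_real, y_real):.3f}")
print(f"test accuracy on random data: {test_accuracy(X_rand, y_rand):.3f}")
# If the random-data score is comparable to (or better than) the real one,
# the "edge" found on real data is likely a product of the fitting procedure itself.
```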

 
Aleksey Vyazmikin #:

The rows do not change. We simply rank the quantum segments not by their number, which reflects a nominal indicator (the segment's own value), but by a different criterion.

The values in the rows do change. Suppose rows 0-100 previously corresponded to predictor values from, say, 2.55 to 2.88, which are labelled quantum 0 (the values are kept in a per-quantum table of values), and rows 900-1000 corresponded to values from 3.44 to 3.77. By replacing the quantum number in those rows you change their values, which is equivalent to changing the values in the rows of this predictor. But you do not move the teacher values in those same rows. I.e. you are teaching the wrong thing: row 0 of the teacher gets paired with the value from row 900 of the predictor.
I'm already tired of explaining. Do as you wish.
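A compact sketch of this table-of-values point; the ranges 2.55-2.88 and 3.44-3.77 are taken from the post, while the row layout and the "buy only in quantum 0" teacher are invented purely for illustration:

```python
import numpy as np

# Hypothetical per-quantum table of values for one predictor:
# quantum number -> range of raw values it stands for.
value_table = {
    0: (2.55, 2.88),   # the range the rows of quantum 0 originally covered
    9: (3.44, 3.77),   # the range the rows of quantum 9 originally covered
}

quantum = np.arange(1000) // 100       # rows 0-99 -> quantum 0, ..., 900-999 -> quantum 9
teacher = (quantum == 0).astype(int)   # suppose the teacher says "buy" only on quantum-0 rows

# Swap the numbers of quantum 0 and quantum 9 in the predictor column only.
swapped = quantum.copy()
swapped[quantum == 0] = 9
swapped[quantum == 9] = 0

# Row 0 now carries quantum 9, i.e. values around 3.44-3.77 according to the
# table, yet it is still paired with the teacher label that belonged to the
# old 2.55-2.88 rows.
print("row 0: quantum", quantum[0], "->", swapped[0],
      "values", value_table[int(swapped[0])], "teacher label still", teacher[0])
```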

 
Yuriy Asaulenko #:

Well, okay, while I'm sitting here. In this time (2-3 hours) I managed to reduce the NS training error by about a factor of two and to get stable training results on different random samples.


An error of 0.035 is next to nothing. It is already possible to prepare a profit test.

I won't use it anyway, the interest is purely academic).

Look at the MAE, then at the chart, then at the MAE, then at the chart, then at the MAE, then at the chart, then at the MAE... then at the chart... a few more iterations of shifting the focus of attention from the MAE to the chart and back again.

The error is smaller than yours. How you manage to profit from this remains a mystery.

https://www.mql5.com/ru/articles/12772


Evaluating ONNX models using regression metrics
  • www.mql5.com
Regression is the task of predicting a real value from an unlabelled example. So-called regression metrics are used to assess the accuracy of a regression model's predictions.
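The linked article covers regression metrics for ONNX models; as a reference point only, here is a minimal numpy sketch of the usual metrics (MAE, RMSE, R²), unrelated to the article's actual code:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """A few standard regression metrics: MAE, RMSE and R^2."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return {"MAE": mae, "RMSE": rmse, "R2": 1.0 - ss_res / ss_tot}

# Toy usage: a small MAE on price increments says nothing by itself about
# whether trading on the predictions produces a rising balance curve.
y_true = np.array([0.010, -0.004, 0.007, 0.001, -0.012])
y_pred = np.array([0.008, -0.001, 0.006, 0.003, -0.010])
print(regression_metrics(y_true, y_pred))
```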