Machine learning in trading: theory, models, practice and algo-trading - page 3482

 
Forester #:
The information is jumbled/randomised.
What you spend your time on is up to you. There is nothing more to say on this topic.

Do you realise that only the scale of measurement of the event has changed?

 
Aleksey Vyazmikin #:

Do you realise that only the scale of measurement of the event has changed?

The scale changed during quantisation.
And you are also rearranging the values, i.e. breaking their natural sorting.
 
Forester #:
The scale changed during quantisation.
And you are also rearranging the values, i.e. breaking their natural sorting.

Imagine we have applied a transformation through some function - say, a sine. The function is the same on all samples. It has changed both the scale and the order in which the values are arranged on that scale.
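To make the distinction concrete, here is a minimal numpy sketch (the data, seed, and bin edges are illustrative, not from the thread): a monotonic quantisation changes only the scale, while a non-monotonic function such as a sine also breaks the natural sorting.

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.sort(rng.normal(size=10))  # feature values in their natural order

# Monotonic transform (quantisation into ordered bins): the scale changes,
# but the ordering of values does not.
bins = np.quantile(x, [0.25, 0.5, 0.75])
quantised = np.digitize(x, bins)
print(np.all(np.diff(quantised) >= 0))  # True: natural sorting preserved

# Non-monotonic transform (a sine): values are reordered on the new scale.
warped = np.sin(3.0 * x)
print(np.all(np.diff(warped) >= 0))     # False: natural sorting broken
```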

 

I increased the learning rate tenfold, and models appeared that already have a probability greater than 0.5 on two samples.

In the chart, the models are ordered by profit on the exam sample.
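The post does not name the library; assuming a CatBoost-style gradient-boosting setup, raising the learning rate tenfold is a one-parameter change, and "probability greater than 0.5" can be checked on a held-out sample roughly like this (the synthetic data and both learning-rate values are assumptions for illustration):

```python
import numpy as np
from catboost import CatBoostClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 10))
y = (X[:, 0] + 0.5 * rng.normal(size=3000) > 0).astype(int)
X_train, y_train = X[:2000], y[:2000]
X_exam, y_exam = X[2000:], y[2000:]   # stands in for the thread's exam sample

model = CatBoostClassifier(
    iterations=500,
    learning_rate=0.3,  # tenfold a 0.03 baseline; both values are assumptions
    verbose=0,
)
model.fit(X_train, y_train, eval_set=(X_exam, y_exam))

# "Probability greater than 0.5" read as: class-1 probability above 0.5
# on the held-out sample.
p = model.predict_proba(X_exam)[:, 1]
print((p > 0.5).mean())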


 
I started training on data drawn from a normal distribution - I am surprised that the training is going so briskly... I'll run 100 models and look at the financial results.
 
Aleksey Vyazmikin #:
I started training on data drawn from a normal distribution - I am surprised that the training is going so briskly... I'll run 100 models and look at the financial results.

Why? I think everything is known in advance.

PS I came to MQL only to have my account deleted, but got caught up talking).

 
Yuriy Asaulenko #:

Why? I think everything is known in advance.

PS I came to MQL only to have my account deleted, but got caught up talking).

I was surprised not by the learning process itself, but by the fact that each iteration is validated on a separate sample, and the next learning step only proceeds once the effect of the completed iteration has been confirmed, so I thought training would stop after a couple of iterations.
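As I read the experiment, it is: features drawn from a normal distribution, labels carrying no signal, and per-iteration validation on a held-out sample with an overfitting detector. A hedged CatBoost sketch (the library and all parameters are assumptions, not confirmed in the thread):

```python
import numpy as np
from catboost import CatBoostClassifier

rng = np.random.default_rng(1)
X_train = rng.normal(size=(5000, 20))
y_train = rng.integers(0, 2, size=5000)   # labels carry no signal at all
X_valid = rng.normal(size=(2000, 20))
y_valid = rng.integers(0, 2, size=2000)

model = CatBoostClassifier(iterations=1000, verbose=100)
model.fit(
    X_train, y_train,
    eval_set=(X_valid, y_valid),
    early_stopping_rounds=10,  # halt once validation Logloss stops improving
)
# On pure noise, validation loss should stop improving almost immediately,
# which is the "stop after a couple of iterations" expectation above.
print("best iteration:", model.get_best_iteration())
```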

 
Aleksey Vyazmikin #:

I was surprised not by the learning process itself, but by the fact that each iteration is validated on a separate sample, and the next learning step only proceeds once the effect of the completed iteration has been confirmed, so I thought training would stop after a couple of iterations.

TensorFlow? There will be no stopping: it will run however many epochs you assign.
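In Keras terms (assuming that is the TensorFlow API in question), fit() runs exactly the assigned number of epochs; it only stops early if you attach a callback such as EarlyStopping. A minimal sketch with placeholder random data:

```python
import numpy as np
import tensorflow as tf

x_train = np.random.normal(size=(1000, 10)).astype("float32")
y_train = np.random.randint(0, 2, size=(1000, 1)).astype("float32")
x_valid = np.random.normal(size=(200, 10)).astype("float32")
y_valid = np.random.randint(0, 2, size=(200, 1)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=5, restore_best_weights=True
)
# Drop `callbacks=[early_stop]` and all 100 epochs run regardless of val_loss.
model.fit(x_train, y_train, validation_data=(x_valid, y_valid),
          epochs=100, callbacks=[early_stop])
```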

 
Yuriy Asaulenko #:

Why? I think everything is known in advance.

PS I came to MQL only to have my account deleted, but got caught up talking).

And why delete your account?
 
mytarmailS #:
And why delete your account?

Why do I need it? I haven't been here for a few years and I don't plan to come back. And I see Mashka's training thread is still here.
