Machine learning in trading: theory, models, practice and algo-trading - page 3431

 
Maxim Dmitrievsky #:

The multicurrency version is ready; a rough one like this will do for now.

What's left is the best part.


Is it training on samples from different symbols at once?

 
Aleksey Vyazmikin #:

Is it training on samples from different symbols at once?

Multiple symbols at once in the same dataset.

I guess that's where feature normalisation is needed, as much as I despise it.
 
secret #:
Your quotes are cheating. An understated spread, to put it simply.

My question didn't say anything about the data source or the trading system.

It's a number series and FF optimisation.

 

On topic, for inspiration, motivation, and anticipation of successful success:

https://www.ruder.io/multi-task/

An Overview of Multi-Task Learning for Deep Learning
  • 2017.05.29
  • www.ruder.io
 
Maxim Dmitrievsky #:

Multiple symbols at once in the same dataset.

I guess that's where feature normalisation is needed, as much as I despise it.

I went a little in that direction, but I was selecting leaves on one instrument and checking their efficiency on another; unfortunately, the efficient ones were still only about 5% of all those originally selected, on independent samples for the two instruments. Maybe a bit more, but I remember it being small. And I do have normalisation where it is required, i.e. the information from the predictors is not expressed in points.

I have plans to try different normalisation variants, but at my pace it won't happen soon.

 
Aleksey Vyazmikin #:

I went a little in that direction, but I was selecting leaves on one instrument and checking their efficiency on another; unfortunately, the efficient ones were still only about 5% of all those originally selected, on independent samples for the two instruments. Maybe a bit more, but I remember it being small. And I do have normalisation where it is required, i.e. the information from the predictors is not expressed in points.

I have plans to try different normalisation variants, but at my pace it won't happen soon.

Well, here it is needed only to make the feature sets comparable. Otherwise NN/boosting will just memorise the examples for each pair and there will be no generalisation.

Ideally, the more symbols you can cram in, the better, in terms of generalisation.
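A minimal sketch of what "cramming several symbols into one dataset" with per-symbol scaling might look like. The function name, symbols, and shapes here are purely illustrative, not anyone's actual code; the point is only that each instrument is z-scored on its own scale before the rows are pooled, so a booster can't separate pairs by raw price level alone:

```python
import numpy as np

def pool_symbols(features_by_symbol):
    """Standardise each symbol's features with its own mean/std,
    then stack everything into one training matrix."""
    pooled = []
    for name, X in features_by_symbol.items():
        X = np.asarray(X, dtype=float)
        mu = X.mean(axis=0)
        sigma = X.std(axis=0)
        sigma[sigma == 0] = 1.0          # guard against constant columns
        pooled.append((X - mu) / sigma)  # each symbol scaled on its own stats
    return np.vstack(pooled)

# Toy example: two "symbols" whose raw price levels differ wildly.
rng = np.random.default_rng(0)
eurusd = rng.normal(1.08, 0.01, size=(100, 3))
usdjpy = rng.normal(150.0, 1.5, size=(100, 3))
X = pool_symbols({"EURUSD": eurusd, "USDJPY": usdjpy})
```

After pooling, both blocks sit on a comparable scale, which is the precondition for the model to generalise across pairs instead of memorising each one.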
 
Maxim Dmitrievsky #:

Well, here it is needed only to make the feature sets comparable. Otherwise NN/boosting will just memorise the examples for each pair and there will be no generalisation.

For me, normalisation solves the issue of comparability of identical patterns regardless of market volatility.

 
Aleksey Vyazmikin #:

For me, normalisation solves the issue of comparability of identical patterns regardless of market volatility.

Not to be confused with standardisation?

 
Maxim Dmitrievsky #:

Not to be confused with standardisation?

In my case it is, of course, closer to standardisation, if we are talking about the classical concepts. But in essence the purpose of all this is to make the data comparable.

In its pure form, normalisation does not add anything by itself; it only changes the scale.
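To make the terminology concrete, here is a small sketch of the two classical transforms being discussed (the toy series is made up for illustration). Both are affine rescalings: min-max normalisation maps a series into [0, 1], z-score standardisation gives it zero mean and unit variance, and neither changes the shape or ordering of the series, only its scale:

```python
import numpy as np

x = np.array([1.0, 2.0, 4.0, 8.0])

# Min-max normalisation: rescale into [0, 1].
norm = (x - x.min()) / (x.max() - x.min())

# Z-score standardisation: zero mean, unit variance.
std = (x - x.mean()) / x.std()
```

The relative ordering of the values is identical under both transforms; that scale-independence is exactly what makes features from different symbols comparable.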

 
Aleksey Vyazmikin #:

In my case it is, of course, closer to standardisation, if we are talking about the classical concepts. But in essence the purpose of all this is to make the data comparable.

In its pure form, normalisation does not add anything by itself; it only changes the scale.

Which is exactly what makes the data comparable.
