Machine learning in trading: theory, models, practice and algo-trading - page 3388
Nothing is clear. What is the probability of finding the same example as in the training sample - the same row in the dataset - if you only have 1,000 rows?
Roughly speaking, with 18+ features you're training the classifier to memorize every row, because rows don't even repeat,
and in causal inference you can't match examples to calculate statistics.
1. How do you get this matrix? What are the numbers in it?
2. I'm talking about rules. In my approach I don't care how and from what a rule is derived, but if its response is similar to another one in the training sample, it doesn't carry additional information.
Why is a large number of features evil? An interesting graph from a book on causal inference.
The probability of finding the same example in the training sample, depending on the number of features.
If you have more than 14 (or even 10) features, you end up with a lot of rules that can't be reduced without loss.
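The claim above is easy to check by simulation. A minimal sketch (not from the thread; it assumes binary features drawn i.i.d. uniform, which is the simplest case): with 1,000 rows, almost every row has an exact twin at 5 features, while at 18 features rows essentially never repeat.

```python
import random
from collections import Counter

def duplicate_fraction(n_rows: int, n_features: int, seed: int = 0) -> float:
    """Fraction of rows whose exact feature vector occurs more than once."""
    rng = random.Random(seed)
    rows = [tuple(rng.randint(0, 1) for _ in range(n_features))
            for _ in range(n_rows)]
    counts = Counter(rows)
    # Sum the sizes of all groups of identical rows, divide by total rows.
    return sum(c for c in counts.values() if c > 1) / n_rows

# With 1,000 rows: near-total duplication at 5 features,
# almost none at 18 features (2^18 = 262,144 possible rows).
for k in (5, 10, 14, 18):
    print(k, round(duplicate_fraction(1000, k), 3))
```

With real (correlated, non-uniform) features the exact numbers shift, but the qualitative cliff between ~10 and ~18 features is the same effect the graph illustrates.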
It's all within the realm of the causal...
They use efficient compression algorithms inside neural networks, like seq2seq, so that's also fair.
If we're talking about text, then in 95% of cases a plain word counter is used: how many times did a word occur in this observation? 0, 1, 103...
Those are different architectures, layer cakes; it's hard to compare. We're talking about ordinary classification or regression, and there it looks like a universal law.
----------------------------------------------------------------------
Oh, I remember, it's called a bag of words.
What's new, unfamiliar, incomprehensible, complicated?
The same feature table + any ML model.
This is working with unstructured data (text): we translate it into a bag-of-words structure, and then do anything else we want.
It's all the same.
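The bag-of-words construction mentioned above fits in a few lines. A minimal sketch (assumptions: lowercase whitespace tokenization, no sklearn): each document becomes a row of word counts over a shared vocabulary, exactly the 0, 1, 103... counter described earlier.

```python
from collections import Counter

def bag_of_words(docs):
    """Return (vocabulary, count table): one row per document,
    one column per word, values = occurrences in that document."""
    counts = [Counter(doc.lower().split()) for doc in docs]
    vocab = sorted({w for c in counts for w in c})
    table = [[c[w] for w in vocab] for c in counts]
    return vocab, table

# Hypothetical example documents, just to show the shape of the table.
docs = ["buy the dip", "sell the news sell fast"]
vocab, table = bag_of_words(docs)
```

The resulting table is an ordinary feature matrix, so any classifier or regressor can be applied to it, which is the point being made: unstructured text reduces to the same table-of-features setup.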
----------------------------------------------------------------------
That's a different matter. No matter how you transform them, the dimensionality of the input vector must stay below a certain threshold, otherwise you can't detect a pattern. Categorical features probably allow a larger limit on vector length. Also take into account the dependence on the number of rows: on huge datasets the number of features can be larger.
How is that different)))