Machine learning in trading: theory, models, practice and algo-trading - page 3387

 
mytarmailS #:
I posted the code.
Here are the details
https://rdrr.io/cran/caret/man/findLinearCombos.html

I was hoping you could describe the process in your own words.

Okay, here's the translation:

"

Details

QR decomposition is used to determine whether a matrix has full rank, and then to identify sets of columns that are involved in dependencies.

To "resolve" them, the columns are iteratively removed and the rank of the matrix is rechecked.

The trim.matrix function in the subselect package can also be used to achieve the same goal.

"

Not much is clear from that description. For starters: what matrix are we talking about, and how is it obtained?
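For intuition, here is a minimal Python sketch of what findLinearCombos does, assuming the input is simply the numeric feature matrix (one row per training example, one column per feature); the function name and tolerance are my own:

```python
import numpy as np

def find_linear_combos(X, tol=1e-8):
    """Rough analogue of caret::findLinearCombos.

    QR decomposition: a near-zero diagonal element of R means that
    column is a linear combination of the columns before it.  Such
    columns are removed one at a time and the rank is rechecked.
    Returns the indices of columns to drop.
    """
    X = np.asarray(X, dtype=float)
    cols = list(range(X.shape[1]))
    removed = []
    while True:
        R = np.linalg.qr(X[:, cols], mode="r")
        diag = np.abs(np.diag(R))
        dependent = np.where(diag < tol * diag.max())[0]
        if dependent.size == 0:
            return removed
        # Drop one dependent column, then recheck the rank
        removed.append(cols.pop(dependent[0]))

rng = np.random.default_rng(0)
X = rng.random((100, 2))
X = np.column_stack([X, X[:, 0] + X[:, 1]])  # column 2 = column 0 + column 1
print(find_linear_combos(X))  # -> [2]
```

So the "matrix" in the caret docs is just the predictor matrix itself; the QR step only diagnoses which of its columns are linearly dependent.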

mytarmailS #:
What other activation points do you mean?

If the rules in a leaf are satisfied, the leaf is activated, which means it takes part in forming the model's final answer. A table is built with one column per leaf, and each row is marked "1" if the leaf was activated and "0" if it was not.
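As a toy illustration of that table (the rules and data here are made up, not the author's actual leaves), each rule is a predicate over a feature row:

```python
import numpy as np

# Hypothetical rules extracted from tree leaves: each is a predicate
# over one feature vector.  A rule "activates" when its conditions hold.
rules = [
    lambda x: x[0] > 0.5 and x[1] <= 0.2,
    lambda x: x[0] <= 0.5,
    lambda x: x[1] > 0.2,
]

def activation_table(X, rules):
    """Rows = training examples, columns = leaves/rules;
    a cell is 1 if the rule's conditions are satisfied, else 0."""
    return np.array([[int(r(x)) for r in rules] for x in X])

X = np.array([[0.7, 0.1],
              [0.3, 0.9],
              [0.6, 0.4]])
print(activation_table(X, rules))
# [[1 0 0]
#  [0 1 1]
#  [0 0 1]]
```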

 

I've sketched out some basic theses on causal inference, for those who find it hard to read books in English. Plus an example in Python of how it works best, in my version. Do you want the article?


 
Aleksey Vyazmikin #:
1. Google QR matrix decomposition; it's not something you can explain in a nutshell.

2. With this method you can remove at best one third of the unnecessary features.
 
Maxim Dmitrievsky #:

I've sketched out some basic theses on causal inference, for those who find it hard to read books in English. Plus an example in Python of how it works best, in my version. Do you want the article?

Go ahead.
 
mytarmailS #:
Go ahead.

I'm just finishing another book to add to the theory.

because, as they say, there's nothing more practical than a good theory.

 
mytarmailS #:
1. Google QR matrix decomposition; it's not something you can explain in a nutshell.

2. With this method you can remove at best one third of the unnecessary features.

1. I'm not asking about the decomposition, I'm asking where the matrix came from.

2. That seems like an unsubstantiated assertion. In my opinion, my method can remove more of the unnecessary ones.

 
Aleksey Vyazmikin #:

1. I'm not asking about the decomposition, I'm asking where the matrix came from.

2. That seems like an unsubstantiated assertion. In my opinion, my method can remove more of the unnecessary ones.

1. The matrix of features.

2. Are we talking about linearly dependent features only, or all of them?
 
mytarmailS #:
1. The matrix of features.

2. Are we talking about linearly dependent features only, or all of them?

1. How is this matrix obtained? What numbers are in it?

2. I'm talking about rules. In my approach I don't care how or from what a rule is derived, but if its response on the training sample is similar to another rule's, it carries no additional information.
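On the activation-table view, that pruning amounts to dropping duplicate columns: rules whose responses coincide across the whole training sample are redundant (toy data and naming are my own):

```python
import numpy as np

# Activation table: rows = training examples, columns = rules.
# Rules whose activation columns coincide respond identically on
# the training sample and carry no additional information.
A = np.array([[1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 1, 1, 0]])

# np.unique along axis=1 keeps one representative per distinct column
_, keep = np.unique(A, axis=1, return_index=True)
unique_rules = sorted(int(i) for i in keep)
print(unique_rules)  # -> [0, 1, 3]  (rule 2 duplicates rule 0)
```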

 

Why is a large number of features evil? An interesting graph from a book on causal inference.

The probability of finding an identical example in the training sample, depending on the number of features.

With more than 14 (or even 10) features, you get a mass of rules that cannot be reduced without loss.
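The graph itself didn't survive the copy, but the effect is easy to reproduce; the binary features and sample sizes below are my own simplification:

```python
import numpy as np

# Toy version of the book's graph: with d independent binary features
# and a fixed-size training sample, the chance that a fresh example
# exactly matches some training example collapses as d grows,
# roughly like n_train / 2**d.
rng = np.random.default_rng(1)
n_train, n_probe = 1000, 10_000

rates = {}
for d in (5, 10, 14, 20):
    train = {tuple(row) for row in rng.integers(0, 2, (n_train, d))}
    hits = sum(tuple(rng.integers(0, 2, d)) in train for _ in range(n_probe))
    rates[d] = hits / n_probe

print(rates)  # match rate drops steeply with the number of features
```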


 
Maxim Dmitrievsky #:

Why is a large number of features evil? An interesting graph from a book on causal inference.

The probability of finding an identical example in the training sample, depending on the number of features.

That's not clear. The probability of finding the same example as in the training sample where, exactly?