Machine learning in trading: theory, models, practice and algo-trading - page 3401
https://youtu.be/Ipw_2A2T_wg?si=U03oigHFfaFxwjbs
That reminds me.
Try this one.
binary classification
Thank you. Now it works fast!
GPT won't replace a human, of course, but it helps quite a lot.
50,000 features / columns
It found a subset of the best features in less than 3 seconds:
all features relevant to the target were found, and none of the 50,000 noise features were selected.
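The claim above - that the selector recovers the relevant columns and skips the noise - is easy to sanity-check on synthetic data. A minimal sketch, with univariate correlation screening standing in for abess (the real abess package does best-subset selection, not marginal screening; all sizes and names here are hypothetical and scaled down from 50,000 columns):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 2000, 500, 6           # 2000 rows, 500 columns, 6 informative
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:k] = 2.0                   # true signal sits in the first 6 columns
y = (X @ beta + rng.standard_normal(n) > 0).astype(int)

# Score each column by |corr(x_j, y)| and keep the top k.
scores = np.abs(np.corrcoef(X.T, y)[-1, :-1])
selected = np.sort(np.argsort(scores)[-k:])
print(selected)                  # should recover the informative columns
```

With a signal this strong the screening reliably returns the six planted columns; the thread's test with abess on 50,000 columns is the same idea at scale.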
So, it found six predictors out of the whole list. Hmm, now I will train 100 CatBoost models on them and see the average result.
Logistic regression is a classification algorithm; texts, for example, are classified with it.
Yes, of course it's a classification algorithm. I see no contradiction between your arguments and my earlier words. In general - some misunderstanding.
This code saves the indexes of predictors for exclusion and the list of predictors selected by this method.
So let's compare: 100 CB models are trained; the first picture is with the abess selection method and the second without it. The result on the train sample:
Test sample - we stop training on it.
The exam sample is a delayed sample that does not participate in the training process.
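The three-sample scheme above (train for fitting, test for early stopping, exam as an untouched hold-out) can be sketched as a simple chronological split; the function name and fractions are hypothetical:

```python
# Chronological split: the exam part is the most recent tail and never
# touches the training process, matching the scheme described above.
def split_train_test_exam(rows, train_frac=0.6, test_frac=0.2):
    n = len(rows)
    i = int(n * train_frac)
    j = int(n * (train_frac + test_frac))
    return rows[:i], rows[i:j], rows[j:]

rows = list(range(10))           # stand-in for time-ordered samples
train, test, exam = split_train_test_exam(rows)
print(len(train), len(test), len(exam))   # 6 2 2
```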
It seems that the abess selection method does not work very effectively...
I should note that I have very unbalanced classes - only about 16% are ones (class 1) - maybe that affects the selection.
I've got five options
1. It saves correctly - it matches the output in the log. The code, on the contrary, is clear to me - it is a compilation of the original code and the one you suggested here earlier.
2. Probably needed for this method. I will try to do it.
3. Well, there may be something wrong in the data - that is what the selection process is for. The data is obtained correctly, if that is what we are talking about.
4. That is what I am writing about. Maybe it is necessary to set auto-balancing in the parameters?
5. So far it turns out like this. Maybe with stationary data it would be OK.
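On point 4: CatBoost does expose class balancing (the `auto_class_weights` and `scale_pos_weight` parameters), and the weight itself is just the negative-to-positive count ratio. A minimal sketch with a hypothetical helper, using the ~16% positives mentioned above:

```python
# scale_pos_weight for CatBoost is simply n_negative / n_positive.
def pos_weight(labels):
    pos = sum(labels)
    neg = len(labels) - pos
    return neg / pos

labels = [1] * 16 + [0] * 84     # ~16% ones, as in the posts above
print(pos_weight(labels))        # 5.25
```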
For CatBoost you need to pass the list of predictors to exclude, i.e. those that were not selected. Subtracting 1 from each index is necessary because CatBoost counts indices from zero, as in many other languages.
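The index shift takes only a few lines; `ignored_features` is the real CatBoost parameter that accepts the 0-based list, while the helper name and the sample indices are hypothetical:

```python
# R indexes columns from 1; CatBoost's ignored_features expects
# 0-based indices, hence the "-1" mentioned above.
def to_zero_based(r_indices):
    return [i - 1 for i in r_indices]

excluded_r = [3, 7, 12]           # hypothetical 1-based indices from R
print(to_zero_based(excluded_r))  # [2, 6, 11]
```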
Normalisation is easy: just apply scale(data) to the data before all the other procedures. This normalises the matrix column by column.
# Normalisation of the selected predictors
normalized_data <- scale(data_without_excluded)
data_without_excluded <- normalized_data
Is this acceptable?
In general, the result is identical to that without normalisation.
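For reference, a minimal Python analogue of what R's scale() does by default - column-wise z-scores using the sample standard deviation; the function name and sample data are hypothetical:

```python
from statistics import mean, stdev

# Centre each column and divide by its sample standard deviation,
# mirroring R's scale(x, center = TRUE, scale = TRUE).
def scale_columns(matrix):
    cols = list(zip(*matrix))
    mus = [mean(c) for c in cols]
    sds = [stdev(c) for c in cols]
    return [[(x - m) / s for x, m, s in zip(row, mus, sds)]
            for row in matrix]

data = [[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]]
scaled = scale_columns(data)
print(scaled[0])                 # [-1.0, -1.0]
```

Tree ensembles like CatBoost are invariant to monotonic per-column scaling, which is consistent with the observation above that the result was identical with and without normalisation.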
So the coffee wasn't fake.