Machine learning in trading: theory, models, practice and algo-trading - page 3152
Guys, can we say that ML is a special case of optimisation?
I think so.
I don't even know what code we're talking about.
I still don't understand the idea of splitting at an arbitrary point in the sample. The point, as it seems to me, is to find the moment when the influence of a factor changed. Maybe we should loop through different parts of the sample and use genetics to find the one where the predictor acts in a new way?
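The idea in the post above - scanning candidate split points to find where a predictor's influence on the target changed - can be sketched roughly as follows. This is a toy Python illustration, not an MQL5 implementation; the brute-force scan and the correlation-gap score are my own hypothetical choices (the post suggests a genetic search instead of exhaustive scanning).

```python
def corr(xs, ys):
    # Pearson correlation, written out by hand to keep the sketch self-contained.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5 if vx and vy else 0.0

def best_split(predictor, target, min_size=5):
    """Scan every split index and return the one where the predictor-target
    correlation differs most between the left and right parts of the sample."""
    best_i, best_gap = None, -1.0
    for i in range(min_size, len(predictor) - min_size):
        gap = abs(corr(predictor[:i], target[:i])
                  - corr(predictor[i:], target[i:]))
        if gap > best_gap:
            best_i, best_gap = i, gap
    return best_i, best_gap

# Synthetic example: the relationship flips sign at index 20.
predictor = list(range(40))
target = predictor[:20] + [40 - x for x in predictor[20:]]
print(best_split(predictor, target))  # finds the flip point at index 20
```

A genetic algorithm would replace the exhaustive loop with a population of candidate split points, but the fitness function would stay the same kind of "how differently does the factor behave on each side" score.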
Earlier you accused others of not telling the truth, yet you yourself do not make clear what these actions mean, as you see it, for trading purposes.
I have not tested anything on this issue yet, as it is difficult to automate it in MQL5.
:)
You have the exact same book in front of you as I do. The way I see it, I wrote it that way too. Ask the moderator to make a summary if you lost it.
banned, 12 hours later unbanned, and then banned again.
What was that?
A month will be fine, let it go on, new account.
I came across the recipes preprocessing package from R. An impressive list of preprocessing steps from this package:
#> [ 1] "step_BoxCox"              "step_YeoJohnson"
#> [ 3] "step_arrange"             "step_bagimpute"
#> [ 5] "step_bin2factor"          "step_bs"
#> [ 7] "step_center"              "step_classdist"
#> [ 9] "step_corr"                "step_count"
#> [11] "step_cut"                 "step_date"
#> [13] "step_depth"               "step_discretize"
#> [15] "step_dummy"               "step_dummy_extract"
#> [17] "step_dummy_multi_choice"  "step_factor2string"
#> [19] "step_filter"              "step_filter_missing"
#> [21] "step_geodist"             "step_harmonic"
#> [23] "step_holiday"             "step_hyperbolic"
#> [25] "step_ica"                 "step_impute_bag"
#> [27] "step_impute_knn"          "step_impute_linear"
#> [29] "step_impute_lower"        "step_impute_mean"
#> [31] "step_impute_median"       "step_impute_mode"
#> [33] "step_impute_roll"         "step_indicate_na"
#> [35] "step_integer"             "step_interact"
#> [37] "step_intercept"           "step_inverse"
#> [39] "step_invlogit"            "step_isomap"
#> [41] "step_knnimpute"           "step_kpca"
#> [43] "step_kpca_poly"           "step_kpca_rbf"
#> [45] "step_lag"                 "step_lincomb"
#> [47] "step_log"                 "step_logit"
#> [49] "step_lowerimpute"         "step_meanimpute"
#> [51] "step_medianimpute"        "step_modeimpute"
#> [53] "step_mutate"              "step_mutate_at"
#> [55] "step_naomit"              "step_nnmf"
#> [57] "step_nnmf_sparse"         "step_normalize"
#> [59] "step_novel"               "step_ns"
#> [61] "step_num2factor"          "step_nzv"
#> [63] "step_ordinalscore"        "step_other"
#> [65] "step_pca"                 "step_percentile"
#> [67] "step_pls"                 "step_poly"
#> [69] "step_poly_bernstein"      "step_profile"
#> [71] "step_range"               "step_ratio"
#> [73] "step_regex"               "step_relevel"
#> [75] "step_relu"                "step_rename"
#> [77] "step_rename_at"           "step_rm"
#> [79] "step_rollimpute"          "step_sample"
#> [81] "step_scale"               "step_select"
#> [83] "step_shuffle"             "step_slice"
#> [85] "step_spatialsign"         "step_spline_b"
#> [87] "step_spline_convex"       "step_spline_monotone"
#> [89] "step_spline_natural"      "step_spline_nonnegative"
#> [91] "step_sqrt"                "step_string2factor"
#> [93] "step_time"                "step_unknown"
#> [95] "step_unorder"             "step_window"
#> [97] "step_zv"
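To show what such step-based preprocessing amounts to conceptually, here is a minimal hand-rolled analogue in Python. This is only a sketch of the idea of chaining named steps (the helper names step_center, step_scale and bake are hypothetical and mimic, not reproduce, the R recipes API, which additionally separates training from application):

```python
from statistics import mean, pstdev

# A "recipe" here is just an ordered list of step functions;
# each step returns a transformed copy of the data.

def step_center(xs):
    # Subtract the mean, like recipes' step_center.
    m = mean(xs)
    return [x - m for x in xs]

def step_scale(xs):
    # Divide by the (population) standard deviation, like step_scale.
    s = pstdev(xs)
    return [x / s for x in xs] if s else list(xs)

def bake(steps, xs):
    # Apply the steps in order, feeding each one's output to the next.
    for step in steps:
        xs = step(xs)
    return xs

recipe = [step_center, step_scale]
z = bake(recipe, [1.0, 2.0, 3.0, 4.0])
print(z)  # standardized values with mean 0
```

The real package's value is exactly this composability: each of the 97 steps listed above is a reusable, named transformation that can be declared once and applied consistently to training and test data.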
In my experience, the labour intensity of preprocessing is many times lower (3 to 5 times) than that of applying the model itself.
I came across the recipes preprocessing package from R.
Hadley Wickham doesn't do bullshit.
:)
You have the exact same book in front of you as I do. The way I see it, I wrote it that way too. Ask the moderator to make a summary if you lost it.
I see, instead of discussion and exchange of experience we again fall into emotional reactions.
You erased the second post - I wanted to say that I didn't see any connection with the link you gave. It is strange that the author of the article did not try the standard CB functions for balancing the sample. And you can't draw conclusions only from the results of a test on one dataset.
An article with an approach similar to the one promoted by Aleksey Vyazmikin. Instead of a classification tree, a "difference tree" is constructed, in which each leaf corresponds to a different probability of an event (for example, fire frequency). In essence, it is a variant of clustering.
I will say at once that I am not ready to retell the article in detail, as I have only glanced through it in passing.