Machine learning in trading: theory, models, practice and algo-trading - page 3152

 

Guys, can we say that machine learning (ML) is a special case of optimisation?

I think so.

 
Aleksey Vyazmikin #:

I don't even know what code we're talking about.

I thought I was replying to another Alexei, I was writing from my phone.
 
Aleksey Vyazmikin #:

I still don't understand the idea of splitting at an arbitrary point in the sample. The point, as it seems to me, is to find the moment when the influence of a factor changed. Maybe we should loop through different parts of the sample and use a genetic algorithm to find the part that the predictor affected in a new way?

Earlier you accused others of not telling the truth, but you yourself do not make clear the purpose of these actions, as you see it, for trading.

I have not tested anything on this issue yet, as it is difficult to automate it in MQL5.

:)

You have the exact same book in front of you as I do. As I see it, I wrote about that too. Ask the moderator to make a summary if you lost it.
 

banned, 12 hours later unbanned, and then banned again.

What was that?

 
mytarmailS #:

banned, 12 hours later unbanned, and then banned again.

What was that?

A month would be fine; let him carry on with a new account.

 

I came across the recipes preprocessing package from R. An impressive list of preprocessing steps from this package:

#>  [1] "step_BoxCox"             "step_YeoJohnson"

#>  [3] "step_arrange"            "step_bagimpute"

#>  [5] "step_bin2factor"         "step_bs"

#>  [7] "step_center"             "step_classdist"

#>  [9] "step_corr"               "step_count"

#> [11] "step_cut"                "step_date"

#> [13] "step_depth"              "step_discretize"

#> [15] "step_dummy"              "step_dummy_extract"

#> [17] "step_dummy_multi_choice" "step_factor2string"

#> [19] "step_filter"             "step_filter_missing"

#> [21] "step_geodist"            "step_harmonic"

#> [23] "step_holiday"            "step_hyperbolic"

#> [25] "step_ica"                "step_impute_bag"

#> [27] "step_impute_knn"         "step_impute_linear"

#> [29] "step_impute_lower"       "step_impute_mean"

#> [31] "step_impute_median"      "step_impute_mode"

#> [33] "step_impute_roll"        "step_indicate_na"

#> [35] "step_integer"            "step_interact"

#> [37] "step_intercept"          "step_inverse"

#> [39] "step_invlogit"           "step_isomap"

#> [41] "step_knnimpute"          "step_kpca"

#> [43] "step_kpca_poly"          "step_kpca_rbf"

#> [45] "step_lag"                "step_lincomb"

#> [47] "step_log"                "step_logit"

#> [49] "step_lowerimpute"        "step_meanimpute"

#> [51] "step_medianimpute"       "step_modeimpute"

#> [53] "step_mutate"             "step_mutate_at"

#> [55] "step_naomit"             "step_nnmf"

#> [57] "step_nnmf_sparse"        "step_normalize"

#> [59] "step_novel"              "step_ns"

#> [61] "step_num2factor"         "step_nzv"

#> [63] "step_ordinalscore"       "step_other"

#> [65] "step_pca"                "step_percentile"

#> [67] "step_pls"                "step_poly"

#> [69] "step_poly_bernstein"     "step_profile"

#> [71] "step_range"              "step_ratio"

#> [73] "step_regex"              "step_relevel"

#> [75] "step_relu"               "step_rename"

#> [77] "step_rename_at"          "step_rm"

#> [79] "step_rollimpute"         "step_sample"

#> [81] "step_scale"              "step_select"

#> [83] "step_shuffle"            "step_slice"

#> [85] "step_spatialsign"        "step_spline_b"

#> [87] "step_spline_convex"      "step_spline_monotone"

#> [89] "step_spline_natural"     "step_spline_nonnegative"

#> [91] "step_sqrt"               "step_string2factor"

#> [93] "step_time"               "step_unknown"

#> [95] "step_unorder"            "step_window"

#> [97] "step_zv"

In my experience, the labour involved in preprocessing is several times greater (3 to 5 times) than the labour of applying the model itself.
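The recipes workflow follows a fixed pattern: declare a chain of steps, estimate their parameters on the training data only, then apply the same fitted transforms to new data. As a rough illustration of that same pipeline idea outside of R, here is a minimal sketch in Python using scikit-learn (the class names and the toy data are my own choices, not anything from the recipes package itself):

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, PowerTransformer
from sklearn.decomposition import PCA

# Toy data: 100 training rows and 10 new rows of strictly positive features
rng = np.random.default_rng(0)
X_train = rng.lognormal(size=(100, 4))
X_new = rng.lognormal(size=(10, 4))

# Rough analogues of step_BoxCox -> step_center/step_scale -> step_pca
pipe = Pipeline([
    ("boxcox", PowerTransformer(method="box-cox")),  # ~ step_BoxCox
    ("scale", StandardScaler()),                     # ~ step_center + step_scale
    ("pca", PCA(n_components=2)),                    # ~ step_pca
])

pipe.fit(X_train)          # ~ prep(): estimate parameters on training data only
Z = pipe.transform(X_new)  # ~ bake(): apply the fitted transforms to new data
print(Z.shape)             # (10, 2)
```

The key property in both libraries is the same: all statistics (Box-Cox lambdas, means, scales, PCA loadings) are estimated once on the training set, so new data is transformed without leakage.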

 
СанСаныч Фоменко #:

I came across the recipes preprocessing package from R

Hadley Wickham doesn't do bullshit

 
Maxim Dmitrievsky #:

:)

You have the exact same book in front of you as I do. As I see it, I wrote about that too. Ask the moderator to make a summary if you lost it.

I see: instead of discussion and an exchange of experience, we again fall into emotional reactions.

You deleted the second post. I wanted to say that I saw no connection with the link you gave. It is strange that, in the article, the author did not try the standard CB functions for balancing the sample. And conclusions cannot be drawn from a test on a single dataset.
 
Aleksey Vyazmikin #:

I see: instead of discussion and an exchange of experience, we again fall into emotional reactions.

You deleted the second post. I wanted to say that I saw no connection with the link you gave. It is strange that, in the article, the author did not try the standard CB functions for balancing the sample. And conclusions cannot be drawn from a test on a single dataset.
Raise your level, at least to writing code and a basic understanding of the algorithms described in books. Then there will be something to talk about. Otherwise the cleverness (an attempt to imitate a formal scientific style), with its grammatical and other errors, only raises a smile :).

I had never tuned models through weights before; it seemed interesting. It has not yet been possible to write a profitable TS purely on the basis of that book. I mean the meta-learners described there; tuning through weights is also covered there. But when I added some of its elements to my own work, things improved in places. For example, cross-training, which is described in another article. I have already been through all of that and moved on, so to speak; I have no desire to drag the wagons behind me. You and Sanych spent too long discussing whether it is needed in trading or not, without learning anything :)

I erased it because I left this retarded forum. No need for it.

Good luck, you'll need it.

 

An article with an approach similar to the one promoted by Aleksey Vyazmikin. Instead of a classification tree, a "difference tree" is constructed, in which each leaf corresponds to a different probability of an event (for example, fire frequency). In essence, this is a variant of clustering.

I will say at once that I am not ready to retell the article in detail, as I have only skimmed it.