Machine learning in trading: theory, models, practice and algo-trading - page 3338

 
Aleksey Vyazmikin #:

Classification is more or less covered there. CatBoost, however, uses a slightly different formula, but maybe that's just the result of mathematical transformations....

And a link to a video from the same place, I think.


No code. And judging by the pictures, the subsequent trees are fitted not to exact 0 and 1 labels but to error values like 0.21, 1.08, -0.47 ..... as in regression.
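That matches standard gradient boosting: after the first tree, each subsequent tree is fit by regression to the pseudo-residuals of the loss, which are continuous values even when the classification target is 0/1. A minimal sketch for the logloss case (plain NumPy, hypothetical labels and scores):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical binary labels and the ensemble's current raw scores F(x)
y = np.array([0, 1, 1, 0, 1])
F = np.array([0.3, -0.2, 1.5, -1.0, 0.1])

# Pseudo-residuals of logloss: the negative gradient w.r.t. F,
# r_i = y_i - p_i, where p_i = sigmoid(F_i)
residuals = y - sigmoid(F)
print(residuals)  # continuous values in (-1, 1), not 0/1
```

The next tree in the ensemble is then fit to these residuals as a regression target, which is why the pictures show values like 0.21 and -0.47 rather than class labels.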
 
Aleksey Vyazmikin #:

You can watch a video on the subject

It's a bit of a mess. Logloss grows when you remove bad samples, it does not fall. If you remove good samples, it also grows.
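It helps to separate two cases here. On a fixed, already-trained model, dropping the worst-predicted samples from the evaluation set mechanically lowers the mean logloss, since you removed the largest per-sample terms; the disputed question is only what happens after retraining on the filtered set. A small sketch of the per-sample bookkeeping (hypothetical labels and predictions):

```python
import numpy as np

def per_sample_logloss(y, p, eps=1e-15):
    """Per-sample binary logloss for labels y and predicted probabilities p."""
    p = np.clip(p, eps, 1 - eps)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

y = np.array([1, 0, 1, 1, 0])
p = np.array([0.9, 0.2, 0.4, 0.8, 0.7])  # hypothetical model outputs

losses = per_sample_logloss(y, p)
full = losses.mean()

# Drop the two worst-predicted samples and re-average (no retraining)
keep = np.argsort(losses)[:-2]
filtered = losses[keep].mean()

assert filtered < full  # mean logloss necessarily falls in this case
```

So if logloss grows after removing "bad" samples, that can only be a retraining effect, not an evaluation effect.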

If you don't do it yourself, nobody will do it for you.
 
Tidy Modeling with R
  • Max Kuhn and Julia Silge
  • www.tmwr.org
The tidymodels framework is a collection of R packages for modeling and machine learning using tidyverse principles. This book provides a thorough introduction to how to use tidymodels, and an outline of good methodology and statistical practice for phases of the modeling process.
 
СанСаныч Фоменко #:

A good book with a lot of code. I'm giving the link; unfortunately the PDF file is too large to attach.

//---------------------------------------


Pardon. I went through topic 1, Software for modeling. That was enough.

I did not find large amounts of code. I think you've made a mistake. You've been cleverly deceived.

It's just a beautiful book with a lot of clever words.

P.S.

Copied from other books.

Without any system.

1 Software for modeling | Tidy Modeling with R
  • Max Kuhn and Julia Silge
  • www.tmwr.org
The tidymodels framework is a collection of R packages for modeling and machine learning using tidyverse principles. This book provides a thorough introduction to how to use tidymodels, and an outline of good methodology and statistical practice for phases of the modeling process.
 
Lorarica #:

Pardon. I went through topic 1, Software for modeling. That was enough.

I didn't find large amounts of code. I think you've made a mistake. You've been cleverly deceived.

It's just a beautiful book with a lot of clever words.

P.S.

Copied from other books.

Without any system.

Get out of the habit of reading only the titles: a book is not a Twitter post.

I have read more than half of the book, so I can judge the content myself; there are sections that are 80% code.

Here is a list of packages that were used to write the code in the book.

This version of the book was built with: R version 4.1.3 (2022-03-10), pandoc version
2.17.1.1, and the following packages:
• applicable (0.0.1.2, CRAN)
• av (0.7.0, CRAN)
• baguette (0.2.0, CRAN)
• beans (0.1.0, CRAN)
• bestNormalize (1.8.2, CRAN)
• bookdown (0.25, CRAN)
• broom (0.7.12, CRAN)
• censored (0.0.0.9000, GitHub)
• corrplot (0.92, CRAN)
• corrr (0.4.3, CRAN)
• Cubist (0.4.0, CRAN)
• DALEXtra (2.1.1, CRAN)
• dials (0.1.1, CRAN)
• dimRed (0.2.5, CRAN)
• discrim (0.2.0, CRAN)
• doMC (1.3.8, CRAN)
• dplyr (1.0.8, CRAN)
• earth (5.3.1, CRAN)
• embed (0.1.5, CRAN)
• fastICA (1.2-3, CRAN)
• finetune (0.2.0, CRAN)
• forcats (0.5.1, CRAN)
• ggforce (0.3.3, CRAN)
• ggplot2 (3.3.5, CRAN)
• glmnet (4.1-3, CRAN)
• gridExtra (2.3, CRAN)
• infer (1.0.0, CRAN)
• kableExtra (1.3.4, CRAN)
• kernlab (0.9-30, CRAN)
• kknn (1.3.1, CRAN)
• klaR (1.7-0, CRAN)
• knitr (1.38, CRAN)
• learntidymodels (0.0.0.9001, GitHub)
• lime (0.5.2, CRAN)
• lme4 (1.1-29, CRAN)
• lubridate (1.8.0, CRAN)
• mda (0.5-2, CRAN)
• mixOmics (6.18.1, Bioconductor)
• modeldata (0.1.1, CRAN)
• multilevelmod (0.1.0, CRAN)
• nlme (3.1-157, CRAN)
• nnet (7.3-17, CRAN)
• parsnip (0.2.1.9001, GitHub)
• patchwork (1.1.1, CRAN)
• pillar (1.7.0, CRAN)
• poissonreg (0.2.0, CRAN)
• prettyunits (1.1.1, CRAN)
• probably (0.0.6, CRAN)
• pscl (1.5.5, CRAN)
• purrr (0.3.4, CRAN)
• ranger (0.13.1, CRAN)
• recipes (0.2.0, CRAN)
• rlang (1.0.2, CRAN)
• rmarkdown (2.13, CRAN)
• rpart (4.1.16, CRAN)
• rsample (0.1.1, CRAN)
• rstanarm (2.21.3, CRAN)
• rules (0.2.0, CRAN)
• sessioninfo (1.2.2, CRAN)
• stacks (0.2.2, CRAN)
• stringr (1.4.0, CRAN)
• svglite (2.1.0, CRAN)
• text2vec (0.6, CRAN)
• textrecipes (0.5.1.9000, GitHub)
• themis (0.2.0, CRAN)
• tibble (3.1.6, CRAN)
• tidymodels (0.2.0, CRAN)
• tidyposterior (0.1.0, CRAN)
• tidyverse (1.3.1, CRAN)
• tune (0.2.0, CRAN)
• uwot (0.1.11, CRAN)
• workflows (0.2.6, CRAN)
• workflowsets (0.2.1, CRAN)
• xgboost (1.5.2.1, CRAN)
• yardstick (0.0.9, CRAN)
In terms of content, the book is a systematic presentation of the problems and solutions of what is called "machine learning". On this site that is very useful, since here "machine learning" is usually understood as just a model.
 
Lorarica #:

Pardon. I went through topic 1, Software for modeling. That was enough.

I didn't find large amounts of code. I think you've made a mistake. You've been cleverly deceived.

It's just a beautiful book with a lot of clever words.

P.S.

Copied from other books.

Without any system.

She was looking for a lot of code in the software section...))))

And to her, lots of "clever words" and pictures are a drawback...))))

Clownery.
 
СанСаныч Фоменко #:
It's a great book, but there's no one here to read it
 
Where is the statistical inference after resampling and CV? And the construction of the final classifier? Take this topic and develop it. It's the basis of causal inference.

Tools for creating effective models, comparing multiple models via resampling. Then there should be something like statistical inference and unbiased model building.

We need statistical inference. It gives comparable results against the same RL and other methods.

Look these up for R: statistical learning, weakly supervised learning, functional augmentation learning.
 
There is the snorkel library in Python. Somewhere on their site they have a comparison of supervised learning vs learning with weak supervision, showing the latter outperforming the former. That's useful to know too.
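The core weak-supervision idea behind snorkel can be sketched without the library itself: several noisy labeling functions vote on each sample, and their combined output stands in for hand labels (a toy majority-vote illustration, not snorkel's actual API, which fits a generative label model instead; the labeling heuristics are made up):

```python
import numpy as np

ABSTAIN = -1

# Hypothetical heuristic labeling functions over a feature vector x
def lf_momentum(x):
    return 1 if x[0] > 0.5 else ABSTAIN

def lf_volume(x):
    return 0 if x[1] < 0.2 else ABSTAIN

def lf_trend(x):
    return 1 if x[2] > 0 else 0

def weak_label(x, lfs):
    """Majority vote over the non-abstaining labeling functions."""
    votes = [lf(x) for lf in lfs if lf(x) != ABSTAIN]
    if not votes:
        return ABSTAIN
    return int(np.bincount(votes).argmax())

lfs = [lf_momentum, lf_volume, lf_trend]
X = np.array([[0.9, 0.1, 1.0],
              [0.1, 0.1, -1.0],
              [0.6, 0.5, -2.0]])
labels = [weak_label(x, lfs) for x in X]
print(labels)  # → [1, 0, 0]
```

A classifier trained on such programmatically generated labels is what gets compared against fully supervised training in their benchmarks.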

 
СанСаныч Фоменко #:

Give up the habit of reading only the headlines: a book is not a Twitter post.

I have read more than half of the book, so I can judge the content myself; there are sections that are 80% code.

Here's a list of the packages used when writing the code in the book.

In terms of content, the book is a systematic presentation of the problems and solutions of what is called "machine learning". On this site that is very useful, since here "machine learning" is usually understood as just a model.

Yes. It's a good book.

Since you've read half of it, you could probably write one line of code from it.

The one most memorable to you?

P.S.

I advise everyone to study the book.
