Machine learning in trading: theory, models, practice and algo-trading - page 2945

 
Stanislav Korotky #:

Nah, it's empty. I forgot to mention that it's about boosting.

Well, here,


and here.


But the formulas there seem to be different, or written in a more complete form....

 
Aleksey Vyazmikin #:

Here, then,


and here


But the formulas there seem to be different, or written in a more complete form....

Joining the kolkhoz is voluntary! But why propagandize the kolkhoz? Why more lectures on YouTube, when there is technical documentation and program code? I will note that the gbm function itself is only part of the package; there are many other things alongside it.

Here is a link to the gbm package

Here is a link to the theory

Here is a link to the manual

And here is the list of literature about gbm.

References

Y. Freund and R.E. Schapire (1997). "A decision-theoretic generalization of on-line learning and an application to boosting," Journal of Computer and System Sciences, 55(1):119-139.

G. Ridgeway (1999). "The state of boosting," Computing Science and Statistics 31:172-181.

J.H. Friedman, T. Hastie, R. Tibshirani (2000). "Additive Logistic Regression: a Statistical View of Boosting," Annals of Statistics 28(2):337-374.

J.H. Friedman (2001). "Greedy Function Approximation: A Gradient Boosting Machine," Annals of Statistics 29(5):1189-1232.

J.H. Friedman (2002). "Stochastic Gradient Boosting," Computational Statistics and Data Analysis 38(4):367-378.

B. Kriegler (2007). Cost-Sensitive Stochastic Gradient Boosting Within a Quantitative Regression Framework. Ph.D. Dissertation. University of California at Los Angeles, Los Angeles, CA, USA. Advisor(s) Richard A. Berk. https://dl.acm.org/citation.cfm?id=1354603.

C. Burges (2010). "From RankNet to LambdaRank to LambdaMART: An Overview," Microsoft Research Technical Report MSR-TR-2010-82.

gbm: Generalized Boosted Regression Models
  • cran.r-project.org
An implementation of extensions to Freund and Schapire's AdaBoost algorithm and Friedman's gradient boosting machine. Includes regression methods for least squares, absolute loss, t-distribution loss, quantile regression, logistic, multinomial logistic, Poisson, Cox proportional hazards partial likelihood, AdaBoost exponential loss, Huberized hinge loss, and Learning to Rank measures (LambdaMart). Originally developed by Greg Ridgeway.
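The references above all build on the same core idea. As an illustration only (this is not the gbm package's code), here is a minimal stdlib-only Python sketch of Friedman's gradient boosting for squared-error loss, using one-feature regression stumps as the weak learners; the function names and the toy data are invented for the example:

```python
# Minimal gradient boosting sketch (squared-error loss, Friedman 2001).
# Illustrative only: real packages like gbm use full trees, subsampling,
# and many loss functions.

def fit_stump(x, residuals):
    """Find the split threshold minimizing squared error of a one-feature stump."""
    best = None
    for t in sorted(set(x)):
        left = [r for xi, r in zip(x, residuals) if xi <= t]
        right = [r for xi, r in zip(x, residuals) if xi > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((r - lm) ** 2 for r in left) + sum((r - rm) ** 2 for r in right)
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    _, t, lm, rm = best
    return lambda xi: lm if xi <= t else rm

def gbm_fit(x, y, n_trees=50, learning_rate=0.1):
    """Boost stumps on residuals: F_m(x) = F_{m-1}(x) + lr * h_m(x)."""
    f0 = sum(y) / len(y)                  # initial constant model
    stumps = []
    pred = [f0] * len(y)
    for _ in range(n_trees):
        # residuals = negative gradient of the L2 loss at current predictions
        residuals = [yi - pi for yi, pi in zip(y, pred)]
        h = fit_stump(x, residuals)
        stumps.append(h)
        pred = [pi + learning_rate * h(xi) for pi, xi in zip(pred, x)]
    return lambda xi: f0 + learning_rate * sum(h(xi) for h in stumps)

# Toy data: low values for x <= 3, high values for x > 3.
x = [1, 2, 3, 4, 5, 6]
y = [1.0, 1.2, 0.9, 3.1, 2.9, 3.0]
model = gbm_fit(x, y)
```

Each round fits a stump to the current residuals (the negative gradient of the squared-error loss) and adds a shrunken copy of it to the ensemble; that is the whole "gradient boosting machine" idea in miniature.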
 
Aleksey Vyazmikin #:

Here, then,


and here


But the formulas there seem to be different, or written in a more complete form....

Similarly for xgboost

xgboost: Extreme Gradient Boosting
  • cran.r-project.org
Extreme Gradient Boosting, an efficient implementation of the gradient boosting framework from Chen & Guestrin (2016) <doi:10.1145/2939672.2939785>. This package is its R interface. The package includes an efficient linear model solver and tree learning algorithms. It can automatically run parallel computation on a single machine, which can be more than 10 times faster than existing gradient boosting packages. It supports various objective functions, including regression, classification, and ranking. The package is made to be extensible, so users can easily define their own objectives.
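The "define their own objectives" part works by handing the booster the first and second derivatives of the loss with respect to the raw score. A plain-Python sketch of the idea (the list-based interface here is illustrative; it is not xgboost's actual callback signature, which takes predictions and a DMatrix):

```python
# Custom-objective sketch: a booster like xgboost asks for the gradient and
# Hessian of the loss w.r.t. each raw prediction.
# For squared error L(p, y) = (p - y)^2 / 2:
def squared_error_objective(preds, labels):
    grad = [p - y for p, y in zip(preds, labels)]   # first derivative dL/dp
    hess = [1.0] * len(preds)                       # second derivative d2L/dp2
    return grad, hess

# Sanity check of the analytic gradient against a numerical derivative:
def numeric_grad(loss, p, y, eps=1e-6):
    return (loss(p + eps, y) - loss(p - eps, y)) / (2 * eps)

loss = lambda p, y: (p - y) ** 2 / 2
grad, hess = squared_error_objective([2.0, 0.5], [1.0, 1.0])
```

The booster uses these per-row gradients and Hessians to choose splits and leaf values, which is why any twice-differentiable loss can be plugged in.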
 
СанСаныч Фоменко #:

Joining the kolkhoz is voluntary! But why propagandize the kolkhoz? Why more lectures on YouTube, when there is technical documentation and program code? I will note that the gbm function itself is only part of the package; there are many other things alongside it.


Here is a link to the theory

What is required is an explanation, not just formulas; that is why it was suggested to explain it to a person through a lecturer's presentation of the material.

And at that link I did not see any theory with formulas.

 
Aleksey Vyazmikin #:

What is required is an explanation, not just formulas; that is why it was suggested to explain it to a person through a lecturer's presentation of the material.

And at that link I did not see any theory with formulas.

Not so much for you, who refuse to see what is right in front of you, as for others who love theory.

And most importantly, you do not understand the difference between yada yada on YouTube and a working tool, from theory to code, tested by many people.

Files:
gbm.zip  257 kb
 
Aleksey Nikolayev #:

The model runs in the MQL5 script, but does not run in Python because the onnxruntime package is not installed.

There is no onnxruntime package for Python 3.11 yet. I installed Python 3.10, where everything installs and inference works.
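A small sketch of the workaround described above: check the interpreter version before relying on onnxruntime. The supported range below is an assumption reflecting the situation at the time of the post (wheels for 3.7-3.10, none yet for 3.11); check PyPI for the current state:

```python
import sys

# Assumed wheel availability at the time of the post: Python 3.7 .. 3.10.
SUPPORTED_MINORS = range(7, 11)

def onnxruntime_hint(version=None):
    """Return a short hint on whether 'pip install onnxruntime' should work."""
    major, minor = (version or sys.version_info[:2])
    if major == 3 and minor in SUPPORTED_MINORS:
        return f"Python {major}.{minor}: 'pip install onnxruntime' should find a wheel"
    return f"Python {major}.{minor}: no wheel yet, fall back to Python 3.10"

hint = onnxruntime_hint((3, 11))
```

Failing early with a clear message like this beats a cryptic "No matching distribution found" from pip mid-setup.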

 
СанСаныч Фоменко #:

Not so much for you, who refuse to see what is right in front of you, as for others who love theory.

And most importantly, you do not understand the difference between yada yada on YouTube and a working tool, from theory to code, tested by many people.

Why the arrogance? You would do better to show where you downloaded it from - I looked again myself and did not understand.

Hm, why do we need teachers in institutes?

 
About reinforcement learning
 
mytarmailS #:
About reinforcement learning
https://youtu.be/I-wd3ZUrReg

In psychology, that is called projection...

And well, yes, model training happens exactly as he says: what is fancied is remembered.

 

Third-party ONNX Runtime libraries are no longer needed for the terminal.

Now ONNX models can be run on any platform where the terminal and tester run. This will be available in the next beta.
