Machine learning in trading: theory, models, practice and algo-trading - page 3383

 
Andrew, for example, has never had a working trading system, yet he piously believes he has been right for 20 years and that he is doing something truly meaningful.
And all these arguments help about as much as poultices help a corpse.
 
fxsaber #:

There's a certain slyness to it. The links are there just to be clicked open. No one who is "interested" will actually delve into them. No one will read even Andrei's well-chewed articles, let alone works of an academic nature.


Has anyone seen such an easy-to-understand TOP anywhere else, one that lets you calculate a ranking for your own optimisation algorithm?

https://habr.com/ru/users/belyalova/publications/articles/


Those are gradient-based optimisation methods for neural networks,

while we are talking about gradient-free global optimisation methods.

Come on, Saber.

This is the ABC of the subject.

Which, by the way, your local expert on optimisation said nothing about in his articles (because he himself is hopeless at AO).

 
Aleksey Nikolayev #:

Read this, especially the section "Loss function != quality metric". I can hardly write any clearer.

Couldn't open it, we are blocked a bit(

Aleksey Nikolayev #:
This leads to a potentially unlimited number of parameters, since function spaces are infinite-dimensional. In practice this means the number of parameters has to be controlled somehow - for trees, for example, by pruning leaves.

For example, ML algorithms such as forests and boosting are built from primitives - rules.


We can define a Backus-Naur grammar for the rules we need, like this:

grammarDef
<ex3> ::= <ex2> | <ex2> & <ex2> | <ex2> & <ex3>
<ex2> ::= <ex1> | <com>(<ex1>, <ex1>)
<ex1> ::= <op>(<var>, <var>)
<com> ::= "&" | "|"
<op>  ::= ">=" | "<=" | "==" | "!="
<var> ::= x1 | x2 | x3 | x4

Behind each rule there is a code/string - a genotype.

2 3 1 3 2 4 3 2 3 1 4 4 1 2 1 4 3 2 4 3 4 1 3 4 3 1 2 1 1 3 2 4 1 4 2 4 3 3 1 4 3 2 3  ->  (x1 == x4 | x2 != x1) & (x4 >= x3 & x1 != x2) & (x3 <= x2 & x3 != x1) 
1 1 4 4 1 4 1 4 2 3 1 4 1 1 1 3 1 4 2 3 4 1 1 3 4 3 2 4 1 4 2 3 4 3 3 2 3 4 2 2 4 4 3  ->  x1 <= x2 & x4 == x2 & x2 <= x4 
2 4 1 4 1 1 1 2 1 4 4 1 2 1 3 1 2 3 3 3 3 4 4 2 2 3 1 3 4 2 2 1 2 4 2 1 4 4 3 3 1 4 3  ->  x2 >= x2 & x1 >= x2 & x2 != x3 
4 3 1 3 2 4 2 2 3 4 4 1 1 2 2 1 3 2 4 3 4 3 3 1 3 1 3 2 2 2 4 4 2 3 2 1 4 1 3 1 3 2 4  ->  (x1 == x3 | x1 != x1) & (x2 == x4 | x4 >= x1) 
1 3 3 1 4 2 2 3 4 3 3 4 4 2 2 4 3 1 4 2 1 1 3 4 2 3 1 2 3 1 1 1 3 3 2 2 2 2 2 3 3 1 2  ->  (x3 >= x3 | x4 >= x4) & x3 == x1 
2 1 2 1 3 3 1 2 3 3 2 3 3 3 2 3 4 4 4 3 4 3 2 2 3 1 4 3 4 2 4 3 4 1 2 3 1 2 1 3 1 4 3  ->  x4 != x2 & x4 != x3 & (x1 != x1 | x1 != x4) 
3 3 1 1 3 3 3 2 4 2 2 3 1 2 2 3 2 4 1 4 3 4 4 2 2 4 1 2 2 4 3 4 2 2 3 4 3 4 4 3 4 4 2  ->  x4 != x4 | x3 >= x3 
3 2 1 1 3 4 2 3 2 2 2 4 3 2 3 4 2 2 4 4 1 1 3 1 2 3 2 4 1 2 1 2 1 2 1 4 3 2 4 1 1 4 2  ->  x4 <= x1 
4 2 3 2 4 4 3 3 3 4 1 4 3 3 3 2 4 3 1 3 4 4 1 4 4 2 1 2 3 1 3 3 4 2 4 1 4 2 3 4 3 4 3  ->  x1 == x1 & (x2 >= x1 | x4 != x3) 
3 2 2 1 2 3 2 4 3 3 4 2 4 4 2 4 3 4 2 2 1 1 2 2 1 3 1 4 3 4 3 2 4 4 3 2 2 2 2 2 3 1 1  ->  x3 <= x4 

On the left is the genotype, on the right the rule generated from it; this genotype can be searched with the help of an AO.

A sum of such rules is essentially the same boosting or forest.

It's kind of a no-brainer.


Here is the code that implements it all

library(gramEvol)

# BNF grammar of the rule language: comparisons of x1..x4 combined with logical connectives
grammarDef <- CreateGrammar(list(
  ex3 = grule(ex2, ex2 & ex2, ex2 & ex3),
  ex2 = grule(ex1, com(ex1, ex1)),
  ex1 = grule(op(var, var)),
  com = grule("&", "|"),
  op  = grule(">=", "<=", "==", "!="),
  var = grule(x1, x2, x3, x4)))

# map 10 random genotypes (integer vectors) to rules via the grammar
for (i in 1:10) {
  genotype <- sample(1:4, size = 43, replace = TRUE)
  rule <- as.character(GrammarMap(genotype, grammarDef))
  cat(as.character(genotype), " -> ", rule, "\n")
}

Package documentation
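A minimal follow-up sketch (my own addition, not from the post above): instead of sampling random genotypes, gramEvol's built-in evolutionary optimiser can search the genotype space for the rule with the lowest error. It reuses the grammarDef defined above; x1..x4 and y are toy placeholder data, and the fitness function is only an illustration.

library(gramEvol)

set.seed(1)
x1 <- runif(200); x2 <- runif(200); x3 <- runif(200); x4 <- runif(200)
y  <- as.integer(x1 >= x3 & x2 <= x4)   # toy target that a rule should recover

fitness <- function(expr) {
  pred <- as.integer(eval(expr))        # evaluate the generated rule on the data
  mean(pred != y)                       # misclassification rate to minimise
}

res <- GrammaticalEvolution(grammarDef, fitness, iterations = 100, terminationCost = 0)
res$best$expressions                    # best rule found by the evolutionary search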

So I see no obstacles to implementing my own ML algorithm purely on optimisation and a grammar.


And our hands are completely free on all fronts: any functions, any transformations, any data - basically anything can be implemented.



A forest is the same thing, no difference:

X <- iris[, 1:(ncol(iris) - 1)]   # features
target <- iris[, "Species"]       # class labels

library(inTrees)
library(RRF)

# fit a random forest and extract every decision path from it as an explicit rule
target |>
  as.factor() |>
  RRF(x = X, ntree = 100) |>
  RF2List() |>
  extractRules(X = X)






697 rules (length<=6) were extracted from the first 100 trees.
       condition                                                                        
  [1,] "X[,4]<=0.8"                                                                     
  [2,] "X[,4]>0.8 & X[,4]<=1.65 & X[,4]<=1.45"                                          
  [3,] "X[,3]<=4.95 & X[,4]>0.8 & X[,4]<=1.65 & X[,4]>1.45 & X[,4]<=1.55"               
  [4,] "X[,3]>4.95 & X[,4]>0.8 & X[,4]<=1.65 & X[,4]>1.45 & X[,4]<=1.55"                
  [5,] "X[,4]>0.8 & X[,4]<=1.65 & X[,4]>1.45 & X[,4]>1.55"                              
  [6,] "X[,3]<=4.75 & X[,4]>0.8 & X[,4]>1.65 & X[,4]<=1.85 & X[,4]<=1.75"               
  [7,] "X[,3]>4.75 & X[,4]>0.8 & X[,4]>1.65 & X[,4]<=1.85 & X[,4]<=1.75"                
  [8,] "X[,1]<=5.95 & X[,3]<=4.85 & X[,4]>0.8 & X[,4]>1.65 & X[,4]<=1.85 & X[,4]>1.75"  
  [9,] "X[,1]>5.95 & X[,3]<=4.85 & X[,4]>0.8 & X[,4]>1.65 & X[,4]<=1.85 & X[,4]>1.75"   
 [10,] "X[,3]>4.85 & X[,4]>0.8 & X[,4]>1.65 & X[,4]<=1.85 & X[,4]>1.75"                 
 [11,] "X[,4]>0.8 & X[,4]>1.65 & X[,4]>1.85"                                            
 [12,] "X[,4]<=0.8"                                                                     
 [13,] "X[,3]<=4.95 & X[,4]>0.8 & X[,4]<=1.55"                                          
 [14,] "X[,3]>4.95 & X[,4]>0.8 & X[,4]<=1.55"                                           
 [15,] "X[,1]<=5.45 & X[,4]>0.8 & X[,4]>1.55 & X[,4]<=1.75"                             
 [16,] "X[,1]>5.45 & X[,3]<=5.45 & X[,4]>0.8 & X[,4]>1.55 & X[,4]<=1.75"                
 [17,] "X[,1]>5.45 & X[,3]>5.45 & X[,4]>0.8 & X[,4]>1.55 & X[,4]<=1.75"                 
 [18,] "X[,1]<=5.95 & X[,3]<=4.9 & X[,4]>0.8 & X[,4]>1.55 & X[,4]>1.75"                 
 [19,] "X[,1]>5.95 & X[,3]<=4.9 & X[,4]>0.8 & X[,4]>1.55 & X[,4]>1.75"                  
 [20,] "X[,3]>4.9 & X[,4]>0.8 & X[,4]>1.55 & X[,4]>1.75"                                
 [21,] "X[,4]<=0.8"                                                                     
 [22,] "X[,3]<=4.85 & X[,4]>0.8 & X[,4]<=1.7"                                           
 [23,] "X[,1]<=5.95 & X[,3]<=4.85 & X[,4]>0.8 & X[,4]>1.7"                              
 [24,] "X[,1]>5.95 & X[,3]<=4.85 & X[,4]>0.8 & X[,4]>1.7"                               
 [25,] "X[,1]<=6.6 & X[,3]>4.85 & X[,4]>0.8 & X[,4]<=1.65"                              
 [26,] "X[,1]>6.6 & X[,3]>4.85 & X[,4]>0.8 & X[,4]<=1.65"                               
 [27,] "X[,3]>4.85 & X[,4]>0.8 & X[,4]>1.65"                                            
 [28,] "X[,3]<=2.45"                                                                    
 [29,] "X[,3]>2.45 & X[,3]<=5.35 & X[,4]<=1.75"                                         
 [30,] "X[,3]>2.45 & X[,3]>5.35 & X[,4]<=1.75"                                          
 [31,] "X[,1]<=5.95 & X[,3]>2.45 & X[,3]<=4.85 & X[,4]>1.75"                            
 [32,] "X[,1]>5.95 & X[,3]>2.45 & X[,3]<=4.85 & X[,4]>1.75"                             
 [33,] "X[,3]>2.45 & X[,3]>4.85 & X[,4]>1.75"                                           
 [34,] "X[,3]<=2.45"           
...
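As a follow-up sketch of the "sum of rules is the same forest" point (my own addition, not in the original post): inTrees can score the extracted rules and assemble them back into a single rule-based learner, which then predicts on its own.

library(inTrees)
library(RRF)

X <- iris[, 1:(ncol(iris) - 1)]
target <- as.factor(iris[, "Species"])

rf         <- RRF(x = X, y = target, ntree = 100)
ruleExec   <- extractRules(RF2List(rf), X = X)     # raw conditions, one per decision path
ruleMetric <- getRuleMetric(ruleExec, X, target)   # frequency, error and prediction per rule
learner    <- buildLearner(ruleMetric, X, target)  # ordered, simplified rule list
pred       <- applyLearner(learner, X)             # predict using the rule list alone
mean(pred == target)                               # in-sample accuracy of the rule learner
presentRules(learner, colnames(X))                 # rules with readable column names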
 
mytarmailS #:
Couldn't open it, we're blocked a bit(

We are blocked a lot, somehow we adapt.

mytarmailS #:
Well, for example, ML algorithms such as forests and boosting are built from primitives - rules.

I did not claim that the absence of a fixed set of parameters makes learning (optimisation) impossible. I was only talking about significant differences from conventional optimisation with a fixed set of parameters. If you don't believe me, you can try to implement trees or grammars in MT5 (where the optimiser works for a fixed set). It is not that it is absolutely impossible, but it is extremely inconvenient.

 
Aleksey Nikolayev #:

I did not claim that the absence of a fixed set of parameters makes learning (optimisation) impossible. I was only talking about significant differences from conventional optimisation with a fixed set of parameters. If you don't believe me, you can try to implement trees or grammars in MT5 (where the optimiser works for a fixed set). It is not that it is absolutely impossible, but it is extremely inconvenient.

The MQL5 language allows you to compensate for the absence of any standard features of the tester and optimiser. Where a dynamic set of parameters is required, you can write an external optimiser for the model in MQL5.

 
Andrey Dik #:

The MQL5 language allows you to compensate for the lack of any standard features of the tester and optimiser. Where a dynamic set of parameters is required, you can write an external optimiser for the model in MQL5.

One simply must write gradient boosting in MQL5 - in order to become the creator of the crappiest reinvented wheel.
 

A student asks questions to an artificial intelligence.

Question: What is your name?

Answer: Vasya. Go to hell.

Question: What is go to hell?

Answer: It is the answer to your second question, Vasya.

P.S.

The question shocks the neural system.

 
Aleksey Nikolayev #:
One simply must write gradient boosting in MQL5 - in order to become the creator of the crappiest reinvented wheel.

And what is the fundamental difference between boosting written in MQL5 and boosting written in any other language? MQL5 is as fast as C# and almost as fast as C++, and its syntax is not much different from those languages. Lately a lot of built-in language features have been added for ML needs.
The standard tester is convenient as a ready-made trading environment, but everything that concerns ML and optimisation can be implemented without it if anyone finds it too constraining.
There are no fundamental limitations in MT5.
 
Andrey Dik #:

And what is the fundamental difference between boosting written in MQL5 and boosting written in any other language? MQL5 is as fast as C# and almost as fast as C++, and its syntax is not much different from those languages. Lately a lot of built-in language features have been added for ML needs.
The standard tester is convenient as a ready-made trading environment, but everything that concerns ML and optimisation can be implemented without it if anyone finds it too constraining.
There are no fundamental limitations in MT5.
What mechanisms for parallelising calculations on the CPU in MQL5 do you know of?
 
Aleksey Nikolayev #:
What mechanisms for parallelising calculations on the CPU in MQL5 do you know of?

OpenCL for the more advanced.

For the less advanced - running agents on separate charts. Each chart runs in its own thread, so together all the CPU cores get used.

Besides that, the terminal's agents themselves can be used to parallelise application calculations on a chart; not many people know this.

In the article I will show how to write a binary GA in MQL5 that covers all significant digits of a double with an arbitrarily small parameter step (in practice it is limited to roughly 16 significant decimal digits of a double). And even that is not the limit - you can write extensions of the standard numeric types in MQL5.
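The idea behind that claim can be sketched outside MQL5 too. Below is a minimal illustration in R (my own, not code from the article): a real-valued parameter in [lo, hi] is encoded as a bit string, a GA would mutate and recombine the bits, and the decoded step becomes finer the more bits are used - with 52 bits it is already on the order of double precision.

decode_bits <- function(bits, lo, hi) {
  # interpret the bit vector as an unsigned integer, then rescale it to [lo, hi]
  n   <- length(bits)
  int <- sum(bits * 2^((n - 1):0))
  lo + int / (2^n - 1) * (hi - lo)
}

set.seed(42)
bits <- sample(0:1, 52, replace = TRUE)   # 52 bits, roughly the mantissa width of a double
decode_bits(bits, lo = -1, hi = 1)        # parameter step here is (hi - lo) / (2^52 - 1)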
