Machine learning in trading: theory, models, practice and algo-trading - page 2633

 
Maxim Dmitrievsky #:

Download the bot from the Market, run it in the MT5 tester; there is an option to save the report with all trades and other information.

You can automate this, including running the test and exporting the trades to csv - https://www.mql5.com/ru/code/26132

MultiTester
  • www.mql5.com
Multiple runs/optimizations in the Tester.
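Once the tester report is saved to csv, the trades can be loaded into R for a quick look. A minimal sketch, assuming a per-trade "profit" column (the column names in a real export will differ - adjust them to your file):

```r
# Toy file standing in for a real tester export; the "profit" column is an assumption.
tmp <- tempfile(fileext = ".csv")
write.csv(data.frame(profit = c(10, -5, 7, -2, 12)), tmp, row.names = FALSE)

trades <- read.csv(tmp)
equity <- cumsum(trades$profit)   # running equity from per-trade profits
plot(equity, type = "l", main = "Equity from exported trades")
```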
 
mytarmailS #:

Ran one gold strategy from the Market ))

Equity curve in my tester.

Threw it into TSLab to get a better look.

The match looks good.


I looked at the trades.


It looks as if it were traded by a manual trader, with extremely long holding times and a vague trading algorithm...

Random forest certainly couldn't identify anything, but it was interesting and informative )))

So you need to work out which timeframe it uses and train two models: one for the trades, the other for timing. Or a single multiclass model. Check the bot description; it may simply be bad and not worth fitting.
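The two-model idea can be sketched on synthetic data. Everything below (features, labels, thresholds) is a toy assumption, only meant to show the split into a direction model and a timing model:

```r
# Hedged sketch: model 1 predicts trade direction, model 2 predicts whether
# to trade at all; trade only where the timing model allows it.
set.seed(42)
n <- 500
feats <- data.frame(f1 = rnorm(n), f2 = rnorm(n))
dir_label  <- factor(ifelse(feats$f1 + rnorm(n, sd = 0.5) > 0, "sell", "buy"))
time_label <- factor(ifelse(abs(feats$f2) > 1, "trade", "skip"))

m_dir  <- glm(dir_label  ~ f1 + f2, family = binomial, data = feats)  # direction model
m_time <- glm(time_label ~ f1 + f2, family = binomial, data = feats)  # timing model

allow  <- predict(m_time, type = "response") > 0.5
signal <- ifelse(allow,
                 ifelse(predict(m_dir, type = "response") > 0.5, "sell", "buy"),
                 "skip")
```

A multiclass model (buy/sell/skip) would collapse both decisions into one classifier; the two-model split keeps the timing filter interpretable on its own.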
 
Andrey Khatimlianskii #:

You can automate this, including running the test and exporting the trades to csv - https://www.mql5.com/ru/code/26132

Yes, thanks
 
Dmytryi Voitukhov #:

Might be useful... I have a many-to-many model without recurrence. And no convolutional layers. I settled on this model after dissecting the mechanics of neural networks. We're looking for a common denominator here, aren't we? Argue with me.

I'm not sure what I'm supposed to argue about.
 
Aleksey Nikolayev #:

Thanks, I'll have a look.

Still, in our case order does matter. For example, you can always turn a series into a random walk by shuffling its increments randomly.
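A quick illustration on toy AR(1) increments (not market data): shuffling keeps the values and the endpoint of the cumulative path, but destroys the serial structure:

```r
# Autocorrelated increments vs. the same increments in random order.
set.seed(1)
inc  <- as.numeric(arima.sim(list(ar = 0.8), n = 2000))  # serially dependent increments
shuf <- sample(inc)                                      # same values, shuffled

acf_orig <- acf(inc,  plot = FALSE)$acf[2]   # lag-1 autocorrelation, close to 0.8
acf_shuf <- acf(shuf, plot = FALSE)$acf[2]   # close to 0 after shuffling

c(sum(inc), sum(shuf))   # cumulative paths end at (numerically) the same point
```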

I also remembered that you once wrote here about sequential pattern mining and the sequence alignment problem. That also seems to be one way of approaching the problem, although sequences belonging to the same class are not necessarily similar.

apriori_cspade <- function(apri_supp = 0.1, arpi_conf = 0.1, liS,
                           maxgap = NULL, maxlen = 5, rhs = NULL,
                           apri_maxlen = 2, apri_minlen = 2, verbose = TRUE,
                           redundant.rules = FALSE,
                           cspade_supp = 0.5, cspade_conf = 0.5) {

  library(arules)
  library(arulesSequences)

  # 1. Pre-filter items with apriori: keep only items that appear in
  #    association rules (optionally restricted to the given rhs labels).
  if (!is.null(rhs)) {
    apri <- apriori(liS,
                    parameter  = list(support = apri_supp,
                                      confidence = arpi_conf,
                                      minlen = apri_minlen,
                                      maxlen = apri_maxlen),
                    appearance = list(rhs = rhs, default = "lhs"),
                    control    = list(verbose = verbose))
  } else {
    apri <- apriori(liS,
                    parameter = list(support = apri_supp,
                                     confidence = arpi_conf,
                                     target = "rules",
                                     minlen = apri_minlen,
                                     maxlen = apri_maxlen),
                    control   = list(verbose = verbose))
  }

  # extract the item names that occur in the apriori rules
  ar <- as(apri, "data.frame")
  ar <- as.character(ar$rules)
  ar <- gsub("[{}>]", "", ar)
  ar <- unlist(stringr::str_split(ar, pattern = "="))
  ar <- stringr::str_trim(ar[!duplicated(ar)])

  # keep only the surviving items, then drop sequences that became too short
  liS <- lapply(liS, function(x) x[x %in% ar])
  liS <- liS[sapply(liS, length) > 2]
  if (length(liS) == 0) return(NULL)

  # 2. Convert the list of sequences to the transaction format cspade expects.
  cspade.form <- do.call(rbind, lapply(seq_along(liS), function(i) {
    Q <- liS[[i]]
    data.frame(ID = i, evID = seq_along(Q), SIZE = 1, ITEM = Q)
  }))

  tmp <- tempfile(fileext = ".txt")   # the original used a hard-coded path here
  write.table(cspade.form, file = tmp, append = FALSE, sep = ",",
              row.names = FALSE, col.names = FALSE)
  x <- read_baskets(con = tmp, sep = ",",
                    info = c("sequenceID", "eventID", "SIZE"))

  # 3. Mine frequent sequences and induce rules from them.
  par <- list(support = cspade_supp, maxlen = maxlen)
  if (!is.null(maxgap)) par$maxgap <- maxgap
  mod <- cspade(x, parameter = par, control = list(verbose = verbose))
  gc(reset = TRUE, verbose = FALSE)

  rules <- ruleInduction(mod, confidence = cspade_conf,
                         control = list(verbose = verbose))
  if (redundant.rules) rules <- rules[!is.redundant(rules)]

  final <- as(rules, "data.frame")

  # parse the rule strings into plain character vectors
  R <- final$rule
  R <- gsub("[\"<>{}]", "", R)
  R <- stringr::str_split(gsub("=", ",", R), pattern = ",")
  R <- lapply(R, trimws)

  list(rules = final, parse.rules = R)
}

data:

set.seed(123)
li <- list()
for(i in 1:100){
  li <- append(li,
               list(c(letters[sample(1:10, sample(5:10, 1))],
                      sample(c("buy","sell"), 1))))
}

head(li)
[[1]]
[1] "c"    "b"    "f"    "j"    "e"    "d"    "i"    "sell"

[[2]]
[1] "j"    "e"    "c"    "h"    "a"    "sell"

[[3]]
[1] "i"   "c"   "h"   "b"   "g"   "buy"

[[4]]
 [1] "c"   "d"   "f"   "a"   "j"   "e"   "i"   "h"   "b"   "g"   "buy"


We run the function and search for sequences that lead to our labels:

ac <- apriori_cspade(liS = li,apri_supp = 0.1,
                      arpi_conf = 0.5,
                      rhs = c("buy","sell"),cspade_supp = 0.1,redundant.rules = T)
 ac
$rules
                                rule support confidence      lift
29          <{"a"},{"f"}> => <{"j"}>    0.10  0.5000000 0.7246377
56          <{"b"},{"f"}> => <{"i"}>    0.12  0.6000000 0.8333333
80          <{"e"},{"a"}> => <{"h"}>    0.14  0.5000000 0.6666667
98          <{"i"},{"e"}> => <{"g"}>    0.11  0.5000000 0.6329114
149         <{"b"},{"e"}> => <{"d"}>    0.11  0.5789474 0.7617729
168             <{"a"}> => <{"buy"}>    0.45  0.6081081 1.0484623
169             <{"b"}> => <{"buy"}>    0.44  0.6027397 1.0392064
170             <{"c"}> => <{"buy"}>    0.47  0.6103896 1.0523959
171             <{"d"}> => <{"buy"}>    0.46  0.6052632 1.0435572
172             <{"e"}> => <{"buy"}>    0.38  0.5757576 0.9926855
173             <{"f"}> => <{"buy"}>    0.42  0.6000000 1.0344828
174             <{"g"}> => <{"buy"}>    0.47  0.5949367 1.0257529
175             <{"h"}> => <{"buy"}>    0.43  0.5733333 0.9885057
176             <{"i"}> => <{"buy"}>    0.41  0.5694444 0.9818008
177             <{"j"}> => <{"buy"}>    0.45  0.6521739 1.1244378
178       <{"j"},{"i"}> => <{"buy"}>    0.17  0.6800000 1.1724138
182       <{"j"},{"g"}> => <{"buy"}>    0.18  0.6923077 1.1936340
183       <{"g"},{"j"}> => <{"buy"}>    0.18  0.6666667 1.1494253
184       <{"j"},{"f"}> => <{"buy"}>    0.14  0.7000000 1.2068966
185       <{"f"},{"j"}> => <{"buy"}>    0.21  0.7000000 1.2068966
187       <{"e"},{"j"}> => <{"buy"}>    0.17  0.7083333 1.2212644
189       <{"d"},{"j"}> => <{"buy"}>    0.23  0.7666667 1.3218391
191       <{"c"},{"j"}> => <{"buy"}>    0.25  0.7142857 1.2315271
192       <{"j"},{"b"}> => <{"buy"}>    0.16  0.6666667 1.1494253
194       <{"j"},{"a"}> => <{"buy"}>    0.14  0.6666667 1.1494253
195       <{"a"},{"j"}> => <{"buy"}>    0.22  0.7333333 1.2643678
196 <{"g"},{"c"},{"j"}> => <{"buy"}>    0.10  1.0000000 1.7241379
197       <{"i"},{"h"}> => <{"buy"}>    0.17  0.5862069 1.0107015
198       <{"h"},{"i"}> => <{"buy"}>    0.17  0.6071429 1.0467980
204       <{"e"},{"i"}> => <{"buy"}>    0.17  0.6538462 1.1273210
207       <{"i"},{"c"}> => <{"buy"}>    0.16  0.6956522 1.1994003
210       <{"b"},{"i"}> => <{"buy"}>    0.20  0.7692308 1.3262599
212       <{"a"},{"i"}> => <{"buy"}>    0.15  0.7142857 1.2315271
213 <{"c"},{"f"},{"i"}> => <{"buy"}>    0.10  0.6666667 1.1494253


The function is "dirty", but it works; adapt it to your needs and install the required packages.

But be aware that algorithms searching for "sparse sequences" of this kind are very resource-hungry: the search space is huge, even though the algorithm itself searches efficiently and is written in C++.
 
mytarmailS #:

data


run the function and search for sequences that lead to our labels


The function is "dirty", I wrote it for myself, but it works; adapt it to your needs and install the required packages.

But be aware that algorithms searching for "sparse sequences" of this kind are very resource-hungry: the search space is huge, even though the algorithm itself searches efficiently and is written in C++.

Thanks, I'll think about how to apply it to my problems.

 
Aleksey Nikolayev #:

Thanks, I'll think about how to apply it to my tasks.

If you have any questions, just ask. I've spent a lot of time on this approach, so I'm more or less in the know; in fact, I think this approach is the most promising.
 

One approach or both?

https://habr.com/ru/post/661457/

Data-centric and model-centric approaches in machine learning
  • 2022.04.19
  • habr.com
Code and data are the foundation of an AI system. Both components play an important role in building a reliable model, but which of them deserves more focus? In this article we compare data-centric and model-centric methodologies and see which is better; we also discuss how to adopt a data-centric infrastructure...
 
Mikhail Mishanin #:

One approach or both?

https://habr.com/ru/post/661457/

It is rare to see such a revelation. But it is the right approach to the market.

 
Mikhail Mishanin #:

One approach or both?

https://habr.com/ru/post/661457/

In my view, a harmonious combination of both approaches, taking into account the specifics of our task. The features should capture the "physics" of the market, and the models should be built to maximize profit (rather than, say, the probability of being right).