Machine learning in trading: theory, models, practice and algo-trading - page 3077

 
СанСаныч Фоменко #:

Although your smug habit of labelling people makes dealing with you unpleasant, I will definitely reply later, for the sake of a public discussion of terminology translation, using a very interesting article as the example.

For now, the Yandex translation of nuisance -

hindrance, annoyance, burden ...

- does not suit me at all. I will give my translation later and explain it. I'm busy right now.

Before telling a very interesting story, please look up the definition on the internet (statistics section).

Also, RF is only mentioned in the article; it is not the basis of the article.

You didn't read the article, but you drew conclusions.

 
Maxim Dmitrievsky #:

Before telling a very interesting story, please look up the definition on the internet (statistics section).

Also, RF is only mentioned in the article; it is not the basis of the article.

You didn't read the article, but you drew conclusions.

4 Simulation Study

We study the finite-sample performance of meta-learners for estimation of heterogeneous treatment effects based on Random Forests (Breiman, 2001; see also Biau & Scornet, 2016, for a comprehensive introduction). The focus of the Monte Carlo study lies in an assessment of the influence of sample-splitting and cross-fitting on the causal effect estimation. For this purpose we compare the above-discussed meta-learners estimated with full-sample, double sample-splitting, and double cross-fitting.


We rely on the Random Forest as the base learner for all meta-learners for several reasons.
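The sample-splitting idea from the quoted excerpt can be sketched in a few lines of R. This is a minimal illustration only: `lm` stands in for the paper's Random Forest base learner, and `t_learner_split` is a hypothetical helper name, not code from the article.

```r
# Double sample-splitting sketch for a T-learner: fit the outcome models
# on one half of the data, evaluate treatment effects on the other half.
set.seed(1)
n   <- 2000
x   <- runif(n)                       # single covariate
w   <- rbinom(n, 1, 0.5)              # random treatment assignment
tau <- 2 * x                          # true heterogeneous effect
y   <- x + w * tau + rnorm(n, sd = 0.5)

t_learner_split <- function(x, w, y) {
  n   <- length(y)
  idx <- sample(n, n / 2)             # split: fit nuisance models on one half...
  fit1 <- lm(y ~ x, subset = intersect(idx, which(w == 1)))
  fit0 <- lm(y ~ x, subset = intersect(idx, which(w == 0)))
  hold <- setdiff(seq_len(n), idx)    # ...and estimate effects on the other
  nd   <- data.frame(x = x[hold])
  predict(fit1, nd) - predict(fit0, nd)   # estimated CATE on held-out half
}

tau_hat <- t_learner_split(x, w, y)
mean(tau_hat)   # should be near the true average effect, mean(2 * x) = 1
```

Cross-fitting would repeat this with the roles of the two halves swapped and average the results, so that no observation is used for both nuisance fitting and effect estimation.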


You didn't read the article, but you drew conclusions.

I see no point in discussing anything with you!

 
СанСаныч Фоменко #:

4 Modelling

Study We study the finite-sample effectiveness of meta-learners for estimating heterogeneous treatment effects based on random forests (Breiman, 2001; see also Biau & Scornet, 2016, for a detailed introduction). The focus of the Monte Carlo study is to assess the impact of sample splitting and cross-fitting on the estimation of causal effects. For this purpose, we compare the meta-learners discussed above, estimated with the full sample, double sample splitting, and double cross-fitting.


We rely on Random Forest as the base learning method for all meta-learners for several reasons.


Didn't read the article, but drew conclusions.

I don't see the point of discussing anything with you!

The article is not about RF but about causal inference, so the terminology comes from there.

You are not yet in a position to discuss any of it, so of course you don't see the point.
 

I propose to join efforts to search the code base for useful information - namely, interesting indicators.

The task is time-consuming, but there is a chance that something underrated will be found.

Let's build basic predictors from the indicators plus target variables, and analyse the probability distribution of the target.

As a result, we will select interesting custom indicators, with their settings, for different TFs and trading instruments.

On my side, about 200 CPU cores will be put to work. I will organise the joint work and write the necessary code.

As a result, we will be able to use any of the analysed indicators in our code, having a standard for their settings, including the range and step of variation of each setting.

All participants of this joint work will be able to use the results.

It will be convenient to organise the process in Discord. What do you think? It seems that everyone wins: you don't share your secrets, but you get a potentially useful result.
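A rough sketch of the kind of analysis proposed, in R (all names are assumptions for illustration; a 20-bar SMA stands in for an arbitrary custom indicator):

```r
# Sketch: turn an indicator into a categorical predictor and inspect the
# target's distribution conditional on it (SMA stands in for a custom indicator).
set.seed(7)
n      <- 5000
price  <- cumsum(rnorm(n)) + 1000
sma    <- stats::filter(price, rep(1 / 20, 20), sides = 1)   # 20-bar SMA
pred   <- as.numeric(sign(price - sma))      # predictor: price above/below SMA
target <- sign(c(diff(price), NA))           # target: direction of the next move

ok  <- complete.cases(pred, target)
tab <- table(predictor = pred[ok], target = target[ok])
prop.table(tab, margin = 1)   # conditional distribution of the target
```

On random-walk data the conditional distributions are close to 50/50; an indicator worth keeping would shift them noticeably on real data.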

 

I've been learning how to display an interactive chart in R with shiny for my application...

It's a little hacky, but I got it working; I haven't tried the dash library yet...

So, if you are interested, you can use it. The chart opens in a browser, and you can toggle full-screen mode by double-clicking on it.


The chart is full-featured: you can display trades, draw, select objects, get values, etc. (but that is separate code).


library(xts)
library(plotly)
library(shiny)
library(shinyfullscreen)

# Simulate one-second prices and aggregate them into 5-minute OHLC bars
len    <- 50000
times  <- seq(as.POSIXct("2016-01-01 00:00:00"), length.out = len, by = "sec")
prices <- cumsum(rnorm(len)) + 1000

p <- to.minutes5(xts(prices, order.by = times))

dat <- cbind.data.frame(index(p), coredata(p))
colnames(dat) <- c("date", "open", "high", "low", "close")

# Build a candlestick chart sized to the browser window
my_plot <- function(dat, width, height) {
  pl <- plot_ly(dat, x = ~date, type = "candlestick",
                open = ~open, close = ~close,
                high = ~high, low = ~low,
                line = list(width = 1),
                width = width, height = height)
  layout(pl,
         xaxis = list(rangeslider = list(visible = FALSE), title = ""),
         yaxis = list(side = "right"),
         plot_bgcolor = 'rgb(229,229,229)',
         paper_bgcolor = "white",
         margin = list(l = 0, r = 0, t = 0, b = 0))
}

# Report the browser window size to Shiny (updated on every resize)
resize_tag <- function() {
  tags$head(tags$script('
    var dimension = [0, 0];
    $(document).on("shiny:connected", function(e) {
      dimension[0] = window.innerWidth;
      dimension[1] = window.innerHeight;
      Shiny.onInputChange("dimension", dimension);
    });
    $(window).resize(function(e) {
      dimension[0] = window.innerWidth;
      dimension[1] = window.innerHeight;
      Shiny.onInputChange("dimension", dimension);
    });
  '))
}

ui <- fluidPage(
  resize_tag(),
  # double-click toggles full-screen mode (shinyfullscreen)
  fullscreen_this(plotlyOutput("plot"))
)

server <- function(input, output) {
  output$plot <- renderPlotly({
    my_plot(dat,
            width  = 0.95 * as.numeric(input$dimension[1]),
            height = as.numeric(input$dimension[2]))
  })
}

shinyApp(ui, server, options = list(launch.browser = TRUE))

 
Aleksey Vyazmikin #:

... custom indicators with their settings for different TFs and trading instruments.

From my side about 200 cores will be included in the work. I will organise joint work and write the necessary code.

As a result, we will be able to use any of the analysed indicators in our code, having a standard for their settings, including the range and step of variation of each setting.

All participants of this joint endeavour will be able to use the achievements.

It will be convenient to organise the process in Discord. What do you think? It seems that everyone wins - you don't share your secrets, but you get a potentially useful result.

90% of the MAs in such indicators are replaced by digital filters and wavelets. What remains? Volatility indicators - what else?

 
Rorschach #:

90% of the MAs in such indicators are replaced by digital filters and wavelets. What is left? Volatility indicators - what else?

To the overall task you can also add an attempt to predict an indicator's values from returns - if that works with 100% accuracy, discard the indicator.

You can start simple - categorise them by type: oscillators, averagers (like MAs), and level indicators, which are recalculated relatively rarely.

And you can process historical news within the framework of this project.
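The point about MAs being replaceable by digital filters is easy to illustrate in base R: an SMA is just a FIR filter with equal coefficients (a sketch; the function name `sma_loop` is made up for the comparison):

```r
# An SMA is a FIR (finite impulse response) filter with equal coefficients;
# changing the coefficient vector yields other digital filters.
set.seed(3)
price <- cumsum(rnorm(500)) + 100

sma_loop <- function(x, k) {          # naive moving average for comparison
  out <- rep(NA_real_, length(x))
  for (i in k:length(x)) out[i] <- mean(x[(i - k + 1):i])
  out
}

fir <- as.numeric(stats::filter(price, rep(1 / 20, 20), sides = 1))
ref <- sma_loop(price, 20)
max(abs(fir - ref), na.rm = TRUE)     # identical up to floating point
```

Swapping `rep(1 / 20, 20)` for any other coefficient vector gives a different FIR filter (e.g. a windowed low-pass) with no change to the surrounding code, which is exactly the substitution being claimed.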
 

So nobody liked my idea?

Does everyone think they are smarter than the rest and is sure there can be no useful idea in indicators?

Or are people simply not interested in extracting useful information together? As in, neither for themselves nor for others?

Or do you have 10 lives in reserve and hope to manage everything on your own?

 
Aleksey Vyazmikin #:

... custom indicators with their settings for different TFs and trading instruments.

From my side about 200 cores will be included in the work. I will organise joint work and write the necessary code.

As a result, we will be able to use any of the analysed indicators in our code, having a standard for their settings, including the range and step of variation of each setting.

All participants of this joint endeavour will be able to use the achievements.

It will be convenient to organise the process in Discord. What do you think? It seems like everyone wins - you don't share your secrets, but you get a potentially useful result.

Alexei, it's almost impossible

 
lynxntech #:

Alexei, it's almost impossible

What exactly is impossible?
