Machine learning in trading: theory, models, practice and algo-trading - page 3428

 
mytarmailS #:


the marketplace ))))))

It's crooked like mine.

 
Maxim Dmitrievsky #:

It's crooked, like mine.

Good.

That's a cool Colab thing.

 
mytarmailS #:

good


t = np.linspace(0.1, 0.4, 100000)  # number of points to generate
 
Maxim Dmitrievsky #:
t = np.linspace(0.1, 0.4, 100000)  # number of points to generate

Got it.

From ticks into M1:

# Weierstrass-Mandelbrot function: real part of a sum of 2N+1 complex terms
weierstrass_mandelbrot <- function(t, D, gamma, phi, N) {
  sapply(t, \(x) {
    sum(sapply(-N:N, function(n) {
      ((1 - exp(1i * gamma^n * x)) * exp(1i * phi * n)) / (gamma^((2 - D) * n))
    }))
  })
}

t <- seq(0.1, 0.4, length.out = 10000)
D <- 1.5      # fractal dimension, 1 < D < 2
gamma <- 1.5
phi <- 6.1    # CHANGE FROM 0 TO 7
N <- 50

y <- Re(weierstrass_mandelbrot(t, D, gamma, phi, N))

par(mfrow = c(1, 2), mar = c(2, 2, 2, 2))
plot(t, y, type = "l", main = "Weierstrass-Mandelbrot Function", xlab = "t", ylab = "W(t)")

# aggregate the raw series into one-minute bars and plot as a candle chart
library(xts)
library(quantmod)

y |>
  xts(Sys.time() + 1:length(y)) |>
  to.minutes() |>
  chart_Series()
  


 
mytarmailS #:

Got it.


Okay. More

phi = 5.1 # CHANGE FROM 0 TO 7

to get different curves

here also

In t = np.linspace(0.1, 0.3, 100000) the upper bound sets the number of cycles, i.e. several copies of the same fractal in a row, with increasing period.

I set 0.3 or 0.4 and get 3 or 4 repetitions. You can do more.
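
For reference, a minimal Python sketch of the same generator (a direct translation of the R code above, not the author's notebook; matplotlib is assumed for the plot):

import numpy as np
import matplotlib.pyplot as plt

# real part of the Weierstrass-Mandelbrot sum over n = -N..N
def weierstrass_mandelbrot(t, D, gamma, phi, N):
    w = np.zeros_like(t, dtype=complex)
    for n in range(-N, N + 1):
        w += (1 - np.exp(1j * gamma**n * t)) * np.exp(1j * phi * n) / gamma**((2 - D) * n)
    return w.real

t = np.linspace(0.1, 0.3, 100000)  # upper bound ~ number of repetitions
y = weierstrass_mandelbrot(t, D=1.5, gamma=1.5, phi=5.1, N=50)  # phi: change from 0 to 7
plt.plot(t, y)
plt.show()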

 
Maxim Dmitrievsky #:

Okay. More

phi = 5.1 # CHANGE FROM 0 TO 7

to get different curves

here also

In t = np.linspace(0.1, 0.3, 100000) the upper bound sets the number of cycles, i.e. several copies of the same fractal in a row, with increasing period.

I set 0.3 or 0.4 and get 3 or 4 repetitions. You can do more.

Yeah, it's cool, but I don't know how this whole thing can help.

 
mytarmailS #:

Yeah, it's cool, but I don't know how this whole thing can help.

I haven't done it yet.

1. Generate charts, mark them up, add them to the main dataset.

2. Train it, test it on the new data.
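
A hedged sketch of this plan, assuming the weierstrass_mandelbrot() from the Python sketch above; the markup rule here (direction of the next step) is a placeholder, not an actual labeling scheme from the thread:

import numpy as np

rng = np.random.default_rng(0)
X, Y = [], []
for _ in range(20):                       # number of synthetic charts
    phi = rng.uniform(0, 7)               # "change from 0 to 7"
    t = np.linspace(0.1, 0.4, 2000)
    y = weierstrass_mandelbrot(t, D=1.5, gamma=1.5, phi=phi, N=50)
    window = 50
    for i in range(len(y) - window - 1):
        X.append(y[i:i + window])                         # features: one window
        Y.append(int(y[i + window] > y[i + window - 1]))  # placeholder label
X, Y = np.asarray(X), np.asarray(Y)       # append to the main dataset, then train/test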

 
Maxim Dmitrievsky #:

I haven't done it yet.

1. Generate charts, mark them up, add them to the main dataset.

2. Train it, test it on the new data.

You can go the other way round: teach the algorithm to transform the input data itself - stretch/shrink/expand/compress, etc.

That will be more efficient than feeding it a 1000 GB dataset to cover all the variants.
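
One way to read this is on-the-fly augmentation: transform each training window randomly at training time instead of materializing every variant. A minimal sketch under that assumption (the specific transforms are illustrative, not Maxim's algorithm):

import numpy as np

rng = np.random.default_rng(1)

# random time stretch/compress (resample a random-length slice back to a fixed
# length) plus a random amplitude scale; expects len(window) > out_len
def augment(window, out_len=50):
    n = len(window)
    seg_len = int(rng.integers(out_len // 2, n + 1))
    start = int(rng.integers(0, n - seg_len + 1))
    seg = window[start:start + seg_len]
    warped = np.interp(np.linspace(0, 1, out_len), np.linspace(0, 1, seg_len), seg)
    return warped * rng.uniform(0.5, 2.0)

Each training epoch then sees fresh stretched/compressed variants of the same windows with no extra storage.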

 
mytarmailS #:

You can go the other way round: teach the algorithm to transform the input data itself - stretch/shrink/expand/compress, etc.

That will be more efficient than feeding it a 1000 GB dataset to cover all the variants.

No. Again, there are patterns in this data; that's why you have to look for them. There's no point in squeezing and stretching random data.

 
Maxim Dmitrievsky #:

No. Again, there are patterns in this data; that's why you have to look for them. There's no point in squeezing and stretching random data.

You don't get it.

Okay, do it.