Machine learning in trading: theory, models, practice and algo-trading - page 980

 
Maxim Dmitrievsky:

It says so right there: the ALGLIB numerical analysis library, ported to MT5. I have already used it inside and out; in general there are no problems, the library is good. But there is no visualization and no newer models. It seems the library is no longer being developed; their site has gone quiet.

Yeah, I missed it. Don't you regret the time wasted on this dead-end library?

 
Vladimir Perervenko:

Yeah, I missed it. Don't you regret the time wasted on this dead-end library?

If you start praising R now, I'll just have to ask you to show me your stats for the last 6 months.

 
Alexander Ivanov:

Good afternoon))


While you have been searching here all this time, we have already created RPE, the Russian breakthrough element.

This is the "fifth element" - grail, philosopher's stone, cinnabar, qi system, the achievement of our scientific specialists in algorithms.

Now any economic and financial project will be optimized through deep neural analysis on RPE.

That is, in the future 1 ruble will equal 1 dollar, thanks to economic breakthroughs.


We are all on our way to a brighter future!)

Where do you live, and how do I get to you? I want to try those mushrooms too!

 
 

When I was trying to teach my robot... the robot was teaching me at the same time :)))

you don't know what the hell you're talking about))

 
Alexander Ivanov:

When I was trying to teach my robot... the robot was teaching me at the same time :)))

you don't know what the hell you're talking about))

Well, yeah. You build an automated trading system, and along the way you train yourself. It's quite normal.

 
Here's the rub. Someone who knows what to feed a neural network, and what to teach it, doesn't need one. I know what to teach it and which predictors are needed to make it work. But I have no need to shove them into the net and teach it anything, because everything goes great without it. A properly sharpened brain is the best neural network. A painter I know found all the patterns by eye over two evenings of drinking :) And here are 980 pages without a single working method. It's nonsense piled on nonsense.
 
Wizard2018:
Here's the rub. Someone who knows what to feed a neural network, and what to teach it, doesn't need one. I know what to teach it and which predictors are needed to make it work. But I have no need to shove them into the net and teach it anything, because everything goes great without it. A properly sharpened brain is the best neural network. A painter I know found all the patterns by eye over two evenings of drinking :) And here are 980 pages without a single working method. It's nonsense piled on nonsense.

It doesn't matter. It's enough to have your own brains. The main thing is to have ideas of your own.

I have no experience but my own). I picked up the idea here, and I spent 3 months figuring out what to feed the neural network. I figured it out: no predictors, no brute-force search; the network does it all by itself.

 

In R 3.5.0, packages are compiled to bytecode when they are updated. This was not the case before.

What about packages that are not updated?

I would like to have all packages in bytecode. I saw an article showing that bytecode is several times faster.

 
SanSanych Fomenko:

In R 3.5.0, packages are compiled to bytecode when they are updated. This was not the case before.

What about packages that are not updated?

I would like to have all packages in bytecode. I saw an article showing that bytecode is several times faster.

Maybe uninstall them and reinstall? Then they would be compiled to bytecode?

I wonder whether the code you write yourself in the interpreter can also be compiled to bytecode? For example, by forming it into a package.
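You don't actually need to form your own code into a package to byte-compile it: a minimal sketch, assuming base R 3.4+, where the built-in `compiler` package can byte-compile individual functions directly (the package name "somePkg" below is a hypothetical placeholder):

```r
# The 'compiler' package ships with base R; no extra installation needed.
library(compiler)

# A plain interpreted function.
f <- function(n) {
  s <- 0
  for (i in seq_len(n)) s <- s + i
  s
}

# Byte-compile it; fc behaves identically but runs as bytecode.
fc <- cmpfun(f)
print(fc(1000))  # same result as f(1000)

# For already-installed packages, reinstalling from source with
# byte-compilation enabled should produce byte-compiled versions:
# install.packages("somePkg", type = "source",
#                  INSTALL_opts = "--byte-compile")
```

Note also that since R 3.4 the JIT compiler is enabled by default (see `compiler::enableJIT`), so top-level functions you write in the interpreter get byte-compiled automatically after a few calls; explicit `cmpfun` mainly matters on older versions.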
