Machine learning in trading: theory, models, practice and algo-trading - page 602

 
Vladimir Perervenko:

As for what interests me: you may have heard of TensorFlow graphs, Protocol Buffers, code generation for different platforms and languages - essentially the low level. I do the same thing, only for my NS and the MQL language.

I have not only heard about it, I use it as well, but with the R language for execution in MT. So we have a different approach and direction; my experience will not be useful to you. You have probably not heard of the Hlaiman EA Generator.

I've heard of it and read about it. That's not the direction I want to take.

I hope it is clear from what I wrote above which direction I might be interested in helping with.

Visualization of graphs and NS topologies, serialization, ProtoBuf formats, batch processing, and import/export of n-dimensional NumPy arrays of NS weights, etc.

If you have this kind of information or experience implementing it, I would be happy to discuss it with you.
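For the weight import/export part, here is roughly what I have in mind: a minimal sketch of dumping and restoring NS weights as n-dimensional NumPy arrays through a single .npz archive. The layer names and shapes below are just placeholders, not from any particular framework:

```python
import numpy as np

# Toy weights for a two-layer net; names and shapes are placeholders.
weights = {
    "dense1_W": np.random.randn(10, 8),
    "dense1_b": np.zeros(8),
    "dense2_W": np.random.randn(8, 1),
    "dense2_b": np.zeros(1),
}

# Pack all arrays into one compressed .npz archive.
np.savez_compressed("weights.npz", **weights)

# Load them back; np.load returns a dict-like NpzFile.
restored = np.load("weights.npz")
for name in restored.files:
    assert np.array_equal(weights[name], restored[name])
    print(name, restored[name].shape)
```

The resulting file could then be parsed on the MQL side or converted to a ProtoBuf format; that part is outside this sketch.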

I'll repeat once again: we have a different approach and direction. My work will not be useful to you.

Good luck

Maybe our developments will be useful to you, for example the Hlaiman EA Generator with Python and R consoles: an integrated environment containing elements of TensorBoard and RStudio, native NS components, and a connection to the MT4 and MT5 terminals through DLL, named pipes, and REST.

 
Maxim Dmitrievsky:

What's up, Michael.

It's true, there's a virtual machine, and they're using their own GPUs )


That reminds me - I've seen this project before. I'm sure such projects let you create and save models, and exchange them, before you ever get around to training. Because when push came to shove, this resource didn't work. There's literature on the subject... I should read up on it...

 
Mihail Marchukajtes:

That reminds me - I've seen this project before. I'm sure such projects let you create and save models, and exchange them, before you ever get around to training. Because when push came to shove, this resource didn't work. There's literature on the subject... I should read up on it...


It's ordinary Python code; the interface can be learned in 15 minutes. At least I understood it all right away, even without knowing Python very well... the basics of Python can be learned in a few days. Everything else is the ML libraries. In short, it's very easy to get started. But if you don't know why you need it, it's better not to get involved :) I'm just studying different neural networks there.

You should, though... because you're stuck on a single predictor and don't know the basics you would need to get started. Without any math background, learning the basics takes at least a year.

The plus is that you don't need to install the environment on your computer, and you don't need one for research or for studying the language.

For example, Bayesian probability theory - do you know how an NS essentially works? About prior and posterior distributions, the Monty Hall paradox, training methods... Or what the difference is between an SVM and an RBM?
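To show what I mean about priors and posteriors, here is a minimal Monty Hall simulation in plain Python (no libraries involved): after the host opens a goat door, the posterior probability that the other closed door hides the car is 2/3, which is why switching wins.

```python
import random

def monty_hall(trials=100_000):
    """Simulate Monty Hall: switching wins ~2/3 of the time, staying ~1/3."""
    stay_wins = switch_wins = 0
    for _ in range(trials):
        car = random.randrange(3)    # door hiding the car
        pick = random.randrange(3)   # player's first pick
        # The host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        # Switching means taking the remaining closed door.
        switched = next(d for d in range(3) if d != pick and d != opened)
        stay_wins += (pick == car)
        switch_wins += (switched == car)
    print(f"stay:   {stay_wins / trials:.3f}")    # ~0.333
    print(f"switch: {switch_wins / trials:.3f}")  # ~0.667

monty_hall()
```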
 
Maxim Dmitrievsky:

It's ordinary Python code; the interface can be learned in 15 minutes. At least I understood it all right away, even without knowing Python very well... the basics of Python can be learned in a few days. Everything else is the ML libraries. In short, it's very easy to get started. But if you don't know why you need it, it's better not to get involved :) I'm just studying different neural networks there.

You should, though... because you're stuck on a single predictor and don't know the basics you would need to get started. Without any math background, learning the basics takes at least a year.

The plus is that you don't need to install the environment on your computer, and you don't need one for research or for studying the language.


What are you talking about? I know the basics of ML thoroughly enough; the syntax of Python and so on is another matter, I agree. I want to test the public services: how they build models and whether those models are better than mine on the OOS (out-of-sample data). I can already draw conclusions from that. In theory, the basic principles of my optimizer can be reproduced so that the comparison is fair...
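Just to pin down what comparing "on the OOS" means here, a minimal sketch, with sklearn standing in for whatever model a public service builds, and a strictly chronological split so the test segment is genuinely out-of-sample. The data here is synthetic, purely for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Synthetic placeholder data; in practice, your predictors and labels.
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Chronological split: fit on the first 70%, evaluate on the rest (OOS).
split = int(0.7 * len(X))
model = LogisticRegression().fit(X[:split], y[:split])
print("OOS accuracy:", accuracy_score(y[split:], model.predict(X[split:])))
```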

 
Mihail Marchukajtes:

What are you talking about? I know the basics of ML thoroughly enough; the syntax of Python and so on is another matter, I agree. I want to test the public services: how they build models and whether those models are better than mine on the OOS (out-of-sample data). I can already draw conclusions from that. In theory, the basic principles of my optimizer can be reproduced so that the comparison is fair...


Well, to reproduce the optimizer you'd have to study the whole theory, including probability... such things aren't done at the snap of a finger )

They're written by extremely smart people )

Ivan, for example, wrote his Hlaiman... it's no joke, try writing something like that :)

Users like you and me can only envy in silence and learn the basics.

 
Maxim Dmitrievsky:


Well, to reproduce the optimizer you'd have to study the whole theory, including probability... such things aren't done at the snap of a finger )

They're written by extremely smart people )

Ivan, for example, wrote his Hlaiman... it's no joke, try writing something like that :)

Users like you and me can only envy in silence and learn the basics.


The main thing is to understand the principle of the approach. The implementation will be different for everyone....

 
Mihail Marchukajtes:

The main thing is to understand the principle of the approach. The implementation will be different for everyone....


The principle of the approach is all that unintuitive math :)

A simple example:

 
Vizard_:

That was a joke. Seriously, though: tweak the settings and see how each one affects the result.
If the inputs are adequate to the task, you can do it with "1 neuron".
In the context of ML, Toxic's approach is the ideologically correct one.
--------------
A professor on deep networks - youtu.be/qx3iM2aa2yU
At the 31-minute mark: "There's still little science there, and a lot of voodoo magic."



I still can't even decide which is better, sigmoid or tanh ) Some people say tanh, and now I'm reading that...


 
I don't think the second paragraph matters. The bias in the neuron will shift the sigmoid's 0.5 toward 0 if that gives a smaller error, and will likewise shift the tanh's 0 to any other value.
It would be interesting to optimize the steepness, though; that can be done in a homebrew package. Off-the-shelf packages probably don't allow it.
I compared these functions in my own setup: the sigmoid gives a smaller error than tanh. But that is probably true only on the randomly chosen test segment. A larger comparison is needed.
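To make the bias/steepness point concrete, a small sketch in plain NumPy, where k (steepness) and b (bias) are just illustrative free parameters, not from any particular package. It also shows why the sigmoid-vs-tanh choice matters less than it seems: tanh is just a rescaled sigmoid.

```python
import numpy as np

def sigmoid(x, k=1.0, b=0.0):
    """Logistic activation; k = steepness, b = bias (moves the 0.5 midpoint)."""
    return 1.0 / (1.0 + np.exp(-k * (x + b)))

def tanh_act(x, k=1.0, b=0.0):
    """Hyperbolic tangent; k = steepness, b = bias (moves the zero crossing)."""
    return np.tanh(k * (x + b))

x = np.linspace(-3.0, 3.0, 7)
print(sigmoid(x, k=2.0, b=1.0))   # midpoint shifted away from x = 0
print(tanh_act(x, k=2.0, b=1.0))  # zero crossing shifted likewise

# tanh is an affine rescaling of the sigmoid: tanh(x) = 2*sigmoid(2x) - 1,
# so a network can usually compensate for the choice with its weights/biases.
assert np.allclose(np.tanh(x), 2.0 * sigmoid(x, k=2.0) - 1.0)
```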
 
elibrarius:
I don't think the second paragraph matters. The bias in the neuron will shift the sigmoid's 0.5 toward 0 if that gives a smaller error, and will likewise shift the tanh's 0 to any other value.
It would be interesting to optimize the steepness, though; that can be done in a homebrew package. Off-the-shelf packages probably don't allow it.
I compared these functions in my own setup: the sigmoid gives a smaller error than tanh. But that is probably true only on the randomly chosen test segment. A larger comparison is needed.

They say that ReLU is better :)

The Fuzzy library in MT5 has all sorts of functions, and from those pieces you can build your own NS - half neural network, half expert system - but then you have to train it in the optimizer.
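Roughly what assembling an NS from pieces and training it in the optimizer could look like: a sketch of a one-hidden-layer net in plain Python (not the MT5 Fuzzy library itself), whose weights sit in one flat parameter vector - exactly the kind of thing a strategy-tester optimizer can sweep.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def tiny_net(x, params):
    """One hidden ReLU layer + sigmoid output; all weights come from a flat
    parameter vector so an external optimizer can tune them."""
    W1 = params[:6].reshape(2, 3)    # 2 inputs -> 3 hidden units
    b1 = params[6:9]
    W2 = params[9:12].reshape(3, 1)  # 3 hidden -> 1 output
    b2 = params[12]
    h = relu(x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

# 13 free parameters in total; the optimizer searches over this vector.
params = np.random.randn(13)
x = np.array([[0.5, -1.0]])          # one sample with 2 inputs
print(tiny_net(x, params))           # output in (0, 1), e.g. a buy probability
```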