Machine learning in trading: theory, models, practice and algo-trading - page 428
I see; I don't think anyone here has actually compared them in practice :) I will look for information so that I don't end up fooled if it turns out that deep learning gives no advantage over random forests. And since its building block is an MLP, it may well turn out that it doesn't...
By the way, deep learning is anything with more than 2 layers; an MLP with 2 hidden layers is also deep learning. I was referring to the deep networks that Vladimir described in the article linked above.
TOTALLY WRONG. Where did you get this information?
Although they write that the predictors are the most important thing, because the models all perform roughly the same... that is theory; in practice it turns out that model selection is also very important, for example the trade-off between speed and quality, because a neural network usually takes a long time to train...
A DNN is very fast; I have tested it.
I want to use a native implementation, without any third-party software or a direct connection from MT5 to an R server; native is better. You only need to rewrite the neural network you need from C++ into MQL once, and that's all.
How did you test it? It works for me.
Oh, I forgot to add my opinion.
IMHO based on practice
Good luck
Deep learning (also known as deep structured learning or hierarchical learning) is the application to learning tasks of artificial neural networks (ANNs) that contain more than one hidden layer. Deep learning is part of a broader family of machine learning methods based on learning data representations, as opposed to task-specific algorithms. Learning can be supervised, partially supervised or unsupervised.
As for deep learning with autoencoders, yes, it is fast, but I haven't gotten to it yet, hence the logical question - does it have an advantage over RF?
P.S. Does it also run fast in the optimizer? Or in the cloud?
https://en.wikipedia.org/wiki/Deep_learning
1. Where did you find this definition? Are you serious? I'll find links to serious sources when I have time.
2. The main advantage of a DNN with pre-training is transfer learning. It's much faster, more accurate and ... Use the darch package (a minimal usage sketch follows below).
3. Any optimization must be done in R. Faster, more transparent and flexible.
Good luck
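Since the advice above only says "use the darch package," here is a minimal sketch of what a darch training call looks like in R. Everything in it is an assumption for illustration: the data are synthetic, the layer sizes are arbitrary, and the argument names (layers, rbm.numEpochs, darch.numEpochs, darch.batchSize) follow the darch ~0.12 documentation, so check the version you actually have installed.

```r
# Minimal sketch of training a small DNN with the darch package.
# Illustrative only: synthetic data, arbitrary hyperparameters,
# argument names as in darch ~0.12.
library(darch)

set.seed(42)
X <- matrix(rnorm(1000 * 10), ncol = 10)                  # 10 toy predictors
y <- matrix(as.numeric(rowSums(X[, 1:3]) > 0), ncol = 1)  # toy binary target

model <- darch(x = X, y = y,
               layers = c(10, 20, 20, 1),   # input, two hidden layers, output
               rbm.numEpochs   = 10,        # SRBM pre-training epochs (0 disables it)
               darch.numEpochs = 50,        # backprop fine-tuning epochs
               darch.batchSize = 32)

head(predict(model, newdata = X))           # raw network outputs, in-sample only
```

In real use the features and target would of course come from your own predictor set, and the prediction would be made on held-out data, not on the training matrix.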
Although the term "deep learning" can be understood in a broader sense, it is mostly applied in the field of (artificial) neural networks.
https://habrahabr.ru/company/wunderfund/blog/314242/
And here.
They may all be lying, I'm not aware of it )
IMHO, of course
What's the use of predictors? A time series is a predictor.
Forgot to put (c) :))
Who did you quote?)
No, they don't. Below is an explanation (from an article I never finished :( )
Introduction
Main directions of research and applications
At present, two main currents have formed in the research and application of deep neural networks (here we are talking only about multilayer fully connected networks - MLPs), which differ in their approach to initializing the neuron weights in the hidden layers.
First: It is well known that neural networks are extremely sensitive to the way the neurons in the hidden layers are initialized, especially when there are more than 3 hidden layers. The initial push toward solving this problem came from Professor G. Hinton. The essence of his proposal was that the weights of the neurons in the hidden layers are initialized with weights obtained from the unsupervised training of auto-associative networks built from RBMs (restricted Boltzmann machines) or AEs (autoencoders). These Stacked RBM (SRBM) and Stacked AE (SAE) networks are trained in a certain way on a large unlabeled data set. The purpose of such training is to reveal hidden structures (representations, patterns) and dependencies in the data. Initializing the MLP neurons with the weights obtained during pre-training places the MLP in the region of the solution space closest to the optimum. This allows the subsequent fine-tuning (training) of the MLP to use less labeled data and fewer training epochs. For many practical applications (especially when processing "big data"), these are critical advantages. A code sketch of this pre-training/fine-tuning workflow is given after the comparison of the two directions below.
Second: A group of scientists (Bengio et al.) focused their main efforts on developing and researching specific methods for initializing the hidden neurons, special activation functions, and stabilization and training methods. Progress in this direction is mainly connected with the rapid development of deep convolutional and recurrent neural networks (DCNN, RNN), which have shown amazing results in image recognition, text analysis and classification, and translation of live speech from one language to another. The ideas and methods developed for these networks have been applied to MLPs with no less success.
Today both directions are actively used in practice. Comparative experiments [ ] between these two approaches have not revealed a significant advantage of one over the other, but one difference does exist: neural networks with pre-training require considerably fewer training examples and less computational resources for almost the same results. For some fields this is a very important advantage.
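To make the comparison above concrete, here is a rough sketch of how the two routes look side by side in R with the darch package: the difference boils down to whether layer-wise RBM pre-training is run before fine-tuning. Again, the data are synthetic and the argument names assume darch ~0.12, so treat it as an illustration rather than a benchmark.

```r
# Sketch of the two training regimes described above (darch ~0.12 API assumed).
library(darch)

set.seed(1)
X <- matrix(rnorm(2000 * 15), ncol = 15)
y <- matrix(as.numeric(X[, 1] * X[, 2] > 0), ncol = 1)   # toy nonlinear target
layers <- c(15, 30, 30, 1)

# Direction 1: unsupervised SRBM pre-training, then supervised fine-tuning.
dnn_pre <- darch(x = X, y = y, layers = layers,
                 rbm.numEpochs   = 20,    # layer-wise pre-training
                 darch.numEpochs = 30)    # fine-tuning with backprop

# Direction 2: no pre-training, plain backprop from random initialization.
dnn_raw <- darch(x = X, y = y, layers = layers,
                 rbm.numEpochs   = 0,     # skip pre-training
                 darch.numEpochs = 100)   # usually needs more epochs / more data

# In-sample MSE of both, purely for illustration
# (a real comparison needs held-out data).
mean((predict(dnn_pre, newdata = X) - y)^2)
mean((predict(dnn_raw, newdata = X) - y)^2)
```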
Good luck
Lately I've been going back to the GARCH models I was familiar with before. What has surprised me greatly, after several years of fascination with machine learning, is the sheer number of publications on applying GARCH to financial time series, including currencies. (A sketch of a typical GARCH fit in R follows below.)
Do you have something similar for deep networks?
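For reference, a typical GARCH fit on currency returns in R looks roughly like the sketch below. The rugarch package and the GARCH(1,1) specification with an AR(1) mean and Student-t errors are my assumptions for illustration; the posts above do not name a particular package or model order.

```r
# Sketch: fitting a GARCH(1,1) to currency log-returns in R with the rugarch package.
# The package choice, model order and synthetic data are assumptions for illustration.
library(rugarch)

set.seed(7)
ret <- rnorm(1500, mean = 0, sd = 0.006)   # stand-in for FX log-returns

spec <- ugarchspec(
  variance.model     = list(model = "sGARCH", garchOrder = c(1, 1)),
  mean.model         = list(armaOrder = c(1, 0), include.mean = TRUE),
  distribution.model = "std"               # Student-t errors, common for FX
)

fit <- ugarchfit(spec = spec, data = ret)
show(fit)                                  # parameter estimates and diagnostics

fc <- ugarchforecast(fit, n.ahead = 10)    # 10-step-ahead forecast
sigma(fc)                                  # forecast conditional volatility
```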