Machine learning in trading: theory, models, practice and algo-trading - page 630
I won't insist, but it seems to me these are illusions, just going by general considerations.
Why? Vladimir Perervenko has some information on this in his articles; his networks train very fast on hundreds of inputs.
I haven't read the articles and won't argue; I've only seen the pictures. :)
An MLP, say, can be trained perfectly well in 10-15 minutes and will then work fine. But that is only true if the data is well classified and the sets are separable.
If, as you say, there are no separable sets in the market (or in your training samples), then you can train anything you like forever and get no results.
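The separability point can be shown with a quick sketch (my own illustration using scikit-learn, not code from the thread): the same small MLP nails two well-separated classes almost immediately, but never gets past coin-flip accuracy when the labels are random noise.

```python
# Illustration only (not from the thread): a small MLP trains quickly on
# separable classes but learns nothing useful when labels are pure noise.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Separable case: two Gaussian blobs far apart in 10 dimensions.
X_sep = np.vstack([rng.normal(-3, 1, (500, 10)), rng.normal(3, 1, (500, 10))])
y_sep = np.array([0] * 500 + [1] * 500)

# Non-separable case: labels are coin flips, independent of the features.
X_noise = rng.normal(0, 1, (1000, 10))
y_noise = rng.integers(0, 2, 1000)

scores = {}
for name, X, y in [("separable", X_sep, y_sep), ("noise", X_noise, y_noise)]:
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
    mlp = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500, random_state=0)
    mlp.fit(Xtr, ytr)
    scores[name] = mlp.score(Xte, yte)  # out-of-sample accuracy
    print(name, round(scores[name], 2))
```

On the separable blobs the out-of-sample accuracy is essentially 1.0; on the noise it hovers around 0.5 no matter how long you train, which is the whole point of the argument above.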
It all depends on the architecture and the amount of data.
Networks for image recognition take a week to train on a GPU, and those have dozens of layers with three-dimensional tensors.
In his articles he described simpler ones, for example a Boltzmann machine + MLP:
https://www.mql5.com/ru/articles/1103#2_2_2
Let's simply run an experiment for the sake of "scientific knowledge".
We agree on the data, the dimensions, the MLP architecture, and the target output.
Then everyone runs the tests with their own tools.
There will be less flaming.
By the way, we could make it a tradition and test every new architecture together. =)
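A shared experiment like the one proposed would need a fixed spec so that everyone tests the same thing. This is a hypothetical sketch; every field and value in it is an illustrative placeholder of mine, nothing here was agreed in the thread.

```python
# Hypothetical spec for the proposed shared MLP experiment; all values are
# illustrative placeholders, not something agreed on in the thread.
EXPERIMENT = {
    "data": "EURUSD M15, last 3 months, OHLC",  # placeholder instrument/period
    "inputs": 10,                               # number of features
    "hidden_layers": (20, 10),                  # MLP architecture
    "output": "binary: next-bar direction",     # target definition
    "train_test_split": 0.75,
    "random_seed": 42,                          # so runs are comparable
}

def describe(spec):
    """Render the spec as one line, so results can be posted alongside it."""
    return ", ".join(f"{k}={v}" for k, v in spec.items())

print(describe(EXPERIMENT))
```

The point of pinning down the seed and the split is that differences between posted results then come from the tools, not from everyone silently testing different data.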
I am sharing the first results of my NS. The architecture is as described earlier; I didn't change anything.
The plateau is quite flat: the NS had already learned well by 1000 passes, and the results did not improve much after that.
I trained on the last month of 15-minute data and spent ~$0.65 on training. My monthly number of deals is ~300.
My results over the last 2 months are not bad, but not great either.
I will try adding one more hidden layer and look for errors again :), and then I will try training on a longer period.
Maxim Dmitrievsky:
Why? Vladimir Perervenko has information in his articles; his networks train very fast on hundreds of inputs.
All the articles include the data sets and scripts, so you can reproduce them and get real figures for the training time on your own hardware. Training a DNN with two hidden layers takes up to 1 minute.
Good luck
Do you have three neurons at the input of the second layer, processed by a sigmoid? How do you select the weights for the second layer, and over what range, say from -1 to 1 in steps of 0.1?
In my network the number of deals dropped after the second layer was added, and the result did not improve much. Unlike fitting a perceptron with 9 inputs and one output neuron, then taking another independent perceptron and fitting it again on top of the saved settings of the first one, and so on.
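The brute-force weight grid described in the question can be sketched like this (the "hidden" outputs and target below are synthetic stand-ins of mine, not the poster's actual network): with 3 weights and 21 grid values there are only 21^3 = 9261 combinations, so exhaustive search is cheap.

```python
# Hypothetical sketch of the described grid search: second-layer weights
# enumerated from -1 to 1 in steps of 0.1. The "hidden" outputs and target
# are synthetic stand-ins, not data from the thread.
import itertools
import numpy as np

rng = np.random.default_rng(1)

# Stand-in outputs of three first-layer sigmoid neurons on 200 samples.
hidden = 1.0 / (1.0 + np.exp(-rng.normal(0.0, 1.0, (200, 3))))

# Stand-in target generated by one weight vector that lies on the grid.
true_w = np.array([0.7, -0.3, 0.5])
s_true = hidden @ true_w
y = (s_true > s_true.mean()).astype(int)

grid = np.round(np.linspace(-1.0, 1.0, 21), 1)  # -1.0, -0.9, ..., 1.0
best_acc, best_w = -1.0, None
for w in itertools.product(grid, repeat=3):     # 21**3 = 9261 combinations
    s = hidden @ np.array(w)
    pred = (s > s.mean()).astype(int)           # threshold at the mean output
    acc = float((pred == y).mean())
    if acc > best_acc:
        best_acc, best_w = acc, w

print(best_acc, best_w)
```

This works for three weights, but the combinations grow as 21^n, so beyond a handful of second-layer neurons gradient-based training becomes the only practical option.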