Machine learning in trading: theory, models, practice and algo-trading - page 1526
Gradient descent is the method by which a neural network selects the weights/multipliers/biases of its neurons.
Regression and classification are tasks that both forests and neural networks can perform.
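To make that concrete, here is a minimal sketch (Python/NumPy, chosen only for illustration, nothing in the thread prescribes it) of gradient descent adjusting the weight and bias of a single linear neuron so as to minimize the mean squared error against a teacher signal:

```python
import numpy as np

# Toy data: a noisy linear relationship y = 2*x + 1 + noise (the "teacher")
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, 200)

# One linear neuron: prediction = w*x + b
w, b = 0.0, 0.0
lr = 0.1  # learning rate

for step in range(500):
    pred = w * x + b
    err = pred - y
    # Gradients of the mean squared error with respect to w and b
    grad_w = 2.0 * np.mean(err * x)
    grad_b = 2.0 * np.mean(err)
    # Gradient descent update: move against the gradient
    w -= lr * grad_w
    b -= lr * grad_b

print(f"w={w:.3f}, b={b:.3f}, mse={np.mean((w * x + b - y) ** 2):.5f}")
```

Each update moves the parameters against the gradient of the error, which is the "selection of weights" described above.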
Thanks, but that's not what I meant.
I was talking about the similarity of the final result of regression and of gradient descent.
Regression finds the middle point between the neighbours; gradient descent finds the closest point.
In essence, it seems to me that the two search algorithms end up with similar results.
The difference would be in the deviation error, so I wondered which one would produce the smaller error.
It seems to me that gradient descent will be more accurate than regression.
My point is that, for example, there is a teacher, and the output of the network needs to be a copy of the teacher with minimal error.
So I cannot decide which model to use with which algorithm.
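One rough way to look at the "which gives the smaller error" question is to fit the same linear model to the same data twice: once with closed-form least-squares regression and once with gradient descent. This is only a sketch under the assumption of a linear model and an MSE loss; in that setting both arrive at essentially the same minimum, so the final error does not distinguish them:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 300)
y = 1.5 * x - 0.5 + rng.normal(0, 0.2, 300)
X = np.column_stack([x, np.ones_like(x)])  # design matrix with intercept

# 1) Closed-form least-squares regression
coef_ls, *_ = np.linalg.lstsq(X, y, rcond=None)
mse_ls = np.mean((X @ coef_ls - y) ** 2)

# 2) Gradient descent on the same MSE loss
coef_gd = np.zeros(2)
lr = 0.1
for _ in range(2000):
    grad = 2.0 * X.T @ (X @ coef_gd - y) / len(y)
    coef_gd -= lr * grad
mse_gd = np.mean((X @ coef_gd - y) ** 2)

print("least squares:", coef_ls, "mse:", round(mse_ls, 5))
print("grad descent :", coef_gd, "mse:", round(mse_gd, 5))
```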
I didn't expect any other answer.
Regression and classification are the results of the black box; descent is what happens inside it. These things cannot be compared with each other. It is like the picture on a TV screen (the result) versus the working principle of a resistor inside that TV.
None of the algorithms make sense. The market is a random walk (SB), if you look purely at prices.
ML works when there is a pattern. No one in this thread, over several years, has found anything that earns steadily.
Except as a way to train your brain))).
You can only earn steadily by careful scalping and by strictly following the main trends - the previous, the current and the next.
Thanks for the clarification.
That's the point: I don't plan to look for patterns in their pure form with ML.
Rather, I'm trying to make a tool with whose help these patterns will be detected.
So I need to choose the right type of algorithm to copy the teacher to the output of the network with minimal error.
In this case the network is not looking for any regularities, it is just copying the teacher.
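A minimal sketch of that "copying the teacher" setup, assuming the teacher is simply a target series and the inputs are a few of its lags; the scikit-learn MLPRegressor used here is just an illustrative choice, not something proposed in the thread:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
teacher = np.sin(np.linspace(0, 20, 1000)) + rng.normal(0, 0.05, 1000)  # toy teacher signal

# Inputs: 5 previous values of the teacher; target: its next value
lags = 5
X = np.column_stack([teacher[i:len(teacher) - lags + i] for i in range(lags)])
y = teacher[lags:]

# Train the network to reproduce (copy) the teacher with minimal error
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
net.fit(X, y)

mse = np.mean((net.predict(X) - y) ** 2)
print(f"in-sample MSE against the teacher: {mse:.5f}")
```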
Let it be a random walk.
But a random walk, or rather a random process, also has regularities. We discussed them more than once in the "From Theory to Practice" thread: stationary variance as a consequence of the Einstein-Smoluchowski equation, a 66% probability of returning to the starting point in a two-dimensional walk, convergence of the sum of a large number of independent random variables to a Gaussian distribution... Yes, lots of things... There is a whole theory of random processes, and you can win on a random walk, no matter what anyone says.
So why isn't ML up to the task? A philosophical, conceptual question. I don't know the answer to it...
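For what it's worth, two of those regularities are easy to reproduce in a quick simulation (a NumPy sketch, not taken from the thread): the variance of a simple random walk grows roughly linearly with the number of steps, as a diffusion process should, and the endpoints of many independent walks are close to Gaussian:

```python
import numpy as np

rng = np.random.default_rng(3)
n_walks, n_steps = 20000, 400

# Simple random walks: cumulative sums of +/-1 steps
steps = rng.choice([-1.0, 1.0], size=(n_walks, n_steps))
walks = np.cumsum(steps, axis=1)

# Variance grows roughly linearly with time (diffusion-like behaviour)
for t in (100, 200, 400):
    print(f"t={t:4d}  var={walks[:, t - 1].var():8.1f}  (expected ~{t})")

# Endpoints of many independent walks are close to Gaussian (CLT)
z = walks[:, -1] / np.sqrt(n_steps)
print("endpoint mean ~0:", round(z.mean(), 3), " std ~1:", round(z.std(), 3))
```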
I have to finish this topic this fall, otherwise it's going to get boring. I spent a lot of time studying neural network add-ons, and then the theory of applying them, which no one has developed for financial markets. Data scientists somehow squeamishly avoid it.
I have a paid subscription to all sorts of academic papers and professors' research, but they mostly work with options. They consider it proven that there is nothing to catch on the spot market.