Machine learning in trading: theory, models, practice and algo-trading - page 1734
Well, yes, the cats in the picture are different, but the network recognizes them and distinguishes them from the dogs somehow...
Read at least something about the principles of pattern recognition, convolutional networks, how they work, etc. Your questions are very immature, and when you read them you'll understand their stupidity.
Your answers are not mature either. Once again: the destroyed object differs too much from the whole, so the entropy in its image cannot be overcome by enlarging the training sample. The sample can grow to infinity or get mixed up with other samples. That is obvious even to me.
Obvious but not very visible )) If you want to train a network to recognize a destroyed house, you train it on destroyed houses, not give it a whole house and wonder how it would represent a destroyed house... Obvious!!!
That's the same thing I said from the beginning.
mytarmailS: Who cares whether the house is destroyed or not, the network learns what it is taught
Actually, an algorithm for the destruction of a brick wall and its visualization was created long ago; the question is knowing the factors of destruction. If we know them, the house can be restored))))
Use your imagination for a second. How many ways can a house be destroyed? Infinitely many. That means you can teach the network to recognize one or several types of destroyed houses, but not all of them. If the form of destruction is not known in advance, what is the point of training the network and hoping that the collapsed houses it encounters will fit the training sample? Consequently, the network will operate with random, fluctuating success and unpredictable recognition rates.
I think a different approach is needed.
Entropy is a measure of the chaos present in any destroyed object. Restoring the whole from its chaotic state is a fight against entropy. But do we have a formula? We do. That formula is the intellect. It assembles a single image from the parts, bypassing chaos and disorder. It puts the parts into an equation and gets the whole object as the result.
Conclusion: intelligence uses the NN for recognition, but it does not depend linearly on the training sample. Through symbiosis with the intellect, the effectiveness of the NN grows many times over.
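As a numeric aside (my own toy sketch, nothing from the thread): the point about entropy in a destroyed object can be illustrated by comparing a structured signal with a shuffled copy of itself. The value histogram, and hence its entropy, is identical for both, so only a structure-sensitive statistic (here, differences between neighbouring samples) reveals the added chaos.

```python
import math
import random

def shannon_entropy(values, lo=-2.0, hi=2.0, bins=16):
    """Shannon entropy (bits) of a fixed-range histogram over `values`."""
    width = (hi - lo) / bins
    counts = [0] * bins
    for v in values:
        counts[min(int((v - lo) / width), bins - 1)] += 1
    total = len(values)
    return -sum(c / total * math.log2(c / total) for c in counts if c)

def diffs(xs):
    """Local structure: differences between neighbouring samples."""
    return [b - a for a, b in zip(xs, xs[1:])]

random.seed(0)
intact = [math.sin(0.05 * i) for i in range(2000)]   # smooth "whole" object
destroyed = intact[:]
random.shuffle(destroyed)                            # same values, order destroyed

e_intact = shannon_entropy(diffs(intact))            # low: neighbours are close
e_destroyed = shannon_entropy(diffs(destroyed))      # higher: structure is gone
print("entropy of differences, intact:   ", round(e_intact, 3))
print("entropy of differences, destroyed:", round(e_destroyed, 3))
```

The shuffled copy contains exactly the same values, so no amount of extra samples of the same kind changes its histogram; the chaos lives entirely in the lost ordering.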
Spectral analysis !!! :)
Sort of...))
Do it your way, but do an OOS (out-of-sample) test.
I want to see it.
Anyway, this method doesn't work) but it was fun...
Mine works better, which is sad )
Has the lag increased? Every minute may be too thin. I could start analyzing these indicators, but I will never manage it on my own; it would take me too long. I remember spending four hours in R just building a dataframe while I was still unfamiliar with it, and I have no one to advise me. Whatever I do will be full of mistakes. What is needed is a script in R that implements the entire algorithm above and tests it in real life. Even if the hardest TS parameter, such as "GUARANTEED", scores only 3 out of 5, it can already earn.
If I knew how to do it, I would have done it already. And I would predict at most 1 inflection and retrain on each bar.
I would divide this task into 3 parts: data preparation, decomposition into components, and prediction. I know how to do the first 2, but forecasting is a problem. I would like to use a NN, but it is a "hot" field, and besides, I have not worked with NNs for a long time.
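For what it is worth, the three stages above can be sketched in a few lines. This is only a toy outline in Python (the poster intended R; the function names and the DFT-based forecast are my own assumptions, not the thread's algorithm): preparation is just mean removal, decomposition keeps the dominant DFT harmonics, and prediction naively extrapolates them past the training window.

```python
import numpy as np

def prepare(prices):
    """Stage 1: detrend by removing the mean (stand-in for real preparation)."""
    prices = np.asarray(prices, dtype=float)
    return prices - prices.mean(), prices.mean()

def decompose(x, n_components=3):
    """Stage 2: keep the n strongest positive-frequency harmonics of x."""
    spec = np.fft.rfft(x)
    idx = np.argsort(np.abs(spec[1:]))[::-1][:n_components] + 1  # skip DC term
    return [(k, spec[k]) for k in idx], len(x)

def predict(components, n, horizon):
    """Stage 3: extrapolate the kept harmonics `horizon` steps past the data."""
    t = np.arange(n, n + horizon)
    out = np.zeros(horizon)
    for k, c in components:
        # inverse-rfft term for harmonic k, evaluated beyond the sample window
        out += (2.0 / n) * (c.real * np.cos(2 * np.pi * k * t / n)
                            - c.imag * np.sin(2 * np.pi * k * t / n))
    return out

# Toy usage: a synthetic two-frequency "price" series.
t = np.arange(256)
series = 100 + np.sin(2 * np.pi * 8 * t / 256) + 0.5 * np.sin(2 * np.pi * 3 * t / 256)
x, mean = prepare(series)
comps, n = decompose(x)
forecast = mean + predict(comps, n, horizon=16)
```

With real prices none of this holds so cleanly (the harmonics drift), so this only sketches the structure of the pipeline, not a working predictor.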
If I want to play with it quickly, I can use these indicators to make predictions; the parameters should be tuned in the optimizer.
I do not understand what these Lissajous figures show. The relationship between the two components of the decomposition? That is, there are two components in the decomposition that are shifted relative to each other, a conditional sin and cos? That is redundant; you could replace them with a single component with a different initial phase.
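The redundancy claim is easy to check numerically. Below is a small illustration (my own, not from the thread; the weights a and b are arbitrary) that a quarter-period-shifted pair of components is equivalent to a single component with a different amplitude and initial phase:

```python
import math

a, b = 1.3, -0.7           # arbitrary weights of the sin and cos components
R = math.hypot(a, b)       # amplitude of the single replacement component
phi = math.atan2(b, a)     # its initial phase

# a*sin(x) + b*cos(x) == R*sin(x + phi), since
# R*sin(x + phi) = R*cos(phi)*sin(x) + R*sin(phi)*cos(x) = a*sin(x) + b*cos(x)
for i in range(100):
    x = 0.17 * i
    pair = a * math.sin(x) + b * math.cos(x)
    single = R * math.sin(x + phi)
    assert abs(pair - single) < 1e-12
print("pair of shifted components == one component with amplitude R, phase phi")
```

So plotting the two components against each other only re-encodes that amplitude and phase; nothing is gained over a single phase-shifted component.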