Machine learning in trading: theory, models, practice and algo-trading - page 1881
The problem is that my data-preparation scheme seems to have exhausted itself: I can't get above 65-66% correct answers, and I need more. I'm looking for a way to break through this wall.
What's your target?
65% correct answers is the level of a good indicator, and that is what the network demonstrates now. If it reaches 70% or higher, then we can try opening positions.
I understand, but what is the target? What are you predicting with the nets? A reversal? A trend? If a trend, how have you defined a trend?
The task, for example, is to distinguish a cat from a dog by photo. What is the right training setup?
1. Show pictures of cats and dogs only, i.e. binary classification.
2. Train separately on cats vs. "not cats" (just chaos) + separately on dogs vs. "not dogs", i.e. two training cycles and two models at the output.
3. Do a three-way classification: cats, dogs, and chaos. I.e. one model, but the answer is one of three options.
Right now I have the first option, and it's clearly flawed. The problem is that the network learns only one of the classes well: conditionally, it sees only "cats" well and recognizes dogs poorly. For example, on backtests the models detect upward price movement well and ignore downward movement. If the upward guess rate reaches 67%, the same model guesses downward only 55%. Which class is "up" and which is "down" can swap from model to model.
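The up/down asymmetry described above can be made visible with per-class recall from a confusion matrix, rather than a single overall accuracy number. A minimal numpy sketch; the labels and predictions here are made up for illustration:

```python
import numpy as np

def per_class_recall(y_true, y_pred, n_classes=2):
    """Confusion matrix (rows = true class) and recall per class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    recall = cm.diagonal() / cm.sum(axis=1)
    return cm, recall

# toy labels: 0 = "up", 1 = "down" (illustrative only)
y_true = np.array([0, 0, 0, 1, 1, 1, 0, 1])
y_pred = np.array([0, 0, 1, 1, 0, 0, 0, 1])
cm, recall = per_class_recall(y_true, y_pred)
print(recall)  # -> [0.75 0.5 ], i.e. "up" recognized better than "down"
```

If the two recall values differ sharply while overall accuracy looks fine, the model has the one-sided behavior described above.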
The problem here is not the classification variants but imbalance in the training examples: imbalance either in the number of examples or in their characteristic properties.
A convolutional network ?
There are two sets of points, I don't remember what they are called: any photo recognition identifies the points of the eyes, nose, mouth, ears, cheek area, and the distances and positions between them. It's that simple. So just showing a cat isn't enough: first you have to train it to recognize a cat as a cat and a dog as a dog, and only then to distinguish them.
And yes, not two training cycles but more, if there are more than two classes.
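For option 3 (one model, three classes) the usual output layer is a 3-way softmax rather than a single sigmoid, so the model must commit probability mass across cats, dogs, and chaos simultaneously. A minimal numpy sketch with invented logits:

```python
import numpy as np

def softmax(z):
    """Softmax over the last axis, shifted for numerical stability."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# toy logits for one sample over three classes: cat, dog, chaos (illustrative)
logits = np.array([2.0, 1.0, 0.1])
p = softmax(logits)
print(p, p.sum())  # probabilities sum to 1; argmax picks the class
```

Because the three probabilities sum to 1, "neither cat nor dog" gets its own explicit output instead of being squeezed into a binary decision.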
Unbalanced training examples may very well be the cause, but I think I also have to fiddle with the output activation function. The answer lands in the wrong bucket, and there are a lot of buckets. I need to master TensorBoard for visualization, but it's such a pain...
In short, I don't have enough knowledge.
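Separately from the activation-function question, the "70% and higher" rule mentioned earlier can be applied as a confidence gate on a sigmoid output: trade only when the model is confident in either direction, otherwise stay flat. A hypothetical sketch; the threshold and labels are assumptions, not the poster's actual rule:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def decide(p, threshold=0.7):
    """Act only when the predicted probability is confident either way."""
    if p >= threshold:
        return "up"
    if p <= 1.0 - threshold:
        return "down"
    return "no trade"

print(decide(sigmoid(1.5)))  # ~0.82 -> "up"
print(decide(sigmoid(0.2)))  # ~0.55 -> "no trade"
```

The gate does not fix a biased model, but it keeps the weak 55% direction from being traded at the same confidence as the strong 67% one.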
No, not convolutional. I'm not feeding it real pictures ))