Machine learning in trading: theory, models, practice and algo-trading - page 631
Do you have three neurons at the input of the second layer, processed by a sigmoid? How do you select the weights on the second layer? Is the range from -1 to 1 in steps of 0.1, for example?
In my network the number of deals dropped after I added the second layer, and the result did not improve much. Unlike when I simply fitted a perceptron with 9 inputs and one output neuron, then took another independent perceptron and fitted it with the saved settings of the first one, and so on.
The 4th neuron processes the results of the first three, i.e. 3 more weights.
Yes, from -1 to 1 in increments of 0.1, but not a sigmoid, a tangent (tanh).
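A minimal sketch of the scheme being discussed (the data and the hit-rate fitness here are illustrative stand-ins, not from the thread): a 4th neuron applies tanh to the weighted sum of the three first-layer outputs, and its 3 extra weights are brute-forced over the -1..1 grid in steps of 0.1, the way a tester optimizer would step through them.

```python
import itertools
import numpy as np

def fourth_neuron(layer1_out, w):
    """4th neuron: tanh of the weighted sum of the three first-layer outputs."""
    return np.tanh(np.dot(w, layer1_out))

# Stand-in data: three first-layer outputs and a toy buy/sell target per bar.
rng = np.random.default_rng(0)
layer1_out = rng.uniform(-1, 1, size=(3, 200))   # 3 outputs x 200 bars
target = np.sign(rng.uniform(-1, 1, size=200))   # toy target signal

grid = np.round(np.arange(-1.0, 1.0001, 0.1), 1)  # 21 values per weight

# 21**3 = 9261 combinations: still feasible to enumerate exhaustively,
# which is why "+3 more weights" is tolerable while 9 extra weights is not.
best_score, best_w = -np.inf, None
for w in itertools.product(grid, repeat=3):
    signal = np.sign(fourth_neuron(layer1_out, np.array(w)))
    score = np.mean(signal == target)            # toy fitness: hit rate
    if score > best_score:
        best_score, best_w = score, w

print(f"best weights {best_w}, hit rate {best_score:.2f}")
```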
I've just tried adding an intermediate layer with the same weights as the first input layer: the number of trades fell and the quality fell too, and optimizing 9 extra weights is already too much :)
Your version sounds good... I was thinking about training a conventional NN on the results of the optimization... I'll have to try it. But I'm getting bored with this approach.
My impression is that I should make an indicator out of the first layer and visually see what weights to apply on the second layer. Or process it with a sigmoid (then you get values from roughly 0.2 to 0.9), in which case small weights are enough and a large range is not needed.
Plus, an additional weight with no input attached is simply the bias weight, as Dr. Trader told me. The bias improves the results somewhat: for example, the profitability was 1.7, and with the bias it became 1.8. https://www.mql5.com/ru/forum/86386/page505#comment_5856699
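To illustrate the two points above (all numbers are made up): a sigmoid squashes typical first-layer outputs into roughly the 0.2-0.9 band, so small second-layer weights suffice, and the bias is just one extra weight tied to a constant input of 1.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

first_layer_out = np.array([-1.5, -0.5, 2.0])     # typical raw outputs
squashed = sigmoid(first_layer_out)
print(squashed)  # ~[0.18, 0.38, 0.88] -- already a narrow range

w = np.array([0.3, -0.2, 0.4])   # small second-layer weights are enough
bias = 0.1                       # extra weight, its "input" is constant 1
second_layer_in = np.dot(w, squashed) + bias * 1.0
print(np.tanh(second_layer_in))  # second-layer neuron output
```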
It's hard to do anything purely off the cuff :) but the main problem for now remains: overfitting.
Well, overfitting 5-10 neurons is hardly a problem). That seems to be about what you have.
I had an interesting example on my computer. You take a fragment of speech, generate noise and superimpose it on that speech. Then a simple MLP is trained, and you hear almost clean speech again.
I was really stunned by it, although a similar noise canceller is described in Haykin as an example application.
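A small self-contained sketch of that experiment, with a synthetic tone standing in for the speech fragment and scikit-learn's MLPRegressor standing in for the MLP; the sliding-window setup is an assumption, not taken from the thread.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic "speech": an amplitude-modulated tone, plus additive noise.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 4000)
clean = np.sin(2 * np.pi * 220 * t) * np.sin(2 * np.pi * 3 * t)
noisy = clean + 0.5 * rng.standard_normal(t.size)

# Training pairs: a sliding window of the noisy signal -> clean centre sample.
win = 21
X = np.lib.stride_tricks.sliding_window_view(noisy, win)
y = clean[win // 2 : win // 2 + X.shape[0]]

mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
mlp.fit(X, y)
denoised = mlp.predict(X)

print("noisy RMSE:", np.sqrt(np.mean((noisy[win//2:win//2+len(y)] - y) ** 2)))
print("MLP   RMSE:", np.sqrt(np.mean((denoised - y) ** 2)))
```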
Apparently I missed some trash talk here; pity I didn't have time to take part.
The moderators deleted my post where I replied to Mr. Terentyev about time series tests and said that the local article writers are just amateurs, because they don't understand that with 70-80% accuracy the Sharpe ratio would be over 20, while they report some nonsense.
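A quick back-of-envelope check of that claim. For a strategy that wins or loses a fixed amount per trade, the per-trade Sharpe depends only on the hit rate p, namely (2p - 1) / (2 * sqrt(p * (1 - p))), and the annualized figure scales with the square root of the number of trades per year. At 70-80% accuracy the ratio does exceed 20 at intraday trade frequencies, though it is closer to 7-12 at one trade per day; the trade frequencies below are assumptions for illustration.

```python
import numpy as np

def annualized_sharpe(p, trades_per_year):
    """Sharpe for +/-r trades with win probability p (r cancels out)."""
    per_trade = (2 * p - 1) / (2 * np.sqrt(p * (1 - p)))
    return per_trade * np.sqrt(trades_per_year)

for p in (0.70, 0.75, 0.80):
    daily = annualized_sharpe(p, 252)        # one trade per day
    hourly = annualized_sharpe(p, 252 * 7)   # ~hourly trading
    print(f"p={p:.2f}: daily {daily:.1f}, hourly {hourly:.1f}")
```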
ah, ok )
I'm interested in reinforcement learning, so I found an interesting article; I'm trying to adapt it and add it to the bot:
https://hackernoon.com/the-self-learning-quant-d3329fcc9915