What's all this for?
>> which n?
I'm sorry, gumgum, but I don't know why or what for. Either I'm dumb, or there's something you're not telling me.
What is the point of optimizing such a simple function?
To reduce the error smoothly, you have to choose a very low learning rate, but then training can take an unacceptably long time.
So I'm wondering what happens if you vary the learning rate as a function of progress during training.
It doesn't need to be optimized.
>> It doesn't need to be optimised.
The gradient is actually a useful thing, but it doesn't always solve the problem.
Are you talking about gradient descent or something? I don't know how it works, and I don't know what it would do here.
I don't know everything about all optimization methods, I only talk about what I'm good at. That's why I suggested running a ff, to compare your method with others and learn something new.
It doesn't get any clearer :-D. Do me a favour, describe it in other words from the beginning. Or just use more words.
I'll do an experiment today... I'll post it tomorrow!
>> It doesn't need to be optimised.
Your thinking is correct! That's exactly what is done in practice: as the epochs grow, the learning rate is reduced.
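The idea discussed above, shrinking the step size as training progresses, can be sketched in a few lines. This is a minimal illustration assuming an exponential decay schedule on a simple 1-D quadratic; the thread does not specify which schedule or objective the posters had in mind, so the function, rates, and decay factor here are hypothetical.

```python
import math

def gradient_descent(grad, x0, lr0=0.2, decay=0.05, epochs=100):
    """Gradient descent with an exponentially decaying learning rate.

    grad   -- gradient of the objective (callable)
    x0     -- starting point
    lr0    -- initial learning rate (illustrative value)
    decay  -- per-epoch exponential decay factor (illustrative value)
    """
    x = x0
    for epoch in range(epochs):
        # Large steps early for fast progress, small steps later
        # so the error keeps decreasing smoothly near the minimum.
        lr = lr0 * math.exp(-decay * epoch)
        x = x - lr * grad(x)
    return x

# Example: minimize f(x) = (x - 3)^2, whose gradient is 2*(x - 3).
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```

With a fixed large learning rate the iterates can oscillate around the minimum; decaying the rate gives fast early progress while still settling close to the true minimizer (here, x = 3).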