Hybrid neural networks.

 
gumgum >> :


What's all this for?

 
joo wrote >>

>> which n?

 

I'm sorry, gumgum, but I don't know why or what for. Either I'm dumb, or there's something you're not telling me.

What is the point of optimizing such a simple function?

 

In order to reduce the error smoothly, you have to choose a very low learning rate, but then training can take an unacceptably long time.

So I've been thinking: what if the learning rate is changed as a function of training progress?
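
A minimal sketch of what "changing the learning rate as a function of training progress" could look like: plain gradient descent on a one-dimensional quadratic error, with a time-based decay schedule lr(t) = lr0 / (1 + k*t). The error function, the initial rate and the decay constant here are illustrative assumptions, not values from this thread.

# Sketch (illustrative assumptions): gradient descent with a time-based
# learning-rate schedule on the toy error E(w) = (w - 3)^2.

def grad(w):
    # Gradient of the illustrative error E(w) = (w - 3)^2
    return 2.0 * (w - 3.0)

def train(w0=0.0, lr0=0.5, k=0.1, steps=100):
    w = w0
    for t in range(steps):
        lr = lr0 / (1.0 + k * t)   # step size shrinks as training proceeds
        w -= lr * grad(w)
    return w

print(train())   # ends close to the minimum at w = 3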

 
joo wrote >>

I'm sorry, gumgum, but I don't know why or what for. Either I'm dumb, or there's something you're not telling me.

What is the point of optimizing such a simple function?

It doesn't need to be optimized.

 
Joo, thanks for the link, very interesting!
 
gumgum >> :

>> It doesn't need to be optimised.

The gradient is certainly a useful thing, but it doesn't always solve the problem.
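
To illustrate why following the gradient "doesn't always solve the problem": a toy sketch (the function is my own illustrative assumption, not from the thread) where gradient descent started on one side settles in a shallow local minimum and never reaches the deeper one.

# Sketch: gradient descent trapped in a local minimum.
# Illustrative function (assumption): f(w) = w^4 - 3*w^2 + w,
# with a shallow minimum near w ≈ 1.1 and a deeper one near w ≈ -1.3.

def f(w):
    return w**4 - 3*w**2 + w

def df(w):
    return 4*w**3 - 6*w + 1

def descend(w, lr=0.01, steps=1000):
    for _ in range(steps):
        w -= lr * df(w)
    return w

for w0 in (2.0, -2.0):
    w = descend(w0)
    print(f"start {w0:+.1f} -> w = {w:+.3f}, f(w) = {f(w):+.3f}")
# Started at +2.0 it stops in the shallow minimum; started at -2.0 it finds
# the deeper one. The gradient alone never tells you which basin you are in.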

 
gumgum >> :

Are you talking about gradient descent or something like it? I don't know how that would work here, or what it would give.

I don't know all optimization methods; I only speak about what I know well. That's why I suggested running a ff, to compare your method with others and learn something new.

 
IlyaA wrote >>

It doesn't get any clearer :-D. Do me a favour, describe it in other words from the beginning. Or just use more words.

I'll do an experiment today... I'll post it tomorrow!

 
gumgum >> :

>> It doesn't need to be optimised.

Your thinking is correct! That's exactly what is done: as the epochs grow, the learning rate is reduced.
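
A short sketch of the "reduce the learning rate as the epochs grow" idea: step decay, where the rate is dropped by a fixed factor every few epochs. The initial rate, the drop factor and the interval are illustrative assumptions, not values from this thread.

# Sketch (illustrative assumptions): epoch-based step decay of the learning rate,
# starting at lr0 = 0.1 and halving it every 10 epochs.

def step_decay(epoch, lr0=0.1, drop=0.5, epochs_per_drop=10):
    # Learning rate used during the given epoch under step decay
    return lr0 * (drop ** (epoch // epochs_per_drop))

for epoch in (0, 9, 10, 25, 50):
    print(epoch, step_decay(epoch))
# 0 -> 0.1, 9 -> 0.1, 10 -> 0.05, 25 -> 0.025, 50 -> 0.003125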