Market etiquette or good manners in a minefield - page 60

 

In the literature, estimates of this parameter's optimal value range from 1/10 to 1/30, depending on the topology of the error hypersurface on which the neural network (NN) searches for the minimum.
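For context, here is a minimal sketch of why that fraction matters, assuming the value being discussed is a gradient-descent learning rate (my reading of the post, not stated explicitly). The surface, numbers, and function names are all invented for illustration:

```python
# Gradient-descent sketch on a 1-D quadratic error surface E(w) = (w - 3)^2.
# Compares the two learning-rate fractions mentioned in the post (1/10 and 1/30).
# Everything here is illustrative, not the poster's actual network.

def descend(lr, steps=100, w=0.0):
    """Run `steps` gradient steps with learning rate `lr`, return final w."""
    for _ in range(steps):
        grad = 2.0 * (w - 3.0)   # dE/dw for E(w) = (w - 3)^2
        w -= lr * grad           # step scaled by the learning-rate fraction
    return w

w_conservative = descend(lr=1.0 / 30)  # smaller steps: smooth, slower convergence
w_aggressive = descend(lr=1.0 / 10)    # larger steps: faster here, but riskier
# Both approach the minimum at w = 3; on a rugged real error surface the
# larger rate can overshoot, which is why the optimum depends on topology.
```

On a well-behaved quadratic both rates converge; the quoted 1/10–1/30 band is about real, rugged error surfaces where a large step bounces across narrow valleys.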

grasn wrote >>

Eh, Seryoga, you can't show off like that - and on level ground, too.

Agreed, it's no fun showing off on bumpy terrain - you're cool as it is. And not showing off over nothing is natural - that's how we were brought up. But to "show off, and on flat ground" - that's not trivial. >> cool!

 
grasn >> :

It's obvious and understandable that trading levels are set close to the current price - I've written about that. But I wouldn't say "no way" unequivocally; I'd have to look at the movement statistics (I just don't have time right now, and it requires a careful approach). Recall winwin2007: as I remember, he was trading 1-2 points including the spread, and earned about 20 points or so before they adjusted his filter. My improvised approach seems robust to such "filtering", but who knows - one has to look. Maybe deals of 1-3 pips aren't such a "total loss" after all. And maybe the statistics will show steady profit only at 100% guessing.


PS: How do I put it... it was just an idea, based on an equally simple realization that a strategy can be built on "guessing the direction". It was "new" to me, in a way. But this "colour-guessing" approach will always have a "dark side", as will any strategy without exception, for the simple reason of extremely limited information. ... I guess I was being a bit florid, but whatever.

OK, let's say I failed - that means absolutely nothing; let someone else try it.

 

What I don't understand is this:

If K=2 and speed = 18, then how do I manage to train it like this:


in just 50 epochs (24 inputs).

 

It's a cinch!

Increase the number of experiments by a factor of 2 and the result drops by a factor of 2 - which indicates that you are exploiting a "convenient" section of the time series.

 
Could you recommend some parameter limits? Curve-fitting, of course, is not a good thing...
 
paralocus wrote >>
Could you recommend some parameter limits? >> Curve-fitting, of course, is not a good thing...

Ah, here we go again with the fitting issue :)

 
YDzh >> :

Ah, here we go again with the fitting issue :)

Yeah, about how to avoid it...

 

Simple! The number of experiments must be large. Ideally you can work with n > 1000; at that point one can already speak of the statistical validity of the result.
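The "n > 1000" rule of thumb has a concrete basis: the standard error of an estimated mean shrinks as 1/sqrt(n), so a thousand independent experiments pin the mean down to a few percent of the per-experiment spread. A generic statistics sketch, not tied to any particular strategy:

```python
import math

def standard_error(sigma, n):
    """Standard error of the mean for n independent experiments
    with per-experiment standard deviation sigma."""
    return sigma / math.sqrt(n)

# With sigma = 1, going from 10 to 1000 experiments shrinks the
# uncertainty of the estimated mean by a factor of 10.
se_10 = standard_error(1.0, 10)
se_1000 = standard_error(1.0, 1000)
```

This is why a result that looks strong on a handful of runs can be pure noise, while the same edge measured over n > 1000 runs is statistically meaningful.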

 
Yes, that's a possibility. With the two, of course, it will take a little longer, but that's okay, we'll wait.
 

paralocus, let's move on to statistical analysis of the learning process - without it, it is impossible to tune the network parameters correctly. Look at how the training error (red) and the prediction error (blue) behave as a function of the epoch:

You see, by the end of training the network has turned into a "crammer" and, as a consequence, has lost the ability to think and to generalize the acquired knowledge.
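The "crammer" pattern in the plot is the standard overfitting signature, and the usual remedy is early stopping: halt training at the epoch where the prediction (validation) error bottoms out, before it turns back up. A minimal sketch with synthetic error curves (not the actual network from the post):

```python
# Early-stopping sketch: training error keeps falling while validation error
# turns back up past some epoch -- stop there. Curves below are synthetic.

def best_epoch(valid_err):
    """Epoch with the lowest validation error; training error is ignored
    since it decreases monotonically for a network that 'crams'."""
    return min(range(len(valid_err)), key=lambda e: valid_err[e])

train_err = [0.9, 0.6, 0.4, 0.25, 0.15, 0.08, 0.04]  # keeps improving (cramming)
valid_err = [0.95, 0.7, 0.5, 0.45, 0.5, 0.6, 0.75]   # bottoms out, then worsens

stop = best_epoch(valid_err)  # epoch where generalization was best
```

Past that epoch the network is memorizing the training sample rather than learning the underlying dependence, which is exactly what the diverging red and blue curves show.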

By the way, in real life, excellent and competent specialists in various fields often grow out of former "C-students".

Do you see?