If I knew what kind of example I wanted, I wouldn't have asked. Something simple in Mathcad, preferably with explanations of what epochs are and so on. I don't understand many of the terms, so the meaning of what you are doing often slips away from me.
I once saw an example in a textbook where a network is trained on a sine wave, something like that. If it's not too much trouble.
I'll post it now. I'll add comments so it's clear where everything comes from.
Done. Check it out in your inbox.
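Not the file that went to the inbox, of course, but a minimal sketch of the kind of example being discussed: a tiny net fitted to a sine wave, with the epoch loop spelled out in the comments. Python instead of Mathcad, and every size and rate here is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

# Training sample: 64 points of a sine wave on [-pi, pi].
X = np.linspace(-np.pi, np.pi, 64).reshape(-1, 1)
d = np.sin(X)                      # targets the net must learn to reproduce

# One hidden layer of 8 tanh neurons, one linear output.
W1 = rng.normal(0.0, 0.5, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05                          # learning rate

# An EPOCH is one full pass over the entire training sample:
# the net sees all 64 points, the error is measured, the weights
# are corrected, and then the pass is repeated from the start.
for epoch in range(2001):
    h = np.tanh(X @ W1 + b1)       # hidden-layer outputs
    y = h @ W2 + b2                # network output
    e = y - d                      # error on every training point

    # Backpropagation: push the error back through the layers.
    gW2 = h.T @ e / len(X);  gb2 = e.mean(axis=0)
    eh  = (e @ W2.T) * (1.0 - h**2)
    gW1 = X.T @ eh / len(X); gb1 = eh.mean(axis=0)

    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

    if epoch % 500 == 0:           # watch the mean squared error fall
        print(epoch, float((e**2).mean()))
```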
to Neutron
I'm sitting in the middle of the two-layer net. I've been doing some digging along the way...
I think I know why my single-layer behaves the way it does, that is, why it doesn't work like yours.
Take a look at this:
Now I don't even know whether to consider it a mistake (for a single layer) :) What do you think?
paralocus wrote >>
Now I don't even know whether to consider it a mistake (for a single layer) :) What do you think?
Throw the damned indices out!
You could compensate for this error manually, but that's just too sloppy. Better to do it properly from the start.
One more thing. Here's an expression for correctly deriving the learning error:
In other words, you first take the sum of squared errors over the whole training sample within one epoch and divide it by the square of the training-vector scatter (a normalization). This is done to avoid being tied to the number of epochs or to a specific architecture, which makes it easier to compare NS training results. It turns out that if the obtained value is < 1, the network has learned; if not, the best prediction is to throw it in the trash and go to sleep.
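The expression itself was posted as an image and isn't reproduced here. On one plausible reading of the description above (scatter taken as the sum of squared deviations of the targets from their mean), the check could look like this; `normalized_error` is a name chosen for illustration:

```python
import numpy as np

def normalized_error(d, y):
    """Sum of squared errors over one epoch, divided by the square
    of the training-vector scatter. Values < 1 mean the net predicts
    the sample better than the trivial forecast-by-the-mean."""
    d = np.asarray(d, dtype=float)
    y = np.asarray(y, dtype=float)
    sse = np.sum((d - y) ** 2)             # epoch's sum of squared errors
    scatter = np.sum((d - d.mean()) ** 2)  # normalization term (assumed form)
    return sse / scatter
```

Read this way, the ratio is the one that appears in the definition of R-squared (R² = 1 − sse/scatter), which is exactly why it does not depend on the number of epochs or the architecture.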
But I don't understand why the indices have to be removed. I think the squared correction just isn't being summed correctly.
I mean, it has to be like this:
What did you mean by that?
to Neutron
Serega, explain the idea. You'll set your NS to forecast some value (Close, (H+L)/2, bar colour, ...) expected on the next bar (i.e. a forecast one bar ahead)? Did I get it right, or is it something else?
paralocus wrote >>
But I don't understand why the indices have to be removed. I think the squared correction just isn't being summed correctly.
Why do you need indices? You accumulate the correction itself (not its square, but the correction with its sign), so no indices are needed. Then normalize it by the square root of the sum of squares (again, no indices) and you get the desired correction value for the given epoch.
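A sketch of how that reads in code, for a single neuron. Only the verbal description is given above, so the activation, the learning rate, and the exact placement of the normalization are all assumptions:

```python
import numpy as np

def train_one_epoch(patterns, targets, w, lr=0.1):
    """One epoch per the description above: accumulate the SIGNED
    correction for each weight over the whole sample (no per-pattern
    indices kept), then normalize by the root of the sum of squares."""
    acc    = np.zeros_like(w)   # running signed correction, one cell per weight
    acc_sq = np.zeros_like(w)   # running sum of squared corrections
    for x, d in zip(patterns, targets):
        y = np.tanh(x @ w)
        delta = (d - y) * (1.0 - y**2)   # local error gradient
        corr  = delta * x                # this pattern's correction, per weight
        acc    += corr
        acc_sq += corr**2
    # The epoch's correction: the accumulated value, normalized at the root.
    return w + lr * acc / np.sqrt(acc_sq + 1e-12)
```

Called once per epoch, e.g. `w = train_one_epoch(X, d, w)`, so the weights move once per pass over the sample rather than once per pattern.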
to Neutron
Serega, explain the idea. You'll set your NS to forecast some value (Close, (H+L)/2, bar colour, ...) expected on the next bar (i.e. a forecast one bar ahead)? Did I get it right, or is it something else?
Yes, I only forecast one step ahead and then retrain the net. I predict the direction of the expected movement, not its magnitude or duration.
But the correction I accumulate for each weight individually, i.e. it will be different for the different weights entering the neuron (I think that's how you explained it; let me check).
This is what it looks like:
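Whatever was attached to that post is not reproduced here. As a stand-in, a rough sketch of the scheme described a few posts up: forecast only the sign of the next increment, score the hit, then slide one step forward and retrain. The window size, input construction, and retraining loop are all illustrative assumptions, not what paralocus actually posted:

```python
import numpy as np

def direction_hit_rate(prices, n_in=5, window=100, epochs=30, lr=0.1):
    """Walk-forward test: at every step, train a single tanh neuron on
    the last `window` patterns, predict the DIRECTION of the next move
    (not its size), then slide one step forward and retrain."""
    dp = np.diff(np.asarray(prices, dtype=float))
    x = np.tanh(dp / (dp.std() + 1e-12))              # squashed increments
    X = np.array([x[i:i + n_in] for i in range(len(x) - n_in)])
    d = np.sign(x[n_in:])                             # target: next move's sign
    rng = np.random.default_rng(0)
    hits = total = 0
    for t in range(window, len(X)):
        w = rng.normal(0.0, 0.1, n_in)                # fresh net at every step
        for _ in range(epochs):                       # plain per-pattern updates
            for xi, di in zip(X[t - window:t], d[t - window:t]):
                y = np.tanh(xi @ w)
                w += lr * (di - y) * (1.0 - y**2) * xi
        hits += int(np.sign(np.tanh(X[t] @ w)) == d[t])
        total += 1
    return hits / total

# e.g. direction_hit_rate(np.cumsum(np.random.randn(400)))
```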
That's right!
I confused indexing over epochs with indexing over synapses. Your implementation is a little different, so I got myself muddled. I'm sorry!
Then what's the point of your question? What's wrong with it?