Neural network folks, don't pass by :) need advice - page 10

 
TheXpert:
That's just what I'm looking for - the fundamental component :)


The top chart shows the price and the trend component.

Is that what you need to predict? It seems to me that fundamental analysis is more suitable here than a neural network...

 
hrenfx:
Not a trick question - why this step?

Because the next step was volatility normalisation. And perhaps a move to a pseudo price series, one whose volatility doesn't jump around as much...
 
renegate:
Because the next step was volatility normalisation. And perhaps a move to a pseudo price series, one whose volatility doesn't jump around as much...
So can't the volatility normalisation be done on the original BP? I take it you only do it for the ppr?
 
TheXpert:
So can't the volatility normalisation be done on the original BP? I take it you only do it for the ppr?

I defined volatility as the average of the modulus of the RRP. If I divide the original price series by that volatility, I get a series that grows as the price rises and/or the volatility falls, and vice versa. I've tried building such series before, but their predictability rested only on the predictability of the volatility, not of the price...
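A minimal sketch of the scheme described here, as I read it. The function name, the use of NumPy, and the trailing window length are my assumptions - the post itself doesn't fix a window: volatility is taken as the trailing mean of the absolute price increments, and the price is then divided by that volatility.

```python
import numpy as np

def vol_normalise(prices, window=10):
    """Divide each price by the trailing mean of |price increments|.

    Illustrative sketch only: the window length and the trailing
    (causal) window are assumptions, not from the post.
    Output is aligned with prices[1:]; the first window-1 entries
    are NaN while the window fills.
    """
    prices = np.asarray(prices, dtype=float)
    abs_inc = np.abs(np.diff(prices))              # |first differences|
    csum = np.cumsum(np.insert(abs_inc, 0, 0.0))   # prefix sums
    vol = np.full(abs_inc.shape, np.nan)
    vol[window - 1:] = (csum[window:] - csum[:-window]) / window
    vol[vol == 0] = np.nan                         # guard flat stretches
    return prices[1:] / vol
```

As the post says, such a series rises when the price rises and/or the volatility falls, so its behaviour is driven by the volatility as much as by the price.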
 
renegate:

I defined volatility as the average of the modulus of the RRP.

Well, that's understandable - and really not the best way to do it.

If I divide the original price series by that volatility, I get a series that grows as the price rises and/or the volatility falls, and vice versa.

No, that's not the best way at all - scaling the candlesticks into relative units.
 
Belford:

You shouldn't take quotes from dealing centres (and from MetaQuotes either), because the lower timeframes, especially 1999-2005, are of very poor quality.

These quotes were smoothed, and not with a sliding window but over the entire history at once. In other words, a peek into the future is already embedded in the quotes themselves. Neural networks find it without any trouble.

Waaa.... I'd call you paranoid if it weren't true. There's something to it. The neural nets really do eat this history up with great appetite and high gains.

And where does this information come from? It doesn't matter, though. Better tell me where to get better history.
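The "peek into the future" claim above is easy to illustrate. A smoother applied "over the entire history at once" - here a centred moving average, used only as a stand-in, since the filter the dealing centre actually applied is unknown - reads future samples, while a sliding trailing window does not. A toy sketch, assuming NumPy:

```python
import numpy as np

def trailing_ma(x, w):
    """Causal smoother: each output uses only current and past samples."""
    c = np.cumsum(np.insert(np.asarray(x, dtype=float), 0, 0.0))
    out = np.full(len(x), np.nan)
    out[w - 1:] = (c[w:] - c[:-w]) / w
    return out

def centered_ma(x, w):
    """Non-causal smoother: the window is centred on t, so it reads
    future samples - a 'peek into the future' baked into the data.
    Stand-in example only; the DC's real filter is unknown."""
    half = w // 2
    out = np.full(len(x), np.nan)
    for t in range(half, len(x) - half):
        out[t] = np.mean(x[t - half:t + half + 1])
    return out

# A price step at t=10: the centred filter reacts *before* the step,
# the trailing filter does not.
x = np.zeros(20)
x[10:] = 1.0
```

With w=5, centered_ma(x, 5)[8] is already positive (it has seen the step at t=10), while trailing_ma(x, 5)[9] is still zero - exactly the kind of leakage a network can latch onto.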

 
TheXpert:

I would be very grateful for a better way of determining volatility and of normalising the BP by volatility!
 
renegate:
I would be very grateful for a better way of determining volatility and of normalising the BP by volatility!

OK, I'll do it as I go along anyway.

MetaDriver:

Waaa.... I'd call you paranoid if it weren't true... Better tell me where to get better history.

I'm planning to try this one.

 

MetaDriver:

Belford:

You shouldn't take quotes from dealing centres (and from MetaQuotes either), because the lower timeframes, especially 1999-2005, are of very poor quality.

These quotes were smoothed, and not with a sliding window but over the entire history at once. In other words, a peek into the future is already embedded in the quotes themselves. Neural networks find it without any trouble.

Waaa.... I'd call you paranoid if it weren't true. There's something to it. The neural nets really do eat this history up with great appetite and high gains.

And where does this information come from? It doesn't matter, though. Better tell me where to get better history.

Nah, I think it's nonsense, plain and simple, but it does need proof. Or at least examples of diverging quotes showing that the network learns well on some and badly on others.

And a "peek into the future" in the quotes would imply that history gets rewritten on every tick - well, that really is nonsense.

 

Are you seriously discussing the "quality" of quotes from 1999-2005? ;)))) And the extent of their impact on the NN? :)))

April 1st forever