Statistics as a way of looking into the future! - page 10

 
Prival >> :

Look at the picture just above your post: from my point of view the red curve has very good properties. It is smooth (I can vary this) and lags less (I can vary this too) than the price indicators I know of.

At the bottom is an oscillator based on an estimate and forecast.

I can't say anything good about the red curve in particular, because I don't see any particular use for such curves: reducing the lag to nearly usable values in all such curves leads to a sharp deterioration in smoothness and an increase in overshoot. Such a curve would be valuable only if its values could be accurately predicted 30-50 steps ahead.

I cannot say anything about the oscillator, because it is not clear what values are displayed there.

 
bstone wrote >>
Hmm, interesting. And what method is used to estimate the results relative to 'random inputs'?

In other words, how exactly is the 30-50% counted, or is that not the question?

 
In practice, the usual approach is to calculate the percentage of correct entries. Why shift it relative to "random", and how is this done? Unless it is a simple subtraction of 50%, of course.
 

Of course, a simple subtraction.

My NS predicts the sign of the price increment one step ahead. Create a vector of length n from the signs of the price increments and another vector from the predicted signs of those increments. Then count the number of correct sign guesses for the given NS and subtract n/2 from that sum (this corresponds to the 50/50 case). Multiply the resulting difference by 200 and divide by n.

That's all.

And I need this value to estimate the TS's (trading system's) profitability. For that it is enough to multiply the obtained percentage by the instrument's volatility, which gives the average statistical return per trade.
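The scoring and return estimate described above can be sketched in Python. This is a minimal illustration of the thread's formula; the function names and the use of NumPy are my own, not from the original posts:

```python
import numpy as np

def prediction_score(actual, predicted):
    """Score of sign predictions: 0 for random guessing, 100 for perfect.

    Implements the formula from the thread: count correct sign
    guesses p, subtract n/2 (the 50/50 case), then multiply by 200
    and divide by n.  Algebraically this equals 200*p/n - 100.
    """
    actual = np.sign(np.asarray(actual, dtype=float))
    predicted = np.sign(np.asarray(predicted, dtype=float))
    n = len(actual)
    p = np.sum(actual == predicted)   # number of correct sign guesses
    return (p - n / 2) * 200 / n

def avg_return_per_trade(score, volatility):
    """Rough average statistical return per trade, as described in the
    thread: the score (taken as a fraction) times the instrument's
    volatility."""
    return (score / 100.0) * volatility
```

For example, a score of 10 on an instrument with volatility 0.02 would suggest an average return of about 0.002 per trade, before costs.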

 

Aha, if I understood correctly, it should be multiplication by 100, not 200? Then we get:


(p - n/2)*100/n = (p/n - 0.5)*100 = 100*p/n - 50, where p is the number of correctly guessed signs

 
bstone >> :

Aha, if I understood correctly, it should be multiplication by 100, not 200? Then we get:


(p - n/2)*100/n = (p/n - 0.5)*100 = 100*p/n - 50, where p is the number of correctly guessed signs



No, exactly by 200, to get an interval from 0 to 100. With 100 you get a range of 0 to 50, given that the network is no worse than random :)
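A tiny numeric check of the two scalings, under Neutron's assumption that the network guesses no worse than chance (so p/n lies between 0.5 and 1):

```python
# p/n in [0.5, 1]: the network is no worse than random guessing
for factor in (100, 200):
    low = (0.5 - 0.5) * factor    # p/n = 0.5, pure chance
    high = (1.0 - 0.5) * factor   # p/n = 1.0, perfect prediction
    print(f"factor {factor}: {low:g}..{high:g}")
# factor 100: 0..50
# factor 200: 0..100
```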

 
Prival wrote >>

Here's a picture I like better :-) it lets you take a bite

For the occasion I took Bulashov's MEMA (red line) and built a one-step-ahead forecast for it (black). I did this for the Open series (green). It's "good" to see how the MEMA prediction, one step ahead, coolly leads the quotes and lets you bite and swallow in time.

However, on a representative sample (10,000 samples) the miracles disappear: the predictive properties of this moving average are nil or even worse (tan = -0.02). I want to emphasize that a picture, even a beautiful one, is not always able to objectively reflect reality, and it is useful to check the algorithm by an independent method.
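One such independent check can be sketched as a regression of the actual increments on the forecast increments. I am assuming here that the thread's "tan" is the slope of that fit (this is my reading, not stated explicitly in the post):

```python
import numpy as np

def forecast_slope(predicted, actual):
    """Least-squares slope of actual increments regressed on
    predicted increments.

    A slope near 1 indicates a genuinely predictive forecast; a slope
    near 0 or negative (like the thread's tan = -0.02) indicates no
    predictive power on a large sample.
    """
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    px = predicted - predicted.mean()   # centered forecasts
    ay = actual - actual.mean()         # centered real increments
    return np.dot(px, ay) / np.dot(px, px)
```

Run on the full 10,000-sample series rather than a hand-picked window, this kind of statistic is exactly the check that makes a "beautiful picture" honest.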

 
Neutron >> :

I want to emphasise that a picture, however beautiful, is not always capable of objectively reflecting reality, and it is useful to test an algorithm by an independent method.


Golden words.


P.S. The picture merely shows that MEMA lags badly and its prediction gives nothing.

 

And here's my model, visible to the naked eye:



Efficient market theory in action!

 
bstone wrote >>

Efficient market theory in action!

Same as mine! Just as effective :-)

By the way, bstone, if the data you cite relates to NS performance, then we can state severe overtraining. Indeed, on the training sample we see complete agreement between predictions and real increments, while on the test sample we see complete garbage! Ideally (optimal training), the NS shows identical ellipses on the training and test samples: fairly thick, and, most importantly, identical in slope and width.
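The "identical ellipses" criterion can be sketched as a comparison of the prediction-vs-increment scatter on the two samples. This is my own illustration of the idea, with an arbitrary tolerance, not a method from the thread:

```python
import numpy as np

def ellipse_params(pred, actual):
    """Slope and width (residual std) of the prediction-vs-increment
    scatter 'ellipse' for one sample."""
    pred = np.asarray(pred, dtype=float)
    actual = np.asarray(actual, dtype=float)
    slope, intercept = np.polyfit(pred, actual, 1)
    width = np.std(actual - (slope * pred + intercept))
    return slope, width

def overtrained(train_pair, test_pair, tol=0.2):
    """Flag overtraining when the train and test ellipses differ
    noticeably in slope or width (tol is an arbitrary threshold)."""
    s_train, w_train = ellipse_params(*train_pair)
    s_test, w_test = ellipse_params(*test_pair)
    return abs(s_train - s_test) > tol or abs(w_train - w_test) > tol
```

In the overfit case described above, the training pair would give a slope near 1 with a narrow ellipse, while the test pair would give a slope near 0, and the check would fire.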