Dependency statistics in quotes (information theory, correlation and other feature selection methods)

 
faa1947:

Here's the result.

A very strange graph. Trimmed. It looks like the calculations were done with limited accuracy.


Right - as I wrote, the series is quantized, meaning the returns have been rounded to 2 decimal places and become: 0.01; 0.02; 0.03 ... 1.2. Quantizing the series is necessary in order to compute the mutual information. That is, each quantum is a symbol of the alphabet.
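A minimal numpy sketch of that step - rounding the returns to 2 decimal places so each value is a symbol, then estimating the mutual information between a symbol and its value a few bars back from a joint frequency table. The synthetic returns and the base-2 logarithm are placeholders for illustration, not the data used in this thread:

```python
import numpy as np

def quantize_returns(returns, decimals=2):
    """Round returns to a fixed number of decimals so each distinct
    value becomes a symbol of a finite alphabet (0.01, 0.02, ...)."""
    return np.round(returns, decimals)

def mutual_information(x, y):
    """Mutual information (in bits) between two discrete symbol series,
    estimated from their joint frequency table."""
    xs, xi = np.unique(x, return_inverse=True)
    ys, yi = np.unique(y, return_inverse=True)
    joint = np.zeros((xs.size, ys.size))
    np.add.at(joint, (xi, yi), 1.0)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

# Example: MI between the quantized return and its value `lag` bars back.
rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.3, size=5000)      # stand-in for real returns
symbols = quantize_returns(returns)
for lag in (1, 2, 5):
    print(lag, mutual_information(symbols[lag:], symbols[:-lag]))
```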

Next I'll look at what you have computed.

 
HideYourRichess:
I see. Well, what can I say - I somehow trust a written dissertation more than guesswork. ;) At least with Pastukhov it is clear where the ideas come from and what they are.
Has anyone tried FARIMA (fractionally integrated series)?
 
faa1947:


Lag       AC      PAC    Q-Stat   Prob

  1   -0.059   -0.059    11.332   0.001
  2   -0.053   -0.057    20.704   0.000
  3    0.025    0.019    22.820   0.000
  4    0.005    0.005    22.908   0.000
  5   -0.062   -0.059    35.486   0.000
  6    0.007   -0.000    35.639   0.000
  7   -0.038   -0.045    40.475   0.000
  8    0.032    0.030    43.845   0.000
  9   -0.007   -0.008    44.004   0.000
 10    0.025    0.026    46.003   0.000
 11   -0.033   -0.032    49.674   0.000
 12    0.048    0.043    57.372   0.000
 13    0.002    0.006    57.382   0.000
 14   -0.032   -0.028    60.736   0.000
 15   -0.033   -0.033    64.288   0.000
 16    0.047    0.034    71.425   0.000
 17   -0.004    0.007    71.469   0.000
 18   -0.039   -0.037    76.462   0.000
 19   -0.004   -0.008    76.520   0.000
 20    0.017    0.004    77.426   0.000
 21   -0.046   -0.040    84.377   0.000
 22    0.020    0.013    85.636   0.000
 23    0.006    0.006    85.767   0.000
 24   -0.010   -0.010    86.089   0.000
 25   -0.001   -0.004    86.090   0.000
 26   -0.022   -0.028    87.663   0.000
 27    0.025    0.031    89.677   0.000
 28   -0.022   -0.028    91.250   0.000
 29    0.028    0.029    93.841   0.000
 30    0.009    0.011    94.135   0.000
 31    0.007    0.015    94.290   0.000
 32    0.004    0.001    94.350   0.000
 33   -0.007   -0.009    94.501   0.000
 34   -0.092   -0.085    122.33   0.000
 35    0.010   -0.006    122.66   0.000
 36    0.008    0.003    122.89   0.000

The last column is the p-value of the Q-statistic - the probability of seeing such autocorrelation purely by chance if the series were uncorrelated. It is zero.
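For reference, a correlogram of this kind (AC, PAC, Ljung-Box Q-statistic and its p-value) can be reproduced outside EViews with statsmodels. A minimal sketch, where the input file name is a placeholder for the quantized return series:

```python
import numpy as np
from statsmodels.tsa.stattools import acf, pacf

def correlogram(x, nlags=36):
    # acf with qstat=True also returns the Ljung-Box Q-statistics
    # and their p-values for lags 1..nlags.
    ac, qstat, pvalues = acf(x, nlags=nlags, qstat=True, fft=True)
    pac = pacf(x, nlags=nlags)
    print(" lag      AC     PAC    Q-Stat   Prob")
    for k in range(1, nlags + 1):
        print(f"{k:4d}  {ac[k]:6.3f}  {pac[k]:6.3f}  {qstat[k-1]:8.3f}  {pvalues[k-1]:5.3f}")

x = np.loadtxt("quantized_returns.txt")   # hypothetical input file
correlogram(x)
```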

This data is of no interest - there is a loss of precision. The analysis means nothing; it's just a number.

It's not a meaningless number. It's a result obtained from a discretized series. Try the Close_Returns series - it's not discretized. Let's see if we can compare the two.
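One way to run the comparison suggested here - the same statistics on the raw Close returns and on a 2-decimal-rounded copy - is sketched below. The price file and the use of simple differences as returns are assumptions:

```python
import numpy as np
from statsmodels.tsa.stattools import acf

close = np.loadtxt("close.txt")            # hypothetical closing-price file
raw = np.diff(close)                       # Close_Returns, not discretized
quantized = np.round(raw, 2)               # the discretized version

for name, series in (("raw", raw), ("quantized", quantized)):
    ac, qstat, pvalues = acf(series, nlags=36, qstat=True, fft=True)
    print(name, "lag-1 AC =", round(ac[1], 3),
          "Q(36) =", round(qstat[-1], 2),
          "p =", round(pvalues[-1], 3))
```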

 
alexeymosc:

It's not a meaningless number. It's a result obtained from a discretized series. Try the Close_Returns series - it's not discretized. Let's see if we can compare the two.

What's the difference between the Close and the Open?

I'll have lunch and do it.

 
faa1947:

What's the difference between the Close and the Open?

I'll have lunch and do it.

Bon appetit.

Because it's the Dow Jones index - are you aware that it gaps almost every single day?
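That gap is exactly what separates the two series: the Open of a bar need not equal the previous Close, so Open-to-Open and Close-to-Close increments differ by the change in the overnight gap. A small sketch of checking this on daily data; the file names are placeholders:

```python
import numpy as np

open_px = np.loadtxt("dji_open.txt")     # hypothetical daily Open prices
close_px = np.loadtxt("dji_close.txt")   # hypothetical daily Close prices

open_returns = np.diff(open_px)          # Open[t] - Open[t-1]
close_returns = np.diff(close_px)        # Close[t] - Close[t-1]
gaps = open_px[1:] - close_px[:-1]       # overnight gap of each bar

# Share of bars that open away from the previous close.
print("bars with a gap:", np.mean(np.abs(gaps) > 1e-9))

# Identity: Open[t]-Open[t-1] = (Close[t-1]-Close[t-2]) + (gap[t]-gap[t-1]),
# so the two increment series coincide only when the gaps do not change.
print("max |open_ret - (shifted close_ret + gap change)|:",
      np.max(np.abs(open_returns[1:] - (close_returns[:-1] + np.diff(gaps)))))
```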

 
faa1947:
Has anyone tried FARIMA (fractionally integrated series)?
No thanks, another econo-numerological method.
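For readers who still want to try it: the "fractional integration" in FARIMA is the (1 - B)^d filter with non-integer d, whose weights come from the binomial series. A minimal sketch of fractional differencing only - no model estimation, and d is chosen by hand here:

```python
import numpy as np

def frac_diff(x, d, n_weights=100):
    """Apply the (1 - B)^d filter using the first n_weights terms of the
    binomial expansion: w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k."""
    w = np.empty(n_weights)
    w[0] = 1.0
    for k in range(1, n_weights):
        w[k] = w[k - 1] * (k - 1 - d) / k
    # keep only outputs that use a full window of past values
    return np.convolve(x, w, mode="valid")

rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(size=2000))        # a random walk as a stand-in series
y = frac_diff(x, d=0.4)                     # fractionally differenced series
print(y[:5])
```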
 
IgorM:

hmm, I did this - visually it looks like this:

http://imglink.ru/pictures/14-10-12/6038b20b9bfbd1e06c08e649623cca4b.jpg

http://imglink.ru/pictures/14-10-12/47b7615b511f6b8a6f3b638a2fcda38b.jpg

Each coloured triangle is a timeframe, from right to left M1, M5 ... up to MN, relative to the vertical line that simulates the observer's view of the history; the history is shown as ranges between the High and Low extrema / historical max/min.

I loaded it into Statistica as an alphabet. Yes, there are repeated sections/words, even across 2-3 timeframes, but the repetition is not periodic - the repetition periods range from 2 months to several years.
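As an illustration of what repeated "words" in a symbolic alphabet can look like in practice - not IgorM's construction, just a generic sketch on a placeholder symbol series - one can count repeated n-grams and the spacing between their occurrences:

```python
import numpy as np
from collections import defaultdict

def repeated_ngrams(symbols, n=3):
    """Group positions of every length-n 'word'; keep words seen twice or more."""
    positions = defaultdict(list)
    for i in range(len(symbols) - n + 1):
        positions[tuple(symbols[i:i + n])].append(i)
    return {w: p for w, p in positions.items() if len(p) > 1}

rng = np.random.default_rng(3)
symbols = list(rng.integers(0, 5, size=2000))     # stand-in symbol series
repeats = repeated_ngrams(symbols, n=4)
gaps = [np.diff(p) for p in repeats.values()]
print("repeated 4-grams:", len(repeats))
print("median spacing between repeats:", np.median(np.concatenate(gaps)))
```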


I can't figure out the construction algorithm. Can you explain it for dummies?
 
alexeymosc:

It's not a meaningless number. It's a result obtained from a discretized series. Try the Close_Returns series - it's not discretized. Let's see if we can compare the two.

There's some confusion here. Everything I did was on Open increments that I computed myself, not on the series you gave me.
 
HideYourRichess:
No thanks, another econo-numerological method.
Oh, come on. It's pure Hurst, which you seem to accept.
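The link being appealed to: for a fractionally integrated (FARIMA) process the memory parameter and the Hurst exponent are related by d = H - 1/2, so estimating H is another view of the same long-memory question. A rough rescaled-range (R/S) sketch, with arbitrary window sizes, purely for illustration:

```python
import numpy as np

def hurst_rs(x, window_sizes=(16, 32, 64, 128, 256)):
    """Crude rescaled-range (R/S) estimate of the Hurst exponent."""
    rs = []
    for n in window_sizes:
        chunks = [x[i:i + n] for i in range(0, len(x) - n + 1, n)]
        vals = []
        for c in chunks:
            dev = np.cumsum(c - c.mean())     # cumulative deviation from the mean
            r = dev.max() - dev.min()         # range of the cumulative deviation
            s = c.std(ddof=1)                 # sample standard deviation
            if s > 0:
                vals.append(r / s)
        rs.append(np.mean(vals))
    # slope of log(R/S) vs log(n) is the Hurst exponent H; d = H - 0.5
    return np.polyfit(np.log(window_sizes), np.log(rs), 1)[0]

rng = np.random.default_rng(2)
noise = rng.normal(size=4096)
print("H ~", round(hurst_rs(noise), 2), "(white noise should be near 0.5)")
```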
 

Using your Open series.

Graph: seems to match mine to scale.

Histogram: seems to be different.

ACF:

Date: 10/14/12 Time: 13:48

Sample: 1 100

Included observations: 100

Lag       AC      PAC    Q-Stat   Prob

  1    0.003    0.003    0.0011   0.973
  2    0.044    0.044    0.2010   0.904
  3   -0.134   -0.134    2.0784   0.556
  4   -0.036   -0.037    2.2153   0.696
  5   -0.119   -0.109    3.7253   0.590
  6    0.115    0.104    5.1554   0.524
  7   -0.095   -0.102    6.1521   0.522
  8    0.007   -0.029    6.1581   0.630
  9   -0.067   -0.045    6.6632   0.672
 10    0.108    0.087    7.9741   0.631
 11   -0.007    0.006    7.9799   0.715
 12    0.046   -0.008    8.2211   0.768
 13    0.066    0.106    8.7253   0.793
 14    0.060    0.051    9.1477   0.821
 15   -0.043   -0.015    9.3658   0.858
 16   -0.101   -0.122    10.603   0.833
 17   -0.040    0.009    10.804   0.867
 18   -0.102   -0.089    12.106   0.842
 19   -0.034   -0.058    12.253   0.875
 20    0.026    0.002    12.336   0.904
 21   -0.045   -0.076    12.600   0.922
 22   -0.001    0.004    12.600   0.944
 23    0.110    0.070    14.204   0.921
 24    0.026    0.011    14.296   0.940
 25   -0.020   -0.050    14.348   0.955
 26    0.042    0.061    14.590   0.964
 27    0.051    0.077    14.958   0.970
 28   -0.070   -0.060    15.652   0.971
 29    0.017    0.037    15.694   0.979
 30   -0.037   -0.002    15.889   0.984
 31    0.013    0.057    15.915   0.989
 32   -0.013   -0.014    15.941   0.992
 33    0.011   -0.038    15.960   0.995
 34   -0.041   -0.033    16.224   0.996
 35   -0.011   -0.027    16.244   0.997
 36   -0.017   -0.036    16.289   0.998

I think there is virtually no difference. So the two different Open-increment series give the same statistical picture.
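One way to make "the same statistical picture" slightly more formal is to run the same battery on both Open-increment series - for example a two-sample Kolmogorov-Smirnov test on the distributions plus a lag-by-lag ACF comparison. A minimal sketch; the file names are placeholders:

```python
import numpy as np
from scipy.stats import ks_2samp
from statsmodels.tsa.stattools import acf

mine = np.loadtxt("my_open_increments.txt")      # hypothetical input files
yours = np.loadtxt("your_open_increments.txt")

# Distributions: two-sample KS test (a large p-value means no detectable difference).
stat, p = ks_2samp(mine, yours)
print("KS statistic:", round(stat, 3), "p-value:", round(p, 3))

# Dependence structure: largest gap between the two ACFs over 36 lags.
ac_mine = acf(mine, nlags=36, fft=True)
ac_yours = acf(yours, nlags=36, fft=True)
print("max |ACF difference|:", round(float(np.max(np.abs(ac_mine - ac_yours))), 3))
```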