Yes, one step in the opposite direction. That is, after a step up the probability of a step down is 40%, and likewise, after a step down, the probability of the next step also being down is 60%. In other words, the probability of continuing the trend of the previous step is 60%.
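The step rule described above (continue the previous direction with probability 60%, reverse with 40%) can be sketched as a quick simulation. This is only an illustration; the function name and the fair first step are assumptions, since the thread does not fix how the very first step is drawn:

```python
import random

def trend_walk(n_steps, p_continue=0.6, seed=None):
    """Walk where each step repeats the previous step's direction
    with probability p_continue (so reversal has probability 0.4
    when p_continue = 0.6); the first step is fair."""
    rng = random.Random(seed)
    step = rng.choice((-1, 1))   # first step: up or down equiprobably
    pos = step
    for _ in range(n_steps - 1):
        if rng.random() >= p_continue:
            step = -step         # reverse the trend
        pos += step
    return pos
```

After an even number of steps the endpoint is always even, which is why the admissible outcomes later in the thread are -10, -8, ..., 8, 10.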
Ah, now I realize that p changes at every step, i.e. it is a function of the step number and/or the previous step, or of all previous steps. Then obviously I agree with everything Alexey said.
The only thing is, if we take p with a 10% gradation, i.e. 10 levels between 0 and 1, then by brute-force search over the 10^10 combinations we can determine the most suitable distribution for a given step, and then refine it with gradient descent. Am I right?
OK, thank you, I will try when the weekend is over.
By definition, a stationary distribution must not change from step to step. In this case, any distribution will "spread out" at each step, its variance increasing.
This is a somewhat backwards approach. The set of admissible outcomes is fixed in advance (-10, -8, ..., 0, ..., 8, 10), and the probabilities are the chances of stopping at exactly one of them after 10 steps; the corresponding relative frequencies are collected over 10,000 realizations of the random variable. So the distribution makes sense and there is no spreading. The limit of the relative frequencies is taken not as the number of steps grows without bound, but as the number of realizations of these 10 steps grows without bound.
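A minimal sketch of exactly this procedure: relative frequencies of the 10-step endpoint, collected over 10,000 realizations. The trend-continuation probability 0.6 and the function name are assumptions carried over from earlier in the thread:

```python
import random
from collections import Counter

def endpoint_frequencies(n_steps=10, n_real=10_000, p_continue=0.6, seed=42):
    """Relative frequencies of the walk's endpoint after n_steps,
    collected over n_real independent realizations."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(n_real):
        step = rng.choice((-1, 1))   # fair first step
        pos = step
        for _ in range(n_steps - 1):
            if rng.random() >= p_continue:
                step = -step         # reverse the trend
            pos += step
        counts[pos] += 1
    return {x: c / n_real for x, c in sorted(counts.items())}
```

Every key of the result lies in the admissible set -10, -8, ..., 8, 10, and the frequencies sum to 1, which is the whole point: the histogram is over realizations, not over steps.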
Not at all. This is the usual approach for a Markov chain. You are missing the fact that, in addition to the transition matrix, the determining parameter is the initial distribution, and it does not have to be the one TC set, namely the points (0, 1) and (0, -1) with probability 0.5 each. If a stationary distribution existed, then, taken as the initial distribution, it would be the same after the tenth step as before the first. But no such stationary distribution exists for this chain.
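The claim can be checked directly: make the chain Markovian on states (position, last step), start from the initial distribution mentioned above, and evolve it under the transition rule. The variance grows at every step, so no distribution reproduces itself. A sketch, with the continuation probability 0.6 assumed from earlier posts:

```python
from collections import defaultdict

def evolve(dist, p_continue=0.6):
    """One step of the chain on states (position, last_step):
    continue the trend with probability p_continue, else reverse."""
    new = defaultdict(float)
    for (x, s), p in dist.items():
        new[(x + s, s)] += p * p_continue          # trend continues
        new[(x - s, -s)] += p * (1.0 - p_continue) # trend reverses
    return dict(new)

# TC's initial distribution: (0, 1) and (0, -1) with probability 0.5 each
dist = {(0, 1): 0.5, (0, -1): 0.5}
variances = []
for _ in range(10):
    dist = evolve(dist)
    mean = sum(x * p for (x, _), p in dist.items())
    variances.append(sum((x - mean) ** 2 * p for (x, _), p in dist.items()))
# variances increases monotonically: the distribution keeps spreading,
# so it cannot coincide with its own image under the transition matrix
```

The monotone growth of the variance is precisely the "spreading out" mentioned above, and it rules out stationarity for this chain.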
Sorry, but the problem is different. TC is not looking for the probability P(x) of stopping, after an indefinitely long back and forth, at a point at least as large as x; that would be the usual formulation of the problem. He analyzes a histogram of the distribution not of the stopping point (stationary), but of one of the possible statistics of the process, taken 10 steps from the starting point 0. An unusual statistic, yes: not the mean, not the variance, not the median, not a quartile. The condition of independence from history (the Markov property) certainly is not met, since there is an explicit shift of exactly 1 from the previous value. Not for nothing has Alexander_K2 cited here a paper on non-Markovian processes, "Shelepin L.A. Processes with memory as the basis for a new paradigm in science" (he cites p. 10).
As for the mentioned distribution P(x): at k = 0.5 the initial Gaussian (normal) distribution would be stationary (conditionally, in form only, with a steadily decreasing value at 0 and increasing dispersion), on a segment that expands with each step. I would rather not justify this here; the field is quite remote, namely difference schemes for the heat conduction equation.
The usual problem for Markov chains: the initial distribution on the state space is given, and one has to find how it changes over a certain number of steps. The analogy with the numerical solution of partial differential equations is certainly visible, since the solution is built on a two-dimensional lattice.
I do not really understand what the stopping problem is here; the stopping moment is fixed and known in advance.
A Gaussian distribution cannot arise here in any way: the state space and time are discrete.
Shelepin writes nonsense. The Markov property is present here: either one speaks of a chain of the second order, or the state space is built from vectors, as Markov himself did more than a hundred years ago while studying Pushkin's texts.
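The lattice analogy both sides keep circling can be made explicit: with k = 0.5 the probability update on the integer lattice is literally the explicit finite-difference scheme for the heat equation. A sketch; the lattice size and names are arbitrary choices for illustration:

```python
def spread(p):
    """One step of the k = 0.5 walk on a bounded lattice:
    p_new[i] = 0.5 * p[i-1] + 0.5 * p[i+1], which is the explicit
    finite-difference scheme for the heat conduction equation."""
    q = [0.0] * len(p)
    for i in range(1, len(p) - 1):
        q[i] = 0.5 * p[i - 1] + 0.5 * p[i + 1]
    return q

# all probability starts at the centre of a lattice of 41 points
p = [0.0] * 41
p[20] = 1.0
for _ in range(10):
    p = spread(p)
# after 10 steps the mass is still far from the edges, the profile is
# symmetric, and p[20] equals the exact return probability C(10,5)/2^10
```

The two-dimensional lattice mentioned above is exactly this: one axis is position, the other is the step number.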
I will not argue about names; maybe TC, Shelepin, Alexander (and I too) are all wrong to call a one-dimensional random process with an explicit dependence of each successive value on the previous one non-Markovian. So be it. As for the impossibility of a Gaussian distribution: it turns out I have long had an Excel spreadsheet where it is clearly visible. After 212 steps from point 0 the probability spreads out into this:
I attach the file with the table. In it, simply with k = 0.5, the probabilities from the time point above are added into the current one. A detailed proof, I repeat, is not necessary here; the illustration with the table of values is enough.
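The "Gaussian-looking" table can be cross-checked without Excel: compare the exact distribution after 212 steps of the k = 0.5 walk with the normal density of the same variance. A sketch; the step count matches the post above, and the error tolerance is a judgment call:

```python
import math

n = 212                       # number of steps, as in the attached table
sd = math.sqrt(n)             # standard deviation of the simple walk

def walk_pmf(x):
    """Exact probability of ending at x after n fair +/-1 steps."""
    if (n + x) % 2:
        return 0.0            # wrong parity: unreachable
    return math.comb(n, (n + x) // 2) / 2.0 ** n

def gauss(x):
    """Normal density with the walk's variance, scaled by the lattice
    spacing 2 (only even x are reachable after an even n)."""
    return 2.0 * math.exp(-x * x / (2.0 * n)) / (sd * math.sqrt(2.0 * math.pi))

max_err = max(abs(walk_pmf(x) - gauss(x)) for x in range(-n, n + 1, 2))
# the discrete distribution hugs the Gaussian envelope to a tiny error
```

So both sides are right in their own terms: the distribution is discrete (binomial), but by the local limit theorem its shape is Gaussian to within a vanishing error, which is what the spreadsheet shows.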
Is every bell-shaped function the density of a normal distribution? What prevents you, for example, from seeing the density of the beta distribution in your illustration?
I suspect this thread was not created by accident :)))
I recall that you somehow manage to reduce the double gamma-like distribution of market increments to a pure normal one... And now you are looking for an answer to the question: what next?
I support Bas in his advice: you need to move into options. The Black-Scholes model should obviously work on your data.
Nothing prevents me from seeing the density of the beta distribution there. In the picture, by the way, an edge effect is already noticeable: on the left the probability does not fall off as fast, since the edge of the table is there. On the right it is less noticeable, but the table is still bounded, while the normal distribution has no boundaries. Just like an infinite rod whose pieces transfer heat to each other instead of probability: a red-hot drop falling from a welder's electrode onto a long reinforcing rod generates a Gaussian temperature distribution at every moment, with ever-increasing dispersion. I am not going to prove it here.