The market is a controlled dynamic system. - page 60

 
alsu:

You can do that, but you also need to think about how to adjust the parameters using some algorithm.

There are 9000 different algorithms, but mathematically they all have one thing in common: to reach the optimum, you need the gradient of the optimized function with respect to the adjusted parameters. Of course, one can use PF as the criterion and even compute all the derivatives in real time (with automatic differentiation this is not hard). But there is one problem: the value of the profit factor depends wildly on the price series itself, which is known to be a noisy process. A fluctuation of just one candle by a few points can produce one extra or one missing deal with an unpredictable result, which can change the profit factor dramatically (remember that we must optimize the model structure on the shortest possible time interval, because we assume from the start that the model has variable parameters). The criterion is therefore very non-smooth, and the optimization algorithm may simply get stuck in some local optimum caused, I repeat, by a mere fluctuation of the price.

The error vector norm (point 3), on the other hand, does not have this disadvantage: a 1-point change in the price of one candle produces an equally insignificant change in the penalty function. The same holds for points 1 and 2, while point 4 does not depend on the price at all.
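To make the non-smoothness argument concrete, here is a minimal toy sketch (all numbers are invented, not taken from the thread): a one-point fluctuation of a single candle adds one extra losing deal, which moves the profit factor by tens of percent, while the same fluctuation shifts an error-norm criterion over N = 500 samples by a fraction of a percent.

```python
import numpy as np

def profit_factor(trade_results):
    """PF = sum of gains / absolute sum of losses."""
    gains = sum(r for r in trade_results if r > 0)
    losses = -sum(r for r in trade_results if r < 0)
    return gains / losses

# Original optimization sample: three deals.
trades = [5.0, 3.0, -4.0]
pf_before = profit_factor(trades)          # 8 / 4 = 2.0

# A one-point fluctuation of a single candle triggers one extra
# losing deal that otherwise would not have existed.
pf_after = profit_factor(trades + [-3.0])  # 8 / 7 ~ 1.14, a ~43% drop

# The same fluctuation seen through an error-norm criterion:
# one residual out of N = 500 changes by one point.
residuals = np.full(500, 0.5)
mse_before = np.mean(residuals**2)
residuals[250] += 1.0                      # the perturbed candle
mse_after = np.mean(residuals**2)          # changes by well under 2%
```

The PF jumps because the criterion is quantized by whole deals, while the error norm absorbs the same perturbation smoothly.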


In short, the criterion must be as stable as possible with respect to the initial conditions (in our case, the optimization sample), or the algorithm must include some check of the globality of the optimum found. Otherwise we get chaos instead of optimization.

I agree, transactions are discrete and this introduces some lag if the criterion is based only on their result. In that case PF is simply the ratio of price increments in the direction of the forecast to increments in the opposite direction. In general, it depends on what we are forecasting.
 
avtomat:

And certainly not point 2, which requires fitting to a normal distribution. This is, pardon me, nonsense.

Strictly speaking, noise must be "red".

This is the intrinsic noise of any "correct" dynamic system.

Turn the amplifier up to maximum volume with no music input and you'll hear SHHHHHHHHH)).

 

alsu:

Here you already contradict yourself: if the process is represented as signal + noise, then ideally the residual should be exactly thermal noise, carrying exactly zero information. This premise has been generally accepted for fifty years: white Gaussian noise at the output (points 1 and 2) <=> the model adequately describes the deterministic component.
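The "white residual <=> adequate model" premise can be checked numerically. A minimal sketch with an invented signal and noise (the "adequate" model cheats and uses the true signal itself, just to show what the residual of a perfect model looks like): its residual has negligible lag-1 autocorrelation, while the residual of a model that captures nothing inherits the strong autocorrelation of the deterministic component.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 4000
t = np.arange(n)
signal = np.sin(2 * np.pi * t / 200)   # deterministic component s(t)
noise = rng.normal(0.0, 0.3, n)        # white Gaussian noise n(t)
x = signal + noise                     # observed process x(t)

def autocorr(r, lag):
    """Sample autocorrelation of r at the given lag."""
    r = r - r.mean()
    return float(np.dot(r[:-lag], r[lag:]) / np.dot(r, r))

# "Adequate" model: recovers s(t) exactly, residual is pure white noise.
res_good = x - signal
# "Inadequate" model: captures nothing, residual still contains s(t).
res_bad = x - 0.0

a_good = autocorr(res_good, 1)   # close to 0: residual is white
a_bad = autocorr(res_bad, 1)     # large: deterministic component left in
```

In practice one would run such a check (e.g. over the first dozen lags) on the residual of the fitted model, not on a known signal.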

And please elaborate on point 3: since when did the error minimum become useless from the point of view of adaptation?


1) The process is represented as a mixture x(t) = s(t) + n(t). We have no a priori knowledge about the nature of the interference n(t), let alone that n(t) is thermal noise. On the other hand, an attempt to force the interference n(t) into postulated limits will distort the signal s(t).

2) Minimization of the error vector norm is acceptable for describing static objects. In our case of a dynamic system, at least the second derivative must be used, which corresponds to acceleration control.

 
sergeyas:

Strictly speaking, the noise must be "red".

This is the intrinsic noise of any "correct" dynamic system.

Turn the amplifier up to maximum volume with no music input and you'll hear Ssshhhhhhhhhhhhhhhhhh)).



Strictly speaking, the noise doesn't have to be, but can be anything, including "red" and "pink" and "white"... and "gray-brown-raspberry" -- anything.
 
avtomat:


If we represent the blocks WL and WR as WLs, WLb and WRs, WRb, then we can connect them as a cross-linked structure.


The independent channels WL and WR will be connected

as a P-canonical structure

or

as a V-canonical structure

Mathematically they are equivalent. Which of them to use, apparently, depends on the convenience of interpretation.
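For the static-gain case the equivalence is easy to verify numerically. A sketch with invented gains w1, w2 and cross-couplings c12, c21 (placeholders for illustration, not the thread's WL/WR blocks): the V-canonical (feedback) structure and the P-canonical (feedforward) matrix obtained by eliminating the outputs give identical responses.

```python
import numpy as np

# Cross-coupled two-channel block: direct gains w1, w2 and
# cross-couplings c12, c21 (toy static gains, not transfer functions).
w1, w2, c12, c21 = 2.0, 3.0, 0.1, 0.2

def v_canonical(u1, u2):
    # y1 = w1*(u1 + c12*y2),  y2 = w2*(u2 + c21*y1): solve the 2x2 system.
    A = np.array([[1.0, -w1 * c12],
                  [-w2 * c21, 1.0]])
    b = np.array([w1 * u1, w2 * u2])
    return np.linalg.solve(A, b)

# Equivalent P-canonical matrix, obtained by eliminating y1, y2
# from the V-canonical equations.
D = 1.0 - w1 * w2 * c12 * c21
P = np.array([[w1, w1 * w2 * c12],
              [w1 * w2 * c21, w2]]) / D

def p_canonical(u1, u2):
    return P @ np.array([u1, u2])

y_v = v_canonical(1.0, -0.5)
y_p = p_canonical(1.0, -0.5)   # identical to y_v
```

For dynamic blocks the same elimination goes through with transfer functions in place of the gains, which is why the choice between the two forms is a matter of interpretation, not of mathematics.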

 
avtomat:


1) The process is represented as a mixture x(t) = s(t) + n(t). We have no a priori knowledge about the nature of the noise n(t), and certainly not that n(t) is thermal noise. On the other hand, an attempt to force the interference n(t) into postulated limits will distort the signal s(t).

Can you suggest another distribution for n(t)? I would only be glad.

But if not, some assumption about the distribution has to be made anyway. At least the normal distribution can somehow be justified: in the absence of external influences (i.e. of a deterministic component), market movements are determined by the sum of the actions of a large number of agents; hence, by virtue of the CLT, provided that traders' decisions are by and large taken independently of each other, we obtain Gaussian noise. (White is, of course, an idealisation, so the real noise will come out coloured. But that doesn't mean you can't try to reduce the correlation time.) Since there is no deterministic component, the residual of our system should ideally coincide with the input process...
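The CLT argument is easy to illustrate numerically. A toy sketch with invented probabilities: sum the ternary (+1/0/-1) decisions of many independent agents, and the aggregate move per tick is close to Gaussian even though a single decision is nothing like it.

```python
import numpy as np

rng = np.random.default_rng(7)
n_agents, n_ticks = 500, 5000

# Each agent's per-tick action: sell, do nothing, or buy.
# An individual decision is clearly non-Gaussian.
actions = rng.choice([-1.0, 0.0, 1.0], size=(n_ticks, n_agents),
                     p=[0.3, 0.4, 0.3])

# Aggregate move per tick = sum over agents, scaled to unit variance
# (variance of one action is 0.6, so the sum has variance n_agents * 0.6).
moves = actions.sum(axis=1) / np.sqrt(n_agents * 0.6)

# A standard normal has skewness 0 and excess kurtosis 0.
skew = np.mean(moves**3) / np.mean(moves**2) ** 1.5
excess_kurtosis = np.mean(moves**4) / np.mean(moves**2) ** 2 - 3.0
```

Both moments come out near zero, which is the Gaussian signature; correlated agent decisions (herding) would break this, which is exactly the independence proviso above.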

2) Minimization of the norm of the error vector is acceptable for describing static objects. In our case of a dynamic system, at least the second derivative must be used, which corresponds to acceleration control.

No, well, there is an input signal and its estimate in the scheme, and the difference between them is present; what difference does it make whether the object is static or not? I want the system to give, if possible, the same output as the real object, i.e. the difference should be minimal. We want to control by acceleration? Be my guest, but who will make sure that the zeroth- and first-derivative errors do not accumulate? And they will certainly drift away, because our useful signal is low-frequency, so every time we take a velocity or an acceleration we squeeze the useful signal and amplify the noise.

 
Avals:
I agree, the trades are discrete and this introduces some lag if the criterion is based only on their result. But we haven't gotten to deals yet.) In that case PF is just the ratio of price increments in the forecast direction to price increments in the opposite direction. In general, it depends on what we are forecasting.


So it's something like the percentage of correctly guessed increment signs... a thankless task, it seems to me... you can't get out of the noise here, you have to work somewhere within 50-55%. I'll make a note of it, though.
 
Mathemat:
Any news changes these exposures by leaps and bounds, throwing information into the system that sets a new equilibrium value for the stock price. A transient process is initiated which seeks to align the share price with the new conditions (there it is, negative feedback in the system!). Roughly speaking, this is a second-order linear differential equation. Its linearisation is obtained by assuming the fluctuations, i.e. the deviations from equilibrium values, are small. We get something like a parametric oscillator (i.e. the Action subsystem is an open system!).

Alexey, I modeled such a system, but not of order 2: of order 4 at once (I connected two second-order filters in parallel). The input is a homogeneous pulse stream with exponentially distributed intensity plus white Gaussian noise. The ratio of signal variance to noise variance is ~20.
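A rough sketch of that kind of model (all parameters are invented, and I read "intensity" here as pulse amplitude; this is not a reconstruction of the author's exact setup): two second-order IIR resonators in parallel form a fourth-order system, driven by a sparse pulse stream with exponentially distributed amplitudes, with white Gaussian noise added so that the variance ratio comes out around 20.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

def resonator(x, f0, q):
    """Second-order IIR resonator (a damped oscillator as a filter)."""
    w = 2 * np.pi * f0
    r = 1 - w / (2 * q)                 # pole radius from the quality factor
    a1, a2 = -2 * r * np.cos(w), r * r
    y = np.zeros_like(x)
    for i in range(len(x)):
        y1 = y[i - 1] if i >= 1 else 0.0
        y2 = y[i - 2] if i >= 2 else 0.0
        y[i] = x[i] - a1 * y1 - a2 * y2
    return y

# Homogeneous pulse stream: rare arrivals with exponentially
# distributed amplitudes and random sign.
pulses = np.zeros(n)
hits = rng.random(n) < 0.01             # ~1 pulse per 100 samples
pulses[hits] = rng.exponential(1.0, hits.sum()) * rng.choice([-1, 1], hits.sum())

# Fourth-order system: two second-order resonators in parallel.
signal = resonator(pulses, 0.01, 8.0) + resonator(pulses, 0.03, 8.0)

# Add white Gaussian noise so that var(signal) / var(noise) ~ 20.
noise = rng.normal(0.0, np.sqrt(signal.var() / 20.0), n)
x = signal + noise
ratio = signal.var() / noise.var()
```

Plotting `x` gives the kind of oscillating, pulse-excited trace the post describes; the resonator frequencies and Q factors control how "wave-like" it looks.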


It turns out very similar:


And you can even see very natural Elliott waves on the zoom, that's how the oscillator parameters are chosen.)


 
alsu:

Can you suggest another distribution for n(t)? I would only be glad.

But if not, some assumption about the distribution has to be made anyway. At least the normal distribution can somehow be justified: in the absence of external influences (i.e. of a deterministic component), market movement is determined by the sum of the actions of a large number of agents; hence, by virtue of the CLT, provided that traders' decisions are by and large taken independently of each other, we just get Gaussian noise. (White is, of course, an idealisation, so the real noise will come out coloured. But that doesn't mean you can't try to reduce the correlation time.) Since there is no deterministic component, the residual of our system should ideally coincide with the input process...


You are mistaken. In reality, for adaptation purposes such an assumption is not necessary. But in the case of a non-adaptive model you do have to make some assumptions, postulate something, in order to get some ground under your feet.

No, well, there is an input signal and its estimate in the scheme, and the difference between them is present; what difference does it make whether the object is static or not? I want the system to give, if possible, the same output as the real object, i.e. the difference should be minimal. We want to control by acceleration? Be my guest, but who will make sure that the zeroth- and first-derivative errors do not accumulate? And they will certainly drift away, because our useful signal is low-frequency, so every time we take a velocity or an acceleration we squeeze the useful signal and amplify the noise.

The difference is very significant.

Astatism of order n ensures zero system error up to the (n-1)-th error coefficient.

That is, with acceleration control the error will be in acceleration, while the errors in velocity and position will be zero. In this case no accumulation of error can occur.
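The astatism claim can be verified with a toy discrete-time loop (the open-loop transfer function below is invented purely for illustration): with two integrators in the loop (astatism of order 2), the steady-state error to a ramp (velocity) input is zero, while a parabolic (acceleration) input leaves only a constant error 1/Ka.

```python
import numpy as np

def simulate(r):
    """Closed loop for open-loop L(z) = 0.1 (z - 0.5)^2 / (z - 1)^2.
    Two poles at z = 1 => two integrators => astatism of order 2.
    Difference equation of y = L / (1 + L) * r:
        1.1 y[k] - 2.1 y[k-1] + 1.025 y[k-2]
            = 0.1 r[k] - 0.1 r[k-1] + 0.025 r[k-2]
    """
    y = np.zeros_like(r)
    for k in range(2, len(r)):
        y[k] = (2.1 * y[k-1] - 1.025 * y[k-2]
                + 0.1 * r[k] - 0.1 * r[k-1] + 0.025 * r[k-2]) / 1.1
    return y

k = np.arange(3000, dtype=float)
ramp = k              # constant-velocity input
parab = k**2 / 2      # constant-acceleration input

err_ramp = ramp[-1] - simulate(ramp)[-1]     # -> 0: no velocity/position error
# Acceleration error constant: Ka = lim_{z->1} (z-1)^2 L(z) = 0.1 * 0.25 = 0.025
err_parab = parab[-1] - simulate(parab)[-1]  # -> constant error 1/Ka = 40
```

The ramp error decays to zero while the parabola error settles at a constant, i.e. the error lives only in the acceleration coefficient, exactly as the astatism statement says.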

 

alsu: I understand that it can be reduced to an equivalent form, but wouldn't it be more logical to represent the response in terms of powers of the influence from the start, rather than the other way round?

This is the way the model is built. The model had to be closed with respect to the share price. And at the same time all influences need to be unified by dimension.

Well, like in mechanics: everything is described in a closed form, through the velocity and acceleration of the material point whose motion we are interested in.

But here I fundamentally disagree: in fact, our system only recycles incoming energy into outgoing energy by "annihilation", pardon the flashy terminology. The moment the seller and the buyer agree on a deal, a small portion of the incoming energy dissipates from the system, leaving behind only increased entropy. And the flow of energy through the system (roughly speaking, the volume of transactions) is far from a conserved quantity, but it is what allows the system to exist.

Well, yes, I went a bit overboard with the law of conservation. Of course, in general terms - taking into account the work of all the "forces".

Let me remind you again: under certain assumptions, the action becomes very similar to a parametric oscillator. I.e. the system is in principle not closed, and energy exchange with the external environment occurs not only through dissipation but also through parameter changes.

P.S. I see your scheme and pictures. You made it fast...