The market is a controlled dynamic system.

 
Mathemat:

Alexei, let me ask the first question: why introduce

1) a constant impact, independent of the Share price (Alpha impact),
2) an impact proportional to the Share price (Beta impact),
3) an impact proportional to the derivative of the Share price (Gamma impact),
4) an impact proportional to the square of the Share price, introducing non-linearity (Delta impact),

if only the "External" influences the "Share" and not vice versa? I understand that one can be reduced to an equivalent of the other, but isn't it more logical to represent the response by degrees of the impact from the start, rather than the other way round?
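One possible reading of the four terms above, as a sketch only: the coefficients and the second-order left-hand side are assumptions for illustration, not equations quoted from the thread.

```latex
% Sketch only: a generic second-order model of the Share price S(t),
% driven by an external impact F(t) assembled from the four terms above.
% The coefficients a_i, alpha, beta, gamma, delta are placeholders.
a_2\,\ddot{S}(t) + a_1\,\dot{S}(t) + a_0\,S(t) = F(t),
\qquad
F(t) = \alpha \;+\; \beta\,S(t) \;+\; \gamma\,\dot{S}(t) \;+\; \delta\,S^{2}(t)
```

Only the delta term breaks linearity, so the classical-mechanics analogies mentioned just below would rely on it being negligible.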

By the way, the linearity of the second-order differential equation makes it easy to introduce classical-mechanics concepts: momentum, action and the Lagrange function (energy). In the region where the external influences are invariant, one can also speak of a certain semblance of a law of conservation of energy.
But here I fundamentally disagree: in essence, our system only recycles incoming energy into outgoing energy by "annihilation", pardon the colourful terminology. The moment the seller and buyer agree on a deal, a small portion of the incoming energy dissipates from the system, leaving behind only increased entropy. And the flow of energy through the system, roughly speaking the transaction volume, is far from being a conserved quantity, but it is what allows the system to exist.
 
avtomat:
2) Selection of the optimisation criterion. This criterion determines the operating frequency range of the model.

The criterion, in my opinion, should be composite and consider the following factors simultaneously (e.g. by means of a penalty function):

- model residuals correlation time -> min

- the difference of residuals distribution from the normal one -> min

- norm of the residuals vector -> min

- number of model parameters not converging to zero -> min

This is just for starters, without taking into account the input signal model, with which I'll soon be boring everyone here to death )
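A minimal sketch of what such a composite penalty could look like, assuming the model residuals and parameters are available as plain arrays; the weights w and the tolerance eps are placeholders to be chosen, not anything specified in the thread:

```python
import numpy as np

def correlation_time(res):
    """Rough residual correlation time: the first lag (in bars) at which
    the sample autocorrelation drops below 1/e."""
    res = np.asarray(res, dtype=float)
    res = res - res.mean()
    acf = np.correlate(res, res, mode="full")[len(res) - 1:]
    acf = acf / (acf[0] + 1e-12)
    below = np.where(acf < 1.0 / np.e)[0]
    return float(below[0]) if below.size else float(len(res))

def non_normality(res):
    """Deviation of the residual distribution from a normal one, measured
    crudely by squared skewness plus squared excess kurtosis."""
    res = np.asarray(res, dtype=float)
    z = (res - res.mean()) / (res.std() + 1e-12)
    return np.mean(z ** 3) ** 2 + (np.mean(z ** 4) - 3.0) ** 2

def composite_penalty(res, params, w=(1.0, 1.0, 1.0, 1.0), eps=1e-6):
    """Composite penalty from the four factors above: correlation time,
    non-normality, residual norm, and the count of parameters that
    have not gone to zero. Smaller is better."""
    res = np.asarray(res, dtype=float)
    params = np.asarray(params, dtype=float)
    n_active = np.sum(np.abs(params) > eps)           # factor 4
    return (w[0] * correlation_time(res)
            + w[1] * non_normality(res)
            + w[2] * np.linalg.norm(res)              # factor 3
            + w[3] * n_active)
```

The first two terms push the residuals towards white noise with a normal shape, the third is the plain error norm, and the fourth counts parameters that have not gone to zero.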

 
alsu:

... without taking into account the input signal model, with which I'll soon be boring everyone here to death )


Involuntarily, a famous story comes to mind. When Laplace presented Napoleon with a copy of his "Celestial Mechanics", the Emperor remarked: "Monsieur Laplace, they say you have written this great book on the system of the world without once mentioning the Creator". To which Laplace allegedly replied: "I did not need that hypothesis. Nature replaced God."

;)

 
avtomat:

"I did not need that hypothesis. Nature replaced God."

However, only a hundred and fifty years later, the hypothesis had to be revisited because it turned out that someone was "rolling the dice at every measurement", as Albertuschka put it (although he himself did not believe in this "nonsense" until his death).
 
alsu:

The criterion, I believe, should be composite and take into account the following factors simultaneously (e.g. by means of a penalty function):

- model residuals correlation time -> min

- the difference of residuals distribution from the normal one -> min

- norm of the residuals vector -> min

- number of model parameters not converging to zero -> min

This is just for starters, without taking into account the input signal model, with which I'll soon be boring everyone here to death )


You can come up with many different criteria, all sorts of them. But such a multiplicity of criteria, as a rule, does not lead to the desired result because of their mutual inconsistency.
 
alsu:

The criterion, I believe, should be composite and take into account the following factors simultaneously (e.g. by means of a penalty function):

- model residuals correlation time -> min

- the difference of residuals distribution from the normal one -> min

- norm of the residuals vector -> min

- number of model parameters not converging to zero -> min

This is just for starters, without taking into account the input signal model, with which I'll soon be boring everyone here to death )


Maybe simpler: an error is a loss, a correct prediction is a gain. We estimate income/losses, for example via the profit factor (PF). I.e. the optimization criterion is PF -> max.
 
avtomat:

There are many different criteria that can be devised. But this multiplicity of criteria usually does not lead to the desired result due to their inconsistency.
Everything here is important: the first two points require bringing the residuals closer to white Gaussian noise, which means the model is adequate; the third point is clear in itself, the error should be as small as possible; as for the fourth point, excessive model complication smells of instability and curve fitting and will most likely affect forecast quality. I don't see any contradictions here, you just need to choose the importance weights for each component correctly.
 
alsu:
Everything here is important: the first two points require bringing the residuals closer to white Gaussian noise, which means the model is adequate; the third point is clear in itself, the error should be as small as possible; as for the fourth point, excessive model complication smells of instability and curve fitting and will most likely affect forecast quality. I don't see any contradictions here, you just need to choose the importance weights for each component correctly.


In my opinion, none of the criteria you listed

- model residuals correlation time -> min

- residuals distribution difference from the normal distribution -> min

- norm of the residuals vector -> min

- the number of model parameters not converging to zero -> min

is either necessary or useful from the point of view of model fitting.

And certainly not point 2, which requires fitting to a normal distribution. That, pardon me, is nonsense.

 
Avals:

Maybe simpler: an error is a loss, a correct prediction is a gain. We estimate income/losses, for example via the profit factor (PF). I.e. the optimization criterion is PF -> max.

We can do it that way, but we should also think about how the parameters are going to be tuned by some algorithm.

There are 9000 different algorithms, but in purely mathematical terms they all have one thing in common: to reach the optimum you need to know the gradient of the function being optimized with respect to the adjusted parameters. Of course, one can use PF as the criterion and even calculate all the derivatives in real time (with automatic differentiation it is not that difficult). But there is one problem: the value of the profit factor depends wildly on the price series itself, which is known to have the character of a noisy process. A fluctuation of just one candle by a few points can result in one extra or one missing deal with an unpredictable result, which can have a dramatic effect on the profit factor (do not forget that we must optimize the model structure on the shortest possible time interval, because we assume from the start that the model has variable parameters). Thus the criterion is very non-smooth, and the optimization algorithm may simply get stuck in some local optimum caused, I repeat, by a mere fluctuation of the price.
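A toy illustration of that non-smoothness, with invented trade results (profit factor taken as gross profit divided by gross loss):

```python
import numpy as np

def profit_factor(trade_results):
    """Profit factor: gross profit divided by gross loss."""
    gains = sum(r for r in trade_results if r > 0)
    losses = -sum(r for r in trade_results if r < 0)
    return gains / losses if losses > 0 else float("inf")

# Hypothetical outcomes of a strategy on some sample.
trades = [30, -20, 45, -25, 10]
print(profit_factor(trades))            # 85 / 45 ≈ 1.89

# A one-point wiggle in a single candle flips one marginal entry,
# adding one losing trade -- the criterion jumps discontinuously.
trades_perturbed = trades + [-15]
print(profit_factor(trades_perturbed))  # 85 / 60 ≈ 1.42
```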

The norm of the error vector (point 3), on the other hand, does not have this disadvantage: a one-point change of the price in one candle results in an equally insignificant change in the penalty function. The same is true for points 1 and 2, while point 4 does not depend on the price at all.


In short, either the criterion must be as stable as possible with respect to the initial conditions (which in our case means the optimization sample), or the algorithm must include some check that the optimum it finds is global. Otherwise, instead of optimization we will get chaos.

 
avtomat:


And certainly not point 2, which requires fitting to a normal distribution. That, pardon me, is nonsense.

Here you already contradict yourself: if the process is represented as signal + noise, then ideally the residual should be pure thermal noise, carrying exactly zero information. This premise has been generally accepted for some fifty years now: reducing the residuals to white Gaussian noise (points 1 and 2) <=> the model adequately describes the deterministic component.
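That premise can be checked directly on the residuals; a rough sketch, assuming the residuals are already in an array (the lag range and tolerances are arbitrary choices, not anything prescribed in the thread):

```python
import numpy as np

def looks_like_white_gaussian(res, acf_tol=0.1, shape_tol=0.5):
    """Crude check that residuals resemble white Gaussian noise:
    negligible autocorrelation at lags >= 1 and a roughly normal shape."""
    res = np.asarray(res, dtype=float)
    z = (res - res.mean()) / (res.std() + 1e-12)
    # Sample autocorrelation at lags 0..N-1 (biased estimate).
    acf = np.correlate(z, z, mode="full")[len(z) - 1:] / len(z)
    white = np.all(np.abs(acf[1:11]) < acf_tol)
    # Skewness near 0 and excess kurtosis near 0 suggest normality.
    gaussian = abs(np.mean(z ** 3)) < shape_tol and abs(np.mean(z ** 4) - 3.0) < shape_tol
    return bool(white and gaussian)
```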

And tell me more about point 3: since when did minimizing the error become useless from the point of view of adaptation?