To follow up - page 25

 
Candid wrote >>

I seem to have overdone it in my quest for brevity. My understanding of Avals' assumption is that by making "subjective" hypotheses we use our understanding of market functioning, i.e. external information. In essence we go beyond the TA (there, you're the first to use the term :) ). This gives an additional filter, without applying which we will not see anything but noise in the market.

Yes, that's what I meant. The null hypothesis is about why the system makes money in the market; the answer to when it earns follows logically from it. The context is the answer to the question of when the system earns. Without that null hypothesis, the system is just shamanism with some derivatives of price, fitted to history or to an unknown "market phase". Even if it really worked, it would be unclear why it works, when and what should be adjusted to keep it working, and when it should not work at all.

 
Yurixx >>:

there is only one algorithm that forms the entire phase space, which in turn is determined by the market parameterisation. This algorithm is a consequence not of my or your ideas about context, but of an indication (a direct one, if you like) of all the points in history at which trading decisions should be made. That is, what rules here is the principle of the greatest profit (from the TS creator's point of view). These points are mapped into the phase space. If they group together, context areas are obtained, i.e. types of context. How many there will be is not known in advance.


My conscience is still gnawing at me about distorting Yuri's position. Looking for a brief formulation of it, I settled on the paragraph quoted above. Now let me break it down to the bone.

... there is only one algorithm that forms the entire phase space, which in turn is determined by the market parameterisation.

As I understand it, "forms" here simply means measuring, in real time, state parameters from a predetermined (subjectively chosen) set (though "the entire phase space" probably refers to history). It is because of this phrase that I considered Yuri's and Peter's approaches to be of the same type. Since superfluous parameters are undoubtedly harmful, only those that directly affect the result should be included. And since we are only now going through history, this set was defined in a purely subjective way.

This algorithm is a consequence not of my or your ideas about context, but of an indication (a direct one, if you like) of all the points in history at which trading decisions should be made. That is, what rules here is the principle of the greatest profit (from the TS creator's point of view).

This I cannot interpret consistently: the algorithm has a second function, independent of the first (forming the phase space), namely specifying the ideal entry points. So calling it all one algorithm is possible only formally: one can form the phase space in real time, but ideal entry points can only be specified retrospectively.

These points are mapped into the phase space. If they group together, context areas are obtained, i.e. types of context. How many there will be is not known in advance.

Now we are definitely dealing with "learning" from history. However, there is no set of negative examples: only "good" points are mapped. Thus the question of real-time determination of entry points hangs in the air, because nothing prevents "bad" points from lying in the same context areas as good ones. In fact, in my experience, that's usually where they end up :) .

It may not be entirely obvious, but this is a very important classification feature: in the absence of negative examples no optimisation is possible, only testing of the initial assumptions. So the described approach again falls into the first type.
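The objection that good and bad points can share context areas is easy to make concrete with a toy sketch. Everything below is invented for illustration: two made-up state parameters, hand-picked coordinates, Manhattan distance, and a naive grouping rule standing in for any real clustering method.

```python
# Hypothetical sketch: "ideal" entry points are mapped into a
# (subjectively parameterised) phase space and grouped into context
# areas. All coordinates and thresholds are invented.

def cluster(points, radius):
    """Naive single-linkage grouping: points closer than `radius`
    (Manhattan distance) end up in the same context area."""
    clusters = []
    for p in points:
        merged = [c for c in clusters
                  if any(abs(p[0] - q[0]) + abs(p[1] - q[1]) < radius for q in c)]
        for c in merged:
            clusters.remove(c)
        clusters.append([p] + [q for c in merged for q in c])
    return clusters

# phase-space coordinates (say, volatility and trend slope) of ideal entries
good_points = [(0.1, 0.2), (0.12, 0.18), (0.8, 0.9), (0.83, 0.88)]
areas = cluster(good_points, radius=0.2)   # two context areas emerge

# The catch: a losing entry with similar coordinates lands in the same
# area, and with no negative examples the scheme cannot tell them apart.
bad_point = (0.11, 0.21)
in_good_area = any(
    abs(bad_point[0] - q[0]) + abs(bad_point[1] - q[1]) < 0.2
    for area in areas for q in area)
```

With these numbers two context areas form, and the "bad" point falls squarely inside one of them, which is exactly the real-time recognition problem left hanging in the air.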

 
avatara >>:

Continuing the thought about possible coordinates.

What about the axes? Or is it just an illustration of one of the ways humanity knows how to represent information?

 
Candid >>:

What about the axes? Or is it just an illustration of one of the ways of representing information known to humanity?

They are labelled.

And defined earlier.

Through certain characters: the "Byduge" observer and the "Barking Dog". ;)

But which indicator to measure each of them with, that is the question.

-----

If interest in the topic continues, I'll continue the reasoning a little later.

And put forward hypotheses about optimal behaviour in each area.

 
avatara >>:

They are labelled.

And defined earlier.

Ah, I thought the labels referred to the numbers and/or zones. Could I get a link to the definitions? I've clearly missed them.

 

Read the previous reasoning.

Although these are idealised characters, the requirements on their characteristics have been outlined.

 
Candid wrote >>

Is this a consequence of the global trend during the "reporting" period or were there returns that did not fall into either the first or the second sample?

A consequence of the trend: buy-and-hold comes out positive over that period :)

Returns were not thrown out or missed.

But it is asymmetric with respect to upward and downward movements; doesn't that bother you?

It doesn't bother me. The nature of upward and downward movements is inherently different.
 
lea >>:

The nature of upward and downward movements is inherently different.

This does seem to be the case, but it could be a consequence of the global context. That is, when that changes, we'll have to swap eurusd for usdeur :)

 
Candid wrote >>

P.S. Yuri, I may well have misled readers about your approach in all or part of this post; perhaps you could briefly formulate it yourself (if possible, without detriment to correctness)?

Oh no, spare me. I've already put so much effort into this. What's the point of repeating it all over again? Anyone who wants to can simply reread our polemics.

But about your comparison of two approaches I'll make a couple of remarks.

Candid wrote >>

It seems that here we can identify the micro-context with the state parameter, i.e. this rather coincides with Yuri's approach.
We see that a hypothesis is put forward that dividing into contexts by a particular characteristic (or a set of them) will make the expected payoff positive. This hypothesis is then tested in real-time trading (or in its imitation on history).

I would add: the microcontext can also be identified with a region of state-parameter values. As for the rest, everything is correct, taking into account all that was said before.

Candid wrote >>

The second approach, which I have tried to formulate in this thread, is that we first break history into contexts using a pre-selected algorithm for getting trading signals. Then we divide the obtained set of contexts into two (or more) subsets (context types), each of which is assigned one or another trading tactic (strategy). Then we try to find an algorithm for recognising context types in real time. This is done in the same way as in the first case: by making hypotheses about the effect of certain state parameters on the result and testing them. In neural-network terms, we in effect form "good" and "bad" training samples, although of course a neural network is only one possible approach to the problem.

The tastiest part of your post. I took the liberty of making a few extracts from it:

1. We first break history into contexts using a pre-selected algorithm for obtaining trading signals.

2. We divide the obtained set of contexts into two (or more) subsets (context types).

3. We try to find an algorithm for recognising context types in real time.

That is, the algorithm for getting trading signals, a highly subjective thing that depends entirely on our personal ideas about the strategy, the signals and the procedure for obtaining them, becomes the basis for our own, hand-made partitioning of history into contexts. We then divide these into types. Since no objective criteria are given for this either, it too apparently rests on our own perceptions/preferences/insights. And after all this gigantic work we must wrestle with a problem we created for ourselves: how do we now recognise them?
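For what it's worth, the three extracted steps can be sketched in a few lines of code. This is purely illustrative: an invented price series, a toy moving-average "signal algorithm", one made-up state parameter, and a deliberately crude nearest-centroid recogniser standing in for a real classifier or neural network; it is not the algorithm anyone in the thread actually uses.

```python
# Illustrative sketch of the three steps; all numbers invented.

prices = [1.0, 1.1, 1.2, 1.15, 1.3, 1.25, 1.2, 1.1, 1.05, 1.12, 1.3, 1.4]

def sma(series, n, i):
    """Simple moving average of the last n values ending at bar i."""
    return sum(series[i - n + 1:i + 1]) / n

# Step 1: the pre-selected signal algorithm carves history into contexts:
# each signal bar together with its state parameters.
signals = []
for i in range(3, len(prices) - 1):
    if sma(prices, 2, i) > sma(prices, 3, i):    # toy "buy" signal
        state = (prices[i] - prices[i - 3],)      # one state parameter: momentum
        outcome = prices[i + 1] - prices[i]       # realised next-bar result
        signals.append((state, outcome))

# Step 2: split the obtained set of contexts into two types by outcome.
good = [s for s, o in signals if o > 0]
bad = [s for s, o in signals if o <= 0]

# Step 3: try to recognise the context type in real time from the state
# parameter alone, here via the nearest class centroid.
def centroid(states):
    return sum(s[0] for s in states) / len(states)

def recognise(state):
    return ("good" if abs(state[0] - centroid(good))
            < abs(state[0] - centroid(bad)) else "bad")
```

Tellingly, even in this made-up series the same momentum value ends up in both the "good" and the "bad" sample, which is exactly the overlap of good and bad points in one context area discussed earlier in the thread.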

Candid wrote >>

In these terms, in the first approach, the operations of selection and recognition are simply the same.

The first approach looks less objective; that is, the second seems more suitable for minimising risks and maximising profits. Firstly, because of the profit-oriented context-separation algorithm, and secondly, because of the possibility of applying mathematical optimisation methods. The first, in its pure form, should not be subject to any optimisation at all. IMHO, of course.

The first approach cannot look less objective, because there is simply nothing less objective. In the second approach as described, I saw no objective grounds at all. By comparison, in the first approach the creative act is the parameterisation of the phase space. If it is adequate to the complexity of the system, then the partitioning of the phase space into contexts and their classification follow automatically. Therefore, in the first approach it is not the "operations of selection and recognition" that coincide, but the operations of selection and typing of contexts. And real-time context recognition (item 3) is automatic; there is no need to find a special algorithm for it. Thus the whole first approach rests on the adequacy of the parameterisation. Is that not an indicator of the model's objectivity?

But the second approach, unfortunately, runs into this very "we" at every step. Is that objectivity?

As for optimisation, I would not be especially glad of it. Firstly, it is precisely subjectively moulded systems with a large number of parameters that offer the widest scope for optimisation. Secondly, the profit-maximisation criterion (not that of a particular trading strategy or algorithm, but an objective one) is the very source of context formation in the first approach, so the second has no advantage here; I would even say the opposite, it is inferior because of its subjectivity. And thirdly, in the first approach there will in any case be parameters that need to be optimised. This has long been debated, so let me clarify: that optimisation will in essence be a customisation to the trader, for example to their trading horizon, risk level, and MM. We must exclude arbitrariness where it does not belong; and trading is a human game, so there will necessarily be some room for arbitrariness.

Candid wrote >>

However, in this regard I would point to Avals' assumption: any attempt to "objectively" separate contexts is, because of the high noise level, doomed to failure (or to degenerating into a fit). The wording may not coincide with the author's; let him correct me if anything is wrong.

Fortunately, there is an element of subjectivism in the second scheme as well; all hope rests on it :) . On the other hand, the first also tempts one to "improve" it through optimisation or by adding parameters (and, again, optimisation), which in fact brings that approach closer to the second, at least as regards the rakes and traps involved.

I will note just one thing: in the first approach it is possible to check the effectiveness of different parameters separately, and if a parameter turns out to be ineffective, to throw it away for good. One could, for example, check all TA indicators this way, both classical and the newest. The work, of course, is routine and far from trivial, but it does deliver a diagnosis. So the first approach has no need to add parameters; it simply needs parameters that are objective and adequate.

Who has them? :-)

 

Yeah, well, the important thing is that you agree with the classification.

Concerning the subjectivity of the second approach, I think you overdramatise the situation. We have quite an objective criterion for the signal-acquisition algorithm: the degree of proximity to ideal inputs. We can simply classify a context as "good" only if our inputs are sufficiently close to the ideal ones.

The problem we create for ourselves: well, yes, but at least we have clearly defined inputs and outputs. You see, I once spent quite a lot of time on ideal zigzag inputs, and in the end I concluded that this approach gives no "objective" automatic context recognition (in the terms of the current discussion) and no "objective" inputs and outputs. It simply turns out that any algorithm tuned to ideal inputs produces just as many bad inputs as good ones. That's the nature of the market. Though maybe I just haven't found the "right" parameterisation. If you find it, we can come back to the discussion :) .
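The "proximity to ideal inputs" criterion itself is simple to illustrate. The sketch below uses invented numbers: strict local price minima play the role of ideal zigzag buy points, and a fixed bar tolerance plays the role of "sufficiently close"; none of this is the actual zigzag machinery discussed in the thread.

```python
# Toy illustration of labelling entries by proximity to ideal inputs.
# All prices and the tolerance are invented.

prices = [1.2, 1.1, 1.0, 1.05, 1.15, 1.1, 1.0, 0.95, 1.0, 1.1, 1.2]

# ideal buy points: strict local minima (a crude stand-in for zigzag lows)
ideal = [i for i in range(1, len(prices) - 1)
         if prices[i] < prices[i - 1] and prices[i] < prices[i + 1]]

def label(entry_bar, tolerance=1):
    """'good' if the entry lies within `tolerance` bars of an ideal one."""
    return "good" if any(abs(entry_bar - j) <= tolerance for j in ideal) else "bad"

# a hypothetical system's entries, labelled by the proximity criterion
system_entries = [3, 5, 8]
labels = [label(e) for e in system_entries]
```

Here the ideal buys land on bars 2 and 7, so entries at bars 3 and 8 are labelled "good" and the entry at bar 5 "bad"; the whole subjective part is compressed into the choice of the tolerance and of what counts as an ideal input.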

Regarding optimization, I have the impression that you understand it only as optimization in the tester. Yes, indeed, in the first approach it is the only available way of optimization.

However, according to my new belief, the more subjective the better :) . So your post just warmed my soul :) .