To follow up

 
Sorento wrote >>

could the size of the sample have an impact?

The sample size is sufficient.

And there is also the "string stretched to the limit" state - in it there is virtually no variance.

Maybe that is the state you captured?

If you integrate the second series, its final value is about 20% of the final price. I think that if there were practically no dispersion, the contribution of this series would be smaller.
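A minimal sketch of that check on synthetic data (my reconstruction: the state filter, the drift-free path and all thresholds are assumptions; only the "integrate the second series and compare with the last price" logic comes from the post):

```
import numpy as np

rng = np.random.default_rng(0)
price = 1.30 + np.cumsum(rng.normal(0.0, 1e-4, 100_000))  # synthetic EURUSD-like path
returns = np.diff(price)

# hypothetical state filter selecting the "second series" of returns
in_state = np.abs(returns) < 5e-5
second_series = returns[in_state]

# "integrating" the series = cumulative summation; its final value as a
# share of the final price is the contribution discussed above (~20%)
contribution = np.cumsum(second_series)[-1]
print(contribution / price[-1])
```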

What is the data?

eurusd, all the history I had in the terminal

 

Folks, I'm going to jump in a little off topic here, eh?

I've been in a celebratory mood for the third day now, and I'd like to figure something out (I've already got what I usually can't figure out, so I'm good to go :)))

In short, suppose we have somehow determined the exact algorithm by which the brokerage company filters (in the broadest sense) its quotes. Can this remarkable fact be used in a real situation for profit? I've already racked my brain over it...

 

No, no, don't go off-topic, alsu. You'd better search the forum; the topic comes up regularly.

 
Mathemat >>:

No, no, don't go off-topic, alsu. You'd better search the forum; the topic comes up regularly.

I'd rather sort it out myself; it wasn't for nothing that I made a run to the shop :))

 
Mathemat wrote >>

No, no, don't go off-topic, alsu. You'd better search the forum; the topic comes up regularly.

So you said. But now that you've said it, let's go off-topic after all. :-)))

 
lea >>:

Both distributions are skewed toward positive values.


Is this a consequence of the global trend over the "reporting" period, or were there returns that fell into neither the first nor the second sample?
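One way to tell the two explanations apart, sketched on synthetic data (the drift value and the sample filters are assumptions): subtract the average drift over the period and see whether the shift toward positive values survives, and count the returns that fall into neither sample:

```
import numpy as np

rng = np.random.default_rng(1)
returns = rng.normal(2e-6, 1e-4, 200_000)  # a small positive drift stands in for the global trend

for label, x in (("raw", returns), ("drift removed", returns - returns.mean())):
    print(f"{label:13s} mean = {x.mean():+.2e}  share > 0 = {(x > 0).mean():.4f}")

# the second half of the question: do the two samples exhaust all returns?
sample1 = returns[returns > 1e-5]
sample2 = returns[returns < -1e-5]
print("left out of both samples:", len(returns) - len(sample1) - len(sample2))
```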


The Vaga-Ising model looks curious to me primarily because it assigns a certain "physical" meaning to its parameters, i.e. one can try to model them using additional information. But it is asymmetric with respect to up and down moves - isn't that troubling? It seems that markets (inflation-adjusted, in the case of securities) are rather symmetric in this respect.

 
Perhaps while people are off somewhere we can still philosophize :)
Svinozavr >>:

Partitioning by context can, in the general case, follow real trading models. But here's the thing. I arrived at partitioning by micro-context, which should be (a) a common basis, a building block for identifying/analyzing/forecasting everything else, and therefore (b) contain sufficient information about the market for that, and (c) be based on a relatively settled (quasi-stationary) process.

It seems that here we can identify the micro-context with the state parameter, i.e. this rather coincides with Yuri's approach.
That is, a hypothesis is put forward that dividing into contexts by a particular characteristic (or a set of them) will make the expected payoff positive. Then the hypothesis is tested in real-time trading (or in its imitation on history).

The second approach, which I tried to formulate in this thread, is that we first break the history into contexts using a pre-selected algorithm for generating trading signals. Then we divide the resulting set of contexts into two (or more) subsets (context types), each of which is associated with one or another trading tactic (strategy). Then we try to find an algorithm for recognizing the context types in real time. This is done in the same way as in the first case: by making hypotheses about the effect of certain state parameters on the result and testing them. In neural-network terms, we are in effect forming "good" and "bad" training samples, although of course an NS is only one of the possible approaches to the task.
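A hedged sketch of this pipeline (the window split, the features, the "buy over the next window" tactic and the nearest-centroid recognizer are all illustrative stand-ins, not the thread's actual algorithms):

```
import numpy as np

rng = np.random.default_rng(2)
price = np.cumsum(rng.normal(0.0, 1.0, 10_000))

# 1) break the history into contexts with a fixed, pre-selected rule
windows = [price[i:i + 100] for i in range(0, len(price) - 100, 100)]

def features(w):
    """State parameters of a context (illustrative choices)."""
    return np.array([w.std(), w[-1] - w[0]])

X = np.array([features(w) for w in windows])

# 2) label each context "good"/"bad" by the profit a trivial long tactic
#    earns over the NEXT window (the tactic trades after the context is seen)
profit = np.array([windows[i + 1][-1] - windows[i + 1][0]
                   for i in range(len(windows) - 1)])
y = (profit > 0).astype(int)
X = X[:-1]                                   # align features with labels

# 3) real-time recognition: a nearest-centroid rule stands in for the NS
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def recognise(f):
    return int(np.argmin(np.linalg.norm(centroids - f, axis=1)))  # 1 = "good"

print("first context recognised as type:", recognise(features(windows[0])))
```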

In these terms, in the first approach the selection and recognition operations simply coincide.

The first approach looks less objective; that is, the second one is better suited to minimizing risk and maximizing profit - firstly, because of the profit-oriented context-separation algorithm, and secondly, because of the possibility of applying mathematical optimization methods. The first one, in its pure form, should not be subject to any optimization at all. IMHO, of course.

However, in this regard I would draw attention to Avals' assumption: any attempt to separate contexts "objectively" is, because of the high noise level, doomed to failure (or to turning into a curve fit). The wording probably doesn't match the author's exactly; let him correct me if anything is wrong.

Fortunately, the second scheme also has an element of subjectivity - all hope rests on it :) . On the other hand, the first one also carries the temptation of "improvement" by means of optimization or by adding parameters (and then optimization again). Which, in fact, brings this approach closer to the second one, at least as far as the rakes and traps are concerned.


P.S. Yury, I have surely managed in this post to mislead readers about your approach, in whole or in part; perhaps you could formulate it briefly yourself (if possible, without detriment to correctness)?

 
Candid >>:

Perhaps while people are off somewhere we can still philosophize :)

Welcome! )))

It seems that here we can identify the micro-context with the state parameter, i.e. this rather coincides with Yuri's approach.
That is, a hypothesis is put forward that dividing into contexts by a particular characteristic (or a set of them) will make the expected payoff positive. Then the hypothesis is tested in real-time trading (or in its imitation on history).

And so it really is. Here's a simple check on a series formed by a trivial ZZ.
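For reference, a minimal reconstruction of what a "trivial ZZ" series might look like (the actual construction behind the check above is not shown, so the threshold logic here is an assumption):

```
import numpy as np

def zigzag(price, threshold):
    """Indices of alternating swing extremes: the current leg reverses once
    price retraces from its running extreme by more than `threshold`."""
    pivots, direction, ext_i = [], 1, 0        # start by assuming an up leg
    for i in range(1, len(price)):
        if direction * (price[i] - price[ext_i]) > 0:
            ext_i = i                          # leg extended: new extreme
        elif direction * (price[ext_i] - price[i]) > threshold:
            pivots.append(ext_i)               # retracement exceeded: fix pivot
            direction, ext_i = -direction, i   # reverse the leg
    return np.array(pivots)

rng = np.random.default_rng(3)
price = 1.30 + np.cumsum(rng.normal(0.0, 1e-4, 50_000))
piv = zigzag(price, 0.0020)                    # 20-pip threshold, illustrative
legs = np.diff(price[piv])                     # the series the ZZ forms
print(len(piv), legs[:4])
```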

The second approach, which I tried to formulate in this thread, is that we first break the history into contexts using a pre-selected algorithm for generating trading signals.

That is quite a separate topic - a very interesting approach! Well, as for the micro-context, it has... it definitely has something to do with it!

Then we divide the resulting set of contexts into two (or more) subsets (context types), each of which is associated with one or another trading tactic (strategy). Then we try to find an algorithm for recognizing the context types in real time. This is done in the same way as in the first case: by making hypotheses about the effect of certain state parameters on the result and testing them. In neural-network terms, we are in effect forming "good" and "bad" training samples, although of course an NS is only one of the possible approaches to the task.

In these terms, in the first approach the selection and recognition operations simply coincide.

Yes. Methods are another matter. The main thing is: what for?

The first approach looks less objective; that is, the second one is better suited to minimizing risk and maximizing profit - firstly, because of the profit-oriented context-separation algorithm, and secondly, because of the possibility of applying mathematical optimization methods. The first one, in its pure form, should not be subject to any optimization at all. IMHO, of course.

Of course it shouldn't. Why would it? "Or am I not a great Russian writer?" ))) In all seriousness: the basics, by definition, should not be "variable".

However, in this regard I would draw attention to Avals' assumption: any attempt to separate contexts "objectively" is, because of the high noise level, doomed to failure (or to turning into a curve fit). The wording probably doesn't match the author's exactly; let him correct me if anything is wrong.

But we agreed that for us it's about money. What noise? You simply measure your profit according to your understanding of it. Naturally, no need to get carried away - it's clear what to expect from the market.

Fortunately, the second scheme also has an element of subjectivity - all hope rests on it :) . On the other hand, the first one also carries the temptation of "improvement" by means of optimization or by adding parameters (and then optimization again). Which, in fact, brings this approach closer to the second one, at least as far as the rakes and traps are concerned.

Right. That's normal.

 
Svinozavr >>:

But we agreed that for us it's about money. What noise? You simply measure your profit according to your understanding of it. Naturally, no need to get carried away - it's clear what to expect from the market.

I seem to have overdone it in my quest for brevity. My understanding of Avals' assumption is that by making "subjective" hypotheses we use our understanding of how the market functions, i.e. external information. In essence, we go beyond TA (there, you were the first to use the term :) ). This gives an additional filter, without which we would see nothing but noise in the market.

 

Continuing with the idea of possible coordinates.

It seems to me that if we use them, there are nine possible evaluations of the context.

Zero - the unambiguous "sitting on the fence". ;)


"phase"... ;)