Mechanisation of optimal parameter selection. Finding a common denominator.

 
Mischek:


Easy, easy. It was proposed by the topic starter; I just agreed.

Why is it that as soon as you respond honestly and substantively to a suggestion to work constructively, you are immediately accused of megalomania?

It must be megalomania.


Accepted.

Let's stay on point, and no insults.

The rest in private...

 
Le0n:

Simultaneously hanging lots - the volume of open positions dangling between SL and TP at the same time...

The cumulative Stop Loss is the sum of the potential losses across all the lots that "might" be hit, and the "current" balance is what was in the account at the moment the positions carrying those potential losses were opened...
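For concreteness, a small Python sketch of those two quantities (all names here are mine, not anything from the thread): the cumulative Stop Loss summed over the open positions, compared with the balance that was in the account when they were opened.

```python
# Sketch of the quantities described above (hypothetical names):
# the cumulative Stop Loss is the sum of potential losses across all
# positions hanging between SL and TP, compared with the balance that
# was in the account when those positions were opened.

def cumulative_sl_risk(open_positions, balance_at_open):
    """open_positions: list of (lots, sl_distance_points, point_value) tuples.
    Returns the total potential SL loss and its share of the reference balance."""
    total_sl = sum(lots * sl_points * point_value
                   for lots, sl_points, point_value in open_positions)
    return total_sl, total_sl / balance_at_open

positions = [(0.5, 40, 10.0), (1.0, 25, 10.0), (0.2, 60, 10.0)]
total, share = cumulative_sl_risk(positions, balance_at_open=10_000.0)
print(f"cumulative SL = {total:.2f}, {share:.1%} of balance")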


That's it.

I've completely lost sight of what the purpose of this study might be - carry on without me.

 
tara:


That's it.

I've completely lost sight of what the purpose of this study might be - carry on without me.


Lesh, I could just picture the look on your face as you wrote that. Man, it brought tears to my eyes... Ugh...

By the way, "next" - where to, exactly?

Take me with you )))

 
Jump in
 
lasso:

IMHO.

1) The TS to be optimised should work with a constant lot.

2) No more than one position at a time, otherwise it is in fact two or more TS.

...........

Such specific points should be agreed on while still on shore; the fitting-versus-optimization debates can be left to the theoretical philosophers in other threads.


Yeah... will we ever manage to push off from that shore...? Still, let's try to come to terms...

- The TS being optimized may trade any lot size - whatever you like... In this context it doesn't matter...

- Any number of lots - in any direction

- It doesn't matter how many parameters you need to optimize

- Whether this process is called fitting or optimization is irrelevant

For comparing MTSs, fix only the size of the starting deposit and the test period, from - to...

But actually that's secondary in this topic as well... The task is declared in the title...

We need to discuss and, if possible, work out and/or agree on a common unit... by analogy: one gram is one thousandth of the weight of one cubic decimetre of water... :) ... and with such grams we can measure the IQ of advisors.

Let's look for a dimensionless coefficient that would reflect the quality of an advisor's performance, or the optimality of a parameter set... taking into account both the usual test metrics (FF, IO ...) and any others (possibly artificial)... Please speak up: what factors do you pay attention to when selecting?

 
Le0n:

Please speak up: what factors do you pay attention to when selecting?


I don't want to repeat myself, I already gave the link on the first page.

Have you read it? Just in case, I'll say it again.

The methodology works, but not the way I'd like it to. And a lot of water has flowed under the bridge since then. I gave up Excel...

............

If you're interested and have specific questions, I'm available to discuss.

 
Le0n:


Yeah... will we ever manage to push off from that shore...? Still, let's try to come to terms...

- The TS being optimized may trade any lot size - whatever you like... In this context it doesn't matter...

- Any number of lots - in any direction


Have you ever wondered why these ideas fail to produce positive results for 99.9% of the people implementing them?

And in vain.

To understand something you need to simplify it as much as possible. Decompose it into bricks, atoms...

But you do the opposite: any lot, any direction. Mush.

So at this point you and I are definitely not going to agree.

 
lasso:


To understand something you need to simplify it as much as possible. Decompose it into bricks, atoms...

that's for sure... And each atom must be tested separately and pass the robustness criteria.

I.e. we must evaluate both the robustness of the system as a whole, and its individual components as part of the system.
To evaluate the whole, it is logical to use data-grouping methods: divide all the data into parts and compare the indicator across the parts. If the values coincide (or do not differ much), that is good. The important thing here is which indicator we compare; it should measure the "goodness" of the system's results. IMHO you can devise various composite indicators or take standard ones like Sharpe or Sortino - all have their advantages and disadvantages - so it is better to take a simple one like the profit factor.
The grouping itself can vary. The simplest way is to divide the system's returns into N equal consecutive segments; grouping by random sampling is also possible.
Out-of-sample tests are done separately, though the logic is the same: grouping and comparing the criterion. The difference is that no optimization was performed on the out-of-sample data.
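To make the grouping check concrete, here is a minimal Python sketch (function names are mine, nothing agreed in the thread): split the trade sequence into N consecutive segments and compare the profit factor of each. A robust system shows similar values across segments.

```python
# Minimal sketch of the grouping check: per-segment profit factors.
# Trade results are assumed to be per-trade profits in account currency.

def profit_factor(trades):
    """Gross profit divided by gross loss; None when there are no losses."""
    gross_profit = sum(t for t in trades if t > 0)
    gross_loss = -sum(t for t in trades if t < 0)
    return gross_profit / gross_loss if gross_loss > 0 else None

def segment_profit_factors(trades, n_segments):
    """Split the trade sequence into n_segments consecutive parts
    and return the profit factor of each part."""
    size = len(trades) // n_segments
    return [profit_factor(trades[i * size:(i + 1) * size])
            for i in range(n_segments)]

# Example: a robust system should show similar PF in every segment.
trades = [120, -80, 45, -60, 200, -90, 30, -40, 150, -70, 60, -50]
print(segment_profit_factors(trades, 3))
```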
But it seems to me more promising to check each element of the system separately. The point is also to compare subsamples, but the criterion of their formation is the subject of evaluation.
That is, take a particular filter of the system - say, volatility. Suppose the system is expected to work better as volatility rises. Form a filter, e.g. ATR > X, and watch how the target criterion (profit factor) changes as X increases.
If PF grows with every increase of X, then the filter very likely works (is robust). Of course, as X grows the number of trades falls and total profit shrinks, so choosing a specific X is a compromise between trade frequency (profit) and risk. What matters is the monotonicity and smoothness of the target criterion as the filter strength changes.
If PF jumps up and down as X increases, then either our assumption is wrong or the volatility filter must be formulated differently.
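A hedged sketch of that threshold sweep (Python; pairing each trade's profit with its ATR at entry is an assumed data layout, not the thread's): sweep X and look for a roughly monotonic rise in PF as the filter tightens, while the trade count falls.

```python
# Filter-monotonicity check: sweep the volatility threshold X and watch
# how the profit factor behaves on the filtered subset of trades.

def profit_factor(trades):
    """Gross profit over gross loss; None when there are no losing trades."""
    gross_profit = sum(t for t in trades if t > 0)
    gross_loss = -sum(t for t in trades if t < 0)
    return gross_profit / gross_loss if gross_loss > 0 else None

def filter_sweep(trades_with_atr, thresholds):
    """For each threshold X, keep only trades opened when ATR > X
    and report (X, number of trades kept, profit factor)."""
    results = []
    for x in thresholds:
        kept = [profit for profit, atr in trades_with_atr if atr > x]
        results.append((x, len(kept), profit_factor(kept)))
    return results

# Illustrative (profit, ATR-at-entry) pairs: a working filter should show
# PF rising with X while the number of trades shrinks.
trades_with_atr = [(120, 0.0012), (-80, 0.0005), (45, 0.0010), (-60, 0.0004),
                   (200, 0.0015), (-90, 0.0006), (30, 0.0008), (150, 0.0014),
                   (-40, 0.0009), (-55, 0.0013)]
for x, n, pf in filter_sweep(trades_with_atr, [0.0004, 0.0008, 0.0012]):
    print(f"X={x}: trades={n}, PF={pf}")
```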
The same logic applies to choosing the entry level. Say there is a calculated level from which we enter. We artificially introduce a parameter: the shift from this level in points. If the target level is L, we enter at L+X and look at how PF changes as X varies, say from -20 to +20.
If the way we select the level L is optimal, PF will be maximal there (at X = 0) and will fall off gradually as we move away from it, with no multimodality.
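And a sketch of the level-shift test (Python; `backtest_pf` is a hypothetical callback that reruns the system with the entry level shifted by X points): trace PF over X from -20 to +20 and check that the profile has a single smooth peak.

```python
# Entry-level shift test: PF as a function of the shift X around level L.
# backtest_pf(shift) is a hypothetical function that reruns the system with
# the entry level moved by `shift` points and returns the profit factor.

def shift_profile(backtest_pf, shifts=range(-20, 21, 5)):
    """Profit factor for each shift X around the calculated level L."""
    return [(x, backtest_pf(x)) for x in shifts]

def is_unimodal(profile):
    """True if PF rises to a single peak and then falls (no multiple maxima)."""
    values = [pf for _, pf in profile]
    peak = values.index(max(values))
    rising = all(a <= b for a, b in zip(values[:peak], values[1:peak + 1]))
    falling = all(a >= b for a, b in zip(values[peak:], values[peak + 1:]))
    return rising and falling

# Example with a synthetic, well-behaved profile peaking at X = 0:
profile = shift_profile(lambda x: round(2.0 - 0.002 * x * x, 3))
print(profile, is_unimodal(profile))
```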
I.e. the idea is to take the system apart down to its screws and check the robustness of each one as a part of the system. Discard the superfluous, rework the doubtful.

There are, of course, system parameters that are not filters in essence - for example, the MA period (if you use one :)). Profitability will not, of course, grow monotonically as the period increases or decreases, but near the extremum of "goodness" it should rise and fall smoothly, as in the example with the level.

Another important aspect is to understand what the system earns on - or why others lose where you earn )))

 
lasso:


Have you ever wondered why these ideas fail to produce positive results for 99.9% of the people implementing them?

And in vain.

To understand something you need to simplify it as much as possible. Decompose it into bricks, atoms...

And you do the opposite: any lot, any direction. Mush.

So at this point you and I are definitely not going to agree.


... oh, but I have wondered - why do "any" ideas fail to produce results for 99.9% of their authors... :))

And to understand something you don't have to atomize everything, splitting it further... and deeper... and so on... Would you evaluate a painting (a Repin, say :) by examining individual pixels? That is destruction... OK? Come on, please. Let's be constructive.

We've been reading along here - enough idle talk...

"Mush" is probably not quite right, since mush implies a kind of homogeneity... it's more of a stew, or a vinaigrette... But it is precisely this medley of heterogeneous TSs that moves the pair's price and makes its movement so "disharmonious" :)) but that's a lyrical digression.

Let's get down to specific considerations on the subject.

 
Avals:

... so it is better to take a simple one like the profit factor.

And what if you don't see a profit factor in the tester at all - because there is no loss... it has long been agreed that we don't divide by zero... so what should we "take" then?
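The zero-loss corner case is real: the classic PF is gross profit divided by gross loss, which is undefined when there are no losing trades. A common workaround - my assumption here, not something agreed in the thread - is to cap the value or report infinity explicitly so parameter sets remain comparable:

```python
import math

# Zero-loss corner case: gross loss = 0 makes the classic profit factor
# undefined. One convention (an assumption, not a forum-agreed rule) is to
# cap it, or report math.inf, so comparisons between sets remain possible.

def safe_profit_factor(trades, cap=100.0):
    """Profit factor with an explicit convention for the no-loss case."""
    gross_profit = sum(t for t in trades if t > 0)
    gross_loss = -sum(t for t in trades if t < 0)
    if gross_loss == 0:
        # No losing trades: classically undefined; return a capped value
        # (or math.inf), and NaN when there is no profit either.
        return cap if gross_profit > 0 else math.nan
    return gross_profit / gross_loss

print(safe_profit_factor([10, 25, 40]))         # no losses -> capped at 100.0
print(safe_profit_factor([100, -50, 30, -25]))  # 130 / 75 = 1.733...
```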