Mechanisation of optimal parameter selection. Finding a common denominator.

 
Good day! I am convinced that every MTS builder (or at least the vast majority of them) has asked themselves: which of the optimiser's results are actually optimal? This topic has, of course, been touched upon on the forum many times, and the simple rules (higher profitability, lower drawdown, a smooth balance curve) have been memorised like the multiplication table. However, there has been no clear formalisation of the selection criteria and, most importantly, no analysis of their relative weight. Yes, exporting the tables and then processing them in MS Office is understandable and accessible, but, alas, not effective, at least in terms of "result" versus "time spent". With a hundred or two hundred "result sets" it is probably not worth the trouble, but when you have a hundred or two hundred thousand sets... Do you feel the difference? And that is before you take the tester's quirks into account... So, I propose we discuss "optimising the optimisation" and work out a common (as far as possible) coefficient for estimating the goodness of a set of optimisation parameters for any MTS.
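To make the scale of the problem concrete, here is a minimal sketch of what a first mechanised pass over the optimiser's output could look like, assuming the tester report has been exported to CSV. The file name and the column names (Profit, Drawdown, Trades) are assumptions for illustration, not a fixed format:

```python
# A minimal sketch of mechanised screening of optimiser results.
# Assumption (not a fixed format): the tester report was exported to CSV
# with columns named Profit, Drawdown and Trades; adjust to your export.
import csv

def load_results(path):
    """Read optimiser result sets from a CSV export of the tester report."""
    with open(path, newline="") as f:
        return [{"profit": float(row["Profit"]),
                 "drawdown": float(row["Drawdown"]),
                 "trades": int(row["Trades"])}
                for row in csv.DictReader(f)]

def crude_score(r):
    """Placeholder ranking: the memorised rules (more profit, less drawdown)
    squeezed into one number. The thread is about replacing this with a
    properly justified coefficient."""
    return r["profit"] / (1.0 + r["drawdown"])

results = load_results("optimiser_report.csv")  # hypothetical file name
for r in sorted(results, key=crude_score, reverse=True)[:20]:
    print(r)
```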
 

If this thread becomes a worthy continuation of those discussions and finds answers to the questions raised here, I will be very happy.

..........

Moderators: Gentlemen, I ask you to take this thread under special control and to cut short at once any attempt to drag it into off-topic flooding.

To the members of the community: Colleagues, please speak strictly to the point. Personal exchanges and the settling of scores belong in private messages.

 
If we imagine that "quality of set" = (profit / (time * depo)) * K, then this very K (a KPI, if you like) would significantly simplify the analysis of optimisation results (indeed, would let us mechanise it) and would probably, in the future, allow different MTS to be compared objectively...
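As a sketch only, the formula above written out in code, with one entirely hypothetical choice of K (the whole point of the thread is that K is not yet defined):

```python
def set_quality(profit, time_days, depo, k):
    """Quality of set = (profit / (time * depo)) * K, as written above.
    K is exactly the coefficient this thread is trying to pin down."""
    return profit / (time_days * depo) * k

# One hypothetical K, for illustration only: reward a large number of
# trades, punish deep drawdown. Not a claim about the "right" K.
def example_k(trades, max_drawdown_pct):
    return trades / (1.0 + max_drawdown_pct)

# A year of trading: 5000 profit on a 10000 deposit, 280 trades, 15% drawdown.
print(set_quality(profit=5000.0, time_days=365, depo=10000.0,
                  k=example_k(trades=280, max_drawdown_pct=15.0)))
```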
 
lasso:

...finds answers to the questions raised here, I will be very happy.

Since then I have become quite well versed in these matters. :) Nothing comforts me: each class of TS has its own criteria, found by trial and error with the system's peculiarities in mind. I cannot use universal ones: they work here and fail there... I believe there is a big difference between a scalper and a reversal system, and you cannot measure them with one yardstick.
 
If you don't draw a clear line between optimisation and curve-fitting, you can forget about the thread's objectives straight away.
 
Mischek:
If you don't draw a clear line between optimisation and curve-fitting, you can forget about the thread's objectives straight away.
Is there such a boundary?)
 
OnGoing:
Is there such a boundary?)


1. It doesn't matter; it's just that if you don't mark the boundary, the thread will in effect have to be called "Mechanisation of optimal fitting-parameter selection".

2. Of course there is. You can argue over the wording for a long time, a very long time, but if you don't get past this stretch of the road, the thread will, again, in effect be called "Mechanisation of optimal fitting-parameter selection".

 
Mischek:
If you don't draw a clear line between optimisation and curve-fitting, you can forget about the thread's objectives straight away.

Let's assume that we don't need to delimit anything... just assume... :))) and that we have to measure the length of the boa constrictor not in parrots or monkeys, but with some "common" yardstick...
 
imho, an impressive muddle of thoughts from the thread starter.
First things first:
1. A criterion is a necessary and sufficient condition; there is always exactly one.
2. The criterion is entirely determined by the goal of the optimisation.
3. And the goal here is either to estimate the quality of a set, or the adequacy of the optimisation parameter set, or to compare different MTS...
4. So which of these is it going to be?
 
Le0n:

Let's assume that we don't need to delimit anything... just assume... :))) and that we have to measure the length of the boa constrictor not in parrots or monkeys, but with some "common" yardstick...

I agree to assume. But so that we don't roll back later, I need to know the terms of the assumption. For example: curve-fitting does not exist in nature, everything is optimisation. OK.

Now we need to understand: what is hidden behind "the goodness of an optimisation parameter set"?

 
Mischek:
If you don't draw a clear line between optimisation and curve-fitting, you can forget about the thread's objectives straight away.

Copied from the thread "Where is the line between fitting and actual patterns?"

I think this is the boundary.

=========================================================

The criterion (or line) sought in this thread should not depend on the type of TS.

I cited this TS only as a clear example for everyone.

.......................

OK. Let's pose the problem from the opposite direction:

Take any available TS and, by means of any optimisation in the tester (even the most severe over-optimisation), finally get it to produce the following results:

1) Number of trades per year: at least 250-300.

2) Mathematical expectation: at least two spreads.

3) Recovery factor: four, at a minimum.

.............

Who can present a tester report with these results?

I can already see a forest of hands...

Ah yes, I completely forgot:

4) Testing range: all available history, from 1999 to 12.2010 (12 years).
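A small sketch of what checking a report against these four thresholds might look like. The field names are hypothetical (map them onto your own tester report), and "two spreads" is read literally as expectation >= 2 * spread in the same units:

```python
# Sketch of a pass/fail check against the four criteria above.
# Field names are hypothetical; map them onto your own tester report.

def passes_challenge(trades_per_year, expectation, spread,
                     recovery_factor, years_tested):
    return (trades_per_year >= 250          # 1) at least 250-300 per year
            and expectation >= 2 * spread   # 2) at least two spreads
            and recovery_factor >= 4.0      # 3) recovery factor of four, minimum
            and years_tested >= 12)         # 4) full history, 1999 to 12.2010

# Example: 280 trades/year, expectation of 3.0 points on a 1.5-point spread,
# recovery factor 4.2, tested on 12 years of history.
print(passes_challenge(280, 3.0, 1.5, 4.2, 12))  # True
```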

=====================

If anyone can show something like this, it would be appreciated.

PS: And I would probably be surprised as well. ))