Criterion for automatic selection of optimisation results.

 
LeoV >>:

Agreed. Adding complexity gives no guarantee of a better result.

I just mentioned it as one possible way to approach the task, that's all.

And the main idea was to split the optimization criteria into typical classes (multi-criteria optimization) without using filters that sift out results, because filters can throw away promising types of TS. Dead-end branches will be discarded at the end of optimization anyway, since only the results most relevant to each class will be selected.

 
joo wrote >>

I just mentioned it as one possible way to approach the task, that's all.

And the main idea was to split the optimization criteria into typical classes (multi-criteria optimization) without using filters that sift out results, because filters can throw away promising types of TS. Dead-end branches will be discarded at the end of optimization anyway, since only the results most relevant to each class will be selected.

Theoretically, maybe, but practically..... it is not clear how to implement it in practice so that your brain doesn't boil over during the "multi-criteria optimization"....)))) and you still have some strength left for trading afterwards.....))))

 
StatBars wrote >>...

joo wrote >>....

Sad as it is, you friends are absolutely right. There are no simple solutions here and there cannot be; that is what is sad, not the fact that you are both right.) I understand it myself, although this is the first time I have heard of "multi-criteria optimization". Googled it; it's an interesting but dark forest), the task in itself is comparable to successful trading.

What I'm trying to derive is just an approximation for practical everyday use; it is far from the theoretical ideal, but still better than what the vast majority of tester/optimizer users rely on. The rough criterion I gave above, put together on the fly in a couple of seconds, already performs better than the commonly used optimization criteria. Thinking it over together, we can come up with something more sensible. The idea of a linear equation is very appealing; the weights are the tricky part, but the criterion turns out to be very flexible, universal enough to suit different classes of TS (by using different weights), and there aren't that many of them...

And the world doesn't revolve around this criterion either: if nothing worthwhile comes out of it, perhaps after some thought we will work out a technique for using what we already have. That's not bad either.

So let's think about the problem from the point of view of practical application. We don't need a Nobel Prize, just a tool for work. The task is nothing much: to select a good optimization result); I no longer even mention the best one, I realize that's out of reach). We can also go the other way and think about a criterion for rejecting obviously bad results, although it comes down to the same thing.

 
Figar0 >>:

Agreed, the factor is not unimportant. Can you show how to calculate it in practice, how to put this distribution into mathematical form, given the full test result?


Variant 1

Take the cumulative sum of the results of the last N trades of the strategy; ideally it should be a straight line, or something very close to it. Let's fit a straight line: take a linear regression and estimate profitability by its slope angle. As for standard deviation as a risk measure, imho calculating the dispersion around the mean is only correct when the distribution curve is bell-shaped and symmetric. Otherwise, take a hypothetical series of trades 10, 10, 10, 10, 10, etc.: the cumulative sum is obviously a straight line, yet the variance calculated on it will depend on the slope angle, on the sample length and, if the line is broken, also on the degree of scatter. So what Sharpe suggests is not quite adequate, imho.

I measure the variance as the scatter of a random variable, in this case of the trade results, not from the mean but from the linear regression line. Then this "tricky" variance equals 0 on a straight line. So the straighter the curve, the smaller the tricky variance.

If we divide the slope angle of the linear regression by this tricky variance, we obtain a number characterizing the TS.

TS being compared should be calibrated to the same lot and the same number of last trades.
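
A minimal sketch of how this Variant 1 could be computed, assuming the trade results are already collected into an array (the function name and the details of combining slope and deviation are my own illustration, not a fixed recipe):

// Variant 1 sketch: slope of a linear regression over the cumulative sum of the
// last N trade results, divided by the "tricky variance" (scatter around the
// regression line instead of around the mean).  Hypothetical helper, not a
// built-in tester function.
double Variant1Criterion(double &tradeResults[], int N)
  {
   int total = ArraySize(tradeResults);
   if(N > total) N = total;
   if(N < 2) return(0.0);

   // cumulative sum of the last N trade results (the equity-like curve)
   double equity[];
   ArrayResize(equity, N);
   double sum = 0.0;
   int i;
   for(i = 0; i < N; i++)
     {
      sum += tradeResults[total - N + i];
      equity[i] = sum;
     }

   // least-squares fit equity[i] ~ a + b*i; the slope b estimates profitability
   double sx = 0, sy = 0, sxx = 0, sxy = 0;
   for(i = 0; i < N; i++)
     {
      sx  += i;
      sy  += equity[i];
      sxx += (double)i * i;
      sxy += i * equity[i];
     }
   double b = (N * sxy - sx * sy) / (N * sxx - sx * sx);
   double a = (sy - b * sx) / N;

   // "tricky variance": mean squared deviation from the regression line
   double dev2 = 0.0;
   for(i = 0; i < N; i++)
     {
      double d = equity[i] - (a + b * i);
      dev2 += d * d;
     }
   dev2 /= N;

   if(dev2 == 0.0) dev2 = 0.0000001;   // perfectly straight line: avoid division by zero
   return(b / dev2);
  }
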
Variant 2

https://www.mql5.com/ru/forum/122464

 
Criterion = the sum of the lengths of the segments in which the TS was in profit, divided by the sum of the lengths of the segments in which it was in drawdown. The profit segments and the drawdown segments may overlap. Segments can be measured in minutes or in the number of trades.
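
One possible reading of this criterion, sketched under my own assumptions: "in profit" is taken as the equity being above the starting level, "in drawdown" as the equity being below its running maximum (with this reading the two kinds of segments can indeed overlap), and segments are measured in trades.

// Hypothetical helper: ratio of the number of points where the cumulative
// result is above zero ("in profit") to the number of points below the running
// maximum ("in drawdown").  The interpretation of the segments is an assumption.
double ProfitToDrawdownLengthRatio(double &equity[])
  {
   int n = ArraySize(equity);
   if(n == 0) return(0.0);
   double peak = equity[0];
   int inProfit = 0, inDrawdown = 0;
   for(int i = 0; i < n; i++)
     {
      if(equity[i] > peak) peak = equity[i];
      if(equity[i] > 0.0)  inProfit++;      // above the starting level
      if(equity[i] < peak) inDrawdown++;    // below the running maximum
     }
   if(inDrawdown == 0) return(inProfit);    // never in drawdown
   return((double)inProfit / inDrawdown);
  }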
 

Figar0, what you are trying to estimate is robustness, and there is no single complex parameter that measures it. To start with, you need to find the minimal backbone of the system, e.g. an entry signal, an exit and one filter. All of them may have parameters; optimise the backbone as a whole. It must be profitable with the maximum number of trades and over a wide range of parameters, preferably on different symbols.

Then come the filters added for a particular pair and even for a particular piece of history (these may well be over-optimized). The variation of results over the range of parameter values is analysed separately for each filter, and a decision about the usefulness or uselessness of the filter is made according to some criteria. This is done to preserve statistical validity. The final system is then built. The system is assembled like a matryoshka doll, but the core should be really simple and effective, with the maximum number of trades, to guarantee that it exploits a real property of the market and not a phantom.

I.e. I think the key is the correct synthesis of the TS and a staged evaluation (each stage with its own priorities and evaluation parameters), rather than the choice of one global complex parameter for evaluating the TS in its final assembly. There is simply not enough information in the final assembly to assess robustness. imho

 

I use one ready-made criterion to assess an Expert Advisor's results in a first approximation, to choose among the variants. I remember reading about something similar in an interview after one of the championships. I call it the "quality of equity" (an estimate of its "smoothness", in other words): the ratio of the profit gained (with a constant lot, of course) to the maximal drawdown. There is a not-quite-obvious subtlety: it only makes sense to compare passes with roughly the same number of trades, so it cannot be used fully automatically.
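
For illustration, a rough sketch of this "quality of equity" figure, assuming the balance curve is passed in as an array of cumulative results (the function name is made up; the caveat about comparing only passes with a similar number of trades still applies):

// Hypothetical "quality of equity": net profit divided by the maximal drawdown
// of the cumulative-results curve (constant lot assumed for all passes).
double EquityQuality(double &equity[])
  {
   int n = ArraySize(equity);
   if(n < 2) return(0.0);
   double peak  = equity[0];
   double maxDD = 0.0;
   for(int i = 1; i < n; i++)
     {
      if(equity[i] > peak) peak = equity[i];
      double dd = peak - equity[i];
      if(dd > maxDD) maxDD = dd;
     }
   double profit = equity[n - 1] - equity[0];
   if(maxDD == 0.0) return(0.0);   // no drawdown at all: treated as undefined here
   return(profit / maxDD);
  }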

I am also reading an interesting book by mehanizator (an electronic version is available on his LiveJournal) about MTS creation. The things in it seem obvious, yet it puts everything into place. However, half of the book is specific to the stock market. This weekend I'm going to "polish up" a system from my last series of experimental Expert Advisors (I erase them from time to time and start a new series :) ), primarily for learning purposes. So I thought I'd offer it to people here for a more constructive discussion. The system is intended for EURUSD on D1.

As a matter of fact, I suggest topics for discussion:

1. Is it stable? (Try running it: the parameters slowly drift, but keep working from year to year!)
2. Can it be made to work on crosses and other pairs that are not strongly correlated with EURUSD? (Again: is it robust if it cannot?)

Files:
ea19.mq4  3 kb
 
Figar0 >>:

Sad as it is, you friends are absolutely right. There are no simple solutions here and there cannot be; that is what is sad, not the fact that you are both right.) I understand it myself, although this is the first time I have heard of "multi-criteria optimization". Googled it; it's an interesting but dark forest), the task in itself is comparable to successful trading.

What I'm trying to derive is just an approximation for practical everyday use; it is far from the theoretical ideal, but still better than what the vast majority of tester/optimizer users rely on. The rough criterion I gave above, put together on the fly in a couple of seconds, already performs better than the commonly used optimization criteria. Thinking it over together, we can come up with something more sensible. The idea of a linear equation is very appealing; the weights are the tricky part, but the criterion turns out to be very flexible, universal enough to suit different classes of TS (by using different weights), and there aren't that many of them...

And the world doesn't revolve around this criterion either: if nothing worthwhile comes out of it, perhaps after some thought we will work out a technique for using what we already have. That's not bad either.

So let's think about the problem from the point of view of practical application. We don't need a Nobel Prize, just a tool for work. The task is nothing much: to select a good optimization result); I no longer even mention the best one, I realize that's out of reach). We can also go the other way and think about a criterion for rejecting obviously bad results, although it comes down to the same thing.

If you want something simpler, but better than what you have now, use a linear equation with weights on the evaluation criteria.
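
A minimal sketch of that linear equation, assuming the per-pass criteria have already been normalised to a common scale and that the weights are picked per TS class (both arrays and the function name are illustrative):

// Hypothetical weighted score for one optimization pass: the linear
// combination of its (already normalised) evaluation criteria.
double WeightedScore(double &criteria[], double &weights[])
  {
   int n = ArraySize(criteria);
   if(ArraySize(weights) < n) n = ArraySize(weights);
   double score = 0.0;
   for(int i = 0; i < n; i++)
      score += weights[i] * criteria[i];
   return(score);
  }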

I forgot to suggest one more criterion: system symmetry. To avoid fitting to trends you could introduce several such criteria, though of course one is better. Simply take the difference between the number of BUY and SELL deals divided by the total number of deals; the criterion should be minimized (actually the division is not even necessary). This criterion may not suit every system, since I have seen systems that only sell, yet work successfully.
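
As a formula this is just |buys - sells| / total deals; a trivial sketch:

// Symmetry criterion from the post above: 0 for a perfectly balanced system,
// approaching 1 when nearly all deals are on one side.
double SymmetryCriterion(int buyCount, int sellCount)
  {
   int total = buyCount + sellCount;
   if(total == 0) return(0.0);
   return(MathAbs(buyCount - sellCount) / (double)total);
  }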

By the way, I notice people do not quite grasp that the main difficulty is not inventing evaluation parameters (there are plenty of those), but inventing a way to combine them. That is: there are 1000 passes, for each pass we have several indicators; how, based on those several indicators, do we choose the best strategy? Or the 5 best strategies?
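
To make the question concrete, here is one naive way it could be done (my assumption, not anyone's established method): give each pass a combined score, e.g. with a weighted sum like the WeightedScore() sketch above, and then keep the K passes with the highest scores.

// Hypothetical selection of the indices of the K best passes by a
// precomputed per-pass score (e.g. a weighted sum of normalised criteria).
void SelectBestPasses(double &scores[], int K, int &bestIndices[])
  {
   int n = ArraySize(scores);
   if(K > n) K = n;
   ArrayResize(bestIndices, K);

   bool used[];
   ArrayResize(used, n);
   int i, k;
   for(i = 0; i < n; i++) used[i] = false;

   for(k = 0; k < K; k++)              // simple repeated-maximum selection
     {
      int best = -1;
      for(i = 0; i < n; i++)
         if(!used[i] && (best < 0 || scores[i] > scores[best]))
            best = i;
      used[best] = true;
      bestIndices[k] = best;
     }
  }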

 

Azzx,

I wonder how many orders will actually be closed with a profit on a real account?

      if(OrderProfit() > 0)
         close_order(OrderTicket());
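
If the doubt is about what counts as "in profit" on a real account, one thing worth noting is that OrderProfit() does not include swap and commission, so a check like the one below might be closer to the intent (just a guess at the intent; close_order() is the EA's own helper):

      // include swap and commission when deciding whether the trade is really in the black
      if(OrderProfit() + OrderSwap() + OrderCommission() > 0)
         close_order(OrderTicket());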
 
StatBars >>:

i.e. there are 1000 passes, for each pass we have several indicators; how do we choose the best strategy based on those several indicators? Or the 5 best strategies?

These few indicators can be normalised and fed into the input of a neural network (or a committee of networks), with the profit on the OOS as the output.
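
The normalisation step could be as simple as a min-max rescaling of each indicator across all passes before it is fed to the network; a sketch under that assumption (the network or committee itself is out of scope here):

// Hypothetical min-max normalisation of one indicator measured over all
// optimization passes: maps the values into the [0,1] range in place.
void NormalizeIndicator(double &values[])
  {
   int n = ArraySize(values);
   if(n == 0) return;
   double lo = values[0], hi = values[0];
   int i;
   for(i = 1; i < n; i++)
     {
      if(values[i] < lo) lo = values[i];
      if(values[i] > hi) hi = values[i];
     }
   if(hi == lo) return;                 // constant indicator: leave as is
   for(i = 0; i < n; i++)
      values[i] = (values[i] - lo) / (hi - lo);
  }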