Machine learning in trading: theory, models, practice and algo-trading - page 2835

 
Aleksey Nikolayev #:

There is no happiness in life)

Even from the name, the loss function is the loss between the reference values and the model's outputs.

And on which side here is the sleeve in the form of profit?
 
Maxim Dmitrievsky #:
Even from the name, the loss function is the loss between the reference values and the model's outputs.

And on which side here is the sleeve in the form of profit?

In fact, we are moving from the area of ML to the more general area of optimisation. Still, "earn as much as possible" is not quite the same task as "be right as often as possible".

 
Andrey Dik #:

1. v

2. Here is a figure showing a curve of some hypothetical learning function.

Would you be satisfied if the grid stopped at local extremum 1? Or maybe 2? Or 3, somewhere in the middle? We don't know in advance how many local extrema there are; there may be 100500 of them or even more. That is why it is important to try to find the highest of all the local extrema that the algorithm is able to reach.

It is all very beautiful and provable in the sense of justifying the search for a global maximum.

But that is on history.

What if we add one more bar on the right? Would it still be beautiful, or would it collapse? The market is non-stationary. Do you believe the tester? It finds something, it's beautiful, in colour...

 
Maxim Dmitrievsky #:
In genetic optimisation, we take variables and maximise a criterion. You can't do that here, because this is classification: there is no direct relationship between profit and class labels. At best you'll get nonsense. That is why such criteria are put into eval_metrics instead.
The quality of classification can be evaluated somehow, so maximising this evaluation is the goal of optimisation.
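The point above — let the learner maximise a classification-quality score, and merely monitor profit on the side — can be sketched with toy numbers. The data and the per-trade PnL column here are hypothetical, purely for illustration:

```python
import numpy as np

def f1(y_true, y_pred):
    # F1 score for binary labels, computed by hand
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

y_true = np.array([1, 0, 1, 1, 0, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1])
trade_pnl = np.array([0.5, -0.2, 0.3, -0.1, 0.0, 0.4])  # hypothetical per-trade profit

score = f1(y_true, y_pred)               # what the learner would maximise
profit = np.sum(trade_pnl[y_pred == 1])  # what we merely monitor, as an eval metric
```

The classifier never sees `trade_pnl`; it only optimises the label-based score, which is exactly the separation between loss function and eval metric being discussed.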
 
Maxim Dmitrievsky #:
Even from the name, the loss function is the loss between the reference values and the model's outputs.

And on which side here is the sleeve in the form of profit?

There is a sleeve.

 
Andrey Dik #:
The quality of classification can be evaluated somehow. Therefore, maximising this evaluation is the goal of optimisation
Well, the loss between the original and the trained output is minimised. One could, of course, try adding a small term in the form of normalised profit, but is it necessary?
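The idea of a small profit-based term added to the usual loss could look something like this sketch. The mixing weight `lam` and the normalisation scheme are my assumptions, not anything stated in the thread:

```python
import numpy as np

def combined_loss(y_true, y_pred, pnl, lam=0.1):
    """MSE between targets and predictions, minus a small bonus for
    normalised profit (lam is a hypothetical mixing weight)."""
    mse = np.mean((y_true - y_pred) ** 2)
    # normalise profit to a comparable scale before mixing it in
    denom = np.sum(np.abs(pnl))
    norm_profit = np.sum(pnl) / denom if denom > 0 else 0.0
    return mse - lam * norm_profit

y_true = np.array([1.0, 0.0, 1.0, 0.0])
y_pred = np.array([0.9, 0.2, 0.7, 0.1])
pnl = np.array([0.4, -0.1, 0.2, -0.1])
loss = combined_loss(y_true, y_pred, pnl)
```

Whether the extra term helps at all is exactly the open question raised here; the profit term is discrete and noisy, which is the usual objection.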
 
СанСаныч Фоменко #:

It is all very beautiful and provable in the sense of justifying the search for a global maximum.

But that is on history.

What if we add one more bar on the right? Would it still be beautiful, or would it collapse? The market is non-stationary. Do you believe the tester? It finds something, it's beautiful, in colour...

It doesn't matter whether it is on history or in the future, and the tester itself has nothing to do with it.

What matters is the property of an algorithm (an optimisation algorithm on its own or as part of a grid) to find the global optimum of an evaluation criterion. I emphasise: the evaluation criterion. The evaluation criterion is not necessarily, or not only, profit. It can be anything; for example, is performance on OOS not a criterion (minimising the difference between the sample and OOS)? It's just a thought. The criteria can be anything and of any complexity. It is important to understand that the "Profit" criterion is a very ravine-like, discrete thing, so people try to come up with smoother, more monotonic evaluation criteria, which generally improves the quality of the optimisation itself and of neural network training in particular.

So, coming back to what I drew in that highly artistic picture: it is a clear illustration of the fact that, when neither the number nor the characteristics of the local extrema are known, the only way out is to search for the highest one reachable under limited computational resources.

Plateau - yes, there is such a notion, but it is not related to optimisation; it is a question of classifying similar sets of parameters by some attribute. Looking for a stable plateau is a separate, complex task.
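The "find the best local extremum you can within the compute budget" idea can be illustrated with a simple multi-start search on a made-up multimodal function. The test function, step size, and budget here are arbitrary choices for illustration, not anyone's actual optimiser:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # multimodal test function: many local maxima, global maximum near x ~ 0.28
    return np.sin(5 * x) * np.exp(-x * x)

def hill_climb(x, step=0.01, iters=200):
    # crude local search: move to a neighbour whenever it improves f
    for _ in range(iters):
        for cand in (x + step, x - step):
            if f(cand) > f(x):
                x = cand
    return x

# multi-start: local climbs from many random initial points,
# keep the best extremum found within the budget
starts = rng.uniform(-3, 3, size=50)
best = max((hill_climb(x0) for x0 in starts), key=f)
```

Each start converges to whatever local maximum its basin holds; only the starts that happen to land in the global basin reach the top, which is why the number of restarts (the compute budget) matters.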

 

I finally got my own loss function working; the derivative is represented as a product of the Sharpe ratio, the error, and the weights.

is_max_optimal=False indicates that the value should decrease, but since I also multiplied by -1, the opposite holds.

import numpy as np

class SharpeObjective(object):
  # Custom CatBoost objective: returns (first derivative, second derivative)
  # of the loss per object, with the first derivative scaled by a
  # Sharpe-style ratio of the absolute predictions
  def calc_ders_range(self, approxes, targets, weights):
    assert len(approxes) == len(targets)
    if weights is not None:
      assert len(weights) == len(approxes)

    # the ratio is the same for every index, so compute it once outside the loop
    preds = [abs(a) for a in approxes]
    sharpe = np.mean(preds) / np.std(preds)

    result = []
    for index in range(len(targets)):
      der1 = targets[index] - approxes[index]
      der2 = -1

      if weights is not None:
        # note: the Sharpe scaling is only applied when weights are given
        der1 = -1 * sharpe * der1 * weights[index]
        der2 = der2 * weights[index]

      result.append((der1, der2))
    return result

class Sharpe(object):
    # Custom CatBoost eval metric: Sharpe-style ratio of the absolute predictions
    def get_final_error(self, error, weight):
        return error

    def is_max_optimal(self):
        # True would mean "larger is better"; False here, see the note above
        return False

    def evaluate(self, approxes, target, weight):
        assert len(approxes) == 1
        assert len(target) == len(approxes[0])
        preds = np.abs(np.array(approxes[0]))
        sharpe = np.mean(preds) / np.std(preds)
        return sharpe, 0

from catboost import CatBoostRegressor

model = CatBoostRegressor(learning_rate=0.1, loss_function=SharpeObjective(), eval_metric=Sharpe())
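The metric logic above can be sanity-checked standalone, without CatBoost. This pure-NumPy sketch reproduces the same computation (predictions folded to their absolute values, then mean divided by standard deviation):

```python
import numpy as np

def sharpe_metric(approxes):
    # same folding as in Sharpe.evaluate: |predictions|, then mean/std
    preds = np.abs(np.asarray(approxes, dtype=float))
    return np.mean(preds) / np.std(preds)

val = sharpe_metric([0.5, -0.5, 1.0, -1.0])
```

Note this is not a Sharpe ratio of returns in the usual sense: it is computed on the model's raw outputs, which is one likely reason the training results did not improve.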
 
Evgeni Gavrilovi #:

I finally got my own loss function working; the derivative is represented as a product of the Sharpe ratio, the error, and the weights.

is_max_optimal=False indicates that the value should decrease, but since I also multiplied by -1, the opposite holds.

Did the training results improve?)
 
Maxim Dmitrievsky #:
Did the training results improve?)

No, unfortunately.

I'm looking at Lopez de Prado's website right now: https://quantresearch.org/Patents.htm

He has a new patent, issued in September (Tactical Investment Algorithms through Monte Carlo Backtesting).

Lots of valuable ideas; for example, he emphasises nowcasting (short-term forecasting).

Quote: "Short-range predictions are statistically more reliable than long-range predictions."

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3562025

Key findings from the coronavirus pandemic.

What lessons can we learn amid this crisis?

1. More nowcasting, less forecasting

2. Develop theories, not trading rules

3. Avoid all-regime strategies
