Machine learning in trading: theory, models, practice and algo-trading - page 2832

 

The layer between R and the chair is just too thin.

another unrealised fantasy

 

Why do you delete your posts, paranoid? So you don't get poked again for getting it wrong? :)

How many times can you screw up like this?

 
Maxim Dmitrievsky #:

So you built a big project in R and put it on a server. Who is going to maintain it? No one, because there aren't that many such specialists and nobody wants to learn R just for the sake of statistics.

With Python, hire any student for a stick of sausage and you'll be fine.


Who needs students with a stick of sausage?

They need students who know statistics and ML, which takes 5 years to study. And then it is still desirable to have worked in a specialised organisation. R or Python itself can be taught in a week, since all such statistics students already know C++.

But for people who have NOT studied statistics for 5 years, R is much more useful than Python: R contains only what is needed, everything is pre-chewed and documented, since it is a specialised language after all.

 
СанСаныч Фоменко #:

Who needs students with a stick of sausage?

They need students who know statistics and ML, which takes 5 years to study. And then it is still desirable to have worked in a specialised organisation. R or Python itself can be taught in a week, since all such statistics students already know C++.

But for people who have NOT studied statistics for 5 years, R is much more useful than Python: R contains only what is needed, everything is pre-chewed and documented, since it is a specialised language after all.

Believe me, a student will learn statistics in 5 days for a stick of sausage, and plenty of other things along the way.

The main condition for success is that the student must be hungry.

Meanwhile, we've been talking about the same thing here for months and years.

 

Why even discuss the correctness of optimisation? Local, global - I don't care.

Dik's question is purely theoretical and has NO practical value: even perfectly correctly found extrema refer to the PAST, and with the arrival of a new bar there will almost always be new extrema, unknown to us. Remember the tester. It finds extrema. So what? An optimum from the tester is worthless unless there is some reason to believe it will survive into the future. But the lifetime of an optimum has NOTHING to do with the correctness or precision of finding that optimum, which is what Dik writes about.
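
A toy illustration of that point (synthetic random-walk prices and a hypothetical moving-average rule, invented here purely for illustration - on a random walk any optimum is pure noise):

import numpy as np

rng = np.random.default_rng(0)

# Synthetic "price" series: a random walk, so any optimum is noise.
prices = np.cumsum(rng.normal(size=2000))

def profit(period, segment):
    # P&L of a naive moving-average rule on one segment:
    # long above the MA, short below it.
    ma = np.convolve(segment, np.ones(period) / period, mode="valid")
    px = segment[period - 1:]
    signal = np.sign(px - ma)[:-1]
    return np.sum(signal * np.diff(px))

past, future = prices[:1000], prices[1000:]

# The "tester": exhaustive search for the best period on the PAST window.
periods = range(2, 200)
best_past = max(periods, key=lambda p: profit(p, past))
best_future = max(periods, key=lambda p: profit(p, future))

print("optimum on the past window:  ", best_past)
print("optimum on the future window:", best_future)  # almost always different

The extremum is found perfectly correctly on the past window; it simply stops being the extremum once new data arrives.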

 
Maxim Dmitrievsky #:

Believe me, a student will learn statistics in 5 days for a stick of sausage, and plenty of other things along the way.


Statistics is taught for 5 years, and not everyone can be taught it; candidates are specially selected at the entrance exams.

 
СанСаныч Фоменко #:

Statistics is taught for 5 years, and not everyone can be taught it; candidates are specially selected at the entrance exams.

If you drop the details and give an applied problem from the real world, the process goes faster.

Most of the stupidity in training comes when a person does not understand why the subject is needed and has never encountered it in real life. They don't see the end goal.

 

Can someone tell me how to make a custom metric for CatBoost? I need Sharpe.

The result of training the model with my version is almost the same as with RMSE, so there is an error somewhere in the code.

preds and target are returns (a[i] - a[i+1])

import numpy as np
from catboost import CatBoostRegressor

class Sharpe(object):
    def get_final_error(self, error, weight):
        return error

    def is_max_optimal(self):
        # Larger metric values are treated as better.
        return True

    def evaluate(self, approxes, target, weight):
        assert len(approxes) == 1
        assert len(target) == len(approxes[0])
        preds = np.array(approxes[0])
        target = np.array(target)            # NB: target is never used below
        data = [i if i > 0 else -1*i for i in preds]   # absolute values of preds
        sharpe = np.mean(data)/np.std(preds)           # = mean(|preds|) / std(preds)
        return sharpe, 0

model = CatBoostRegressor(learning_rate=0.1, n_estimators=2000, eval_metric=Sharpe())
 
СанСаныч Фоменко #:

But the lifetime of an optimum has NOTHING to do with the correctness or precision of finding that optimum, which is what Dik writes about.

My last name is not declined.

The question is not whether the global optimum will change (it will necessarily change), but whether the algorithm can find a global extremum at all. If you don't care, you can just initialise the network weights with random numbers and be done with it, because what difference does it make whether the extremum found is global or local?))
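
A minimal sketch of that distinction (a hypothetical 1-D function and plain gradient descent, nothing from anyone's real network): a local method converges to whichever basin the random start happens to land in, so "an extremum was found" is not the same as "the global extremum was found".

import numpy as np

# Multimodal test function: global minimum near x = -1.0, local one near x = +0.96.
f  = lambda x: x**4 - 2*x**2 + 0.3*x
df = lambda x: 4*x**3 - 4*x + 0.3    # its derivative

rng = np.random.default_rng(1)
for start in rng.uniform(-2.0, 2.0, size=5):
    x = start
    for _ in range(500):             # plain gradient descent
        x -= 0.01 * df(x)
    print(f"start {start:+.2f} -> extremum at {x:+.3f}, f = {f(x):+.4f}")

# Each run ends at a genuine extremum, found perfectly "correctly",
# yet only the starts that land in the left basin reach the global one.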
 
Evgeni Gavrilovi #:

Can someone tell me how to make a custom metric for CatBoost? I need Sharpe.

The result of training the model with my version is almost the same as with RMSE, so there is an error somewhere in the code.

preds and target are returns (a[i] - a[i+1])

I don't know much about Python or CatBoost, but I'll ask some stupid questions)

1) What is data for, and why isn't the mean taken over preds?

2) It seems that for gradient boosting you also need to supply formulas for the gradient and the Hessian?
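
For what it's worth, a sketch of how the metric might look under one reading of the task (my assumption, not something stated above: the goal is the Sharpe ratio of a strategy that trades the sign of the prediction, and target is the realised return per bar). The is_max_optimal / evaluate / get_final_error interface is the one CatBoost documents for custom eval metrics; only the body of evaluate changes.

import numpy as np
from catboost import CatBoostRegressor

class SharpeMetric(object):
    # Sharpe ratio of a strategy that is long when the prediction is
    # positive and short when it is negative (assumed reading of the task).
    def is_max_optimal(self):
        return True                              # larger Sharpe is better

    def evaluate(self, approxes, target, weight):
        assert len(approxes) == 1
        assert len(target) == len(approxes[0])
        preds = np.array(approxes[0])
        target = np.array(target)
        strat_returns = np.sign(preds) * target  # realised P&L per bar
        std = np.std(strat_returns)
        sharpe = np.mean(strat_returns) / std if std > 0 else 0.0
        return sharpe, 0                         # weight sum is unused here

    def get_final_error(self, error, weight):
        return error

model = CatBoostRegressor(learning_rate=0.1, n_estimators=2000,
                          eval_metric=SharpeMetric())

Also note that eval_metric only scores the model on the eval set (for best-model selection and early stopping); the trees are still grown against loss_function, which defaults to RMSE - that alone would explain training results that look almost the same as with plain RMSE. To actually optimise a custom objective you would need to supply a custom loss_function with first and second derivatives, which is what question 2 above is getting at.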
