Machine learning in trading: theory, models, practice and algo-trading - page 2213

 
mytarmailS:

So you train the network for "max profit". That is training on a single criterion ("max profit").


Alexander Alexandrovich says here that the network finds "do not trade" to be the best solution. I can't figure out how he managed that, but okay...

So if the network decides "not to trade", then we have to add one more criterion (a minimum number of trades the network must make): "min number of trades".


It turns out we already need to optimize on two criteria (or on ten).

We can't normalize anything here, because we don't know the final result.

There are a lot of targets. Usually we have a two-fold target: maximum profit, and not draining the balance. Chasing profit carries the risk of draining it.

At a nuclear power plant there are 19 to 30 parameters. The goal is maximum, stable output while the reactor neither shuts down nor explodes. Run it at maximum output and it may explode; back off too far and it definitely won't explode, but it may shut down.

Different boundary states, or classes.
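As a rough illustration (not anyone's actual code), here is one way such competing criteria can be folded into a single fitness value for an evolutionary optimizer. `simulate_trades` and the `pnl` field are hypothetical placeholders for a backtest of the candidate weights.

```python
import numpy as np

def fitness(weights, prices, min_trades=20, dd_penalty=2.0):
    """Combine "max profit", "enough trades" and "don't drain the balance"
    into one number for an evolutionary optimizer to maximize."""
    trades = simulate_trades(weights, prices)        # hypothetical backtest helper
    pnl = np.array([t.pnl for t in trades])

    # The degenerate "do not trade" solution gets a large penalty, graded by
    # trade count so the optimizer still has a direction to move in.
    if len(pnl) < min_trades:
        return -1e6 + len(pnl)

    profit = pnl.sum()
    equity = np.cumsum(pnl)
    max_drawdown = np.max(np.maximum.accumulate(equity) - equity)

    # Profit minus a weighted drawdown term: "max profit, but don't blow up".
    return profit - dd_penalty * max_drawdown
```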

 
mytarmailS:

probably....

========================

made a large test sample

the square marks the piece of the test sample (new data) that I showed

Anyway, on 5-minute bars the commission will eat it all up.

But it is possible to synthesize interesting models


The model training needs to be put inside the fitness function right away and checked on the validation and test samples.

I've done it very badly so far.

Train the system on data from a couple of days ago up to now, then test it on data from a week or two back. See what happens. You'll see a lot of interesting things.
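A rough sketch of putting the training itself inside the fitness function, so each candidate is scored on data it was not fitted on; `build_model` and `backtest_profit` are hypothetical helpers, not an existing API:

```python
def fitness_with_training(hyperparams, train, valid):
    model = build_model(hyperparams)        # hypothetical model constructor
    model.fit(train.X, train.y)             # the training itself runs inside the fitness call
    return backtest_profit(model, valid)    # hypothetical: profit of the model's signals on unseen data
```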

 
Maxim Dmitrievsky:

oops, they're not clear.

forget it....

I didn't get your resampling either.

It's often hard to understand something complicated.)

Valeriy Yastremskiy:

There are a lot of targets. Usually a two-fold target: maximum profit and not draining the balance. Chasing profit carries the risk of draining it.

You can do anything with fitness functions....

It's the most "free" way of telling the network what you want from it...
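For example, a preference that is awkward to state as a standard loss but trivial as a fitness term: reward equity curves that look like a straight, rising line. A sketch, assuming a hypothetical `equity_curve` backtest helper:

```python
import numpy as np

def straightness_fitness(weights, prices):
    equity = equity_curve(weights, prices)          # hypothetical backtest helper
    t = np.arange(len(equity))
    slope, intercept = np.polyfit(t, equity, 1)     # fit a straight line to the equity curve
    wobble = np.std(equity - (slope * t + intercept)) + 1e-9
    # A steep, straight, rising curve scores high; a flat or jagged one scores low.
    return slope / wobble
```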

Uladzimir Izerski:

Train the system on data from a couple of days ago up to now, then test it on data from a week or two back. See what happens. You'll see a lot of interesting things.

I don't understand why you should do this.
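For what it's worth, the check described above amounts to fitting on the most recent window and evaluating on an older one. A sketch, assuming `df` is a time-sorted DataFrame of 5-minute bars (the 288 bars per day is an assumption for round-the-clock data):

```python
import pandas as pd  # df below is assumed to be a time-sorted DataFrame of M5 bars

def backwards_split(df: pd.DataFrame, train_days=2, gap_days=7, test_days=7, bars_per_day=288):
    """Most recent bars become the training window; an older window becomes the test set."""
    train = df.iloc[-train_days * bars_per_day:]
    test_end = len(df) - (train_days + gap_days) * bars_per_day
    test = df.iloc[max(test_end - test_days * bars_per_day, 0):test_end]
    return train, test
```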

 
mytarmailS:

forget it....

I didn't get your resampling either((.

it's often hard to understand something complicated.)

You don't get it because you haven't read it.

I said: draw a diagram of what you're doing, otherwise I don't know what you're talking about.

 
Maxim Dmitrievsky:

You don't get it because you haven't read it.

I'm telling you to draw a diagram of what you're doing, otherwise I don't know what you're talking about.

I read it, the last one...

A bit later; right now I'm writing code. I want to try selecting only the models that passed the test, but automatically.

Try multi-criteria search
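One way to make that selection automatic and multi-criteria at the same time is a simple Pareto filter: keep only models that are not dominated on, say, test profit and maximum drawdown. A sketch; `results` is a hypothetical list of (model, test_profit, max_drawdown) tuples produced by the backtests:

```python
def pareto_front(results):
    """Keep models that no other model beats on both test profit and drawdown."""
    front = []
    for model, profit, dd in results:
        dominated = any(
            p2 >= profit and d2 <= dd and (p2 > profit or d2 < dd)
            for _, p2, d2 in results)
        if not dominated:
            front.append(model)
    return front
```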
 
Maxim Dmitrievsky:

I say draw a diagram of what you're doing, otherwise I don't know what you're talking about.

I'm doing the same thing as Vladimir in the last third of this article

Only instead of tuning the MASD parameters to maximize profit, I tune the weights of the neural network directly.

But it's the same.
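Tuning the weights directly on a trading objective is essentially black-box optimization of the weight vector. A sketch using SciPy's differential evolution as the optimizer and a profit-based `fitness` like the one sketched earlier; the bounds, population size and iteration count are arbitrary assumptions:

```python
from scipy.optimize import differential_evolution

def train_by_profit(prices, n_weights):
    bounds = [(-1.0, 1.0)] * n_weights            # search box for every network weight
    result = differential_evolution(
        lambda w: -fitness(w, prices),            # maximize the fitness = minimize its negative
        bounds, maxiter=200, popsize=20, seed=0)
    return result.x                               # best weight vector found
```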

 
mytarmailS:

I'm doing the same thing as Vladimir in the last third of this article

Only instead of tuning the MASD parameters to maximize profit, I tune the neural network weights directly.

And this is the same...

Well, that's optimization over a grid of hyperparameters.
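In the plain grid-search reading, that would look something like the sketch below; `evaluate_params` and the example grid values are hypothetical.

```python
from itertools import product

def grid_search(grid, evaluate_params):
    """grid: dict of hyperparameter name -> list of values to try.
    evaluate_params: hypothetical callable that trains/backtests one
    combination and returns a score to maximize."""
    best_score, best_params = float("-inf"), None
    for combo in product(*grid.values()):
        params = dict(zip(grid, combo))
        score = evaluate_params(params)
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score

# Purely illustrative usage:
# grid_search({"layers": [1, 2, 3], "neurons": [8, 16, 32], "lookback": [10, 20, 50]}, evaluate_params)
```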

 
Maxim Dmitrievsky:

Well, that's optimization over a grid of hyperparameters.

Well, you could say so...

The point is the possibilities it opens up.

Any idea can be put into the neural network via the fitness function, even one you couldn't describe in code yourself.

 
mytarmailS:

Well, you could say that...

The point is the possibilities.


Any idea can be put into the neural network via the fitness function, even the ones you can't describe in code yourself

The network still learns by minimizing entropy, and the stopping criterion can be built from any custom loss.
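In Keras terms, for instance, that would be a network trained with an ordinary cross-entropy loss plus a callback that stops training on a custom, profit-based validation metric. A sketch; `validation_profit` is a hypothetical backtest of the model's signals, not a Keras built-in:

```python
import tensorflow as tf

class ProfitEarlyStopping(tf.keras.callbacks.Callback):
    """Stop training (which still uses an ordinary cross-entropy loss)
    once a custom profit metric on validation data stops improving."""
    def __init__(self, val_data, patience=5):
        super().__init__()
        self.val_data = val_data
        self.patience = patience
        self.best = float("-inf")
        self.wait = 0

    def on_epoch_end(self, epoch, logs=None):
        profit = validation_profit(self.model, self.val_data)  # hypothetical backtest metric
        if profit > self.best:
            self.best, self.wait = profit, 0
        else:
            self.wait += 1
            if self.wait >= self.patience:
                self.model.stop_training = True                 # custom stopping criterion
```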

 
Maxim Dmitrievsky:

The network still learns by minimizing entropy, and the stopping criterion can be built from any custom loss.

I don't know about Python, but in R it's not so straightforward, or maybe I just don't know how to do it; that's why I came up with this...