Machine learning in trading: theory, models, practice and algo-trading - page 1885

 
Evgeny Dyuka:
Yes, you have to go through everything yourself, there is no other way.
Try playing with this, it's good for getting a feel for how the parameters affect the result.

Yeah, I saw that. Too bad you can't plug your own data into it.

Try reducing learning_rate for starters.
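For reference, a minimal sketch of where that setting lives, assuming a Keras-style setup (the thread later mentions Sequential Dense); the model shape and the value 1e-4 are illustrative, not taken from the thread:

```python
# Minimal sketch: lowering the optimizer's learning rate in Keras.
# The model and the value 1e-4 are illustrative, not from the thread.
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(10,)),
    keras.layers.Dense(32, activation="tanh"),
    keras.layers.Dense(1, activation="tanh"),
])

# A smaller learning_rate gives more conservative weight updates and can
# push back the point where the validation error starts climbing.
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-4),
              loss="mse")
```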


Nice picture, you can clearly see where the overfitting starts.


 
Maxim Dmitrievsky:

Trading your clusters, just like I told you.

And that's with the error on the new data at 0.82%.

So you should have listened to me when I said: build a trading terminal and watch it live, but instead you went off building optimizers, parsers and other rubbish.

 

The regular network seems to do better than the lstm. And tanh does better than relu.

The network parameters are the same everywhere. The data are normalized to the range ±1.

On the left, the train and validation errors by epoch. In the center, the network output on the train set against the benchmark. On the right, the network output on the validation set against the benchmark.

tanh

[Images: tanh errors, tanh train, tanh val]

relu

[Images: relu errors, relu train, relu val]

lstm

[Images: lstm train, lstm val]

I had to fiddle with the lstm for a long time to get it to move away from 0.5, and even then the result was not very good and the workable parameter window was very narrow. Training it took me about 10 minutes. Here, though, the network trained in a little over a minute. They write that lstm takes longer to train, but in this example the nets trained in the same time (upd: actually the lstm does take longer to train after all).
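For anyone wanting to try the same side-by-side, here is a rough sketch of how it might be set up in Keras; the window length, layer sizes and toy data below are assumptions, since the post does not give the actual parameters:

```python
# Rough sketch of the tanh / relu / lstm side-by-side in Keras.
# The window of 20 steps, 64 units and the toy data are assumptions;
# the post does not state the actual parameters.
import numpy as np
from tensorflow import keras

def dense_net(activation):
    return keras.Sequential([
        keras.Input(shape=(20,)),
        keras.layers.Dense(64, activation=activation),
        keras.layers.Dense(1, activation="tanh"),  # output in the +-1 range
    ])

def lstm_net():
    return keras.Sequential([
        keras.Input(shape=(20, 1)),  # LSTM expects (timesteps, features)
        keras.layers.LSTM(64),
        keras.layers.Dense(1, activation="tanh"),
    ])

# Toy data normalized to +-1, as described in the post.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(1000, 20))
y = np.tanh(x.sum(axis=1, keepdims=True))

for name, net in [("tanh", dense_net("tanh")),
                  ("relu", dense_net("relu")),
                  ("lstm", lstm_net())]:
    net.compile(optimizer="adam", loss="mse")
    inp = x[..., None] if name == "lstm" else x
    hist = net.fit(inp, y, validation_split=0.2, epochs=10, verbose=0)
    # The left panels of the screenshots: train/val loss per epoch.
    print(name, hist.history["loss"][-1], hist.history["val_loss"][-1])
```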

 
Rorschach:

The regular network seems to do better than the lstm. And tanh does better than relu.

The network parameters are the same everywhere.

tanh

relu

lstm

I had to fiddle with the lstm for a long time to get it to move away from 0.5, and even then the result was not very good and the workable parameter window was very narrow. Training it took me about 10 minutes. Here, though, the network trained in a little over a minute. They say that lstm takes longer to train, but in this example the nets trained in the same time.

Is the regular one Sequential Dense?
 
Rorschach:
One of those activations doesn't work with negative values, didn't you look into that?
 

I have a primitive NS without training. I don't see the point in training it. I'm not trying to discourage anyone by any means.

The bar graph, in percent, shows the probable direction of further movement.

It is important to feed the network the right information.

If you feed it garbage, the result will be chaotic with any package.

I hope it is clear that blue is up and the carrot color is down.

[Chart screenshot with the bar graph]

 
Evgeny Dyuka:
Is the regular one Sequential Dense?

Yes

Evgeny Dyuka:
One of those activations doesn't work with negative values, didn't you look into that?

Hadn't thought of that. tanh is ±1, relu is 0 to inf.
Upd: looked at some examples; even with relu they still bring the data to a zero mean.
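A quick numeric illustration of that range difference (plain numpy, values just for illustration):

```python
# Quick illustration of the range point: tanh keeps the sign,
# relu zeroes out everything below 0, so half of a +-1-scaled signal
# is discarded by the first relu layer unless it is re-centered first.
import numpy as np

x = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
print(np.tanh(x))          # [-0.762 -0.462  0.     0.462  0.762]
print(np.maximum(0.0, x))  # [0.  0.  0.  0.5 1. ]
```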

 
Uladzimir Izerski:

I have a primitive NS without training. I don't see the point in training it. I'm not trying to discourage anyone by any means.

The bar graph, in percent, shows the probable direction of further movement.

It is important to feed the network the right information.

If you feed it garbage, the result will be chaotic with any package.

I hope it is clear that blue is up and the carrot color is down.


m5 short, d1 buy; with the rest I agree.

 
Rorschach:

m5 short, d1 buy; with the rest I agree.

Within an hour there has already been a change.

[Updated chart screenshot]

 
mytarmailS:

Trading your clusters, just like I told you.

And that's with the error on the new data at 0.82%.

So you should have listened to me when I said: build a trading terminal and watch it live, but instead you went off building optimizers, parsers and other rubbish.

not rubbish but candy

everything there should work like clockwork, you must be doing something wrong

But let's close the public discussion of this. Because I've already pictured the cash flowing into my pockets after such an ingenious discovery, and I don't want to share it with anyone.