Machine learning in trading: theory, models, practice and algo-trading - page 597

 
sibirqk:
Imho, of course, but every page of this thread should start with SanSanych's slogan: "garbage in, garbage out". All your cognitive and creative talents should first of all be aimed at reducing the garbage at the input, and only then at pushing the computer hardware to its limits.

Are you also a fan of digging through garbage in search of edible things?

 
Maxim Dmitrievsky:

Are you also a fan of rummaging through garbage in search of edible things?

Life will force you - ...
 

I made a neurodynamic classifier scheme, which is easy to implement

maybe something is missing? any ideas? :)

more neurons can be added; for now there are 2 in the hidden layer


 
Maxim Dmitrievsky:

I made a neurodynamic classifier scheme, which is easy to implement

maybe something is missing? any ideas? :)

more neurons can be added; for now there are 2 in the hidden layer


1) What about training? I don't see how the weights are updated.
2) Are the neuron's own weights accessible?
3) As input you could take the derivative of the close, or a fast MA of order 1-4. Or increments.
4) I would make the hidden layer the same size as the input.
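The suggestions above can be sketched as a toy network; this is an illustrative assumption, not the poster's actual scheme: the input is a window of close-price increments (point 3), the hidden layer matches the input size (point 4), and the weight update (point 1) is written out as plain gradient descent. All names and sizes are hypothetical.

```python
import numpy as np

# Illustrative sketch only, assuming: inputs = a window of close-price
# increments (point 3), hidden layer the same size as the input (point 4),
# and explicit gradient-descent weight updates (point 1).

rng = np.random.default_rng(0)

n_in = 4                                  # window of 4 increments per pattern
W1 = rng.normal(0.0, 0.5, (n_in, n_in))  # hidden layer equal to input size
b1 = np.zeros(n_in)
W2 = rng.normal(0.0, 0.5, n_in)
b2 = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h = np.tanh(x @ W1 + b1)              # hidden activations
    return sigmoid(h @ W2 + b2), h        # class-1 probability

def train_step(x, y, lr=0.1):
    """One explicit gradient-descent update on log-loss."""
    global W1, b1, W2, b2
    p, h = forward(x)
    err = p - y                           # dLoss/d(pre-sigmoid)
    gh = err * W2 * (1.0 - h ** 2)        # backprop through tanh
    W2 -= lr * err * h
    b2 -= lr * err
    W1 -= lr * np.outer(x, gh)
    b1 -= lr * gh

# Toy data: random-walk "close"; label = sign of the summed increments
close = 100.0 + np.cumsum(rng.normal(0.0, 1.0, 400))
inc = np.diff(close)                      # increments (point 3)
X = np.lib.stride_tricks.sliding_window_view(inc, n_in)
y = (X.sum(axis=1) > 0).astype(float)

for _ in range(100):
    for xi, yi in zip(X, y):
        train_step(xi, yi)

preds = np.array([forward(xi)[0] for xi in X]) > 0.5
accuracy = (preds == y).mean()
```

The toy label (sign of the summed window) is deliberately learnable; on real price data, as the replies in the thread note, results are far less forgiving.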
 
sibirqk:
Imho, of course, but every page of this thread should start with SanSanych's slogan: "garbage in - garbage out". All your cognitive and creative talents should first of all be aimed at reducing the garbage at the input, and only then at pushing the computer hardware to its limits.
This is not a slogan from SanSanych. At least google it.
 
Maxim Dmitrievsky:

I made a neurodynamic classifier scheme, which is easy to implement

maybe something is missing? any ideas? :)

more neurons can be added; for now there are 2 in the hidden layer

You will only waste your time. It will not work on real data.

For example: one wrong answer from the NN will affect all subsequent ones.

 
Yuriy Asaulenko:
This is not a slogan from SanSanych. At least google it.

That's for sure - it's a sign on the "Statistics" building.

 
SanSanych Fomenko:

That's for sure - it's a sign on the "Statistics" building.

Not only. This sign still hangs in a lot of other places).
 
Yuriy Asaulenko:

You will only waste your time. It will not work on real data.

For example: one wrong answer from the NN will affect all subsequent ones.

The neural network will gradually sift out the errors 👍😀😎
 
Yuriy Asaulenko:
Not only. There are many other places where this sign hangs).

I'm sticking to the theme, so to speak.

But in statistics, it's a matter of principle.

Take correlation, for example: it is one of the most basic concepts and also the nastiest, all because correlation ALWAYS returns a value; it has no notion of "no value = NA". If you recall the Middle Ages, several hundred years were devoted to exactly this: finding correlations where there cannot be any in principle.


When I started learning R, I was struck by how carefully this NA is carried through everywhere in it.
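The NA point can be reproduced outside R as well; here is a minimal sketch in Python, with NaN standing in for R's NA:

```python
import numpy as np

# Minimal illustration (Python, with NaN standing in for R's NA):
# a single missing value poisons a naive correlation, and only
# explicitly handling the gap yields a number again.
x = np.array([1.0, 2.0, 3.0, 4.0, np.nan])
y = np.array([2.0, 4.0, 6.0, 8.0, 10.0])

naive = np.corrcoef(x, y)[0, 1]         # NaN: the missing value propagates

mask = ~np.isnan(x) & ~np.isnan(y)      # keep only complete pairs
clean = np.corrcoef(x[mask], y[mask])[0, 1]   # 1.0 for this linear sample
```

R makes the same choice explicit via the `use` argument of `cor()`; the point is that the missing value is never silently turned into a number.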