Machine learning in trading: theory, models, practice and algo-trading - page 1897

 
You won't believe it, but I have a physical rationale for deep learning that fits my theory so nicely... Lofty matters, what can I say... I'd better get to the bottom of it properly and write it up. The nonsense among people only grows stronger; these myths urgently need breaking....
 
Rorschach:

"The deep learning algorithm with a teacher achieves acceptable quality with about 5,000 marked examples per category."

For M1 that means retraining every day on a week of history; for M5, once a week on a month of history.

Are there any comparable numbers for the other methods?

5,000 per feature is the general standard; better between 5,000 and 10,000.
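Not anyone's actual pipeline, just a minimal Python sketch of that cadence (retrain every day on a trailing week of M1 bars), with placeholder random data and a generic classifier standing in for the real features and model:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

BARS_PER_DAY = 1440              # M1 bars in one day
TRAIN_WINDOW = 7 * BARS_PER_DAY  # one week of history

rng = np.random.default_rng(0)
X = rng.normal(size=(30 * BARS_PER_DAY, 5))   # placeholder features, 30 days
y = rng.integers(0, 2, size=len(X))           # placeholder up/down labels

for day_start in range(TRAIN_WINDOW, len(X) - BARS_PER_DAY + 1, BARS_PER_DAY):
    # retrain on the trailing week, then predict the next day
    model = RandomForestClassifier(n_estimators=20, random_state=0)
    model.fit(X[day_start - TRAIN_WINDOW:day_start],
              y[day_start - TRAIN_WINDOW:day_start])
    preds = model.predict(X[day_start:day_start + BARS_PER_DAY])
```

The same loop with weekly steps and a month-long window would cover the M5 variant.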
 
Rorschach:
I don't see the point in retraining along the way... Nothing fundamentally changes; it has been running for a third month now without retraining, and there is no change in quality. It all depends on the length of the history the network was trained on. If you load in 3-5 years, the network forms stable rules that have worked over that whole period and remembers them.
 
Valeriy Yastremskiy:

I don't understand the part about time: is 1, 2, 9 o'clock just terminal time?

It seems hard to make a mistake here.

I could write an article, because explaining every point on the forum is not an option.

Well, there is a lot of interesting stuff in it: Python code, clustering, parsing the tree.

I've worked out the conditions for entering trades; generation of ready-made bots can be added right away, which is cool.
 
Maxim Dmitrievsky:

It seems hard to make a mistake here.

I could write an article, because explaining every point on the forum is not an option.

Well, there is a lot of interesting stuff in it: Python code, clustering, parsing the tree.

I've worked out the conditions for entering trades; generation of ready-made bots can be added right away, which is cool.

I see, it would be a good idea to write an article))

 
Valeriy Yastremskiy:

I see, it would be a good idea to write an article))

but on new data it loses money like crazy, like all ML does. On the training period it's beautiful.

I wanted to get around overfitting by introducing coarse clusters, but something went wrong ))
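For what "coarse clusters" could mean in code, here is a hedged sketch; KMeans, the cluster count, and the shallow tree are assumptions, not what was actually tried. The idea is to reduce each sample to a cluster id so the downstream model has less room to memorize noise.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 8))        # placeholder fine-grained features
y = rng.integers(0, 2, size=len(X))   # placeholder labels

# coarsen: each sample is reduced to the id of its cluster
kmeans = KMeans(n_clusters=20, n_init=10, random_state=1).fit(X)
X_coarse = kmeans.labels_.reshape(-1, 1)

# a model over 20 coarse states has far less capacity to memorize noise
tree = DecisionTreeClassifier(max_depth=4).fit(X_coarse, y)
```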

 
Maxim Dmitrievsky:

but on new data it loses money like crazy, like all ML does. On the training period it's beautiful.

I wanted to get around overfitting by introducing coarse clusters, but something went wrong ))

A check that the real series corresponds to the test one is needed (just saying). But I don't yet understand how to do that so that the lag would be acceptable, or at least understood.
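One possible sketch of such a correspondence check: a two-sample Kolmogorov-Smirnov test between training returns and a recent live window. The window sizes and the 0.01 threshold are arbitrary assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)
train_returns = rng.normal(0, 1.0, size=10_000)   # placeholder training returns
live_returns = rng.normal(0, 1.5, size=500)       # placeholder live returns

# small p-value: the live window no longer looks like the training data
stat, p_value = ks_2samp(train_returns, live_returns)
if p_value < 0.01:
    print(f"distribution shift suspected (KS={stat:.3f}, p={p_value:.4f})")
```

The lag trade-off shows up directly here: a longer live window makes the test more reliable but reacts later.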

 
Valeriy Yastremskiy:

But I don't yet understand how to do that so that the lag would be acceptable, or at least understood.

It's all already in the idea: clustered seasonal patterns, which supposedly repeat (and in fact sometimes do).

But... it's not quite the thing. Either that, or the tree is heavily overfitted and a neural network needs to be trained and parsed instead.

But that's all nonsense: if the tree shows nothing, it means there is no regularity. Then there is no point in doing deep learning.
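As an aside, "parsing the tree" can be as simple as dumping the fitted rules; a minimal sketch on toy data with one planted rule (everything here is illustrative). On real data, if the printed rules look like noise, that supports the "no regularity" conclusion.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(3)
X = rng.normal(size=(2000, 4))
y = (X[:, 0] > 0.5).astype(int)   # planted toy rule: feature 0 above 0.5

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
# print the learned if/else rules in plain text
print(export_text(tree, feature_names=[f"f{i}" for i in range(4)]))
```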
 
Evgeny Dyuka:
5,000 per feature is the general standard; better between 5,000 and 10,000.

The full sentence: a supervised deep learning algorithm achieves acceptable quality with about 5,000 labeled examples per category, and becomes comparable to or even better than a human when trained on a dataset containing at least 10 million labeled examples.

Data and compute are everything.


Evgeny Dyuka:
I don't see the point in retraining along the way... Nothing fundamentally changes; it has been running for a third month now without retraining, and there is no change in quality. It all depends on the length of the history the network was trained on. If you load in 3-5 years, the network forms stable rules that have worked over that whole period and remembers them.

It depends on the approach to the problem. If you believe that systems live for a finite time, then you need to re-optimize regularly; the smaller the timeframe, the more often.


To rule out local minima as a possible cause of the problem, it makes sense to plot the gradient norm versus time. If the gradient norm does not decrease to nearly zero, the problem is not local minima.

Have you ever done that?
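For reference, a toy sketch of that diagnostic in PyTorch (the model and data are placeholders): log the global gradient norm at each step and plot it over time.

```python
import torch
import matplotlib.pyplot as plt

model = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(),
                            torch.nn.Linear(32, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.01)
X = torch.randn(256, 10)
y = torch.randn(256, 1)

norms = []
for step in range(500):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(X), y)
    loss.backward()
    # global L2 norm over all parameter gradients at this step
    norm = torch.sqrt(sum((p.grad ** 2).sum() for p in model.parameters()))
    norms.append(norm.item())
    opt.step()

plt.plot(norms)
plt.xlabel("step")
plt.ylabel("gradient norm")
plt.show()
```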

 
Rorschach:

Have you ever done that?

It's tricky there. I just feed the network different sets of features and catch the moment when it starts to show some signs of learning. Then I immediately check it against the real market. The net answers the question "up or down?", so there is an answer on every candle, just with a different degree of confidence. It's simple )) no position opening, no profits, no losses.
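A hedged sketch of such an "up or down on every candle, with confidence" setup; sklearn's MLPClassifier, the random placeholder data, and the 0.6/0.4 cutoffs are all illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(4)
X = rng.normal(size=(5000, 10))       # placeholder candle features
y = rng.integers(0, 2, size=len(X))   # placeholder up/down labels

net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300).fit(X, y)
proba_up = net.predict_proba(X[-100:])[:, 1]   # one answer per candle

# only sufficiently confident answers count; no positions are opened
for p in proba_up[:5]:
    signal = "up" if p > 0.6 else "down" if p < 0.4 else "no opinion"
    print(f"P(up)={p:.2f} -> {signal}")
```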