Machine learning in trading: theory, models, practice and algo-trading - page 417

 
Mihail Marchukajtes:

here's the answer...blah, blah, blah.... and the result is zero.....

Post your trading report: there will be a result, and if it's positive, then perhaps we'll have a serious conversation...

 
Ivan Negreshniy:

Post your trading report: there will be a result, and if it's positive, then perhaps we'll have a serious conversation...


What conversation????? I don't understand..... I offered to build a model for a task different from forex, but no less profitable, and you're telling me about trading..... strange!!!!

 
Mihail Marchukajtes:

What conversation????? I don't understand..... I offered to build a model for a task different from forex, but no less profitable, and you're telling me about trading..... strange!!!!

If you're offering me a task no less profitable than forex, then I have the right to know how profitable your forex is. What's strange about that?

 
Ivan Negreshniy:

If you're offering me a task no less profitable than forex, then I have the right to know how profitable your forex is. What's strange about that?


What does forex have to do with it at all???? About forex, read my previous posts or the article..... I'm not talking to you about forex....

 
Mihail Marchukajtes:

What does forex have to do with it at all???? About forex, read my previous posts or the article..... I'm not talking to you about forex....

went to read...
 
elibrarius:
There the network type is simply selected up front according to the output type, no need to rewrite anything (and all internal layers are hardwired as non-linear).
Have you experimented with retraining the same created network in alglib? For example, I trained an MLP and then train it again... it retrains with no errors, but maybe this is wrong and a new network object needs to be created? Or does it somehow retrain there rather than train from scratch... again, there's nothing about this in the help, and I'm too lazy to dig through the code and look)
 
Maxim Dmitrievsky:
Have you experimented with retraining the same created network in alglib? For example, I trained an MLP and then train it again... it retrains with no errors, but maybe this is wrong and a new network object needs to be created? Or does it somehow retrain there rather than train from scratch... again, there's nothing about this in the help, and I'm too lazy to dig through the code and look)
If you train it again there will be no error, but it's not incremental retraining: the coefficients are reset and it just finds a new combination of them.
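Not ALGLIB, but the same distinction can be shown with a minimal scikit-learn sketch (MLPClassifier and the toy data are my assumptions): by default a second fit() re-initializes the coefficients, while warm_start=True continues from the current solution, which is genuine incremental retraining.

import numpy as np
from sklearn.neural_network import MLPClassifier

X = np.random.RandomState(0).randn(200, 5)
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Default behaviour: a second fit() re-initializes the coefficients
# and simply searches for a new combination from scratch.
net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=300, random_state=0)
net.fit(X, y)
net.fit(X, y)  # trained "again": no error, but the weights were reset first

# warm_start=True resumes from the current coefficients instead,
# i.e. genuine incremental retraining.
net_ws = MLPClassifier(hidden_layer_sizes=(10,), max_iter=300,
                       warm_start=True, random_state=0)
net_ws.fit(X, y)
net_ws.fit(X, y)  # continues from the previous solution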
 
Maxim Dmitrievsky:
Have you experimented with retraining the same created network in alglib? For example, I trained an MLP and then train it again... it retrains with no errors, but maybe this is wrong and a new network object needs to be created? Or does it somehow retrain there rather than train from scratch... again, there's nothing about this in the help, and I'm too lazy to dig through the code and look)

A very interesting approach, and as old as the world. We train several times, get results, and then train a network again on the data from several other networks. A kind of deep learning in a different format.... By the way, it's a good approach...
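For what it's worth, "training a network on the data from several other networks" is essentially stacking. A minimal sketch with scikit-learn (the toy data and model sizes are made up):

import numpy as np
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.RandomState(1)
X = rng.randn(300, 6)
y = (X[:, 0] * X[:, 1] > 0).astype(int)

# Several base networks trained on the raw features...
base = [
    ('mlp_a', MLPClassifier(hidden_layer_sizes=(8,), max_iter=500, random_state=1)),
    ('mlp_b', MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=2)),
]
# ...and a final model trained on their cross-validated predictions.
stack = StackingClassifier(estimators=base,
                           final_estimator=LogisticRegression(), cv=5)
stack.fit(X, y)
print(stack.score(X, y))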

 
Mihail Marchukajtes:

A very interesting approach, and as old as the world. We train several times, get results, and then train a network again on the data from several other networks. A kind of deep learning in a different format.... By the way, it's a good approach...

No, it's just retraining at certain intervals in the tester, for example at a given drawdown, etc. If you want several networks, there are NN ensembles in ALGLIB; I'll play with them later, it's summer now, I'm lazy... the sea, the beach, chicks, mojitos... just kidding, what sea in Siberia, there's only a river and gnats

and then all sorts of boosting-schmoosting and the like, and then LSTM as the distant ideal of my aspirations, which I haven't reached yet
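A sketch of what "retraining at a given drawdown in the tester" could look like; everything here (the walk-forward loop, the window, the threshold) is a made-up illustration, not ALGLIB or MetaTrader tester code:

import numpy as np

def retrain_on_drawdown(model, X, returns, window=200, max_dd=0.05):
    # Fit on the first window, then walk forward bar by bar.
    model.fit(X[:window], np.sign(returns[:window]))
    equity = peak = 1.0
    for t in range(window, len(X) - 1):
        signal = model.predict(X[t:t + 1])[0]        # -1, 0 or +1
        equity *= 1.0 + signal * returns[t + 1]
        peak = max(peak, equity)
        if (peak - equity) / peak > max_dd:          # drawdown trigger
            model.fit(X[t - window:t], np.sign(returns[t - window:t]))
            peak = equity                            # reset the watermark
    return equity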

 
Vladimir Perervenko:
Mihail Marchukajtes:

Well, since you insist, I'll tell you one idea about a committee for processing data. It really is very difficult to train a model with a high level of generalization on a sufficiently large interval, because the market is a living organism and blah, blah, blah. The longer the training period, the worse the model performs, but the longer it keeps working. The task: make a long-running model. Split, or method two, though, is for those who use a committee of two networks.

We have three states, "Yes", "No" and "Don't know", for when the nets point in different directions.

We train the network on the whole interval, in our case 452 entries. Suppose the network learned this set at 55-60%, and the "Don't know" answers on the training sample were 50%, so 226 signals the network could not learn. Okay, now we build a new model ONLY on the "Don't know" states, that is, we try to build a model on exactly those quasi-states that misled the first model. The result is about the same: out of 226 only half will be recognized, the rest get the "Don't know" state, then we build the model again. The result is 113, then 56, then 28, then 14. On the 14 records not recognized by any of the previous models, the jPrediction optimizer will usually reach up to 100% generalizability.

As a result, we have a "Pattern System" that recognizes the entire market over a three-month interval.

Here's another way, besides the "Context of the Day", of how you can break the market into subspaces and train on them, getting exactly such a "Pattern System". Here's an example....
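A minimal sketch of this cascade, assuming a binary classifier whose "Don't know" zone is a predicted probability close to 0.5 (the threshold, the models and the data handling are my assumptions, not jPrediction's actual logic):

import numpy as np
from sklearn.neural_network import MLPClassifier

def dont_know(model, X, band=0.1):
    # "Don't know" = predicted probability inside the 0.5 +/- band zone.
    p = model.predict_proba(X)[:, 1]
    return np.abs(p - 0.5) < band

def train_cascade(X, y, max_models=6):
    models, idx = [], np.arange(len(X))
    while len(idx) > 0 and len(np.unique(y[idx])) > 1 and len(models) < max_models:
        m = MLPClassifier(hidden_layer_sizes=(10,), max_iter=500,
                          random_state=len(models))
        m.fit(X[idx], y[idx])
        models.append(m)
        idx = idx[dont_know(m, X[idx])]   # keep only the unresolved rows
        print(f"model {len(models)}: {len(idx)} records still unresolved")
    return models

def predict_cascade(models, x, band=0.1):
    for m in models:
        p = m.predict_proba(x.reshape(1, -1))[0, 1]
        if abs(p - 0.5) >= band:          # this model is confident
            return int(p > 0.5)
    return None                           # the whole cascade says "Don't know"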

------------------------------------------------------------

This method is called "boosting". Boosting is a procedure for the sequential composition of machine learning algorithms, where each successive algorithm tries to compensate for the shortcomings of the composition of all previous algorithms. Boosting is a greedy algorithm for constructing a composition of algorithms.

The most famous recent application is XGBoost.

Good luck
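To illustrate that definition with a hand-rolled toy (squared loss, made-up data): each new weak learner is fitted to the residual errors of the composition built so far, which is exactly the "compensating for the shortcomings of all previous algorithms".

import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(2)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.randn(300)

lr, trees = 0.1, []
F = np.zeros(len(y))                   # prediction of the composition so far
for _ in range(100):
    residual = y - F                   # where the composition still errs
    t = DecisionTreeRegressor(max_depth=2).fit(X, residual)
    trees.append(t)
    F += lr * t.predict(X)             # greedily add the new learner

print("final MSE:", np.mean((y - F) ** 2))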

What does XGBoost, an optimized distributed gradient boosting library, have to do with composing machine learning algorithms?