Machine learning in trading: theory, models, practice and algo-trading - page 911

 
Vizard_:

I told you, I was selling software while the demand was there, then I got my wits together and went back to my own business... But what I did, and what was fed to Mishekov, was weak: it amounted to training the network over a short period (two or three months) on hourly, if memory serves, multicurrency inputs. It was a fool's errand.

Who are you talking about now?


Man, I can't find that picture. I think it's on a portable drive. I'll post it later; it's been on my mind for a long time, and we'll discuss it then... as they say...

 
Maxim Dmitrievsky:

Where can I read about how the retraining is done internally? I thought only Bayesian models were good at retraining.

The same as initial training, except that instead of a neural network with randomly initialized weights, we use an already trained model. As a rule, for this purpose we use

 callback_model_checkpoint("checkpoints.h5"),

Take a look at

Good luck

Training Callbacks
  • keras.rstudio.com
You can create a custom callback by creating a new R6 class that inherits from the class. Here's a simple example saving a list of losses over each batch during training: Fields Custom callback objects have access to the current model and its training parameters via the following fields: Named list with training parameters (e.g. verbosity...
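A minimal sketch of the retraining idea described above: train, save a checkpoint, then in a later session load the saved weights and continue training from there instead of from a fresh initialization. This is plain Python with a toy one-parameter model standing in for the network (the thread's actual snippet uses R keras and `callback_model_checkpoint`); the file name and helper are illustrative assumptions.

```python
import pickle

def train(weights, data, lr=0.1, epochs=50):
    # One-parameter linear model y = w * x, fitted by plain gradient descent
    # on squared error. Stands in for a real network's training loop.
    w = weights["w"]
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x
            w -= lr * grad
    return {"w": w}

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # samples of y = 2x

# Initial training from "uninitialized" weights, then checkpoint to disk
# (the analogue of callback_model_checkpoint saving checkpoints.h5).
model = train({"w": 0.0}, data)
with open("checkpoint.pkl", "wb") as f:
    pickle.dump(model, f)

# Later session: load the checkpoint and continue training on new data.
# Same loop, but starting from the saved weights rather than from scratch;
# note the smaller learning rate for the continuation run.
with open("checkpoint.pkl", "rb") as f:
    model = pickle.load(f)
new_data = [(4.0, 8.0), (5.0, 10.0)]
model = train(model, new_data, lr=0.01, epochs=10)
```

In keras terms, the second session corresponds to loading the saved model and calling `fit` again on the new data.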
 
Vladimir Perervenko:

Thanks!

 
Mihail Marchukajtes:

No problem. Throw it in... I just have to warn you that the target must be balanced...

What do you mean balanced? I don't know how to judge it.

 

While I was looking for the picture, I found a bookmark for the "Lab". It's open again for some reason, and here is what I found out. Holy shit... Trickster, you're a Cossack. For some reason I thought that's where we talked, but I don't see you on the list of attendees... Hmm... Where did we hang out, if we ever did? I'm talking about back in the day...


 
Aleksey Vyazmikin:

What does it mean to be balanced? I don't know how to evaluate it.

The number of zeros and ones should be equal, though this is not strictly necessary. What matters is that the data are in chronological order, with the most recent rows at the bottom. Go ahead... waiting...
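A small sketch of what "balanced" means here: count the zeros and ones in the target column, and, if desired, downsample the majority class so both occur equally often, then restore chronological order (newest last, as the post requires). Plain Python on a toy table; the column layout is an illustrative assumption.

```python
import random

random.seed(0)

# Toy labelled rows: (time_index, some_feature, target); target is last.
rows = [(i, i % 7, 1 if i % 4 == 0 else 0) for i in range(100)]

ones = [r for r in rows if r[-1] == 1]
zeros = [r for r in rows if r[-1] == 0]
print(len(zeros), len(ones))  # 75 25 -> imbalanced

# Downsample the majority class to the size of the minority class,
# then re-sort by the time index so chronological order is preserved.
majority, minority = (zeros, ones) if len(zeros) > len(ones) else (ones, zeros)
balanced = random.sample(majority, len(minority)) + minority
balanced.sort(key=lambda r: r[0])
print(len(balanced), sum(r[-1] for r in balanced))  # 50 25 -> equal classes
```

Downsampling throws rows away; oversampling the minority class is the alternative when data are scarce.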

 
Mihail Marchukajtes:

The number of zeros and ones should be equal, though this is not strictly necessary. What matters is that the data are in chronological order, with the most recent rows at the bottom. Go ahead... waiting...

I do not have an equal number, because the TS (trading system) is not supposed to take a profit on every sneeze. In the file, columns 1-2 are informational, columns 3 and 4 are two independent targets, and the rest are their predictors.

Files:
Pred_023.zip  3210 kb
 

But I found this picture in the Lab. On the right is Leonid Velichkovsky, and on the left is Steve Ward, creator of the now-defunct NeuroShell Trader. I hope the people in the photo will not mind that I posted it. In the background, on the wall, is the very first NeuroShell. Leonid went to their office in America, and this picture was taken on his arrival. The last Lab post is dated 2010; by then participant activity had dropped off and the place had gone dead.


 

So here's what I'm getting at. As of 2010, neural network development was not what it is now; in the last few years the AI field has moved on a lot. Yes, the networks in NSh were terribly overfitted, and the quality of the models was poor, or rather nonexistent: in all the years I used NSh I was never able to get a decent result. Leonid was the only user of the licensed NSh, and it cost 2500 bucks. But the interface and the possibilities the program offered a trader were a breakthrough. Work in NSh was not done one piece at a time, as in MT, but with whole arrays of indicators. You did not need to be a programmer: in 3-4 minutes you could build a trading strategy on an indicator, then indicator upon indicator, and so on indefinitely. And it was all done with a mouse. That is exactly what is needed: a trader's program. So there...

And now imagine if it were revived, incorporating the latest advances in preprocessing, new models, and training methods that do not lead to overfitting. In short, a trading program with a powerful neural network engine. That would be an explosion in the trading software market, because it is easy to understand and you do not need to be a programmer. Bridges and data links for sending signals to the terminal already existed.

The program was VERY good; the only things it lacked were a proper network training block, data pre-processing, and so on. I think traders would have appreciated it and used it.

I could be wrong, but the program ceased to exist after Steve's death, and given how badly its networks overfitted, it was not well received by the trading community...

 
Vladimir Perervenko:

1. In the darch package (v0.12.0), fine-tuning can be done repeatedly on new portions of data. I have not checked how long this keeps working.

In keras/tensorflow all models can be trained further, from any stage of previous training. Of course, it is necessary to save the intermediate results of training.

Thanks.

Vladimir Perervenko:

2. Why do you want to change the training parameters, and which ones? What is manual annealing?

Good luck

I change the training parameters of standard BP (backpropagation) based on the results of the previous training run. In essence it is the same as annealing, but controlled manually.
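A minimal sketch of this "manual annealing", under the assumption that the adjusted parameter is the learning rate: run a training round, inspect the result, lower the rate, and run again from where the previous round left off. Here the decision is automated as a fixed decay for illustration; the objective is a toy quadratic.

```python
def run_round(w, lr, steps=100):
    # One training round: minimise f(w) = (w - 3)^2 by gradient descent.
    for _ in range(steps):
        w -= lr * 2 * (w - 3)
    return w

w, lr = 0.0, 0.1
for round_no in range(3):
    w = run_round(w, lr)  # continue from the previous round's weights
    lr *= 0.1             # the "manual" decision: cool the rate each round
```

Classic simulated annealing decays a temperature on a fixed schedule; the difference here is only that the operator, not a schedule, decides how to adjust the parameters after inspecting each round.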

I don't use R's neural network packages, though I don't rule them out in the future. I work in a different environment.