Machine learning in trading: theory, models, practice and algo-trading - page 3139

 
Maxim Dmitrievsky #:

it can be

He already knows, because I gave him the information; he can be taught on the fly what he didn't know before... after I gave him this knowledge, I loaded a new session and he already knew about this package )) cool.

 

Gentlemen, we should not forget, and should realise, that....

As soon as any kind of robot is live on the real market, its actions become predictable and transparent to the market, because it becomes integrated into the overall trading system, and from that moment its signals are answered by an adaptive pricing algorithm rather than by dead historical data.

It is easy to adjust/model the price to fit the situation in which the robot gives money away.

And that will happen more often than in the tester, 100% of the time.

Well, look at the balance you obtained with ML ;))))

Repeatedly up and down.

I'll tell you straight: it's a typical deposit drainer, a.k.a. a tester grail.

Such a trading system will never work in real life.

 
mytarmailS #:

He already knows, because I gave him the information; he can be taught on the fly what he didn't know before... after I gave him this knowledge, I loaded a new session and he already knew about this package )) cool

funny

 
Andrey Dik #:

Does Bard work on a VPS?

With a VPN, yes.

 

A ship is travelling across the universe towards Mars.

The photon engines have accelerated it to the limit and it is travelling by inertia.

A cosmonaut sits on top of the spaceship, with a flag in his left hand.

With his right hand he throws an M40 bolt (40 mm diameter), accelerating it to 5 metres per second.

Question. What happens to the bolt?

A. It will find its nut.

B. It will fly at a constant speed.

C. It will fly with acceleration.

D. It will stick to the cosmonaut's glove.

E. It will be the first to reach Mars.

F. It will return to Earth.

P.S.

Will it learn to operate machines and begin to control the cosmonaut, with the help of the spaceship?

 
Maxim Dmitrievsky #:

joke

He's dumb as hell, but he learns differently from the other models. I think when he learns, it's gonna be brutal.

 
Lorarica #:

There's a ship travelling across the universe towards Mars.

The photon engines have accelerated it to the limit and it flies by inertia.

A cosmonaut sits on top of the spaceship, with a flag in his left hand.

With his right hand he throws an M40 bolt (40 mm diameter), accelerating it to 5 metres per second.

Question. What happens to the bolt?

A. It will find its nut.

B. It will fly at a constant speed.

C. It will fly with acceleration.

D. It will stick to the cosmonaut's glove.

E. It will be the first to reach Mars.

F. It will return to Earth.

P.S.

Will it learn to operate machines and begin to control the cosmonaut, with the help of the spaceship?


B, although A is more romantic
 
mytarmailS #:

He's dumb as hell, but he learns differently from the other models. I think when he learns, it's gonna be brutal.


Honestly, it's hard to believe it can ask itself questions. If it can, that's unimaginably cool, although one wonders whether it's technically possible for it to answer its own questions.
In the ChatGPT thread I suggested the theoretical possibility of someday implementing feedback with self-questioning; in humans it works at an average frequency of 30-70 Hz. Of course, slippers were thrown at me with the words "nonsense, that can never happen".
 

A breakdown of Prado's article on overfitting
http://csinvesting.org/wp-content/uploads/2015/02/The-Probability-of-Backtest-Overfitting-6.pdf


Bard:

The essence of the authors' method is to use cross-validation to estimate how much the model changes when new data is added. If the model changes a lot, it means that it is overfitted to the training data and is likely to be overfitted on the test data as well.

The authors' method works as follows:

  1. The model is trained on training data.
  2. The model is tested on the test data.
  3. The model is trained again, but on the training data with some of the test data added.
  4. The model is tested again on the remaining test data.
  5. This procedure is repeated several times and the results of each iteration are saved.
  6. Based on the results of all iterations, the probability of overfitting the model is estimated.

The authors' method is more accurate than traditional overfitting estimation methods, such as holdout, because it accounts for model changes when new data is added. The holdout method estimates overfitting based only on the training and test data, and does not account for model changes when new data is added.

The authors' method can be used to estimate the overfitting probability of any machine learning model: a model that predicts stock prices, a model that classifies images, or any other machine learning model.
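
As an illustration only, here is a minimal Python sketch of the procedure as summarised above (train, test, fold part of the test data back into training, retest, repeat). The function name and the final estimate (the fraction of iterations in which accuracy dropped after adding new data) are my own assumptions, not the paper's actual algorithm:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    def overfit_probability(X, y, n_iters=10, seed=0):
        # Hypothetical sketch of steps 1-6 above, not Prado's actual method.
        rng = np.random.RandomState(seed)
        drops = []
        for _ in range(n_iters):
            # Steps 1-2: train on the training data, test on the test data.
            X_tr, X_te, y_tr, y_te = train_test_split(
                X, y, test_size=0.4, random_state=rng.randint(10**6))
            m1 = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
            acc_before = accuracy_score(y_te, m1.predict(X_te))
            # Steps 3-4: retrain with half of the test data added,
            # then test on the remaining half.
            X_add, X_rest, y_add, y_rest = train_test_split(
                X_te, y_te, test_size=0.5, random_state=rng.randint(10**6))
            m2 = LogisticRegression(max_iter=1000).fit(
                np.vstack([X_tr, X_add]), np.concatenate([y_tr, y_add]))
            acc_after = accuracy_score(y_rest, m2.predict(X_rest))
            # Step 5: record how much the result changed.
            drops.append(acc_before - acc_after)
        # Step 6: crude "probability of overfitting" = share of iterations
        # where accuracy fell once new data was added.
        return float(np.mean(np.array(drops) > 0))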


option 2 ====================================

The method proposed by the authors is based on the idea of using cross-validation to estimate how much the model changes when new data is added. If the model changes a lot, it means that it is overfitted to the training data and is likely to be overfitted on the test data.

In order to estimate the probability of overtraining, the authors use a cross-validation method called k-fold cross-validation. In k-fold cross-validation, the data is partitioned into k equal parts. The model is then trained on k-1 parts of the data and the remaining part of the data is used for testing. This process is repeated k times and the results of k tests are combined to obtain an estimate of model accuracy.

Once the model accuracy estimate is obtained, the authors use it to estimate the probability of overfitting. They do this by calculating the variance of the accuracy estimates: the higher the variance, the greater the probability of overfitting.

The authors show that their method can be more accurate than traditional methods of estimating overfitting, such as holdout. They also show that their method can be used to select the best hyperparameters for the model.

Here are the steps the authors use to estimate the probability of overfitting (a code sketch follows the list):

  1. The data is split into k equal parts.
  2. The model is trained on k-1 parts of the data and the rest of the data is used for testing.
  3. This process is repeated k times, and the results of the k tests are combined to obtain an estimate of the accuracy of the model.
  4. The variance of the accuracy estimates is calculated.
  5. The probability of overfitting is estimated based on the variance of the accuracy estimates.
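
For illustration, a short Python sketch of this k-fold variance idea, using scikit-learn; treating the fold-score variance as the overfitting signal is my reading of the summary above, not a formula taken from the paper:

    from sklearn.model_selection import KFold, cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    def kfold_overfit_signal(model, X, y, k=5):
        # Steps 1-3: split into k parts, train on k-1, test on the k-th,
        # repeat k times and collect the k accuracy estimates.
        scores = cross_val_score(
            model, X, y,
            cv=KFold(n_splits=k, shuffle=True, random_state=42))
        # Steps 4-5: the higher the variance of the fold accuracies,
        # the stronger the hint that the model is overfitting.
        return scores.mean(), scores.var()

    # Usage: of two candidate models, the one with the larger fold
    # variance is the more suspect.
    # mean_a, var_a = kfold_overfit_signal(DecisionTreeClassifier(max_depth=2), X, y)
    # mean_b, var_b = kfold_overfit_signal(DecisionTreeClassifier(), X, y)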

The authors show that their method can be used to estimate the probability of overfitting for a variety of machine learning models, including linear regression, logistic regression, and decision trees. They also show that their method can be used to select the best hyperparameters for these models.

Overall, the method proposed by the authors is an accurate and useful tool for estimating the probability of overfitting. It can be used both to estimate a model's overfitting probability and to select the best hyperparameters for the model.


option 3 =========================================


The authors' method is based on the idea of using cross-validation to estimate how much the model changes when new data is added. If the model changes a lot, it means that it is overfitted to the training data and is likely to be overfitted on the test data.

To demonstrate the accuracy of their method, the authors applied it to the task of predicting stock prices. They compared the results of their method with those of holdout and showed that their method more accurately predicts stock prices.

The authors also showed that their method can be used to select the best hyperparameters of the model. They used their method to select the best hyperparameters for a machine learning model that predicts stock prices. They showed that their method can select hyperparameters that lead to a model with higher accuracy.

Overall, the authors showed that their method for estimating the probability of overfitting is an accurate and useful tool for backtesting. It can be used to estimate the model's overfitting probability and to select the best hyperparameters for the model.

Here are the steps used in the authors' methodology (a code sketch follows the list):

  1. Divide the data into training set, validation set and testing set.
  2. Train the model on the training set.
  3. Evaluate the model on the validation set.
  4. Repeat steps 2-3 for different values of the hyperparameters of the model.
  5. Select the set of hyperparameters that results in the best accuracy on the validation set.
  6. Evaluate the selected model on the validation set.
  7. Estimate the probability of overfitting the model using the test set.
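
A minimal Python sketch of steps 1-7 as listed: split into train/validation/test, pick hyperparameters on the validation set, then gauge overfitting on the test set. Measuring overfitting as the validation-minus-test accuracy gap is an assumption made for illustration:

    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    def select_and_check(X, y, depths=(2, 4, 8, None), seed=0):
        # Step 1: split into training / validation / test sets (60/20/20).
        X_tr, X_tmp, y_tr, y_tmp = train_test_split(
            X, y, test_size=0.4, random_state=seed)
        X_val, X_te, y_val, y_te = train_test_split(
            X_tmp, y_tmp, test_size=0.5, random_state=seed)
        best_depth, best_val = None, -1.0
        for d in depths:
            # Steps 2-4: train and evaluate for each hyperparameter value.
            m = DecisionTreeClassifier(max_depth=d, random_state=seed)
            m.fit(X_tr, y_tr)
            val_acc = accuracy_score(y_val, m.predict(X_val))
            # Step 5: keep the value with the best validation accuracy.
            if val_acc > best_val:
                best_depth, best_val = d, val_acc
        # Steps 6-7: refit the chosen model and check it on the held-out
        # test set; a large validation-test gap suggests the selection
        # itself overfitted.
        final = DecisionTreeClassifier(
            max_depth=best_depth, random_state=seed).fit(X_tr, y_tr)
        test_acc = accuracy_score(y_te, final.predict(X_te))
        return best_depth, best_val, test_acc, best_val - test_acc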

The authors' method is more accurate than traditional overfitting estimation methods, such as holdout, because it accounts for model change when new data is added. The holdout method only evaluates the model on the training set and on the test set, which means it cannot account for model change when new data is added.

The authors' method can also be used to select the best hyperparameters of the model. The holdout method cannot be used to select the best model hyperparameters because it does not account for model change when new data is added.

Overall, the authors' method is an accurate and useful tool for backtesting. The authors' method can be used to estimate the probability of model overfitting and to select the best model hyperparameters.

 
Andrey Dik #:

B, although A is more romantic

There should be no rush to answer, my dear friend.

P.S.

There are many distractions present.

It's a bit like E = mc², which no one has ever solved.

Although! Albert, at one time, tried to get a Shnobel (Ig Nobel) for it.