Machine learning in trading: theory, models, practice and algo-trading - page 3313

 
СанСаныч Фоменко #:

If you mean feature selection as part of models, I completely disagree, because feature selection as part of models ranks just any rubbish.

I'm talking about feature selection

and what you call "feature selection in models" is variable importance. Don't confuse yourself and others.

And what you do with the proxy package is artisanal, incorrect, primitive feature selection, or rather only a part of it.


And really, familiarise yourself with the concepts and don't introduce your own on top of the existing ones.

Because I twitch every time you call retraining "overfitting", and there are a lot of such bloopers.
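To make the distinction being argued about here concrete, below is a minimal base-R sketch (on hypothetical toy data, not anyone's actual trading features) contrasting model-based variable importance, which is computed from an already-fitted model, with filter-style feature selection, which ranks features before any model is fitted:

```r
set.seed(42)
# Toy data: y depends on x1 only; x2 is pure noise
n  <- 200
x1 <- rnorm(n)
x2 <- rnorm(n)
y  <- x1 + rnorm(n, sd = 0.1)
d  <- data.frame(y, x1, x2)

fit <- lm(y ~ x1 + x2, data = d)
base_mse <- mean((predict(fit, d) - d$y)^2)

# Model-based "variable importance" (permutation style): how much does the
# fitted model's error grow when one feature column is shuffled?
perm_importance <- sapply(c("x1", "x2"), function(v) {
  dp <- d
  dp[[v]] <- sample(dp[[v]])
  mean((predict(fit, dp) - d$y)^2) - base_mse
})

# Filter-style feature selection: rank features before fitting any model,
# here by absolute correlation with the target
filter_scores <- c(x1 = abs(cor(d$x1, d$y)),
                   x2 = abs(cor(d$x2, d$y)))

round(perm_importance, 3)
round(filter_scores, 3)
```

Both approaches should rank x1 far above x2 here, but the first is a property of a trained model, while the second is a standalone selection step.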

 
Maxim Dmitrievsky #:
Sanych, when will we remember that the Teacher is features + target?)

Such childish bloopers from supposedly serious people who have learnt the great R spoil the whole atmosphere. And people, it turns out, are unteachable, no matter how much you correct them.

How can I talk to you if you are still confused about the basics?)

No offence, but you do not even understand each other, what each writes about :))))))

Oh, the wisest!

Oh, the most knowledgeable!

The "Teacher" (a synonym for the target variable) in the "supervised" learning method is a separate VARIABLE in the formula of every machine learning model I know, for example:

# target is the "teacher": a separate vector holding the target column,
# while the remaining columns of Train supply the features
randomForest(as.factor(target) ~ .,
             data  = Train[, -ncol(Train)],
             ntree = ntree,
             mtry  = mtry)

where target is a separate column of the matrix; in meaning it could be, for example, price increments. It is like a function and its arguments.

The other columns of the matrix have to be matched to it. The problem is that not every teacher fits the features (predictors), and vice versa: not every feature fits a particular teacher.

 
mytarmailS #:

I'm talking about feature selection

and what you call "feature selection as part of models" is variable importance. Don't confuse yourself and don't confuse others.

And what you do with the proxy package is artisanal, incorrect, primitive feature selection, or rather only a part of it.


And really, familiarise yourself with the concepts and don't introduce your own on top of the existing ones.

Because I twitch every time you call retraining "overfitting", and there are a lot of such bloopers.

Thanks for the clarification!

But absolute accuracy is only possible relative to a specific machine learning model, as there are models that provide variable importance information, and there are models that have feature selection built in.

For a general classification of concepts in the feature selection problem, you can use this one.


Just FYI: what is the correct translation of overfitting? "Retraining"? Or maybe "over-fitting"?

 
СанСаныч Фоменко #:

Thanks for the clarification!

Just FYI: what is the correct translation of overfitting? "Retraining"? Or maybe "over-fitting"?

overate.

talked over.

The point isn't even that; it's that you are the only one of hundreds of people here who uses that word, and there is nothing good about it: it is just confusing.


==================================================================


I remember you said that your functions take a long time to run. There is a handy thing called code memoisation; it speeds code up a lot in some cases. You just need to wrap a slow function f1() in f2(), making it a memoised function.

f2 <- memoise::memoise(f1)

The gain is huge:

microbenchmark::microbenchmark(f1(),f2())
Unit: microseconds
 expr        min         lq       mean     median         uq      max neval
 f1() 145281.202 154547.404 163587.417 160677.641 167757.222 204640.4   100
 f2()    234.342    270.263   1989.096    402.544    413.092 164095.8   100
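The snippet above assumes an existing slow f1(). For readers without the package, the same idea can be sketched in base R with an environment used as a cache; memoise_simple below is a hypothetical, deliberately simplified stand-in for what memoise::memoise does (single argument only, no cache expiry):

```r
# Minimal memoisation by hand: cache results keyed by the argument value
memoise_simple <- function(f) {
  cache <- new.env(parent = emptyenv())
  function(x) {
    key <- as.character(x)
    if (!exists(key, envir = cache)) {
      assign(key, f(x), envir = cache)   # first call: compute and store
    }
    get(key, envir = cache)              # repeat calls: served from cache
  }
}

f1 <- function(x) { Sys.sleep(0.2); x^2 }   # a deliberately slow function
f2 <- memoise_simple(f1)

t1 <- system.time(f2(10))["elapsed"]  # first call pays the full cost
t2 <- system.time(f2(10))["elapsed"]  # repeat call is near-instant
```

Note the usual caveat: memoisation only helps when the function is pure (same input, same output) and is called repeatedly with the same arguments.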
 
СанСаныч Фоменко #:

Oh, the wisest!

Oh, the most knowledgeable!

"Teacher."

Sanych, where does it say Teacher is synonymous with target? )

 
mytarmailS #:
memoise

Interesting.

 
Maxim Dmitrievsky #:

Sanych, where does it say Teacher is synonymous with target? )

Don't be stupid!

Supervised learning is one of the methods of machine learning, in which the system under test is forced to learn using stimulus-response examples. From the point of view of cybernetics, it is a type of cybernetic experiment. There may be some dependence between the inputs and the reference outputs (stimulus-response), but it is unknown.


And most importantly, you don't need to teach anyone! Just do your own thing!

 
СанСаныч Фоменко #:

Don't be stupid!

Supervised learning is one of the methods of machine learning, in which the system under test is forced to learn using stimulus-response examples. From the point of view of cybernetics, it is a type of cybernetic experiment. There may be some dependence between the inputs and the reference outputs (stimulus-response), but it is unknown.


And most importantly, you don't need to teach anyone! Just do your own thing!

Sanych, WHERE IS IT WRITTEN?


 
Learning without a teacher is when we feed in all the same features, but without the target column. The model will pick out the target itself.
It is logical to assume that if "with a teacher" means with the target column, and "without a teacher" means without that column, then this column is the teacher.
 

Look at what GPT puts out ))))

What is the difference in machine learning between learning with a teacher and without a teacher
ChatGPT

Supervised Learning and Unsupervised Learning are the two main approaches in machine learning, and they differ in a few key aspects:

  1. Presence of labels (targets):

    • Supervised learning: In this type of learning, the data has labels (targets) that represent correct answers or target values. The model is trained on this data to learn how to predict the target variable.
    • Unsupervised learning: Unlike supervised learning, in unsupervised learning the data has no labels. Here, the model tries to find structure in the data without previously known correct answers.
  2. Objective:

    • Supervised learning: The main goal is to predict or classify the target variable based on the input data. This type of learning is used when a task such as classification, regression or anomaly detection needs to be performed.
    • Unsupervised learning: The main goal is to detect hidden structures or patterns in the data. This type of learning is used for data clustering, dimensionality reduction, association analysis and other tasks related to learning the structure of the data.
  3. Examples of tasks:

    • Supervised learning: classification, regression, prediction, fraud detection, object detection, machine translation, etc.
    • Unsupervised learning: clustering, dimensionality reduction (PCA, t-SNE), association rules, data visualisation, and many others.
  4. Model evaluation:

    • Supervised learning: A model is evaluated based on how well it makes predictions or classifications, by comparing them against the known labels. Metrics may include accuracy, F1-score, RMS error, and others.
    • Unsupervised learning: Evaluation is more difficult because there are no known labels to compare against. It can be based on visual inspection of clustering quality, comparison with other algorithms, or analysis by an expert.

Both types of learning have their applications in machine learning, and the choice between them depends on the specific task and the available data. Sometimes hybrid methods are also used, combining learning with and without a teacher to achieve better results.
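The contrast above can be sketched in a few lines of base R on the built-in iris data (a toy illustration, not a trading setup): the supervised model is handed the label column, while the unsupervised one sees only the feature columns.

```r
set.seed(1)
# Supervised: the label column (Species) is given to the model; here a
# logistic regression on a binary subset of iris, for simplicity
d <- subset(iris, Species != "setosa")
d$Species <- droplevels(d$Species)
sup  <- glm(Species ~ Petal.Length + Petal.Width, data = d, family = binomial)
pred <- ifelse(predict(sup, type = "response") > 0.5,
               levels(d$Species)[2], levels(d$Species)[1])
acc  <- mean(pred == d$Species)   # evaluation against the known labels

# Unsupervised: the same kind of feature columns, but the label is withheld;
# k-means discovers groups on its own
unsup <- kmeans(iris[, 1:4], centers = 3, nstart = 10)
unsup$size   # no labels to score against; inspect cluster structure instead
```

Note how the evaluation differs, exactly as the list above says: the supervised model gets an accuracy against known labels, while for k-means we can only inspect the discovered clusters (or compare them to labels after the fact).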