Machine learning in trading: theory, models, practice and algo-trading - page 1922
Altai... But I didn't go at the last moment; I didn't want to. :)
By the way, do you know about the pros?
I can share code for parsing CatBoost models, but only for continuous variables. It reads the exported C++ code, converts it into MQL arrays, and executes it. I can't promise it works with every possible parameter set; I wrote it for one specific export format.
What is the parser written in? I use Python for everything.
It spits it out in this format, a binary classifier.
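For what it's worth, the extraction step can be sketched in Python. This is a minimal sketch under a hypothetical layout: it assumes the exported C++ file contains plain `const float/double name[] = {...}` initializers (real CatBoost exports vary by version and parameters), and simply pulls those arrays out so they can be re-emitted as MQL arrays.

```python
import re

def parse_float_arrays(cpp_source):
    """Pull named float/double array initializers out of exported C++ model code.

    Hypothetical layout: `static const double name[] = {0.5, 1.25f, ...};`.
    Real CatBoost exports differ between versions, so treat this as a template.
    """
    pattern = re.compile(
        r"(?:static\s+)?const\s+(?:float|double)\s+(\w+)\s*\[\s*\]\s*=\s*\{([^}]*)\}",
        re.S,
    )
    arrays = {}
    for name, body in pattern.findall(cpp_source):
        values = []
        for tok in body.split(","):
            tok = tok.strip().rstrip("fF")  # drop C float suffixes
            if tok:
                values.append(float(tok))
        arrays[name] = values
    return arrays

# tiny fabricated example of the assumed export layout
sample = """
static const double float_feature_borders[] = {0.5, 1.25, 3.0};
const float leaf_values[] = {-0.1f, 0.2f};
"""
arrays = parse_float_arrays(sample)
print(arrays["leaf_values"])  # [-0.1, 0.2]
```

From here each array can be printed back out as an MQL `double name[] = {...};` declaration.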
MQL
Share if you don't mind
Maybe I'll get something useful out of it
I realized that this type of clustering does not create rules, and I don't know of a clustering algorithm that does create them.
So the question remains: how do I save to CSV which class each row belongs to?
Although here's what's strange: why not just continue clustering with the existing data and assign a new row to one of the classes? Or can you?
Of course you can, but not in MQL!!!
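Saving per-row class membership is straightforward once you have the cluster centers: label each row by its nearest centroid, write the labels as an extra CSV column, and classify a new row the same way, with no re-clustering. A minimal sketch in plain NumPy, assuming centroids from an already-fitted k-means:

```python
import csv
import io
import numpy as np

def assign_clusters(X, centroids):
    """Index of the nearest centroid (Euclidean) for every row of X."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

def write_with_labels(X, labels, out):
    """Write the rows plus a 'cluster' membership column as CSV."""
    w = csv.writer(out)
    w.writerow([f"x{i}" for i in range(X.shape[1])] + ["cluster"])
    for row, lab in zip(X, labels):
        w.writerow(list(row) + [int(lab)])

# toy data; the centroids would normally come from an already-fitted k-means
X = np.array([[0.0, 0.1], [0.2, 0.0], [5.0, 5.1], [4.9, 5.0]])
centroids = np.array([[0.1, 0.05], [4.95, 5.05]])
labels = assign_clusters(X, centroids)            # [0, 0, 1, 1]

buf = io.StringIO()                               # or open("clusters.csv", "w")
write_with_labels(X, labels, buf)

# a brand-new row is classified the same way
new_label = assign_clusters(np.array([[4.8, 5.2]]), centroids)[0]
print(new_label)  # 1
```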
But I found a book on R.
I read it, it's cool.
And I don't understand: how can I roll the results up into a single column?
I don't understand what you want.)
This picture has the same predictors as before, but the sample size is different and, most importantly, new predictors have been added.
And how should this be interpreted? As a tendency to overfit?
I already told you: interpret it according to the tool's intended purpose, while you're trying to hammer nails with a flower.
https://ru.wikipedia.org/wiki/%D0%A1%D0%BD%D0%B8%D0%B6%D0%B5%D0%BD%D0%B8%D0%B5_%D1%80%D0%B0%D0%B7%D0%BC%D0%B5%D1%80%D0%BD%D0%BE%D1%81%D1%82%D0%B8#:~:text=%D0%B5%D0%B4%D0%B8%D0%BD%D1%81%D1%82%D0%B2%D0%B5%D0%BD%D0%BD%D0%BE%20%D0%B2%D0%BE%D0%B7%D0%BC%D0%BE%D0%B6%D0%BD%D1%8B%D0%BC%20%D0%B2%D0%B0%D1%80%D0%B8%D0%B0%D0%BD%D1%82%D0%BE%D0%BC.-,%D0%9F%D1%80%D0%B5%D0%B8%D0%BC%D1%83%D1%89%D0%B5%D1%81%D1%82%D0%B2%D0%B0%20%D1%81%D0%BD%D0%B8%D0%B6%D0%B5%D0%BD%D0%B8%D1%8F%20%D1%80%D0%B0%D0%B7%D0%BC%D0%B5%D1%80%D0%BD%D0%BE%D1%81%D1%82%D0%B8,%D1%82%D0%B0%D0%BA%D0%B8%D0%BC%20%D0%BA%D0%B0%D0%BA%202D%20%D0%B8%D0%BB%D0%B8%203D.
Feature selection
The feature selection method tries to find a subset of the original variables (which are called features or attributes). There are three strategies: the filter strategy (e.g., feature accumulation), the wrapper strategy (e.g., search guided by accuracy), and the embedded strategy (features are selected for addition or removal as the model is built, based on prediction errors). See also combinatorial optimization problems.
In some cases, data analysis such as regression or classification can be performed more accurately in the reduced space than in the original space [3].
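A filter strategy can be sketched in a few lines: rank features by a cheap relevance score computed independently of any model (here, absolute Pearson correlation with the target, an assumed choice) and keep the top k. Wrapper and embedded strategies would instead search over feature subsets or rely on the model's own error feedback.

```python
import numpy as np

def filter_select(X, y, k=2):
    """Filter-strategy feature selection: keep the k features with the
    highest absolute Pearson correlation with the target."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    denom = np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum())
    corr = np.abs(Xc.T @ yc) / denom
    return np.argsort(corr)[::-1][:k]

rng = np.random.default_rng(0)
y = rng.normal(size=200)
noise = rng.normal(size=(200, 3))
X = np.column_stack([y + 0.1 * noise[:, 0],    # strongly related to the target
                     noise[:, 1],              # pure noise
                     -y + 0.5 * noise[:, 2]])  # related, but weaker
print(sorted(filter_select(X, y, k=2)))        # the two informative columns
```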
Feature projection
Feature projection transforms data from a high-dimensional space to a low-dimensional space. The data transformation can be linear, as in principal component analysis (PCA), but there are also a large number of nonlinear dimensionality reduction techniques [4] [5]. For multidimensional data, a tensor representation can be used to reduce dimensionality through multilinear subspace learning [6].
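The linear case (PCA) can be sketched in plain NumPy via the SVD of the centered data; the projection keeps the directions of largest variance.

```python
import numpy as np

def pca_project(X, n_components):
    """Center the data and project onto the top principal directions (via SVD)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(1)
t = rng.normal(size=(100, 1))
X = np.hstack([t, 2 * t, -t]) + 0.01 * rng.normal(size=(100, 3))  # ~1-D data in 3-D
Z = pca_project(X, n_components=1)

# the single component retains almost all of the variance
ratio = Z.var() / (X - X.mean(axis=0)).var(axis=0).sum()
print(Z.shape)
```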
What we did yesterday.
Dimensionality reduction
For high-dimensional datasets (i.e., with more than 10 dimensions), dimensionality reduction is usually performed before applying the k-nearest neighbors algorithm (k-NN) in order to avoid the effects of the curse of dimensionality [16].
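The effect is easy to reproduce on synthetic data: with many noise dimensions, neighbor distances are dominated by the noise and k-NN degrades, while the same classifier in the reduced, informative subspace recovers the signal. A toy sketch (the "reduction" here simply keeps the two informative columns, standing in for PCA or similar):

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Plain k-NN: majority vote among the k nearest training rows."""
    d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]
    return np.array([np.bincount(y_train[i]).argmax() for i in idx])

rng = np.random.default_rng(2)
n = 100
y = rng.integers(0, 2, size=n)
signal = np.column_stack([y + 0.15 * rng.normal(size=n),
                          -y + 0.15 * rng.normal(size=n)])
X = np.hstack([signal, rng.normal(size=(n, 48))])  # 2 informative + 48 noise dims

X_tr, X_te, y_tr, y_te = X[:80], X[80:], y[:80], y[80:]
acc_full = (knn_predict(X_tr, y_tr, X_te) == y_te).mean()   # hurt by noise dims
acc_reduced = (knn_predict(X_tr[:, :2], y_tr, X_te[:, :2]) == y_te).mean()
print(acc_reduced >= acc_full)
```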
Decided to look at meaningful market reversals, with significant reversals as the target. I thought it would be chaos, but no.
Green: reversal up.
Red: reversal down.
Gray: not a reversal.
It's a lot clearer in 2D.
I have added more data; either way I get 4 clusters for buys and 4 for sells. Now I should probably pick out the relevant clusters and, within each of them, try to separate reversals from non-reversals using some classifier.
Just imagine how much garbage is in the data; it all has to be separated from the useful information.
You can't do that with ordinary clustering.
You need to try something more serious, DBSCAN for example, or maybe manual selection; I've heard about such a technique somewhere.
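For intuition, DBSCAN itself is small enough to sketch in plain NumPy: points with at least `min_pts` neighbors within `eps` are core points, clusters grow outward through core points, and everything unreachable is labeled noise (-1), which is exactly the "separate the garbage" behavior ordinary k-means lacks. A production run would use scikit-learn's implementation with a spatial index instead of the full distance matrix.

```python
import numpy as np

def dbscan(X, eps=0.5, min_pts=3):
    """Minimal DBSCAN: returns a label per row, -1 meaning noise."""
    n = len(X)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    neighbors = [np.flatnonzero(row <= eps) for row in d]
    labels = np.full(n, -1)
    visited = np.zeros(n, dtype=bool)
    cluster = 0
    for i in range(n):
        if visited[i] or len(neighbors[i]) < min_pts:
            continue
        # i is an unvisited core point: grow a new cluster from it
        visited[i] = True
        labels[i] = cluster
        stack = [i]
        while stack:
            j = stack.pop()
            for q in neighbors[j]:
                if labels[q] == -1:
                    labels[q] = cluster          # border or core point joins
                if not visited[q] and len(neighbors[q]) >= min_pts:
                    visited[q] = True            # expand only through core points
                    stack.append(q)
        cluster += 1
    return labels

# two dense blobs plus one outlier
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1], [5.1, 5.1],
              [10.0, 10.0]])
labels = dbscan(X, eps=0.5, min_pts=3)
print(labels)  # blobs become clusters 0 and 1, the outlier stays -1
```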
Is there any way to look at the features within a particular cluster?
What do you mean? Clusters don't have features of their own; they group parts of the feature space by similarity, if I may put it that way.
The values of the features within the cluster are what's interesting.
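One simple way to see them: after clustering, group the rows by label and summarize each feature per cluster (mean and spread). A sketch in plain NumPy, with hypothetical feature names:

```python
import numpy as np

def cluster_feature_stats(X, labels, feature_names=None):
    """Per-cluster (mean, std) of every feature: which feature values
    characterize each cluster."""
    names = feature_names or [f"x{i}" for i in range(X.shape[1])]
    stats = {}
    for c in np.unique(labels):
        rows = X[labels == c]
        stats[int(c)] = {name: (rows[:, i].mean(), rows[:, i].std())
                         for i, name in enumerate(names)}
    return stats

# toy example with made-up feature names
X = np.array([[1.0, 10.0], [1.2, 10.4], [5.0, -2.0], [5.2, -2.2]])
labels = np.array([0, 0, 1, 1])
stats = cluster_feature_stats(X, labels, ["rsi", "momentum"])
print(stats[0]["rsi"][0])  # mean of 'rsi' inside cluster 0
```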