Machine learning in trading: theory, models, practice and algo-trading - page 2811

 

Am I the only one who notices that "algo-trading" and "alcohol-trading" sound almost identical?

Kind of makes you wonder.

 

data set


The first 10 columns are price information: use them if you want to create new features, otherwise remove them before training.

The last column is the target.

Split the sample in half for train and test.
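That preparation could be sketched like this (toy stand-in data; all column names here are illustrative, not the actual layout of the attached file):

```python
import numpy as np
import pandas as pd

# Toy stand-in for the attached dataset: 10 raw-price columns,
# a few feature columns, and the target in the last column.
rng = np.random.default_rng(0)
n = 200
price_cols = [f"price_{i}" for i in range(10)]
feat_cols = [f"feat_{i}" for i in range(4)]
df = pd.DataFrame(rng.normal(size=(n, 14)), columns=price_cols + feat_cols)
df["target"] = rng.integers(-1, 2, size=n)  # classes -1, 0, 1

# Drop the raw-price columns unless you derive new features from them.
X = df.drop(columns=price_cols + ["target"])
y = df["target"]

# Split in half, preserving time order (no shuffling for market data).
half = n // 2
X_train, X_test = X.iloc[:half], X.iloc[half:]
y_train, y_test = y.iloc[:half], y.iloc[half:]
```

Keeping the split sequential rather than shuffled matters here: shuffled bars would leak future information into the training half.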


With a Random Forest, without any tuning, I get this on new data:

Confusion Matrix and Statistics

          Reference
Prediction   -1    0    1
        -1 2428  453   23
        0   597 3295  696
        1    14  448 2046

Overall Statistics
                                         
               Accuracy : 0.7769         
                 95% CI : (0.7686, 0.785)
    No Information Rate : 0.4196         
    P-Value [Acc > NIR] : < 2.2e-16      
                                         
                  Kappa : 0.6567         
                                         
 Mcnemar's Test P-Value : 2.565e-16      

Statistics by Class:

                     Class: -1 Class: 0 Class: 1
Sensitivity             0.7989   0.7853   0.7400
Specificity             0.9316   0.7772   0.9361
Pos Pred Value          0.8361   0.7182   0.8158
Neg Pred Value          0.9139   0.8335   0.9040
Prevalence              0.3039   0.4196   0.2765
Detection Rate          0.2428   0.3295   0.2046
Detection Prevalence    0.2904   0.4588   0.2508
Balanced Accuracy       0.8653   0.7812   0.8381

With HGB (gradient boosting) and the new features I got an accuracy of 0.83.


I wonder if it's possible to reach an accuracy of 0.9?

Files:
dat.zip  4562 kb
 
mytarmailS #:
?? Where did I say that?

Here.

 
Aleksey Vyazmikin #:

Here.

There was a conversation about class imbalance, and here we're talking about correlations...
All right, forget it, forget it... I have no energy or desire to spell it all out...
 
mytarmailS #:
There was a conversation about class imbalance, and here there's a conversation about correlations...
All right, forget it, forget it... I have no energy or desire to spell it all out...

For me, it's about a specific sample that couldn't be trained on without manipulating the data.

Correlation filtering is one simple way to move the training forward.
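A minimal version of such correlation filtering might look like this (the 0.9 threshold and all names are illustrative):

```python
import numpy as np
import pandas as pd

def drop_correlated(df: pd.DataFrame, threshold: float = 0.9) -> pd.DataFrame:
    """Drop each feature whose absolute correlation with an earlier-kept feature exceeds threshold."""
    corr = df.corr().abs()
    # Keep only the upper triangle so each pair is examined once.
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    to_drop = [col for col in upper.columns if (upper[col] > threshold).any()]
    return df.drop(columns=to_drop)

# Example: f2 is almost a copy of f1 and should be dropped; f3 is independent.
rng = np.random.default_rng(2)
f1 = rng.normal(size=100)
df = pd.DataFrame({
    "f1": f1,
    "f2": f1 * 2 + 0.01 * rng.normal(size=100),
    "f3": rng.normal(size=100),
})
filtered = drop_correlated(df, threshold=0.9)  # keeps f1 and f3
```

Which member of a correlated pair survives depends on column order, so it's worth putting the features you trust most first.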
 
mytarmailS #:

data set

What, has no one even touched it? :(
 
iwelimorn #:

Tried it; it doesn't work, it all comes down to the features again.


If anyone is interested, I'm attaching a multicurrency tester constructor with spread, a primitive lot, and a hint of opening/closing positions with a fractional lot.

For the tester to work, you need to prepare a dataframe with ['open', 'spread'] columns, and also pass into signal a numpy array of shape (n, 2) with buy/sell probability forecasts for each new bar. The tester is driven from a loop; below is an example of initialising and using it.

The trading logic and lot size can be adjusted in the transcript_sig method of the Symbol object.


The test results live in the trade_history_data dictionary for the overall test, and in trade_symbol_data for each symbol.

They contain lists; if anyone wants to optimise or change something, welcome :)
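For readers who don't open the attachment, here is a much-simplified sketch of this kind of loop-driven tester. It is not iwelimorn's actual constructor (no Symbol object, no fractional lots); every name below is illustrative:

```python
import numpy as np
import pandas as pd

def run_tester(df: pd.DataFrame, signal: np.ndarray, lot: float = 1.0) -> dict:
    """Bar-by-bar toy tester: long if P(buy) > P(sell), otherwise short.
    The position is re-decided every bar; spread is charged on each change."""
    assert signal.shape == (len(df), 2), "signal must be (n, 2): buy/sell probabilities"
    history = {"equity": [0.0], "trades": 0}
    position = 0  # +1 long, -1 short, 0 flat
    for i in range(len(df) - 1):
        want = 1 if signal[i, 0] > signal[i, 1] else -1
        if want != position:
            history["equity"][-1] -= df["spread"].iloc[i] * lot  # pay spread on entry
            history["trades"] += 1
            position = want
        # PnL of holding the position from this bar's open to the next bar's open.
        pnl = position * (df["open"].iloc[i + 1] - df["open"].iloc[i]) * lot
        history["equity"].append(history["equity"][-1] + pnl)
    return history

# Usage with toy data: a random walk and random probabilities.
rng = np.random.default_rng(3)
prices = 100 + np.cumsum(rng.normal(size=50))
df = pd.DataFrame({"open": prices, "spread": np.full(50, 0.02)})
signal = rng.random((50, 2))
result = run_tester(df, signal)
```

The real constructor in the attachment adds per-symbol bookkeeping (trade_symbol_data) on top of the overall history; this sketch only shows the core loop.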

You need to come up with some suitable rewards there to capture the patterns. Otherwise it will grind toward the pseudo-optimum of whatever function you give it.
 
Maxim Dmitrievsky #:
You have to come up with some suitable rewards there to capture the patterns. Otherwise it will grind to the pseudo-optimum of whatever function it's given.
It's all about the Q-function and critics, an interesting topic...
 
mytarmailS #:
It's all about the Q-function and critics, an interesting topic...

It was discussed here more than a year ago, when I was writing RL algorithms.

I don't want to come back to it yet; I already have a certain mixture of RL + supervised, and I switched to my own schemes long ago.

Use RL if you don't know how to mark up labels but need an adequate sampling mechanism. You start with random labels, as in my articles, then add conditions. You approximate with a forest or a neural network, check the results, correct them, and so on, round and round, until you arrive at results and exploitation.
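A toy sketch of the mechanics of that loop (random labels, approximate with a forest, correct by model confidence, repeat). The thresholds and names are illustrative, not Maxim's actual scheme, and the sketch shows only the iteration structure:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 5))

# Step 1: random trade labels as a starting point.
labels = rng.integers(0, 2, size=len(X))

for _ in range(3):
    # Step 2: approximate the current labels with a forest.
    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)
    if len(clf.classes_) < 2:
        break  # labeling collapsed to one class; stop iterating
    proba = clf.predict_proba(X)[:, 1]
    # Step 3: correct labels where the model is confident; keep ambiguous ones.
    labels = np.where(proba > 0.6, 1, np.where(proba < 0.4, 0, labels))
```

In practice the "correct" step is where the added conditions and reward logic go; relabeling from in-sample probabilities alone, as here, mostly just confirms the current labels.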

What's in the last article is essentially RL. You can think of the second neural network as a critic, and you put the value into the trade-sampling mechanism yourself. The terms may be inaccurate, but that doesn't change the essence of the approach.

A Q-function isn't necessary; there are other methods like REINFORCE and so on, I've already forgotten the details.
 
Maxim Dmitrievsky #:

It was discussed here more than a year ago, when I was writing RL algorithms.

I don't want to come back to it yet, and I already have a certain mixture of RL + supervised.

I'm coming at it not from the position of labels, but from the position of, say, some very complex, multi-detailed policies of agent behaviour.