Machine learning in trading: theory, models, practice and algo-trading - page 1075

 
Maxim Dmitrievsky:


Yes, I get it...Did you copy the code?

Then, we will discuss...

 
Maxim Dmitrievsky:

yep, you can delete

Please read the first few lines in the comments section of the code and then go to the code section ... I hope you will understand ...

Next, what we need to change: add one more dynamic array in which you train the RDF, and pass it to the "CalculateNeuron(double a, int b)" function like:

CalculateNeuron(double a, int b, double &best_features[])

Something like this.

Then copy the best_features[] array to the inputs[] array using ArrayCopy()...

Rest should be simple:)))

So, based on the dynamic value of the base-function components, the function will return the transformed features, the RDF will be retrained, the function will be called again, and so on...

 
Vizard_:
Not RDF, RF.

nope, RDF http://www.alglib.net/dataanalysis/decisionforest.php

The RDF algorithm is a modification of the original Random Forest algorithm designed by Leo Breiman and Adele Cutler. It combines two ideas: the use of a committee of decision trees that obtains the result by voting, and randomization of the training process.
 
Maxim Dmitrievsky:

yes... but the code is too large with all the "case" branches, I think it can be implemented much shorter... so please wait a bit until I finish my code; if it turns out bad, we'll continue with yours ^)

Yes, that is exactly the best GMDH base function as per Wikipedia, which you can check :))))... Even after a lot of research I could not find a better way so far, but I am still trying...

But only one condition will be executed at each step, so the remaining code will not run after the break statement:

So I guess training might be slow... I don't know... If so, we can limit the maximum number of components to 3 or 4 via another input variable, so that it breaks the features into at most 3 or 4 components and no more.
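A minimal sketch of that cap, in C++ for illustration (the function name DecomposeFeature and the power-series decomposition are assumptions; the point is only the extra MaxComponents input and the break):

```cpp
#include <cassert>
#include <vector>

// Hypothetical illustration: MaxComponents is the extra input variable
// discussed above; the loop breaks once the cap is reached, so a feature
// is split into at most 3-4 components and training time stays bounded.
std::vector<double> DecomposeFeature(double x, int MaxComponents)
{
    std::vector<double> components;
    double term = 1.0;
    for (int power = 1; ; ++power)
    {
        if (power > MaxComponents)
            break;               // cap reached: stop generating components
        term *= x;               // next power of x as a placeholder component
        components.push_back(term);
    }
    return components;
}
```

With MaxComponents = 3, DecomposeFeature(2.0, 3) yields {2, 4, 8} and never goes further, however complex the base function is.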

As I said already, GMDH itself acts as a neural network, so we are effectively using a double neural network now: one GMDH and one RDF together :)))

 
Maxim Dmitrievsky:

By the looks of it, he decided to make a joke and reduce machine learning to learning to code - good riddance to him; and education is clearly not a problem for your very active friend...))

 
Maxim Dmitrievsky:

I didn't even get to your P-net, by the way is it much different from PNN?

I read about PNN here, it's a motherfucking grail.

I can't remember what PNN is; a search turns up PNN-Soft, B2B programming services for business, but our p-net is surely something different

The idea there is basically very simple, but original))

 
Maxim Dmitrievsky:

Right, but we don't need 2 NNs, we just need a good feature selector, so it doesn't have to be a complete GMDH

so my implementation is closer to that

Exactly... it is not mandatory for us to use the exact formulas or methods of GMDH, RDF, RF, etc. All we are looking for is the final result :)))))))))

What I am looking for is:

1. Fast training with past data

2. Fast feature selection during LIVE trading, for fast trade execution

3. Convergence of the algo towards a solution over repeated training

4. High accuracy and low drawdown during trading

Now, meeting all these points is generally difficult in MQL5, but your current implementation seems to meet most of them... However, the four criteria need to be properly balanced by fine-tuning the algo...

For example, if we increase the number of features a lot, I see some improvement in results, but training time also increases significantly...

 
Maxim Dmitrievsky:

probabilistic NN: it builds feature trees with polynomials, like GMDH, and uses these polynomials instead of sigmoids

https://en.wikipedia.org/wiki/Probabilistic_neural_network

And all sorts of Bayesian add-ons... much faster than MLP too

there are also spiking neural networks (Pulsed Neural Networks, PNN) https://ru.wikipedia.org/wiki/%D0%98%D0%BC%D0%BF%D1%83%D0%BB%D1%8C%D1%81%D0%BD%D0%B0%D1%8F_%D0%BD%D0%B5%D0%B9%D1%80%D0%BE%D0%BD%D0%BD%D0%B0%D1%8F_%D1%81%D0%B5%D1%82%D1%8C
 
Maxim Dmitrievsky:

Yeah, I'm still getting confused on the fly. I had a polynomial NN in mind

I think we can use PNN, RDF and GMDH together in your code :))))

The logic of PNN seems great!!! ... PNN seems to act like a neuron of the human brain - I mean a fast decision-making process... So if you use a break statement in each for loop of my logic, it will probably work like a PNN...

I mean we don't go to the end of every for loop... Instead it will check the elapsed time using the TickCount() function, and if it is more than 2 to 5 ms, break the loop and continue with the next RDF decision...

I just looked into PNN...so don't ask me to write the code of PNN again:))))))))))))))))))))))))))))
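The time-budget break described above can be sketched as follows, in C++ for illustration (std::chrono stands in for the MQL5 tick-count call; the function name and the per-iteration "work" are assumptions):

```cpp
#include <cassert>
#include <chrono>
#include <vector>

// Hypothetical illustration of the 2-5 ms budget: the loop checks elapsed
// time each iteration and bails out early so the next RDF decision is not
// delayed, instead of always running every for loop to the end.
double BudgetedFeatureSearch(const std::vector<double> &features, int budget_ms)
{
    using clock = std::chrono::steady_clock;
    const auto start = clock::now();

    double best = 0.0;
    for (double f : features)
    {
        auto elapsed_ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                              clock::now() - start).count();
        if (elapsed_ms > budget_ms)
            break;                 // time budget exceeded: stop searching

        if (f > best)
            best = f;              // placeholder per-iteration work
    }
    return best;                   // best result found within the budget
}
```

The answer is whatever the best result was when the budget ran out, which is exactly the "fast decision, then move on" behavior being compared to a PNN.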

 
Maxim Dmitrievsky:

Need to implement different base transformation functions too - algebraic, trigonometric, orthogonal polynomials

I think we can plug PNN into my current GMDH logic to speed up the decision process... I am not sure... I just looked into the logic, so don't tell me to write the PNN code now :)))
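The menu of base transformations mentioned above could look something like this, sketched in C++ (the selector layout and the choice of Chebyshev polynomials for the orthogonal case are assumptions, shown only to make the three families concrete):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical illustration of the basis families: algebraic powers,
// trigonometric terms, and orthogonal (Chebyshev) polynomials, selected
// by an input parameter so the GMDH layer can try different transforms.
std::vector<double> TransformFeature(double x, int basis)
{
    switch (basis)
    {
    case 0: // algebraic: x, x^2, x^3
        return { x, x * x, x * x * x };
    case 1: // trigonometric: sin(x), cos(x), sin(2x)
        return { std::sin(x), std::cos(x), std::sin(2.0 * x) };
    default: // orthogonal: Chebyshev T1..T3 via T_{n+1} = 2x*T_n - T_{n-1}
        {
            double t0 = 1.0, t1 = x;
            double t2 = 2.0 * x * t1 - t0;
            double t3 = 2.0 * x * t2 - t1;
            return { t1, t2, t3 };
        }
    }
}
```

Each branch returns the same number of components, so the RDF input layout stays fixed while the transform family is swapped.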