Machine learning in trading: theory, models, practice and algo-trading - page 2623
and the answer wasn't meant for you - you still can't read...
You don't even need a 2nd model here, do you? - Cross Validation and Grid Search for Model Selection...
but maybe just the confusion matrix will answer your 2nd question (the purpose of the 2nd model in your idea)...
... or maybe not - I just doubt you need the 2nd model... imho
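For illustration only - a minimal sketch of that route (the data, the classifier and the parameter grid are made up, just to show the mechanics): cross-validation + grid search picks the single model, and its confusion matrix answers the "filter" question without any 2nd model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import confusion_matrix

# hypothetical data, just to make the sketch runnable
X = np.random.randn(1000, 10)
y = (np.random.rand(1000) > 0.5).astype(int)

# no shuffling, as a nod to the time-series nature of trading data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, shuffle=False)

# cross-validation + grid search for model selection (one model, no 2nd one)
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"max_depth": [3, 5, None], "n_estimators": [100, 300]},
    cv=5,
    scoring="precision",  # e.g. care about how clean the predicted positives are
)
grid.fit(X_train, y_train)

# the confusion matrix of the selected model answers the "filter" question directly
y_pred = grid.best_estimator_.predict(X_test)
print(confusion_matrix(y_test, y_pred))  # rows = true class, columns = predicted class
```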
The thing is, an improvement in the confusion matrix is exactly what is claimed for the second model - if you read Prado, for example. But he also uses oversampling of examples for the first model, to increase the number of true positives or something like that. I've forgotten already, unfortunately.
up-sampling & down-sampling are for imbalanced datasets and small training sets - if that's what you mean - i.e. giving higher weights to the smaller classes and vice versa... Yes, probably to increase them (true positives)...
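A minimal sketch of both remedies (the toy DataFrame and the logistic regression are only assumptions for illustration): class weights vs. up-sampling the minority class.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.utils import resample

# hypothetical imbalanced dataset: 10 positives vs 90 negatives
df = pd.DataFrame({"x": range(100), "y": [1] * 10 + [0] * 90})

# option 1: give a higher weight to the smaller class via class_weight
clf = LogisticRegression(class_weight="balanced")
clf.fit(df[["x"]], df["y"])

# option 2: up-sample the minority class until both classes are the same size
minority = df[df["y"] == 1]
majority = df[df["y"] == 0]
minority_up = resample(minority, replace=True, n_samples=len(majority), random_state=0)
df_balanced = pd.concat([majority, minority_up])
print(df_balanced["y"].value_counts())  # 90 of each class
```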
***
and about 2 models - well, it's probably possible to filter twice: first the signals, to set the weights, then the trades on them according to those weights (triggered by the inputs at the 2nd weighing)... though it looks like you could learn on deals together with their context - and keep the gradient for the earlier part of the time series - a good idea... BUT the implementation when working with context is usually a bit different: the task is to encode "the transaction and its context", and the 2nd RNN takes the processing result of the 1st and decodes it into the output -- which has little to do with running 2 networks on 2 different tasks (e.g. context and transactions), since in fact "transaction and context" (as a pair!!!) is processed and passed through both networks... - that only solves the speed issue, not (or to a lesser extent) the validity of the output... imho...
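Roughly what that coupled "transaction + its context" scheme could look like - a sketch only, all shapes and layer sizes are my assumptions, not the poster's architecture: the 1st RNN encodes the context, the 2nd RNN decodes the deals conditioned on that encoding, so the pair passes through both networks.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# hypothetical shapes: 50 bars x 8 features of context, 10 recent deals x 4 features
context_in = tf.keras.Input(shape=(50, 8), name="context")
deals_in = tf.keras.Input(shape=(10, 4), name="deals")

# 1st RNN: encode the market context into a fixed vector
context_code = layers.LSTM(32)(context_in)

# 2nd RNN: decode the deals conditioned on the context encoding
# (the encoding becomes the initial state, so "deal + context" goes through both nets as a pair)
h = layers.LSTM(32)(deals_in, initial_state=[context_code, context_code])
signal = layers.Dense(1, activation="sigmoid", name="trade_signal")(h)

model = Model([context_in, deals_in], signal)
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```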
but if you really want to separate the processing of context and transactions (context separately, transactions separately) -- so far such a construction reminds me of a sandwich (or bread and butter, with the interrelations and dependencies between the phenomena smeared out between the 2 layers)... I don't pretend to interpret your TechSuite, but I have expressed my concern and my suggestion that one thing may still be worth preserving in the modelling process - namely the Relationships! I wish you a beautiful (reflective of reality, not buttered-over) Network Architecture!
p.s.) as with the eternal problem of "contextual advertising" - "the main thing is not to break away from reality" (only their weights setup is sometimes crooked - I won't point fingers at whom - or they worked with small samples in the wrong direction)
The concept of regularity implies repeatability, that's important!
statistics are linear, whichever way you look at it... neural networks are dumb (or smart - depends on the developer) weighting... using 2 or more Dense layers for the weighting gives non-linear dependencies (conventionally speaking, because whether it is a dependency OR just a dumb correlation is still a very big question)... but as long as even a dumb correlation works - you can try to make money on it... - the moment when it stops working must be detected in time (you need to notice some kind of anomaly - whether random or systematic is another question - and then, as usual, decide your question of risk/profitability)
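To make the "non-linear dependencies" point concrete (the layer sizes are arbitrary assumptions): one Dense layer is just a weighted sum, i.e. linear in the inputs; two Dense layers with an activation in between can fit non-linear dependencies.

```python
import tensorflow as tf
from tensorflow.keras import layers

# one Dense layer = a plain weighted sum of the 16 inputs (linear)
linear_model = tf.keras.Sequential([
    tf.keras.Input(shape=(16,)),
    layers.Dense(1),
])

# two Dense layers with a non-linearity in between = non-linear dependencies become reachable
nonlinear_model = tf.keras.Sequential([
    tf.keras.Input(shape=(16,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
```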
the convenience of a NN is in its flexibility - you can supply quite a different "nomenclature" at the input and get quite a different one at the output - i.e. you can do the transformations we need inside the network itself... and do it in multi-threaded mode (depends on the library)... not just statistics...
Whether or not you need statistics to find an input is another question...
knowledge and experience help more often than statistical processing - because the first focuses on specifics, the 2nd on reduction to a common denominator ...
Everything has its place - statistics as well...
***
the point is that for a robot there is no other way to explain things (and it won't explain itself to you any other way) except via probabilities derived from numbers... - THAT'S HOW ECONOMICS HAS WORKED FOR THE WORLD - with the numbers 0 and 1... so we have to digitise the inputs to get output probabilities and set the conditions of the confidence intervals we trust (not necessarily statistical ones)... and we can trust anything (it's subjective) - either the binary logic itself or the weighted result of this binary logic (aka % probabilities over the whole range of potential solutions)... -- it's just a matter of taste and habit, not a subject for an argument about the search for the Grail...
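A minimal sketch of that "confidence interval we trust" condition (the 0.6 threshold is an arbitrary assumption, not a recommendation): the model only ever gives a probability, the trade decision is the condition we impose on it ourselves.

```python
def trade_decision(p_up: float, confidence: float = 0.6) -> int:
    """Return +1 (buy), -1 (sell) or 0 (stay out) from a predicted probability of an up move."""
    if p_up >= confidence:
        return 1          # confident enough in the up move
    if p_up <= 1.0 - confidence:
        return -1         # confident enough in the down move
    return 0              # inside the "don't trust it" band -> no trade

print(trade_decision(0.73))  # -> 1
print(trade_decision(0.52))  # -> 0
```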
(and entering a forest or entering a neural network is already a detail)
no one has forbidden the joint use of trees/forests and neural networks within the same project... - the question is Where to apply what and when (speed and memory are important), not which is better... - better not to lose time - equivalent to "timing apart from the transaction is lost time, just as a transaction apart from timing is an unknown transaction"
No model can get more than probabilities (which is both an advantage and a disadvantage of any digitalisation), even if these probabilities are not weighted... I don't poison myself with sandwiches and don't advise anyone else to - no one has cancelled Bayes (even if you don't put it in the code, and especially if you do)... (a small sketch at the end of this post)
p.s. And you must be a McDonalds fan... - hypothesis, I won't check it...
Algorithmics is dearer to me than your conclusions.
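And since no one has cancelled Bayes - a tiny worked sketch (all the numbers are invented for illustration): updating the probability that a signal is "real" rather than noise after seeing one more winning trade.

```python
def bayes_update(prior: float, p_win_if_real: float, p_win_if_noise: float) -> float:
    """P(real | win) by Bayes' rule."""
    evidence = p_win_if_real * prior + p_win_if_noise * (1.0 - prior)
    return p_win_if_real * prior / evidence

# start at 50/50, assume a real edge wins 60% of the time, noise wins 45%
posterior = bayes_update(prior=0.5, p_win_if_real=0.6, p_win_if_noise=0.45)
print(round(posterior, 3))  # -> 0.571, the belief nudges up after one win
```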