Machine learning in trading: theory, models, practice and algo-trading
And my point is this. First, state the problem in isolation, with your metric.
If you want to run it in a tester with a trailing stop and other such things:
Export the data to .csv with the target (as I understand it, you have a binary classification), then train the model and predict the target. The result is loaded back into the same tester as a list of model responses and run. But doing this for every model is just another kind of fitting; it is better to think about the metric or the target, and run only the final variant in the tester.
And realtime is a separate hassle; not every model can be wrapped in a DLL.
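A rough sketch of that loop (export with target, train, write model responses back for the tester). The file names, columns and classifier here are assumptions for illustration; loading the responses into the tester itself is the MQL side and is not shown:

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical files: features plus a binary 'target' column exported from the terminal
train = pd.read_csv("train.csv")
test = pd.read_csv("test.csv")

X_train, y_train = train.drop(columns=["target"]), train["target"]
X_test = test.drop(columns=["target"], errors="ignore")

# Any classifier will do here; gradient boosting is just a stand-in
model = GradientBoostingClassifier().fit(X_train, y_train)

# List of model responses, to be loaded back into the tester and run
pd.Series(model.predict(X_test), name="response").to_csv("responses.csv", index=False)
```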
Well, I don't know; I have all my models running in MetaTrader and they feel fine there.
The metric I chose is this one. By the way, in the optimizer I switched to the Matthews metric: it gives a parabolic estimate, unlike the specificity or sensitivity metrics. But I understand that once the optimization algorithm is ready, the problem with metrics is solved....
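A minimal sketch comparing the Matthews correlation coefficient with sensitivity and specificity; the targets and responses below are made up purely for illustration:

```python
import numpy as np
from sklearn.metrics import matthews_corrcoef, confusion_matrix

# Hypothetical binary targets and model responses
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 0])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # true positive rate
specificity = tn / (tn + fp)   # true negative rate
mcc = matthews_corrcoef(y_true, y_pred)

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} MCC={mcc:.2f}")
```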
Any model, whatever it is, is a formula; if you're using black boxes you can't pull anything out of, that's your problem.
What tester, what are you on about... You have no idea who Misha is and what a fascinating journey awaits ahead of you)))
And most importantly, a profitable one....
I'm not a fan of DLLs and all sorts of bindings... I like pure MQL in its original form :-)
Listen, can you trade the data from the .csv file after training, so that the result comes out as a balance curve????
All more or less productive ML libs nowadays are black boxes)
Right, and that is exactly why the method of evaluating the obtained result comes first. That is the metric we are talking about: if the metric adequately evaluates the result, then even error backpropagation will do for a black box. It is the oldest method and overfits fiercely, but if during training you evaluate the result with a really good metric, you can keep optimizing until that metric tells the optimization algorithm to STOP.
I have serious plans for Reshetov's optimizer and I have already done a lot of work on it. Add that super-duper metric to it, and I already have a couple of ideas for that...
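One way to read the idea above: use a chosen evaluation metric as the stopping criterion while a standard fitter does the optimization. A minimal sketch, assuming scikit-learn as a stand-in (not the Java optimizer discussed here) and MCC purely as an example metric, on synthetic data:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import matthews_corrcoef
from sklearn.model_selection import train_test_split

# Synthetic data purely for illustration
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=2000) > 0).astype(int)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

# warm_start + max_iter=1 lets us train one pass at a time
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1,
                      warm_start=True, random_state=0)

best_mcc, patience, bad_epochs = -1.0, 10, 0
for epoch in range(200):
    model.fit(X_tr, y_tr)                      # one more training pass
    mcc = matthews_corrcoef(y_val, model.predict(X_val))
    if mcc > best_mcc:
        best_mcc, bad_epochs = mcc, 0
    else:
        bad_epochs += 1
    if bad_epochs >= patience:                 # the metric says STOP
        break

print(f"stopped at epoch {epoch}, best validation MCC = {best_mcc:.3f}")
```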
Keep in mind that whatever metric you come up with here, the optimization functions inside most ML libs stay the same; a metric only lets you catch the moment when the model starts to overfit.
+ writing your own metrics immediately restricts your choice of development environment and libs (not all of them support non-standard metrics)
Better think about the target, so that it matches what you actually need and can be evaluated with the standard metrics accepted in ML:
https://habr.com/company/ods/blog/328372/
https://ru.coursera.org/lecture/vvedenie-mashinnoe-obuchenie/mietriki-kachiestva-klassifikatsii-1-IVuAc
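For reference, the standard classification metrics those links describe are available out of the box; a minimal sketch, with targets, predictions and probabilities made up for illustration:

```python
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

# Hypothetical targets, hard predictions and class-1 probabilities
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
y_prob = np.array([0.9, 0.2, 0.4, 0.8, 0.1, 0.6, 0.7, 0.3])

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("F1       :", f1_score(y_true, y_pred))
print("ROC AUC  :", roc_auc_score(y_true, y_prob))
```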
It's done in Python, R or the like in a couple of lines (plus spreads, slippage...).
It won't be any different from the real one, I've told you a hundred times))) Why you need only the equity curve is unclear, there are no good models and never will be... And tell the dude what you're feeding in (OI, volumes etc., you don't have a long data history))).
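A minimal sketch of turning a list of model responses and bar returns into a balance curve, with a crude per-trade spread cost. The file name, column names and the flat cost are assumptions for illustration:

```python
import pandas as pd

# Hypothetical columns: 'signal' is the model response (+1 long, -1 short, 0 flat),
# 'ret' is the bar-to-bar price change in points
df = pd.read_csv("predictions.csv")

spread = 2.0  # assumed cost in points, charged every time the position changes
trade_cost = (df["signal"].diff().abs().fillna(0) > 0) * spread

# Shift the signal by one bar to avoid look-ahead, subtract trading costs
pnl = df["signal"].shift(1).fillna(0) * df["ret"] - trade_cost
balance = pnl.cumsum()

balance.plot(title="Balance curve")  # requires matplotlib installed
```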
Well, you know... I also decide which model to put on by the look of the curve. What good is a model if it made 90% profitable trades but lost badly at the key moments? The shape of the balance curve does matter. Of course, it's not enough on its own, but it still gives me some idea.
How much data do you need for training????
Thanks.... it's all clear with you, new guy.... I can see your face is unfamiliar :-)
With my target I'm fine, don't worry about it, and the optimizer is written in Java. You think it's impossible to implement any complex metric in it???? Please....
it was the 10th year of the Optimizer's development...
but happy people don't watch the clock