Machine learning in trading: theory, models, practice and algo-trading - page 1274
Again, I don't understand the question about "you personally did something as a result". Spell it out: what kind of result, and what was I supposed to do personally? If we're talking about my application of ML, then yes, I'm working on this in a number of directions (model creation, selection, application); I've written a lot here about my progress.
So you are already applying what you've described here (I haven't read everything; 1200 pages is simply unrealistic). If you have 100500 signals, maybe one of them is the real one.
Well, by the next game this non-standard behavior will already have been learned by the bot. It's clear that for now a human can beat the AI through non-standard play, but as soon as the AI "asks itself" why it couldn't act that way too, humans will have a hard time.
If this trick were repeated constantly as a means of fighting, then yes, it would accomplish something; otherwise it is an ordinary outlier, to which the model is not even supposed to respond adequately.
So you're already applying what you've described here. Can you give a link to where you use it? Out of your 100500 signals, one of them is probably the real one.
I use CatBoost and the "magic" tree from Doc; I have my own methodology there. At the moment only testing is happening on a real account, and it has revealed a number of problems with the predictors; as a result I will have to train everything from scratch, starting from the tree, which means about half a year lost. CatBoost bakes models quickly enough, and almost everything is automated there, from model creation and selection to applying the model in trading. CatBoost has helped me a lot, especially with the model interpreter in MQL. If no new bugs turn up, by spring I plan to run the models with real money: the models will be used in packs, one lot per model, and there will be two accounts, one for buys and one for sells.
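For readers unfamiliar with the library, below is a minimal sketch of a CatBoost train/validate/save cycle of the kind described above. It runs on synthetic stand-in data; the poster's actual predictors, model pool, and MQL model interpreter are not public, so this illustrates the library, not his pipeline.

```python
# Minimal sketch of a CatBoost train/validate/save cycle.
# Synthetic stand-in data: the real predictors and the MQL-side
# model interpreter from the post are not public.
from catboost import CatBoostClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=50, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = CatBoostClassifier(iterations=1000, depth=6,
                           loss_function="Logloss",
                           eval_metric="Accuracy", verbose=200)

# Early stopping on a validation set is what makes CatBoost "bake"
# models quickly and gives a simple criterion for model selection.
model.fit(X_train, y_train, eval_set=(X_val, y_val),
          use_best_model=True, early_stopping_rounds=50)

model.save_model("model.cbm")  # the saved model can then be applied in trading
```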
What's the "magic" tree from Doc? Where can I see the details?
There's an R script with a genetic algorithm that builds a tree, selecting generations by entropy improvement. Then there is some kind of final selection. I take all the trees from the final selection and pull the leaves from them for separate further evaluation in MT5. The script has not been posted publicly, so there are no detailed descriptions. Apparently it is like selecting the best tree from a forest, but with a depth limit to avoid overfitting. The process takes about 2 days on all cores on the last sample, which contains not all bars but only entry signals; if all bars for 3 years are used, the calculation takes 1.5 months. After the calculation completes I do a tree split: I remove the column with the root predictor of the best tree in the population and run everything again. It turned out that even on the 40th repetition this procedure sometimes produces very good leaves, so I came to the conclusion that the mathematically best tree layout is not always the most effective one, and one piece of information can prevent another from showing itself. As it turned out, the same idea is used in CatBoost, where predictors are randomly selected from the whole set to build each tree.
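The R script itself is not available, but the "remove the root predictor and retrain" loop can be sketched. A minimal illustration, substituting scikit-learn's entropy-criterion decision tree for the genetic search and running on synthetic data:

```python
# Sketch of the "remove the root predictor and retrain" loop. NOT the
# private R script: an entropy-criterion sklearn tree stands in for the
# genetic search, and the data are synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=3000, n_features=40, random_state=0)
remaining = list(range(X.shape[1]))   # predictor columns still in play
harvested = []                        # (round, {leaf: (size, purity)})

for r in range(40):                   # up to 40 root removals
    if len(remaining) < 2:
        break
    tree = DecisionTreeClassifier(max_depth=4, criterion="entropy",
                                  random_state=r).fit(X[:, remaining], y)
    if tree.tree_.feature[0] < 0:     # degenerate tree with no root split
        break
    # Record each leaf's size and class-1 purity for later filtering,
    # analogous to pulling leaves out for separate evaluation in MT5.
    leaves = tree.apply(X[:, remaining])
    stats = {int(l): (int((leaves == l).sum()), float(y[leaves == l].mean()))
             for l in np.unique(leaves)}
    harvested.append((r, stats))
    # Drop the root predictor so other information can "show itself".
    remaining.remove(remaining[tree.tree_.feature[0]])

print(f"harvested leaves from {len(harvested)} trees")
```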
By the way, note that the human was losing through mistakes in execution (clicking sloppily, forgetting to activate a skill), but managed to win with a non-standard tactical move: constantly distracting the opponent by dropping units in the rear of its base. This forced the AI to pull troops back from attacking the human's base, which gave the human time to upgrade his units to a higher level; as a result he was able to inflict significant damage and win the match.
Unexpected spikes and false breakouts distract a trader from his objective in much the same way.
Please note that this happened because of the crude conversion to a sliding-window view: the program got the windows confused. It's a technical problem.
Earlier, such drops were repelled with ease.
Watch the clips carefully.
By the way, Alglib uses a random subset of predictors (50% of the total number by default) to choose the split in each node. This seems to be the standard approach from the creators of Random Forest. The result is a wide variety of trees.
But it is hard to pick out the best ones, since the difference in final error is no more than 1%. That is, all the trees arrive at approximately the same result; it's just that in one tree a given predictor is split on earlier, and in another tree the same predictor is split on later (because earlier it had been excluded from the candidate list for splitting).
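For illustration, this per-node predictor subsampling can be reproduced in scikit-learn (not Alglib itself, and on synthetic data): a float `max_features` makes each split consider that fraction of randomly chosen predictors.

```python
# Per-node predictor subsampling, as in the Random Forest scheme
# described above (scikit-learn here, not Alglib; synthetic data).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=100, random_state=0)

# max_features=0.5 -> each split considers a random 50% of predictors,
# producing a wide variety of trees with similar final error.
forest = RandomForestClassifier(n_estimators=200, max_features=0.5,
                                oob_score=True, random_state=0).fit(X, y)
print(f"out-of-bag accuracy: {forest.oob_score_:.3f}")
```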
Actually, this is a predictor-selection problem. I'm thinking that to check 100 predictors you should do a full greedy search: add predictors one at a time and keep only the additions that improve the result. If you are excluding the root predictor 40 times after heavy calculations, maybe a full enumeration is simpler? Or do you have about a thousand predictors there?
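A minimal sketch of the greedy forward selection proposed here, using synthetic data and a generic classifier (any model and scoring function could be swapped in):

```python
# Greedy forward selection as proposed above: try adding predictors one
# at a time and keep an addition only if the cross-validated score
# improves. Synthetic data and a generic classifier for illustration.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=100, random_state=0)

selected, best = [], 0.0
for j in range(X.shape[1]):           # single pass over all 100 predictors
    trial = selected + [j]
    score = cross_val_score(LogisticRegression(max_iter=1000),
                            X[:, trial], y, cv=5).mean()
    if score > best:                  # keep only improving additions
        selected, best = trial, score

print(f"kept {len(selected)} predictors, CV accuracy {best:.3f}")
```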
So it turns out you're doing nonsense, because you're imitating the forest and boosting algorithms by hand instead of reading the theory of why they work. Again.
Unfortunately, you don't analyze the information you receive. Turn off the commentary and see for yourself.
There were no such situations before; watch the video carefully.
The AlphaStar algorithm was CHANGED specifically for the rematch, from a full-map view to a limited camera view, and they didn't do it properly.
You can see the bot is slow: it switches between windows, can't figure out where the Warp Prism is, and runs back and forth.
It's a bug!
I lose all self-respect communicating with you!