Machine learning in trading: theory, models, practice and algo-trading - page 3528
You can just feel the effect empirically.
2. In the second case, the best result is some nonsense.
3. In the third case, it is not quite nonsense, but not quite meaningful either.
Although the quote charts (orange, at the bottom) do not differ much visually across the three screenshots, the quotes come from different clusters. On screens 2 and 3 they look slightly flatter, yet the results differ strongly.
I cannot explain why something happens on some clusters and not on others.
It also seems obvious that there is some time structure related to market sessionality. It is just unlikely to be described by simple methods like "scheduled reversals".
Imho, there should be some dependence on the size of the price movement. Large movements (the daily range and above) are hardly driven by daily rhythms.
you have drawn it above yourself :-)
Volatility is the most direct consequence of sessionality. It roughly corresponds to the probability density of market events in general. Download any Expert Advisor from the Market, run it in the tester, and you will see your drawing in its statistics.
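To illustrate (a sketch of my own, not from the posts above): the session "drawing" is just the intraday volatility profile. With real quotes you would group absolute returns by hour of day; here the data is synthetic, with higher variance assumed during a 13:00-17:00 session overlap.

```python
import numpy as np

rng = np.random.default_rng(0)
hours = rng.integers(0, 24, size=10_000)   # hour-of-day label for each bar

# Synthetic returns: assume higher variance during the 13:00-17:00 overlap
scale = np.where((hours >= 13) & (hours < 17), 3.0, 1.0)
returns = rng.normal(0.0, scale)

# Mean absolute return per hour: the intraday volatility profile
profile = np.array([np.abs(returns[hours == h]).mean() for h in range(24)])
peak_hour = int(profile.argmax())          # lands inside the high-variance window
```

On real data the same grouping over a few months of M15 bars reproduces the familiar session hump.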
At the very least, the two consecutive increments (before and after the reversal) should have opposite signs.
On the one hand yes, but on the other, this event (the last straw, the final top-up, the peak of peaks) will never coincide exactly to the minute, and it will get smeared across your own chart.
Logically, a reversal is a transient process: it takes time and adds to the already significant volatility. Its extremum comes well after the event that caused the reversal.
PS: lately the Eurozone can be traded at 15:30 MSK. Look at the situation, drink a couple of cups of coffee, calmly choose EUR, GBP, CHF or XAU and the direction, and enter.
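The sign-change condition a couple of posts up can be sketched in a few lines (a toy example of my own, not the poster's code):

```python
def is_reversal(prev_increment: float, next_increment: float) -> bool:
    """True when two consecutive increments have opposite signs (a local extremum)."""
    return prev_increment * next_increment < 0

prices = [1.10, 1.12, 1.11, 1.13]                         # toy quote series
increments = [b - a for a, b in zip(prices, prices[1:])]  # +0.02, -0.01, +0.02
reversals = [is_reversal(p, n) for p, n in zip(increments, increments[1:])]
# bars 1 and 2 are local extrema: the increment flips sign around each of them
```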
I delete examples that are on the conditional boundary of class separation.
Do you define the boundary by a single predictor, or do you search across several spaces (predictors) at once?
Stability improvement is obtained because of simpler bounds.
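A minimal sketch of the deletion idea (my assumption of the mechanics, with a stand-in model and an arbitrary 0.1 band): drop training rows whose predicted class probability sits near 0.5, then refit on the rest.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))
y = (X[:, 0] + 0.3 * rng.normal(size=500) > 0).astype(int)

# Stand-in for a fitted model: logistic squash of the informative feature
proba = 1.0 / (1.0 + np.exp(-4.0 * X[:, 0]))

# Delete examples near the conditional class-separation boundary (p close to 0.5)
keep = np.abs(proba - 0.5) > 0.1
X_clean, y_clean = X[keep], y[keep]        # refit the model on these rows only
```

With the ambiguous rows gone, the refitted boundary tends to be simpler, which is the claimed source of stability.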
How do you define stability improvement?
Do you determine the boundary by a single predictor, or do you search across several spaces (predictors) at once?
How do you define stability improvement?
There can be variations: you can use one, or you can use several.
Reduction of cross-entropy/log-loss, and improvement on new data.
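That metric can be written down directly; a self-contained sketch of log-loss compared before and after a change, evaluated on held-out (new) data. The numbers are made up for illustration.

```python
import math

def log_loss(y_true, p_pred, eps=1e-12):
    """Mean binary cross-entropy of predicted probabilities against 0/1 labels."""
    total = 0.0
    for y, p in zip(y_true, p_pred):
        p = min(max(p, eps), 1 - eps)      # clip to avoid log(0)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)

y_new = [1, 0, 1, 1]                       # labels from new (held-out) data
p_before = [0.60, 0.50, 0.55, 0.60]
p_after = [0.80, 0.20, 0.70, 0.75]
improved = log_loss(y_new, p_after) < log_loss(y_new, p_before)
```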
As for explaining why something happens on some clusters and not on others: laziness has not allowed that so far.
Variability allows patterns that are stable on new data to be found by chance...
How to separate one from the other with high accuracy is a mystery... I am trying to go another way: assessing the damage a split does to the structure of identified potential patterns, i.e. the goal is to continue learning without destroying the knowledge found in the current iteration.
No, the number of good clusters in the set increases. That is, not just one good cluster, but almost all of them. So there's something inside each one.
You crave my mental destruction, I guess, with your definitions )) Let me be a bit of a Guru today.
Since the Guru has come a long way, he can whisper that this approach through clustering improves dopamine production, complexion and potency. Use it.
Volatility is the nearest consequence of sessionality.
It is an obvious and non-tradable pattern, and we would like non-obvious and tradable ones)
So there's something inside each one.
Try mixing in ineffective predictors, even random ones: if the effect remains, it is really worth thinking about. In March I also got a lot of stable clusters, but not all of them.
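The random-predictor check is easy to sketch (synthetic data, and a naive absolute-correlation score as my stand-in for whatever metric the pipeline uses): a genuinely informative predictor should score far above pure noise, and if noise scores comparably, the "stable" pattern is an artifact of the pipeline rather than of the data.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
signal = rng.normal(size=n)
target = (signal + rng.normal(size=n)) > 0   # target genuinely depends on signal
noise = rng.normal(size=n)                   # the mixed-in random predictor

def score(pred, y):
    """Naive usefulness score: absolute correlation between predictor and target."""
    return abs(np.corrcoef(pred, y.astype(float))[0, 1])

real_score = score(signal, target)      # clearly positive
noise_score = score(noise, target)      # near zero; suspicious if comparable
```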
You crave my mental destruction, I guess, with your definitions )))
Well, it's probably hard to take in, as I haven't read of anyone doing this...
The point is that at each iteration of tree building we have good candidate splits for the next step (a set of variants from which one is chosen by a standard metric). Depending on the choice of split (in my case a double choice, since a quantum segment is also chosen), some of the remaining candidates statistically deteriorate, thus damaging the probabilistic structure, which makes some of those splits unusable at subsequent iterations.
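As an illustration of the candidate set (my own sketch of standard gain scoring over quantized bins, not the poster's actual algorithm): at one iteration, each "quantum" edge of a binned feature is a candidate split scored by a metric such as information gain; the "damage" would then be the drop in the remaining candidates' scores after one split is committed.

```python
import numpy as np

def entropy(y):
    """Binary entropy of a 0/1 label array, in bits."""
    p = y.mean()
    if p in (0.0, 1.0):
        return 0.0
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def gain(y, mask):
    """Information gain of splitting labels y by a boolean mask."""
    w = mask.mean()
    return entropy(y) - w * entropy(y[mask]) - (1 - w) * entropy(y[~mask])

rng = np.random.default_rng(3)
x = rng.normal(size=1000)
y = (x > 0.2).astype(int)                            # target tied to one threshold

# Quantile edges play the role of "quantum segment" boundaries
edges = np.quantile(x, np.linspace(0, 1, 9)[1:-1])
candidates = {float(b): gain(y, x <= b) for b in edges}
best_edge = max(candidates, key=candidates.get)      # edge nearest 0.2 wins
```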
If you are interested in trying the ready-made bot, I can send it to you next week, and to all active participants of the thread. Test it on demo; real accounts at your own risk, I take no responsibility.
Thanks, but I have no interest in black boxes: I am turning my own algorithm into a white box :).