Principles of working with an optimiser and basic ways of avoiding curve fitting - page 5
All these methods are shamanism, just like TA without an understanding of why it should work on a series of prices and not, say, of temperatures ))
In the Econometrics: Prediction one step ahead topic I showed with figures that this is not shamanism. The point of that proof: after modelling the residual with GARCH, the spread of the non-stationary residual shrinks to fractions of a pip instead of tens of pips.
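A minimal sketch of the idea being described, not the poster's actual model: dividing each return by a GARCH(1,1) conditional volatility shrinks the raw residual scale down to roughly unit-sized standardised residuals. The parameter values (omega, alpha, beta) and the synthetic data are illustrative assumptions.

```python
import math
import random

random.seed(1)
# Synthetic pip-scale returns standing in for real quote increments.
returns = [random.gauss(0, 0.001) for _ in range(500)]

# Illustrative GARCH(1,1) parameters (assumed, not fitted).
omega, alpha, beta = 1e-8, 0.05, 0.9
var = sum(r * r for r in returns) / len(returns)  # initialise at sample variance

std_resid = []
for r in returns:
    std_resid.append(r / math.sqrt(var))      # residual standardised by conditional vol
    var = omega + alpha * r * r + beta * var  # GARCH(1,1) variance update

# After standardisation the residuals are roughly unit-scale,
# regardless of how small the raw returns were.
mean_abs = sum(abs(z) for z in std_resid) / len(std_resid)
print(round(mean_abs, 2))
```

The raw returns are thousandths of a unit; the standardised residuals come out near unit scale, which is the sense in which the "unsteady residual spread decreases" once the volatility dynamics are modelled.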
Can you buy anything in a shop with "non-stationary residuals in fractions of a pip"? :)
I agree with you completely and completely disagree (pardon the pun). Let me explain: let us take your statement as an axiom and consider it a priori (prior to experience, forgive the loose interpretation) true. In that case our task is to find a deterministic component and, on its basis, build a model that gives us an expected value (MO) within the limits of our needs. We find it (extrapolate; use Ito and, necessarily, Stratonovich; write a neural network; or find regularities expressed as the difference between two averages, which IMHO is much more convenient than Stratonovich and the other stochastic dances, the meaning being the same anyway) - whatever one has the imagination for and whatever is closer to one's heart. Now we have a function which, as we defined, is deterministic (note that we set up an experiment and now assert determinism a posteriori). Everything in our logic is fine: an a priori model is accepted, and on its basis we construct an a posteriori function that extracts the deterministic a posteriori regularities. The only thing is that the patterns we derive are a posteriori. We can never, ever do anything about that. Our task is to find an algorithm that (if we accept your assumption as axiomatic) varies dynamically with our data in real time (because any determinism inside non-stationarity is itself non-stationary).
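The "difference between two averages" the poster mentions can be sketched as a fast and a slow simple moving average over closing prices. Everything here is an illustrative assumption - window lengths, data, and the sign-of-difference rule are mine, not his model.

```python
def sma(series, window):
    """Simple moving average; None until the window is full."""
    out = []
    for i in range(len(series)):
        if i + 1 < window:
            out.append(None)
        else:
            out.append(sum(series[i + 1 - window:i + 1]) / window)
    return out

# Illustrative closes (hypothetical quotes, not real data).
closes = [1.10, 1.11, 1.12, 1.11, 1.13, 1.14, 1.13, 1.15, 1.16, 1.15]
fast, slow = sma(closes, 3), sma(closes, 5)

# The "deterministic component" here is just the sign of (fast - slow):
# +1 means the short-term mean sits above the long-term one.
signal = [None if f is None or s is None else (1 if f > s else -1)
          for f, s in zip(fast, slow)]
print(signal)
```

The point of the sketch is only that such a regularity is a function of the data itself, so it re-estimates itself on every new bar - the "dynamically varying" behaviour the post asks for.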
Offtopic: about half a year ago I asked on the forum what the difference is between the kiwi (the New Zealand dollar) and all the other pairs. I had managed to get a model that brings very good profit (again, a posteriori) on all pairs except the kiwi. No fitting, no Ito-Stratonovich, no self-deception. Opening prices only, no optimisations. The model is surprisingly simple and straightforward, based on the simplest candlestick statistics (which cannot work in the market in principle - that is what surprised me); moreover, randomly generated patterns also bring profit. But the commodity currencies and the currencies of small economies (those subject to trends) fell completely outside the allegedly found pattern. That is the only reason I agree with you: we can admit that there is some determinism, even though it contradicts common sense (excuse the pun), and it is this determinism that sometimes prevents ... All of this is purely IMHO, with no pretension to the truth.
Why does a pattern need stationarity? Suppose we have a working pattern. The distribution of its occurrences over time is severely non-normal. The main characteristics of this pattern are also non-stationary and drift with time. So what? There is only one essential condition: that it continues to appear and does not disappear. Our expected value (MO) will simply be non-stationary, but still positive, and that is the main thing. Another matter is that non-stationarity seriously complicates the search for these patterns: we cannot rely on standard statistical methods to identify one and then exploit it. For example, if it appeared every day for the last year and suddenly disappeared today, statistics will say the pattern no longer works. But that is not true, because it appears when it pleases and is not obliged to generate stationary characteristics. It is this property, at a fundamental level, that creates the need to re-optimise algorithms: one way or another we work with fixed parameters that correspond perfectly to a given pattern only on history. Tomorrow it will be slightly different, which means there will be a shift away from the extremum of our fit.
And then it is just a matter of surviving tomorrow's shift. We can survive by using relatively stable regularities, or (and) identification methods rough (simple) enough that their coarse estimates allow the regularity itself to change within sufficiently wide limits.
This is my rationale for why simple methods are usually more effective than complex ones, and why it is possible to make money in the market at all.
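The claim above - non-stationary occurrence and non-stationary payoff are fine as long as the expectation stays positive - can be shown with a toy simulation. This is my sketch, not anyone's trading system; the appearance probability, the drifting edge, and the noise scale are all invented for illustration.

```python
import math
import random

random.seed(7)

equity = 0.0
appearances = 0
for day in range(2000):
    # Non-stationary frequency of the pattern: it comes and goes as it pleases.
    p_appear = 0.05 + 0.04 * math.sin(day / 200)
    if random.random() < p_appear:
        appearances += 1
        # Non-stationary per-trade edge, but always positive (the one condition).
        edge = 0.5 + 0.2 * math.sin(day / 90)
        # Noisy single-trade outcome around that drifting edge.
        equity += random.gauss(edge, 1.0)

# Neither the occurrence rate nor the payoff is stationary, yet the
# cumulative result is driven by the positive (if floating) expectation.
print(appearances, round(equity, 1))
```

A sample-statistics test on any fixed window here would wrongly report "the pattern died" during its quiet stretches, which is exactly the point made in the post.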
Everything is great except one thing: you do not meet the requirement of model invertibility. You take a part of the quote (a trend, for example) and work with it - the standard TA scheme. But what about the residual? Could that residual start to distort everything in our model? How can we be sure, if we do not consider the residual at all?
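One simple way to act on this "check the residual" point: strip a trend from the series and look at the lag-1 autocorrelation of what is left. If the residual is still structured, the trend-only model is not finished. The data and the helper names below are illustrative assumptions, not a prescribed diagnostic.

```python
def linear_detrend(y):
    """Ordinary least-squares line fit; returns the residual series."""
    n = len(y)
    xs = list(range(n))
    mx, my = sum(xs) / n, sum(y) / n
    beta = (sum((x - mx) * (v - my) for x, v in zip(xs, y))
            / sum((x - mx) ** 2 for x in xs))
    alpha = my - beta * mx
    return [v - (alpha + beta * x) for x, v in zip(xs, y)]

def lag1_autocorr(r):
    """Lag-1 autocorrelation; near zero for a well-behaved residual."""
    m = sum(r) / len(r)
    num = sum((a - m) * (b - m) for a, b in zip(r[:-1], r[1:]))
    den = sum((a - m) ** 2 for a in r)
    return num / den

# Trend plus an alternating component the trend model cannot see.
y = [0.1 * t + (0.5 if t % 2 == 0 else -0.5) for t in range(100)]
resid = linear_detrend(y)
# Strongly negative autocorrelation: structure is left in the residual,
# so the "trend only" model would indeed be twisted by what it ignored.
print(round(lag1_autocorr(resid), 2))
```

A residual autocorrelation near zero would support leaving the residual "to the cashier"; a value far from zero is exactly the warning the post is asking about.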
No certainty - God forbid.
That is exactly the sign that modelling is finished: the residual can be ignored and left to the cashier.
You are discussing the same thing in every thread - your model. It seems that everyone has already commented on it more than once))
By the way, you can make money on a zigzag :) Speaking of zigzags