The results so far are as follows:
This is AUDUSD with d=5, k=2.
My girl isn't very smart. On H4 she can't work at all, and on the hourly chart with more than 5 entries it's a solid loss.
I fed mine the hourly bars just for fun. I think paralocus did too. Although it's not clear what his Bid(200) is...
Bid(200)...
Those are the 200 first differences used to calculate the volatility.
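A minimal sketch of that reading of Bid(200), assuming the volatility estimate is simply the standard deviation of the last 200 first differences of the bid series (the function name and the toy random-walk data are illustrative, not from the thread):

```python
import numpy as np

def volatility_from_first_diffs(bid, n=200):
    """Volatility as the std of the last n first differences of bid."""
    d = np.diff(bid[-(n + 1):])  # n+1 prices give n first differences
    return d.std()

# toy usage on a synthetic random walk around 1.30
rng = np.random.default_rng(0)
bid = np.cumsum(rng.normal(0.0, 0.0001, 500)) + 1.30
vol = volatility_from_first_diffs(bid)
print(vol)
```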
And show me your graph of a single neuron on the hourlies.
It's quite strange that things are so bad on H4. So far, the results have only improved with increasing TF.
And show me your graph of a single neuron on the hourlies
Here's the hourly on the Eurobucks:
There is nothing strange about profitability behaving this way as the TF increases. Profitability is the product of predictability and the instrument's volatility on the selected TF. Volatility grows in proportion to the square root of the TF, while predictability, in the single-neuron interpretation, is the linear correlation coefficient between neighboring readings in the series of first differences.

This dependence is easy to construct. Incidentally, in the neighboring thread someone just dug up my construction on this topic from several years ago, and the result obtained there can safely be used now. It shows that the predictability of the instrument (in the sense above) decays exponentially as the TF increases. Thus we have two factors, one growing like a square root and the other decaying rapidly; their product has a global maximum, which determines the position of the optimal TF for a linear model (here implemented as a NS), and in your case it lands on the hours.
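The two-factor argument above can be sketched numerically: volatility grows like sqrt(TF), predictability decays exponentially, and their product peaks at a finite TF. The decay rate `lam` below is an arbitrary illustrative value (not taken from the thread); with this choice the analytic optimum of sqrt(t)*exp(-lam*t) sits at t = 1/(2*lam) = 50 minutes, i.e. near the hourly chart:

```python
import numpy as np

tf = np.arange(1, 1441)              # timeframe in minutes, M1 .. D1
lam = 0.01                           # illustrative decay rate (assumption)
volatility = np.sqrt(tf)             # volatility ~ sqrt(TF)
predictability = np.exp(-lam * tf)   # predictability decays exponentially
profit = volatility * predictability # product has a global maximum

tf_opt = tf[np.argmax(profit)]
print(tf_opt)  # -> 50, i.e. 1/(2*lam)
```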
I don't get such a picture. I'm not even after the slope, though that is present too. I only see this kind of pattern (vertical stripes) on AUDUSD,
but not at all on the Eurobucks. Besides, when working with quotes my girl has again run into the old "problem": the results depend on the initial conditions (the weight initialization). If I remove the weights (initialize them with zeros), everything is fine and the results repeat one-to-one. Since I am not yet fully confident that she works correctly, try your neuron on my data (attached along with the girl), and if you have the time and inclination, check the girl on your data.
Oh, and the traditional silly question: how do you normalize the quotes by doubled volatility? I've tried multiplying, but that's not it, and dividing also seems useless.
P.S. I've read the thread. I'm not sure about the ACF (i.e. the autocorrelation function), but its general idea is clear. The only thing I use now is the market pullback. I've already lost one deposit catching trends (they say those still exist).
On the net someone is always finding someone else's "archives" from N years ago, and it turns out they haven't lost their novelty. I remember, about five years ago, on a now-defunct forum of Castaneda followers, at the request of the public... :) I explained one powerful technique for overcoming the fear of death. The thread grew to 1000 posts within three months. And not long ago a friend of mine sent me a link to a site: go read what this dude writes. I went and saw parts of my own texts there, all mixed up and interspersed with "succinct expressions"... and of course I was not credited as the author. I asked the owner of the resource where he got it, and he answered that everything is on the net... probably right.
I fed mine the hourly bars just for fun. I think paralocus did too. Although it's not clear what his Bid(200) is...
You know what confuses me about this whole thing... It's not a fact that hourly bars alone, whether closing prices, opening prices or whatever, are enough. If you consider that the timeframe is a purely artificial thing and forex doesn't really care about it, then the notion of opening or closing prices becomes meaningless as well. What's left is the extremums, the fractals, and maybe a corridor of price fluctuations. Plus time and volume. In any case, opening/closing prices alone will not do the trick.
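Assuming "fractals" here means the usual five-bar kind (a bar whose high is strictly above the highs of the two bars on each side, and symmetrically for lows), extracting those extremums from a bar series could look like this sketch:

```python
def fractals(high, low):
    """Return indices of up-fractals (local highs) and down-fractals
    (local lows) in the five-bar sense described above."""
    ups, downs = [], []
    for i in range(2, len(high) - 2):
        if all(high[i] > high[j] for j in (i - 2, i - 1, i + 1, i + 2)):
            ups.append(i)
        if all(low[i] < low[j] for j in (i - 2, i - 1, i + 1, i + 2)):
            downs.append(i)
    return ups, downs

# toy bar data: highs/lows with obvious peaks at 2 and 7, trough at 4
high = [1, 2, 5, 2, 1, 3, 2, 6, 3, 2]
low  = [0, 1, 4, 1, 0, 2, 1, 5, 2, 1]
print(fractals(high, low))  # -> ([2, 7], [4])
```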
For debugging the method I would suggest feeding in something "with a hint" of the expected result. I know from my own experience (*smile*) that if the network training works, it quickly "gets the hang of it" and starts producing positive results. It is very convenient to polish the learning process on such a data set. After that you can start searching for "real" input data...
That leaves extremes - fractals - and maybe still a corridor of price fluctuations. Plus time and volume. In any case, opening/closing prices alone will not do the trick.
Well said!
Only I would be more categorical: only the extremums and maybe the time remain. And the time only indirectly, because it is tied to the arrival/departure of 2-3 major players during the day who follow a certain tactic. It is them we are tied to, not the time specifically.
Oh, and traditionally, silly question: how do you normalize quotes by doubling the volatility? I've tried multiplying, but that's not it, and dividing seems to be useless.
You need to divide the price increments by the doubled volatility; that normalizes the range of amplitudes fed to the NS input to roughly +/-1.
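The rule above can be sketched as follows, taking the volatility to be the standard deviation of the increments themselves (an assumption; the thread does not pin down the estimator). By construction the normalized series has std 0.5, so for roughly Gaussian increments about 95% of the values land inside +/-1:

```python
import numpy as np

def normalize(prices):
    """Divide price increments by twice their std so that most values
    fed to the network's input fall roughly within +/-1."""
    d = np.diff(prices)
    return d / (2.0 * d.std())

# toy usage on a synthetic quote series
rng = np.random.default_rng(1)
quotes = np.cumsum(rng.normal(0.0, 0.0005, 1000)) + 1.30
x = normalize(quotes)
print(x.std())               # 0.5 by construction
print((np.abs(x) <= 1).mean())  # fraction inside +/-1
```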
...the time only indirectly, as it is tied to the arrival/departure of 2-3 big players during the working day who adhere to a certain tactic. It is them we are tied to, not the time specifically.
Didn't think it was that serious...
For debugging the methodology, I would suggest feeding in something "with a hint" of the expected result. I know from my own experience (*smile*) that if the network training works, it quickly "gets the hang of it" and starts producing positive results. It is very convenient to polish the learning process on such a data set. After that you can start working on getting the "real" input data...
Sorry, of course, but I've been having trouble understanding hints lately. Maybe it's from sitting at the computer too long... What is this "something" you're writing about? Give me at least an example.