Machine learning in trading: theory, models, practice and algo-trading - page 1441
On the first question specifically, there is an article: https://habr.com/ru/post/127394/
From it you can already draw some assumptions.
About brute force: you still have to search within a limited area, otherwise it can go on forever.
As for what I do myself, I can't explain it in three sentences. I just go through the options based on experience.
Regarding the article and the market's memory: see my PM, I sent it recently. Only three people know about it so far; the topic has not been fully explored yet.
I think I read this article a couple of years ago. It's not the kind of Habr article that carries knowledge; it belongs to the genre of "here's a hypothesis, and we'll prove it because we want to". 90% of articles on financial markets are written that way. I read about the SSA method on Habr too; very funny reasoning.))
The author of the article takes the formula and applies it to the price series.
He gets a series of increments, randomly shuffled in chronological order.
Yesterday's closing price, yes, that can be an information component. The closing price on the 1st of the month, yes, that can be one too: statistics, reports, and so on that are used in the economy. But the closing price of 14.04, then 1.01, 23.02, 2.02, 17.03....
What can that tell us? At most the median of the daily closing prices relative to the previous day over the studied period. The information component was destroyed at the very start, and then a conclusion is drawn about the amount of information. IMHO, semi-scientific nonsense, good only for writing a dissertation.
P.S.: what secret knowledge in the PMs are we talking about? I scrolled through a month's worth; everything I hadn't yet seen was a link to Habr.
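The objection about shuffled increments can be illustrated with a small sketch (my own toy example, not taken from the article): if the increments carry serial dependence, shuffling them destroys exactly the temporal structure that any measure of "information" would rely on.

```python
import random

random.seed(42)

# Build an increment series with deliberate serial dependence:
# each increment carries over part of the previous one (AR(1)-style).
increments = [0.0]
for _ in range(5000):
    increments.append(0.7 * increments[-1] + random.gauss(0, 1))

def lag1_autocorr(xs):
    """Lag-1 autocorrelation: the simplest measure of temporal structure."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs)
    cov = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n - 1))
    return cov / var

shuffled = increments[:]
random.shuffle(shuffled)

print(f"original autocorrelation: {lag1_autocorr(increments):.3f}")  # close to 0.7
print(f"shuffled autocorrelation: {lag1_autocorr(shuffled):.3f}")    # close to 0.0
```

The shuffled series has the same marginal distribution (and thus the same histogram, mean, variance), yet its serial structure is gone, which is the poster's point about the destroyed information component.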
Well, if the increments are not independent, then there is already hidden information worth researching. That is an important premise in itself.
Yes, that link is a follow-up to the topic. It's no great secret, but finding it takes a lot of work, so please keep it a big secret for the time being.
There are many great discoveries still to come, as soon as I finish with the current ones. There is a lot of work to be done, and somewhere in there a true grail lies buried, dusty and forgotten.
Yes and no. You can't treat the price series as a physical process; if it were a physical process, you could always distinguish the state of the process from the input data and the system's reaction to it.
Here the market phase matters, and shuffling the data means losing the information about the phases. On phases, Vasily wrote my favorite article; everything is very clear, and the author genuinely wants to share information: https://www.mql5.com/ru/articles/1573
The important point about market phases: the price range may be the same, but the market phases will be different.
P.P.S.: about the mysteries, OK, I haven't read it myself yet.
P.P.P.S.: about phases: yesterday in Excel I was looking at volatility values on a logarithmic scale across all timeframes. I made one quite interesting observation: on hourly timeframes, the end of the current phase (up/down trend) is marked by a rise in volatility, while on higher timeframes (week, month) it is quite the opposite: rising volatility marks the beginning of a phase (up/down trend). In other words, trading systems built on something like Elder's "triple screen" and similar postulates have a right to live. Waves and the like are nonsense, of course, but such observations do keep coming from volatility spikes.
There is less information in volatility than one would like. You can get more from what I posted earlier.
About the quotes: interesting, but other things interest me now, it's hard to tear myself away.
Once, long ago, I compared quote curves with RSI of different periods; they practically coincided.
About market phase changes being accompanied by a rise in volatility: I'll second that. I even had a manual trading system built on it, and I still use it as a guide sometimes. It's understandable.

How to define the alphabet?
The alphabet and the language can be defined in different ways. A completely naive option:
Alphabet: {R, U, D}, where R marks the beginning of a new time interval (let it be a millisecond), U is one point up, D is one point down.
For example: "RDRRUUUR".
A trickier option is to treat fractals (in Mandelbrot's sense) as a formal grammar.
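The naive {R,U,D} encoding above can be sketched directly (the function and its tick format are my own illustration, not anyone's actual code):

```python
def encode_rud(ticks, interval_ms=1):
    """Encode (timestamp_ms, price_in_points) ticks into the {R,U,D} alphabet:
    R marks the start of each new time interval, U/D one point up/down."""
    out = []
    prev_price = None
    prev_bucket = None
    for ts, price in ticks:
        bucket = ts // interval_ms  # which time interval this tick falls into
        if bucket != prev_bucket:
            out.append("R")
            prev_bucket = bucket
        if prev_price is not None:
            step = 1 if price > prev_price else -1
            for _ in range(abs(price - prev_price)):
                out.append("U" if step > 0 else "D")
        prev_price = price
    return "".join(out)

# toy ticks: (millisecond timestamp, price in points)
ticks = [(0, 100), (0, 99), (1, 99), (2, 101), (3, 101)]
print(encode_rud(ticks))  # RDRRUUR
```

Each price move of n points expands into n repeated letters, so the string length grows with path length rather than with elapsed time, which already hints at the later objection that time is not the best axis for describing the series.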
Today I was going through my old books in the garage, thinking about reading them again. I looked with interest at Mandelbrot's "The (Mis)Behavior of Markets" and Bernstein's "Against the Gods: The Remarkable Story of Risk". I never did read them; I thought I was smart.))
Well, and I put "Investments" and "Options" under the wheels.

Once, long ago, I compared quote curves with RSI of different periods; they practically coincided.
Yes, about RSI: if there is any information in OHLC, then RSI shows the change in that information best.
By the way, that's a good way to filter data for ML: break it into sections by RSI value.
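A minimal sketch of that filtering idea, assuming plain close prices and Wilder's smoothing for the RSI (the function names and band thresholds are my assumptions):

```python
def rsi(closes, period=14):
    """Wilder's RSI over a list of closing prices; returns one value per
    increment (aligned to closes[1:]), None where history is insufficient."""
    gains, losses = [], []
    for prev, cur in zip(closes, closes[1:]):
        change = cur - prev
        gains.append(max(change, 0.0))
        losses.append(max(-change, 0.0))
    avg_gain = sum(gains[:period]) / period
    avg_loss = sum(losses[:period]) / period
    values = [None] * period  # not enough history for the first `period` bars
    for g, l in zip(gains[period:], losses[period:]):
        avg_gain = (avg_gain * (period - 1) + g) / period
        avg_loss = (avg_loss * (period - 1) + l) / period
        rs = avg_gain / avg_loss if avg_loss else float("inf")
        values.append(100 - 100 / (1 + rs))
    return values

def split_by_rsi(closes, bands=(30, 70), period=14):
    """Label each bar by RSI band ('low' / 'mid' / 'high') so the dataset
    can be split into sections before feeding it to an ML model."""
    labels = []
    for v in rsi(closes, period):
        if v is None:
            labels.append(None)
        elif v < bands[0]:
            labels.append("low")
        elif v > bands[1]:
            labels.append("high")
        else:
            labels.append("mid")
    return labels

print(split_by_rsi(list(range(1, 31)))[-1])  # a steadily rising series ends "high"
```

Training a separate model per band (or using the band as a feature) is one plain way to act on the "break it into sections" suggestion.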
Unfortunately, RSI loses a lot of "memory": it remembers only a few steps back.
Mandelbrot has a useful picture about prices, where every movement is represented as three (the second being the maximal correction), and a multifractal price structure is built on this. It can be written as a formal-grammar rule: T -> TCT.
However, I don't see much sense in this approach, mainly because of the same non-stationarity. Though there is room for theoreticians there: stochastic grammars and so on.)
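The T -> TCT rule can be played with directly. A sketch of Mandelbrot's multifractal "cartoon" construction, using his classic breakpoints (4/9, 2/3) and (5/9, 1/3); the helper itself is my illustration:

```python
def cartoon(p0, p1, depth):
    """Mandelbrot-style price 'cartoon': recursively replace each move with
    three moves (trend, correction, trend), i.e. T -> TCT applied `depth`
    times. Points are (time, price) tuples; returns the piecewise path."""
    if depth == 0:
        return [p0, p1]
    (x0, y0), (x1, y1) = p0, p1
    dx, dy = x1 - x0, y1 - y0
    a = (x0 + dx * 4 / 9, y0 + dy * 2 / 3)  # end of first trend leg
    b = (x0 + dx * 5 / 9, y0 + dy * 1 / 3)  # end of the correction leg
    return (cartoon(p0, a, depth - 1)[:-1]
            + cartoon(a, b, depth - 1)[:-1]
            + cartoon(b, p1, depth - 1))

path = cartoon((0.0, 0.0), (1.0, 1.0), 3)
print(len(path))  # 28 points: 3**3 segments plus the endpoint
```

Each application of the rule triples the number of segments, which is exactly the grammar view of the "three movements" picture; randomizing the breakpoints per step is where the stochastic-grammar remark comes in.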
Time is not the best way to describe the price series: if participants have no interest in an asset, there will still be some fluctuations, caused by the market makers' algorithms.
As for simply coding the sequence, I've looked: there are no real repeats. Or rather, about 50% of the OHLC history will contain repeated candlestick combinations, and the remaining 50% will be an equal number of statistically insignificant combinations.
In numbers: on a history of 132623 H1 bars, my script finds 14181 repeats of 4-bar combinations. Here are the repeat statistics for these patterns:
7480 7399 2911 2898 2430 2338 1666 1623 1352 1308 1303 1302 1020 990 981 977 928 921 704 704 700 684 682 682 667 634 627 596 591 584 583 570 570 569 566 564 553 481 467 459 453 452
447 446 437 423 408 396 389 383 350 347 346 344 342 335 324 312 306 299 290 290 279 272 263 259 254 247
And then the number of found repetitions gradually drops to 10-20 per pattern, then down to 30-40 patterns, and at the very end 1-2 occurrences each; in other words, there is no single alphabet.))
And it doesn't matter how many candlesticks you take for the analysis (3, 5, or even 10): you will always get the same statistics. First there will be many detected patterns, and then their number will decrease smoothly, i.e. a statistically skewed bell.;)
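The pattern-repeat count described above can be reproduced in outline. A sketch with a deliberately coarse candle encoding (the discretization step and all names are my assumptions, not the script from the post):

```python
from collections import Counter

def candle_code(o, h, l, c, step=10):
    """Coarsely discretize one OHLC candle into a small code:
    direction plus body size rounded to `step`-point buckets.
    (High/low are ignored here to keep the alphabet tiny.)"""
    direction = "U" if c >= o else "D"
    body = round(abs(c - o) / step)
    return f"{direction}{body}"

def pattern_counts(ohlc, n=4, step=10):
    """Count how often each n-candle combination repeats in the history."""
    codes = [candle_code(*bar, step=step) for bar in ohlc]
    patterns = ["|".join(codes[i:i + n]) for i in range(len(codes) - n + 1)]
    return Counter(patterns)

# toy history: (open, high, low, close), a 4-bar cycle repeated 50 times
bars = [(100, 115, 95, 110), (110, 112, 100, 105),
        (105, 120, 104, 118), (118, 119, 108, 110)] * 50
counts = pattern_counts(bars, n=4)
print(counts.most_common(3))  # the 4-bar cycle and its phase shifts
```

Sorting `counts.most_common()` reproduces the shape described in the post: a few very frequent patterns up front, then a long, smoothly decaying tail of one-off combinations (the "skewed bell"). The coarser the `step`, the fatter the head of that distribution.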