Machine learning in trading: theory, models, practice and algo-trading - page 3351

 
Renat Akhtyamov #:

For medicine, where the graphs crawl between two parallel lines, which is nothing compared to the financial markets.

---

Spent the weekend reading up on gradient descent.

You can do it without MO in a heartbeat.

I.e., an approximation to the extremum:

x0-x1

x0-x2

x0-x3

etc.

there's something to it, of course.

It's a standard technique. You have to adapt it to your own tasks.
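The difference-based idea above (comparing the objective at x0 with nearby points x1, x2, x3, and so on) can be sketched as a simple derivative-free search. This is a minimal sketch with assumed names (`diff_search`, the toy objective), not the poster's actual code:

```python
def diff_search(f, x0, step=0.5, shrink=0.5, iters=50):
    """Derivative-free 1-D search: compare f(x0) with f at x0 +/- step
    and move toward the lower value; shrink the step when neither side improves."""
    x = x0
    for _ in range(iters):
        fx = f(x)
        if f(x - step) < fx:
            x -= step
        elif f(x + step) < fx:
            x += step
        else:
            step *= shrink  # no better neighbour: tighten the search
    return x

# usage: minimum of (x - 3)^2 starting from x0 = 0
x_min = diff_search(lambda x: (x - 3.0) ** 2, x0=0.0)
print(round(x_min, 2))  # → 3.0
```

No gradients required, which is the point being made: plain differences from a reference point are enough to home in on an extremum.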
 
Maxim Dmitrievsky

You have always written that price increments have no predictive power. But still you continue to only use them. Why?)

 
Evgeni Gavrilovi #:

You have always written that price increments have no predictive power. But still you continue to only use them. Why?)

Price has to tell a story.

 
Evgeni Gavrilovi #:

You have always written that price increments have no predictive power. But still you continue to only use them. Why?)

Did I write that? I think that was mostly written by my opponents.
Well, if you consider only time series, there isn't much choice. I also wrote about fractional increments in one of my articles. They seem to retain a little more information.

If we take just training on increments, without any tricks, then fractional differentiation really wins a bit, according to the results on new data.

I also did some experiments with automatic feature creation, which didn't lead to anything. Then I realised that the problem was in the partitioning and in the signal-to-noise ratio, and that it had to be fixed by means other than brute-forcing features. And so it went on, with all sorts of crazy ideas at the time. Later I learnt that this is generally the right way to do it :)
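For reference, the fractional increments mentioned above can be sketched with the standard fixed-window fractional differencing recursion for the weights of (1 - B)^d. This is a generic minimal sketch, not the implementation from the articles; the function names and `window` size are illustrative:

```python
import numpy as np

def frac_diff_weights(d, size):
    """Weights of the fractional difference operator (1 - B)^d, truncated to `size` lags."""
    w = [1.0]
    for k in range(1, size):
        w.append(-w[-1] * (d - k + 1) / k)
    return np.array(w)

def frac_diff(series, d, window=20):
    """Fractionally differenced series using a fixed truncation window."""
    w = frac_diff_weights(d, window)
    out = np.full(len(series), np.nan)
    for i in range(window - 1, len(series)):
        # dot the weights with the most recent `window` values, newest first
        out[i] = np.dot(w, series[i - window + 1:i + 1][::-1])
    return out
```

With d = 1 this reduces to ordinary first differences; with 0 < d < 1 the series keeps some memory of the price level while becoming closer to stationary, which is presumably why it "retains a little more information".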

Nobody teaches it, there are no gurus. There is no one to turn to.

When I was still training neural networks in MT5, I was experimenting. Then I felt that the MT5 environment was stifling in terms of MO, so I moved to python.
 

I suggest all machine learning experts test their models on my data.

World government bond index for predicting the euro-dollar exchange rate, 15-minute timeframe.

https://drive.google.com/file/d/1W4TOLbZCTCs3hEvGvptGxvTE6_r2TrWW/view

 
Maxim Dmitrievsky #:
My last 2 articles, at a simple level and without the nuances, pretty much describe all of these approaches. Let's say they don't describe them outright, but they come close. I'm now checking the details of what has been researched there. For example, inductive conformal prediction differs from transductive only by one or two classifiers, trained separately for each class label. The latter is better (more accurate) at estimating the posterior, whereas I used the inductive method. Another idea there is to retrain the model while adding and discarding each sample, for a more accurate estimate. It's very expensive, but reasonably efficient, and you can take simple and fast classifiers, which I also wrote about when training on stumps.
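To make the inductive, per-class-label idea in the quote concrete, here is a minimal sketch of label-conditional inductive conformal p-values. The array shapes and names are assumptions for illustration, not code from the articles:

```python
import numpy as np

def conformal_p_values(cal_scores, cal_labels, test_scores, labels):
    """Inductive conformal p-values, computed separately for each class label.
    cal_scores[i, c]: underlying model's probability of class c for calibration
    sample i; nonconformity = 1 - probability of the candidate class."""
    p = {}
    for c in labels:
        # nonconformity of calibration samples whose true label is c
        alpha_cal = 1.0 - cal_scores[cal_labels == c, c]
        alpha_new = 1.0 - test_scores[c]
        # p-value: fraction of calibration scores at least as nonconforming
        p[c] = (np.sum(alpha_cal >= alpha_new) + 1) / (len(alpha_cal) + 1)
    return p
```

A label goes into the prediction set when its p-value exceeds the chosen significance level. The calibration set is held out from training, which is what makes the method inductive (one model fit) rather than transductive (refitting per test sample).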

I don't see any applause for my brilliance



like this, huh?


 
Renat Akhtyamov #:

like this, huh?


No, until you learn MO and python you won't appreciate it :)
 

Random walk.


It should not be done this way.

 
Most likely SanSanych uses increments calculated from bar 0 (you could call them cumulative increments), not increments between neighbouring bars.
Cumulative increments by the 100th bar will look like 405, 410, 408 pts, while bar-to-bar increments remain 5, 4, -2 pts...
On cumulative increments trends remain visible; on bar increments they are almost invisible. And if they are mixed, as in the article, you get a wander around 0.
I thought everyone here counted increments from bar 0...
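The distinction can be shown in a few lines (a toy price series in points, numbers chosen purely for illustration):

```python
prices = [400, 405, 410, 408]

base = prices[0]
cumulative = [p - base for p in prices[1:]]        # increments from bar 0
bar = [b - a for a, b in zip(prices, prices[1:])]  # neighbouring-bar increments

print(cumulative)  # → [5, 10, 8]  (the trend is still visible)
print(bar)         # → [5, 5, -2]  (wanders around 0)
```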
 

Ordinary increments with an arbitrary lag. No logarithms or zero bars. The question was about features. The main problem there is the low signal-to-noise ratio. But they contain all the information.
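"Increments with an arbitrary lag" here presumably means p[t] - p[t - lag]; a minimal sketch (the function name is assumed):

```python
import numpy as np

def increments(prices, lag=1):
    """Ordinary price increments with an arbitrary lag: p[t] - p[t - lag]."""
    p = np.asarray(prices, dtype=float)
    return p[lag:] - p[:-lag]

print(increments([1.0, 2.0, 4.0, 7.0], lag=2))  # → [3. 5.]
```

With lag=1 this is the usual bar-to-bar increment; larger lags give smoother, lower-frequency features from the same series.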

The game of telephone keeps evolving :)

I don't read recent articles at all, especially from the prolific fluff writers with their whole cycles of filler articles :)