Machine learning in trading: theory, models, practice and algo-trading - page 1489

 

Well, the state is Markovian, but what model? A tabular method, or something else?

It means you need more points for the sliding window, at least.

And if a random series is fed in, what kind of prediction other than 50/50 can we even talk about?
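The 50/50 point is easy to verify numerically. A minimal sketch (synthetic coin flips standing in for a random price-direction series; not anyone's actual data here):

```python
import numpy as np

# A purely random up/down series stands in for the "random series" above.
rng = np.random.default_rng(42)
flips = rng.integers(0, 2, size=100_000)

# Two opposite fixed rules: "tomorrow repeats today" and "tomorrow reverses today".
persistence = np.mean(flips[1:] == flips[:-1])
reversal = np.mean(flips[1:] != flips[:-1])
print(persistence, reversal)  # both hover around 0.5
```

Any deterministic rule lands near 0.5 on such a series; apparent deviations are sampling noise.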
 
The diversity of the final result of machine learning tends toward a finite number of variants - patterns, limited by the creator of the mechanism. The diversity of market movements, on the other hand, tends to infinity, and only occasionally collapses to a single variant - a straight line, in a crisis.
So whether ML is an effective enough way to build a profitable trading strategy is a rather controversial question, and it is unlikely ever to be settled here.
 
Maxim Dmitrievsky:

It means you need more points for the sliding window, at least.

I wrote that I even took several thousand points

Maxim Dmitrievsky:
And if a random series is fed in, then what kind of prediction other than 50/50 can we even talk about?

What difference does it make, the data are the same, the model is the same

I predict new data with the package's function over the whole series (as in all the examples on the net, etc.) and the results are great.

I use the same model to predict the same data, but with a sliding window, and the result is different - unacceptable.

That's actually the question - what's the problem?
 
mytarmailS:

I told you that I even took several thousand points

What difference does it make, the data are the same, the model is the same

I predict new data with the package's function over the whole series (as in all the examples on the web, etc.) and the results are great.

I predict the same data with the same model, but in a sliding window, and the result is different - unacceptable.

That's actually the question - what's the problem?

It's not clear what kind of model you're using and where you're getting your states from. Without any packages, conceptually, what is the point?

Maybe the RNG is such that it doesn't produce anything truly random, which is why it gets predicted in the first case. Try changing the seed for train and test to see that in the first case you can't predict anything; otherwise I don't know how to help, the idea isn't clear.
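The seed-swap check can be sketched like this (a toy stand-in, not the actual pipeline discussed here: a lookup table that memorizes k-gram continuations of a random series; the k-gram scheme and all names are invented for illustration). Same-seed evaluation looks like skill; a fresh seed exposes it as memorization:

```python
import numpy as np
from collections import Counter, defaultdict

def make_series(seed, n=4000):
    """A pure coin-flip direction series: 1 = up, 0 = down."""
    return np.random.default_rng(seed).integers(0, 2, size=n)

def fit_table(series, k):
    """Memorize the majority continuation of every k-gram in the series."""
    counts = defaultdict(Counter)
    for t in range(k, len(series)):
        counts[tuple(series[t - k:t])][series[t]] += 1
    return {gram: c.most_common(1)[0][0] for gram, c in counts.items()}

def accuracy(table, series, k):
    """Hit rate of the memorized table on a series (unseen k-grams are skipped)."""
    hits = total = 0
    for t in range(k, len(series)):
        gram = tuple(series[t - k:t])
        if gram in table:
            hits += int(table[gram] == series[t])
            total += 1
    return hits / total

k = 10
train = make_series(seed=1)   # the series the table memorizes
test = make_series(seed=2)    # same process, different seed
model = fit_table(train, k)
acc_train = accuracy(model, train, k)
acc_test = accuracy(model, test, k)
print("same seed:", round(acc_train, 3))   # well above 0.5: pure memorization
print("fresh seed:", round(acc_test, 3))   # collapses toward 0.5
```

If changing the seed kills the result, the "prediction" in the first case was an artifact of evaluating on what was memorized, not a property of the series.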

 
elibrarius:
Splits are made according to the probability of classification. More precisely, not by probability but by classification error, because everything is known on the training sample, and we have not a probability but an exact estimate.
Although there are different splitting functions, i.e. measures of impurity (of the left or right sample).

I was referring to the distribution of classification accuracy over the sample, not the overall value, as it is done now.

 
Maxim Dmitrievsky:

It is not clear what kind of model you're using and where you get the states from. Without any packages, conceptually, what is the point?

Maybe the RNG is such that it doesn't produce anything truly random, which is why it gets predicted in the first case. Try changing the seed for train and test to see that in the first case you can't predict anything; otherwise I don't know how to help, the idea isn't clear.

Here is the file; it contains prices and two columns with predictors, "data1" and "data2".

You take an HMM with only two states and train it unsupervised (on the train data) on these two columns ("data1" and "data2"), in Python or whatever you like. You don't touch the price, it's only there for visualization.

Then you take the Viterbi algorithm and apply it to the test data.

we get two states, it should look like this

It's a real grail))

And then try to calculate the same Viterbi in a sliding window on the same data

Files:
dat.txt  2566 kb
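The full-sequence vs sliding-window comparison is easy to reproduce in miniature. A sketch with a hand-rolled log-space Viterbi and invented, fixed HMM parameters (deliberately not the unsupervised fit on "data1"/"data2" from the attachment, to isolate the decoding step): Viterbi picks the jointly most likely path for the whole segment it is given, so the last state of a short window can disagree with what full-sequence decoding assigns to the same bar.

```python
import numpy as np

def viterbi(obs, log_pi, log_A, log_B):
    """Most likely state path for a discrete-emission HMM (log-space Viterbi)."""
    n, S = len(obs), log_A.shape[0]
    delta = log_pi + log_B[:, obs[0]]
    psi = np.zeros((n, S), dtype=int)
    for t in range(1, n):
        scores = delta[:, None] + log_A      # scores[i, j]: best path ending i -> j
        psi[t] = np.argmax(scores, axis=0)   # best predecessor of each state j
        delta = scores[psi[t], np.arange(S)] + log_B[:, obs[t]]
    path = np.empty(n, dtype=int)
    path[-1] = np.argmax(delta)
    for t in range(n - 1, 0, -1):            # backtrack
        path[t - 1] = psi[t, path[t]]
    return path

# Invented, fixed parameters: two sticky states with distinguishable emissions.
rng = np.random.default_rng(7)
pi = np.array([0.5, 0.5])
A = np.array([[0.95, 0.05],
              [0.05, 0.95]])
B = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.2, 0.7]])

# Simulate the chain so the true states are known.
n = 500
states = np.empty(n, dtype=int)
obs = np.empty(n, dtype=int)
states[0] = rng.choice(2, p=pi)
obs[0] = rng.choice(3, p=B[states[0]])
for t in range(1, n):
    states[t] = rng.choice(2, p=A[states[t - 1]])
    obs[t] = rng.choice(3, p=B[states[t]])

log_pi, log_A, log_B = np.log(pi), np.log(A), np.log(B)

# Decode the whole sequence at once...
full = viterbi(obs, log_pi, log_A, log_B)
acc = np.mean(full == states)

# ...versus re-decoding a sliding window and keeping only its last state.
w = 30
win_last = np.array([viterbi(obs[t - w + 1:t + 1], log_pi, log_A, log_B)[-1]
                     for t in range(w - 1, n)])
disagree = np.mean(win_last != full[w - 1:])
print("full-sequence accuracy:", round(float(acc), 3))
print("window vs full disagreement:", round(float(disagree), 3))
```

If the two decodings disagree, the window decode is not wrong as such; it simply lacks the future context the full-sequence decode had, which is exactly how a "grail" on full-sample decoding can evaporate in a causal sliding window.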
 
mytarmailS:

Here is the file, there are prices and two columns with predictors "data1" and "data2"

You take an HMM with only two states and train it (on the train data) on these two columns ("data1" and "data2") in Python or whatever you like, and don't touch the price, it's only for visualization.

Then you take the Viterbi algorithm and apply it to the test data.

we get two states, it should look like this

It's a real grail.)

And then try to compute the same Viterbi in a sliding window on the same data.

Thanks, I'll look at it later and write back, since I work with Markov models myself.

 
Maxim Dmitrievsky:

Thanks, I'll check it out later and report back, since I'm working with Markov models myself.

How's it going?

 
mytarmailS:

How's it going?

I haven't looked yet, it's my day off) I'll write back when I have time, probably later in the week.

I took a look at the packages for now. I think this one fits: https://hmmlearn.readthedocs.io/en/latest/tutorial.html
