Well, that's the whole point. No interpolation polynomial is suitable for extrapolation: a Fourier fit simply repeats the original series, and polynomials like Lagrange or Taylor produce avalanche-like curves with an ever-increasing rate of price change. Smoothing calms the picture, but not by much, and it is not right anyway, since it loses the connection to the original data.
There is a simple, clear and effective extrapolation method that has nothing to do with interpolation: the trend.
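For what it's worth, a minimal sketch of that idea in MQL5: fit a least-squares line to the last n values and read it off m bars ahead (the oldest-first array layout and the name TrendForecast are illustrative assumptions, not anything fixed):

// Least-squares linear trend over the last n points of price[]
// (assumed oldest-first), extrapolated m bars past the window.
double TrendForecast(const double &price[], const int n, const int m)
  {
   double sx = 0.0, sy = 0.0, sxx = 0.0, sxy = 0.0;
   for(int i = 0; i < n; i++)                 // x = 0..n-1, y = price[i]
     {
      sx  += i;
      sy  += price[i];
      sxx += (double)i * i;
      sxy += i * price[i];
     }
   double denom = n * sxx - sx * sx;
   if(denom == 0.0)
      return(price[n - 1]);                   // degenerate window
   double b = (n * sxy - sx * sy) / denom;    // slope
   double a = (sy - b * sx) / n;              // intercept
   return(a + b * (n - 1 + m));               // trend value m bars ahead
  }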
You're as slow to recover from the stress of what you've read as the previous readers were; anyway, that's a different topic for discussion.
Yes, it's already off-topic here.
Hi Maxim,
A few days ago you were looking for a kernel solution for n input vectors instead of 2. Have you found that solution, or are you trying to implement it some other way?
If I am not wrong, instead of K(x, y), where K is the kernel function, you are looking for the output of K(x1, x2, x3, ..., xn). Am I correct in that understanding?
What I have learned is that the output of a kernel function is a scalar value, so it should be the sum of all the dot products. Something like this:
K(x1, x2, x3, ..., xn) = sum of z(i) * z(i+1) over all i where 0 < i < n
It could be a for loop in MQL5 that accumulates the kernel function values.
I have no way to test it. But have you tried and tested something similar? Or am I missing something in my understanding?
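If I read that formula right, the loop might look like this in MQL5 (the RBF kernel and the names Kernel and KernelChainSum are my own assumptions, purely to make the sketch self-contained):

// Pairwise kernel; an RBF is assumed here just for illustration.
double Kernel(const double a, const double b, double gamma = 1.0)
  {
   double d = a - b;
   return(MathExp(-gamma * d * d));
  }

// K(z1,z2) + K(z2,z3) + ... + K(z[n-1],z[n]) over consecutive inputs
double KernelChainSum(const double &z[])
  {
   double sum = 0.0;
   int n = ArraySize(z);
   for(int i = 0; i < n - 1; i++)
      sum += Kernel(z[i], z[i + 1]);
   return(sum);
  }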
Hi, I actually don't know how to do this now, because those algorithms (like SVM or Gaussian processes) work only with inner products, not with an explicit feature mapping. I'm now looking for good ideas on how to do it better.
As per my understanding, the kernel trick is a part of the SVM algorithm, so do you mean you are no longer looking to implement the kernel trick?
What you call a feature mapping is expressed, via the kernel trick, as a dot product (inner product) in the higher-dimensional space, so in my understanding it comes down to a simple multiplication inside the kernel function.
To make it clear: in K(x, y), are you planning to use the close prices of two consecutive candles as x and y to get the kernel value, or are you trying to implement something else?
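To illustrate what "expressed as a dot product in the higher space" means, here is a small self-contained check (my own toy example, not from any of the linked material): for the degree-2 polynomial kernel on two-component vectors, the kernel value coincides with the plain dot product of the explicitly mapped features, which is exactly why the mapping never has to be computed:

// K(x,y) = (x.y)^2 on R^2 equals phi(x).phi(y) with
// phi(v) = (v1^2, sqrt(2)*v1*v2, v2^2).
double Dot2(const double &a[], const double &b[])
  {
   return(a[0] * b[0] + a[1] * b[1]);
  }

void CheckKernelTrick()
  {
   double x[2] = {1.5, -0.5};
   double y[2] = {2.0,  3.0};

   double k = MathPow(Dot2(x, y), 2);          // kernel trick: one dot product, squared

   double fx[3] = {x[0]*x[0], MathSqrt(2.0)*x[0]*x[1], x[1]*x[1]};
   double fy[3] = {y[0]*y[0], MathSqrt(2.0)*y[0]*y[1], y[1]*y[1]};
   double f = fx[0]*fy[0] + fx[1]*fy[1] + fx[2]*fy[2];   // explicit mapping

   PrintFormat("kernel=%g  explicit=%g", k, f);  // both print 2.25
  }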
I mean I don't understand how to change the input vectors after the multiplication; they are absolutely identical then. It says you need to use the Gram matrix to place the vectors in (the feature mapping), and then do some manipulations with it. Here is sample code with an SVM:
https://pythonprogramming.net/soft-margin-kernel-cvxopt-svm-machine-learning-tutorial/
Now I'm just learning about vector spaces to understand it.
Maybe it would be better if we moved to the English forum :)
Of course, the reference material is given in the other forums, where the Gram matrix is solved in the video; I am trying to understand it as well. Here is just another quick video reference specific to the Gram matrix:
https://www.youtube.com/watch?v=8JiMUqbByGA
Also, have you understood and implemented anything in MQL5 so far? Otherwise, there is no point in trying further :)
It's a simple loop which calculates the Gram matrix... but then a quadratic solver runs, and I'm not sure what for... or maybe that is just the SVM logic already :)
Thanks for the video.
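For reference, that "simple loop" could look like this in MQL5, reusing the illustrative Kernel() sketched earlier (the flattened n*n storage is just a workaround for classic MQL5 arrays needing a fixed second dimension):

// Gram matrix of a training set: g[i*n + j] = K(x[i], x[j])
void GramMatrix(const double &x[], double &g[])
  {
   int n = ArraySize(x);
   ArrayResize(g, n * n);
   for(int i = 0; i < n; i++)
      for(int j = 0; j < n; j++)
         g[i * n + j] = Kernel(x[i], x[j]);
  }

The quadratic solver after it is the actual SVM training: it finds the Lagrange multipliers of the dual problem from exactly this matrix, which is why nothing beyond kernel values is ever needed.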
Exactly... as I said, it can probably be implemented with just a for loop in MQL5.
Well, we don't need to bother about the other stuff as long as our end goal is achieved :)
I mean, as long as we can take the inputs in MQL5 and get the expected kernel values as outputs, the rest doesn't matter, because the final part will be testing, where the results will reveal whether it has been implemented correctly.
By the way, SVM is just a classification technique, and the kernel trick makes it easy thanks to the simple dot product. I don't think everything in SVM needs to be implemented for the kernel trick, because with the kernel trick everything is done by the function itself, so there is not much left to do.
Also, this video explains SVM in detail, along with sample Python code using the kernel trick. You can have a look:
https://www.youtube.com/watch?v=N1vOgolbjSc&t=157s
But I don't understand how to work with the Gram matrix now, because it is not a set of new, transformed features; it's just a matrix of the scalar products of the old features.
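Maybe this view helps: the Gram matrix is not supposed to give you new features; it is the only thing the training (the quadratic solver) ever looks at, and prediction likewise needs only kernel values. A sketch of the resulting SVM decision function, where alpha, y, sv and b are assumed to come out of a trained model such as the one in the linked tutorial:

// f(x) = sum_i alpha[i]*y[i]*K(sv[i], x) + b; sign(f) is the predicted class
double SvmDecision(const double &sv[], const double &alpha[],
                   const double &y[], const double b, const double x)
  {
   double f = b;
   int n = ArraySize(sv);
   for(int i = 0; i < n; i++)
      f += alpha[i] * y[i] * Kernel(sv[i], x);   // Kernel() as sketched above
   return(f);
  }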