Machine learning in trading: theory, models, practice and algo-trading - page 2277
And variants close in amplitude will be generalized this way. For example, if you split a feature's range in half 3 times, the tree will contain intervals from 0 to 0.25, from 0.25 to 0.5, from 0.5 to 0.75, and from 0.75 to 1.
So one leaf will contain all variants where this feature is between 0.5 and 0.75, for example: 0.5, 0.55, 0.64, and 0.72 all land in the same leaf. That's a pretty good generalization by amplitude. Probably the same holds for neural networks, thanks to their nonlinear activation functions.
But there is no generalization by time in the tree.
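A minimal numpy sketch of the binning idea above (the split points and the values 0.5/0.55/0.64/0.72 are taken from the example; `np.digitize` stands in for the tree's leaf assignment):

```python
import numpy as np

# Split points produced by halving the [0, 1] range three times
splits = np.array([0.25, 0.5, 0.75])

# Values that differ in amplitude but fall in the same interval
values = np.array([0.5, 0.55, 0.64, 0.72, 0.8])

# np.digitize assigns each value the index of its interval (its "leaf")
leaves = np.digitize(values, splits)
print(leaves)  # → [2 2 2 2 3]: the first four values share the [0.5, 0.75) leaf
```

All four nearby values get the same leaf index, which is exactly the amplitude generalization described, while nothing in the leaf index encodes when the value occurred.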
Apparently you have to fit everything to a pattern of 20 features: chunks of 10 features by stretching them, chunks of 50 features by compressing them, plus a dozen intermediate variants.
No...
Interpolation/extrapolation is the same as fiddling with different window sizes: the same losses.
Do you realize how time-consuming that is? For every iteration.
It's very fast. You have a 1000-point pattern. You interpolate the smaller samples up to the size of the pattern.
7 microseconds will do.
But maybe you need to correlate the smaller patterns with each other (the small ones, point by point on the x-axis); then it will be faster. In that case it is better to compress the large ones and interpolate the small ones.
P.S. If, for example, the sample is 490 points and the pattern is 500, you can insert 10 NaNs into the series randomly (or better, evenly spaced), and then interpolate.
And if you need to shrink, it's even easier: in a piecewise-linear approximation you set the number of segments to 500 whenever the sample is longer than 500.
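Both directions can be sketched with plain linear interpolation in numpy; this is a minimal illustration of the resizing step, not the poster's actual code (the function name and lengths are my own):

```python
import numpy as np

def resample_to_pattern(series, pattern_len):
    """Linearly interpolate a series onto a grid of pattern_len points.

    Works both ways: stretches a shorter series (e.g. 490 -> 500)
    and compresses a longer one piecewise-linearly (e.g. 620 -> 500).
    """
    series = np.asarray(series, dtype=float)
    old_grid = np.linspace(0.0, 1.0, len(series))
    new_grid = np.linspace(0.0, 1.0, pattern_len)
    return np.interp(new_grid, old_grid, series)

short = np.sin(np.linspace(0, 6.28, 490))   # sample shorter than the pattern
long_ = np.sin(np.linspace(0, 6.28, 620))   # sample longer than the pattern
print(resample_to_pattern(short, 500).shape)  # → (500,)
print(resample_to_pattern(long_, 500).shape)  # → (500,)
```

Mapping both grids to [0, 1] sidesteps the NaN-insertion trick entirely: stretching and compressing become the same operation.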
And you don't need a big range of window sizes: from 200 to 800 or so, in steps of 20-50. Everything will fly. Or maybe not, I don't even know what it's needed for ) but it really is that fast on the GPU, though the patterns are terribly ugly.
I used to look for multifractals, i.e. cases where the current fractal is part of a bigger one similar to it. Then I took the continuation of the larger one and used it as the forecast. Well, sometimes it works, sometimes it doesn't, because they tend to warp a lot, even though they are similar overall.
I.e. you just take the last piece of the chart of n bars and a big last piece of n+100500 bars. Find what the small piece correlates with inside the big chunk, look at what comes after that chunk, and carry that into the future. If there is more than one match, average them. I also applied an affine transformation along the way, because the slope angle changes too.
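The steps above can be sketched roughly as follows. This is my own minimal reconstruction (function name, window sizes, and the rebasing of continuations are assumptions; the affine correction the poster mentions is omitted):

```python
import numpy as np

def forecast_from_matches(history, n=50, horizon=20, top_k=3):
    """Find past chunks most correlated with the last n bars and
    average their continuations as a naive forecast."""
    pattern = history[-n:]
    scores = []
    # Slide over history, leaving room for a continuation after each chunk
    for start in range(0, len(history) - n - horizon):
        chunk = history[start:start + n]
        r = np.corrcoef(pattern, chunk)[0, 1]
        scores.append((r, start))
    scores.sort(reverse=True)
    # Average continuations of the top-k matches,
    # rebased so each starts at the pattern's last value
    continuations = []
    for r, start in scores[:top_k]:
        cont = history[start + n:start + n + horizon]
        continuations.append(cont - history[start + n - 1] + pattern[-1])
    return np.mean(continuations, axis=0)

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(size=2000))   # synthetic random-walk "chart"
print(forecast_from_matches(prices).shape)  # → (20,)
```

Rebasing each continuation to the pattern's last value is a crude substitute for the affine correction; without it the averaged matches would start at unrelated price levels.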
Do you have something in alglib to compress and stretch the charts?
I see several interpolation routines there. Which one will work best for us? And which one is faster?
I think I found it. It goes straight from one grid to the other.
https://www.alglib.net/interpolation/spline3.php
Quick batch grid interpolation

spline1dconvcubic

This function solves the following problem: given a table of function
values y[] at old nodes x[] and new nodes x2[], it computes and returns a
table of function values y2[] (computed at x2[]).

This function gives the same result as calling Spline1DBuildCubic() followed
by a sequence of Spline1DDiff() calls, but it can be several times faster
when called with ordered X[] and X2[].

INPUT PARAMETERS:
    X          - old spline nodes
    Y          - function values
    X2         - new spline nodes

OPTIONAL PARAMETERS:
    N          - number of points:
                 * N>=2
                 * if given, only the first N points from X/Y are used
                 * if not given, automatically determined from the X/Y sizes
                   (len(X) must be equal to len(Y))
    BoundLType - boundary condition type for the left boundary
    BoundL     - left boundary condition (first or second derivative,
                 depending on BoundLType)
    BoundRType - boundary condition type for the right boundary
    BoundR     - right boundary condition (first or second derivative,
                 depending on BoundRType)
    N2         - number of new points:
                 * N2>=2
                 * if given, only the first N2 points from X2 are used
                 * if not given, automatically determined from the X2 size

OUTPUT PARAMETERS:
    F2         - function values at X2[]

ORDER OF POINTS:

The subroutine automatically sorts the points, so the caller may pass an
unsorted array. Function values are correctly reordered on return, so F2[I]
is always equal to S(X2[I]) regardless of the order of the points.

SETTING BOUNDARY VALUES:

The BoundLType/BoundRType parameters can have the following values:
    * -1, which corresponds to periodic (cyclic) boundary conditions.
          In this case:
          * both BoundLType and BoundRType must be equal to -1
          * BoundL/BoundR are ignored
          * Y[last] is ignored (it is assumed to be equal to Y[first])
    * 0, which corresponds to a parabolically terminated spline
         (BoundL and/or BoundR are ignored)
    * 1, which corresponds to the first derivative boundary condition
    * 2, which corresponds to the second derivative boundary condition
    * by default, BoundType=0 is used

PROBLEMS WITH PERIODIC BOUNDARY CONDITIONS:

Problems with periodic boundary conditions have Y[first_point]=Y[last_point].
However, this subroutine does not require you to specify equal values for the
first and last points: it automatically forces them to be equal by copying
Y[first_point] (corresponding to the leftmost, minimal X[]) to Y[last_point].
It is nevertheless recommended to pass consistent values of Y[], i.e. to
make Y[first_point]=Y[last_point].

  -- ALGLIB PROJECT --
     Copyright 03.09.2010 by Bochkanov Sergey
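For comparison, the same "table on an old grid in, table on a new grid out" operation can be sketched in scipy (assuming scipy is acceptable here; note the boundary handling differs from alglib's default parabolically terminated spline, so this uses 'natural' zero-second-derivative ends instead):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Old grid and function values (a 490-point sample)
x = np.linspace(0.0, 1.0, 490)
y = np.sin(2 * np.pi * x)

# New grid: the 500-point pattern size
x2 = np.linspace(0.0, 1.0, 500)

# Build the cubic spline once, then evaluate it on the whole new grid,
# which is what spline1dconvcubic does in a single batched call
spline = CubicSpline(x, y, bc_type="natural")
y2 = spline(x2)
print(y2.shape)  # → (500,)
```

Like the alglib routine, this is fast because the spline coefficients are computed once and then merely evaluated at all 500 new nodes.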
Adaptive filtering.
Idea for a mechanical trading system: build the system on a moving average, and adapt the moving average with a neural network.
What are you waiting for?
What about the new data?
As far as I remember, that TS worked for a little while and then died...
Filtering in the usual sense (moving averages, filters, etc.) always means lag, and lag in the market means losses...
You need a different paradigm (without lag): levels, for example...