Machine learning in trading: theory, models, practice and algo-trading - page 2475

 
Evgeniy Ilin #:

To achieve this in a neural network, the neuron types should be as varied as possible, and the number of layers and their composition should also be flexible; then it becomes possible.

Yes, to approximate mutually independent parameters in an attempt to reduce them to a single output, you really do need to weigh statistically deep historical data up to the current moment, i.e. to have a representative sample (the larger it is, the higher the probability of hitting the target)... and to process (weigh) it in a black box... but that is all just statistics, and it can be far removed even from the current economic phase... you simply get an average with a large variance (and a coefficient of variation on top of it)
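
For reference, a sketch in standard notation of the quantities meant here (just the textbook definitions of sample mean, variance and coefficient of variation, not anything from the post itself):

```latex
% Sample mean, variance and coefficient of variation of a series x_1..x_n
\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i, \qquad
s^2 = \frac{1}{n-1}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2, \qquad
\mathrm{CV} = \frac{s}{\bar{x}}
```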

We seem to go through a full economic cycle in about 30 years, so the features for training on the current moment are better taken from a similar phase of economic development (so I suppose), which would also reduce the initial data sample... but I personally don't have that data (and I would need it in order to believe in the validity of a full-fledged analysis over such a long period)...

Evgeniy Ilin #:

Most people use a neural network with a fixed architecture, but for some reason they don't understand that the architecture must also be flexible; by destroying this flexibility we destroy the very possibility of minimizing overfitting. Of course, the same criteria can be applied to simple models as well, and you should, then you'll get a good forward; my model gives a couple of months of profit ahead, and the settings can be updated within a day. One of the basic tricks is to take as much data as possible (10 years of history or more); in that case the search is for global patterns, and those are based on the physics of the market and in most cases keep working for a long time.

This is not a trick but an attempt to break away from reality... imho

(It is possible to get a decent forward at lower cost, logically, without analyzing all the chaff despite the lack of information about the current moment. Even though everything is learned only by comparison, and in a black box at that, I would still switch the brain on first, and only then apply machine learning, not even a very deep one, and only to the part of the current moment covered by the features that matter in the current market situation)... and then it turns out that the necessary historical data is missing anyway...

All the same, an understanding of the ecosystem and a knowledge base of the matter and energy exchange within it, combined with timely awareness of the driving news/events, is a way to figure out the evolution without burning that much computing power just for the sake of a mean and a variance... imho

But thanks for your observations... though for me the need for such deep learning is debatable (although for a black box, I suppose, it is undeniable)

 
mytarmailS #:


My vision is of course not the standard one; I reason mostly from the standpoint of saving time. I communicate with many people, and I think it is no secret that in 5 years we will reread this forum and probably laugh at ourselves. I just think that none of these developments are wasted, and one should try to scale what has been achieved.

I have wanted to give it all up many times, but for some reason I haven't, even though it brings no money. It seems to me that this is because the experience itself is of great value; everyone's is his own, and all that is in our power is either to keep moving or to go to the pub and drink. It seems to me that we just need to scale and strengthen everything that shows at least the beginnings of something, and I also think that most likely everything is very simple. The more I complicate something and the more weird math I try to stuff into it, the less predictable it all becomes.

To be honest, I think everyone who has spent years on this understands that they won't get 100 percent a month, while those who haven't spent that time will look at your 100 percent a year and buy a signal promising 100 percent a month, without even noticing that it has been hanging for 2 months.

 
JeeyCi #:

Yes, to approximate mutually independent parameters in an attempt to reduce them to a single output, you really do need to weigh statistically deep historical data up to the current moment, i.e. to have a representative sample... but that is all just statistics, and it can be far removed even from the current economic phase... you simply get an average with a large variance (and a coefficient of variation on top of it)...

The variance and other deviations are the natural outcome of analyzing a system based on probabilities rather than on differential equations. All you can get is a system of differential equations whose variables are, mind you, "probabilities of certain events", the events that seem important to you, and all you can predict is a probability, not an exact value. Once you understand that, everything becomes easier, and you will stop being afraid of variance and other such things. You will always get variance; your task is just to minimize it. You cannot predict the long-term behavior of the system with 100% accuracy, but you can reach an accuracy that is enough for profitable trading. I mean, don't do the machine's job for it; give it some freedom and you'll see that it knows better than you what data it needs. By the way, about the black box: the blacker the box, the smarter it is. AI is built on this very principle.

 
Evgeniy Ilin #:

I mean, don't do the machine's job for it; give it some freedom and you'll see that it knows better than you what data it needs. By the way, about the black box: the blacker the box, the smarter it is. AI is built on that principle.

Well, that much is clear: the more data at the input (and features to select from), the more accurate the approximating estimate and even the forecast based on it (although still with some probability of error)...

After your posts, the developer's area of responsibility becomes a little clearer.

Evgeniy Ilin #:

The variance and other deviations are the natural outcome of analyzing a system based on probabilities rather than on differential equations. All you can get is a system of differential equations whose variables are, mind you, "probabilities of certain events", the events that seem important to you, and all you can predict is a probability, not an exact value.

Algorithm To Find Derivatives Using Newtons Forward Difference Formula

Evgeniy Ilin #:

You will always get variance; your task is just to minimize it.

Yes, there was a picture somewhere in the link I posted earlier ~ the convergence of prediction and error toward the very bottom of the parabola (this is about not overtraining too much and stopping in time). Evolution spirals toward that point (with decreasing acceleration, I suppose, until it stops completely, the variation of the difference shrinking from larger to smaller, like falling into a funnel).
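
A minimal sketch of that "stop at the bottom of the funnel" idea as a plain early-stopping loop; the validation-error curve below is simulated, not from any real model:

```python
# Simulated validation error: a parabola with its bottom at epoch 20.
val_err = [0.5 + (e - 20) ** 2 / 1000 for e in range(50)]

best_err, best_epoch = float("inf"), 0
patience, bad = 5, 0
for epoch, err in enumerate(val_err):
    if err < best_err:                    # still descending into the funnel
        best_err, best_epoch, bad = err, epoch, 0
    else:                                 # error is rising again
        bad += 1
        if bad >= patience:               # stop in time, keep the best point
            break

print(f"stopped at epoch {epoch}, best model was at epoch {best_epoch}")
```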

p.s.

I once coded "Calculate Implied Volatility with VBA" (implied volatility via Newton-Raphson iteration) and couldn't find any signals... Understandably so (because Black-Scholes doesn't work on currencies at all, since nothing there is as binomially distributed as one would like to dream).
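
For what it's worth, the Newton-Raphson implied-volatility iteration mentioned here looks roughly like this in Python (a sketch only; the Black-Scholes inputs at the bottom are illustrative, not from the post):

```python
# Newton-Raphson iteration for Black-Scholes implied volatility:
# sigma_{n+1} = sigma_n - (BS(sigma_n) - market_price) / vega(sigma_n)
from math import log, sqrt, exp, erf, pi

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def bs_vega(S, K, T, r, sigma):
    """Sensitivity of the call price to sigma (the Newton step divisor)."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    return S * sqrt(T) * exp(-0.5 * d1**2) / sqrt(2.0 * pi)

def implied_vol(price, S, K, T, r, sigma0=0.2, tol=1e-8, max_iter=50):
    sigma = sigma0
    for _ in range(max_iter):
        diff = bs_call(S, K, T, r, sigma) - price
        if abs(diff) < tol:
            return sigma
        sigma -= diff / bs_vega(S, K, T, r, sigma)
    return sigma  # may not have converged

# Example: back out the volatility that reproduces a given market price.
print(implied_vol(price=10.45, S=100, K=100, T=1.0, r=0.05))
```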

... To be honest, I am not really familiar with Newton: did he actually invent that many different things (?), or are these (your forward differences and my implied volatility) from the same lineage, the same perspective, essentially the same calculation?... I don't want to waste time on something I don't believe in, and I don't believe in financial modelling.

Algorithm To Find Derivatives Using Newtons Forward Difference Formula
  • www.codesansar.com
 

But there is still the question of choosing the target function... which is also the developer's responsibility... What do you advise?

(although yes, you did use forward difference)

p.s.

on degrees of freedom - I'll look through it again

 

I believe in supply and demand... in the cobweb model (with a focus on elasticity and Walras), in balance vs. imbalance, to determine direction... (for the probability of breaking out of a flat into a trend) - plus open interest and time management (including the fact that one cannot always be guided by Walras)...
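
A toy run of the cobweb model referred to here, with invented demand/supply coefficients, just to show how the slope ratio decides whether the spiral converges to balance or diverges:

```python
# Cobweb iteration: demand q_d = a - b*p clears today's market, while
# supply reacts to last period's price: q_s = c + d*p_prev.
a, b, c, d = 100.0, 2.0, 10.0, 1.5   # illustrative coefficients

p = 20.0  # arbitrary starting price
for t in range(10):
    q_supplied = c + d * p           # producers respond to yesterday's price
    p = (a - q_supplied) / b         # price that clears today's demand
    print(f"t={t}: price={p:.2f}")

# With |d/b| < 1 (supply reacting less strongly than demand) the spiral
# converges toward the equilibrium p* = (a - c)/(b + d); with |d/b| > 1
# it diverges -- the "balance vs. imbalance" read on direction.
```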

For confirmation: the order book (levels being eaten away, or oops, an iceberg popping up) - although, of course, it is better not to chew through the level yourself but to pass through calmly once someone has already eaten through it and no opposing volume remains (ideally with a retest after the breakout, which is also visible in the order book and on the tape).

 
JeeyCi #:

on Walras.

Loved the one about the cheese village and the wine therapy center.

 
JeeyCi #:


I can only tell you about Newton. As I understand it, you mean forward forecasting based on an existing curve from the past; I did that long ago, and it does not work with price at all, full stop, if that's what you mean. But it does work if you try to predict a backtest's forward curve, though with nuances like these:

This is purely my experience. Any method of predicting something is based on interpolating a function with a polynomial and then constructing its continuation. I don't know exactly how Newton's method does it, but most likely derivatives are computed to some depth and then treated as constants of that function, though of course all of that changes over time (in the market such predictions don't work at all; I checked).

If we forecast a backtest's forward curve, the curve should be as straight as possible and contain as many points as possible (data points or trades in this case); then we can look a little ahead. In other words, if we have found a sample in which as many of the first derivatives as possible fluctuate within a sufficiently narrow range, such extrapolation methods will partially work; the main thing is not to get greedy and to stop in time. Below I simply show how to deal with uncertainty by means of a lottery (if we don't know exactly where the forecast loses its power). The methods themselves are of secondary importance here; we could interpolate with Fourier and draw a continuation into the future, but it won't work with arbitrary functions.

As for the learning funnel: overfitting can be controlled in different ways. I've never taken someone else's formulas, simply because I can cobble together my own in no time if necessary, and they will most likely be simpler and more useful, simply because I understand everything in them; it has always been like that, and I've never had any difficulty with it.
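
A rough sketch of the Newton forward-difference scheme from the codesansar.com link, applied to an equally spaced series and pushed one step past the data (the sample series is made up; on noisy market data this degrades quickly, which is the point above):

```python
# Newton's forward-difference polynomial: build a difference table on
# equally spaced points, then evaluate the polynomial past the data.
import numpy as np

def forward_diff_table(y):
    """Column j holds the j-th order forward differences of y."""
    n = len(y)
    table = np.zeros((n, n))
    table[:, 0] = y
    for j in range(1, n):
        table[:n - j, j] = table[1:n - j + 1, j - 1] - table[:n - j, j - 1]
    return table

def newton_forward(x0, h, y, x):
    """Evaluate the Newton forward polynomial at x (extrapolates if x
    lies beyond the last sample point)."""
    table = forward_diff_table(np.asarray(y, dtype=float))
    s = (x - x0) / h               # normalized offset from the first node
    result, term = table[0, 0], 1.0
    for j in range(1, len(y)):
        term *= (s - (j - 1)) / j  # binomial-style factor C(s, j)
        result += term * table[0, j]
    return result

# Equally spaced samples of a smooth "equity curve" and a one-step-ahead
# extrapolation (the underlying data here is exactly quadratic, so the
# forecast is exact: 8.0).
y = [1.0, 1.8, 2.9, 4.3, 6.0]
print(newton_forward(x0=0.0, h=1.0, y=y, x=5.0))
```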

 
Evgeniy Ilin #:

I can only tell you about Newton. As I understand it, you mean forward forecasting based on an existing curve from the past; I did that long ago, and it does not work with price at all, full stop, if that's what you mean.

This is purely my experience. Any method of predicting something is based on interpolating a function with a polynomial and then constructing its continuation. I don't know exactly how Newton's method does it, but... (in the market such predictions don't work at all; checked).

This conclusion of yours was interesting to me - thank you!

Evgeniy Ilin #:
As for the learning funnel: overfitting can be controlled in different ways. I've never taken someone else's formulas, simply because I can cobble together my own in a couple of hours if I have to, and they will probably be simpler and more useful, simply because I understand everything in them; I've never had any problem with that.

+1, but I'm not a physicist... although I'm more comfortable with my own logic than with using someone else's models anyway.

Evgeniy Ilin #:

If we forecast a backtest's forward curve, it should be as straight as possible and contain as many points as possible (data points or trades in this case); then we can look a little ahead. In other words, if we have found a sample in which as many of the first derivatives as possible fluctuate within a sufficiently narrow range, such extrapolation methods will partially work,

In general, with a normal parabola the 1st derivative will be linear... in the end we just get its slope coefficient (like a trend cleared of noise), with all the attendant conditions you described (a narrow fluctuation range for a large number of 1st derivatives)... so one just has to keep weighting until blue in the face? (several layers until the output is a parabola)... or rather the straight line of its 1st derivative?

Evgeniy Ilin #:
The methods themselves are of secondary importance here. We could interpolate with Fourier and draw a continuation into the future, but it won't work with arbitrary functions.

That's exactly what intrigues me about neural networks: not having to derive a distribution and compare it with tabular/empirical ones, not having to seek statistical confirmation of every sneeze (down to "did I estimate the mean correctly?") by testing the null hypothesis against table values... that is last-century statistical processing... in short, not having to prove the validity of the model, the prediction and the errors with last-century tables in hand (pardon the expression).

Or, as an alternative, just multilayer weighting (that's how I understand a neural network)... as I said: until we're blue in the face? (several layers until we get a parabola at the output)... or rather, directly its 1st derivative, a straight line.

??? Or even forget about functions altogether (parabolas included) and just search for weight*signal(event) -> next layer... and at each layer the choice of function is about as trivial as in Excel's Solver (either for a linear dependence, a nonlinear dependence, or independent data) [though I don't know what Excel actually has under the hood behind those names, but that's a detail; the emphasis is on the logic]

...and at the point where the signals converge at the next layer (taking the previous weights into account), compute all the differences of the received signals...

Do I understand the neural network correctly - differentiation of chaos by machine power, without any need to stick to any particular curve or straight line, which, as it seems to me, can only be the result of structuring chaos, not a starting point?... It is all the same developer responsibility again: I don't believe in, and see no reason for, plugging financial models from books/blogs/articles (and statistically processed distributions) into financial analysis when approximating/interpolating chaos... in order to then extrapolate the output.
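
As far as the "weight*signal(event) -> next layer" picture goes, that is essentially what a neural network does; a minimal sketch with random placeholder weights (no training step shown, everything here is illustrative):

```python
# Two layers of weighted sums with a squashing nonlinearity in between,
# converging many input signals to one output.
import numpy as np

rng = np.random.default_rng(0)
signals = rng.normal(size=8)        # features ("signs") of the current moment

W1 = rng.normal(size=(4, 8))        # first layer of weights
W2 = rng.normal(size=(1, 4))        # second layer, converging to 1 output

hidden = np.tanh(W1 @ signals)      # weight*signal, then a nonlinearity
output = W2 @ hidden                # "reduced to 1 output"
print(output)
```

Training would then adjust W1 and W2 to minimize some loss; the choice of that target function is exactly the developer-responsibility question raised above.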

p.s.

Deep down I understand that there is only velocity (the coefficient at x) and acceleration (the coefficient at x^2) plus the shift of the free term in a parabola, and of course its 1st derivative is linear... formulas scare me, especially other people's models.
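
In symbols, what this paragraph says (nothing beyond the standard derivative rules):

```latex
% The parabola and its derivatives: linear term ~ velocity,
% quadratic term ~ acceleration, free term ~ displacement.
p(x) = a x^{2} + b x + c, \qquad
p'(x) = 2 a x + b \quad \text{(linear)}, \qquad
p''(x) = 2 a \quad \text{(constant)}
```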

 
Evgeniy Ilin #:

There is some truth here, but I have checked my model; the main thing is to know what kind of forward we are counting on. The problem is overfitting. To avoid it, you need to strive for the maximum ratio of the amount of analyzed data to the final set of criteria; in other words, data compression. For example, you can take several thousand points from a parabola graph and reduce everything to the three coefficients A*X^2 + B*X + C. Where the data compression quality is higher, there the forward is. Overfitting can be controlled by introducing proper scalar indicators of its quality that take this data compression into account. In my case it is done more simply: we take a fixed number of coefficients and as large a sample as possible; it's less efficient, but it works.

Found your answer above... I must have rushed my previous post... probably one really should at least start from a parabola, as a function describing motion with velocity and acceleration... (I've even seen graphs of this kind somewhere for option Greeks (delta and gamma) - I can't remember where and won't find it - and I don't need it - what we need is the time analysis, horizontal, not vertical).
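
As a footnote to the data-compression point quoted above, a minimal sketch: a few thousand noisy points on a parabola collapsing into the three coefficients A*X^2 + B*X + C (the synthetic data stands in for real backtest output):

```python
# Fit a quadratic to 5000 noisy samples: 5000 points -> 3 numbers.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-10, 10, 5000)
y = 0.5 * x**2 - 2.0 * x + 3.0 + rng.normal(scale=2.0, size=x.size)

A, B, C = np.polyfit(x, y, deg=2)    # least-squares quadratic fit
print(A, B, C)                       # close to 0.5, -2.0, 3.0

# Crude compression ratio: the higher the ratio of points to fitted
# coefficients, the less room there is for overfitting.
print(f"compression: {x.size} points / 3 coefficients = {x.size // 3}:1")
```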