Machine learning in trading: theory, models, practice and algo-trading - page 3637
I think you've realised that the members of the thread are well aware of this?
So what method do you suggest for a reliable solution of the problem you have set?
No, I realised that not all participants of the thread understand it; you seem to be one of those who do.
I have no motivation to give ready-made solutions on robustness issues, and I have no desire to prove anything either. You understand, saber understands, a few more people will read it and think about it - that's already good.
I see, I'd better occupy my day with more productive things.....
For general development, for those too lazy to ask ChatGPT (which is itself a machine learning algorithm):
It is possible to predict an analytic function using ML.
The following steps can be followed to predict an analytic function using the least squares method (LSM):
Data collection:
Collect a data set including x values and their corresponding y values .
Model Selection:
Determine which model best fits your data. This can be linear, quadratic, or a more complex function.
Data approximation:
For a linear function:
y = a * x + b
For a quadratic function:
y = a * x^2 + b * x + c
Least squares method:
For a linear function:
a = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sum_i (x_i - \bar{x})^2}, \qquad b = \bar{y} - a\,\bar{x}
For the quadratic function, the coefficients a, b, c are found by solving the system of normal equations:
a \sum_i x_i^4 + b \sum_i x_i^3 + c \sum_i x_i^2 = \sum_i x_i^2 y_i, \quad a \sum_i x_i^3 + b \sum_i x_i^2 + c \sum_i x_i = \sum_i x_i y_i, \quad a \sum_i x_i^2 + b \sum_i x_i + c\,n = \sum_i y_i
Estimation of accuracy:
Use the relative approximation error and the mean squared error to evaluate the model's accuracy.
Prediction:
Use the resulting analytic function to predict future values based on known data.
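The linear case above can be sketched directly from the closed-form formulas for a and b (the data points here are made up for illustration):

```python
# Sketch of the linear least-squares fit described above,
# using the closed-form formulas a = cov(x, y) / var(x), b = y_mean - a * x_mean.
def fit_linear(xs, ys):
    n = len(xs)
    x_mean = sum(xs) / n
    y_mean = sum(ys) / n
    a = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) \
        / sum((x - x_mean) ** 2 for x in xs)
    b = y_mean - a * x_mean
    return a, b

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]   # exactly y = 2x + 1
a, b = fit_linear(xs, ys)
print(a, b)  # 2.0 1.0
```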
Clarification:
interested in machine learning, not least squares methods
Machine learning is a class of artificial intelligence methods that do not solve a problem directly, but learn by applying solutions to a set of similar problems. To build such methods, tools of mathematical statistics, numerical methods, mathematical analysis, optimisation methods, probability theory, graph theory, and various techniques for working with data in digital form are used.
General statement of the problem of learning by precedents
There is a set of objects (situations) and a set of possible answers (responses, reactions). There is some dependence between the responses and the objects, but it is unknown. Only a finite set of precedents - pairs "object, response", called training sample, is known. On the basis of this data we need to recover the implicit dependence, i.e. to construct an algorithm capable of producing a sufficiently accurate classification answer for any possible input object. This dependence is not necessarily expressed analytically, and here neural networks implement the principle of empirically formed decision. An important feature is the ability of the trained system to generalise, i.e. to respond adequately to data beyond the limits of the available training sample. To measure the accuracy of responses, an estimated quality functional is introduced.
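The "quality functional" mentioned above can be illustrated with a minimal sketch; the model and the data are hypothetical, and MSE is used as the quality measure:

```python
# Empirical quality functional: mean squared error of a model
# over a finite sample of (object, response) precedents.
def mse(model, sample):
    return sum((model(x) - y) ** 2 for x, y in sample) / len(sample)

model = lambda x: 2 * x           # some trained algorithm (hypothetical)
train = [(0, 0), (1, 2), (2, 4)]  # precedents it fits exactly
test  = [(3, 7)]                  # data beyond the training sample
print(mse(model, train))  # 0.0 -> perfect on the sample
print(mse(model, test))   # 1.0 -> generalisation error is non-zero
```

The gap between the two values is exactly the generalisation question raised in the paragraph above.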
Methods of machine learning
Supervised learning (learning with a teacher)
For each precedent, a "situation, required solution" pair is given.
Examples: artificial neural networks, deep learning, error correction learning, backpropagation, support vector machines.
Unsupervised learning (learning without a teacher)
Used when objects need to be grouped into clusters based on pairwise similarity data, and/or the dimensionality of the data needs to be reduced.
Examples: alpha reinforcement learning, gamma reinforcement learning, nearest neighbour method.
Reinforcement Learning
For each precedent, there is a "situation, decision made" pair.
Examples: genetic algorithm, active learning.
Transductive learning
Learning with partial teacher involvement, where predictions are expected to be made only for precedents from a test sample.
Multitask learning
Simultaneous learning of a group of interrelated tasks, each of which is given its own "situation, required solution" pairs.
Multiple-instance learning
Learning when precedents are combined into groups; each precedent in a group has a "situation", but only one of them (and it is not known which one) has a "situation, required solution" pair.
Boosting
A procedure of sequentially building a composition of machine learning algorithms, when each next algorithm tends to compensate for the shortcomings of the composition of all previous algorithms.
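The boosting idea above can be sketched for regression; each new "algorithm" (here a crude median-split stump, a deliberately simplified stand-in for a real base learner) fits the residuals of the current composition:

```python
# Minimal boosting sketch: each stump fits the residuals left by
# the composition of all previous stumps, compensating their errors.
def stump(xs, residuals):
    # split at the median x, predict the mean residual in each half
    split = sorted(xs)[len(xs) // 2]
    left  = [r for x, r in zip(xs, residuals) if x < split]
    right = [r for x, r in zip(xs, residuals) if x >= split]
    lv = sum(left) / len(left) if left else 0.0
    rv = sum(right) / len(right) if right else 0.0
    return lambda x: lv if x < split else rv

def boost(xs, ys, rounds=50, lr=0.5):
    fs, pred = [], [0.0] * len(xs)
    for _ in range(rounds):
        resid = [y - p for y, p in zip(ys, pred)]       # current shortcomings
        f = stump(xs, resid)                            # next algorithm fits them
        fs.append(f)
        pred = [p + lr * f(x) for p, x in zip(pred, xs)]
    return lambda x: sum(lr * f(x) for f in fs)

model = boost([0, 1, 2, 3], [1, 1, 5, 5])
print(model(0), model(3))  # approaches 1 and 5 as rounds grow
```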
Classical problems solved with the help of machine learning
Classification
Performed using supervised learning, at the training stage.
Clustering
Performed using unsupervised learning.
Regression
Performed using supervised learning, at the testing stage; a special case of prediction tasks.
Data dimensionality reduction and visualisation
Performed using unsupervised learning.
Reconstructing the probability distribution density function from a set of data
Performed using unsupervised learning.
One-class classification and novelty detection
Performed using unsupervised learning.
Constructing rank relationships
Performed using unsupervised learning.
Anomaly detection
Performed using unsupervised learning.
Practical applications
Machine learning has a wide range of applications:
Speech recognition
Gesture recognition
Handwriting recognition
Pattern recognition
Technical diagnostics
Medical Diagnostics
Time Series Forecasting
Bioinformatics
Fraud Detection
Spam Detection
Document Categorisation
Exchange Technical Analysis
Financial Supervision
Credit Scoring
Customer churn prediction
Chemoinformatics
Learning to rank in information retrieval
Conclusion
Machine learning is a powerful tool that allows you to automate the solution of complex professional tasks in a wide variety of areas of human activity. It is constantly expanding and adapting to new tasks and data, which makes it an indispensable tool in the modern world.
Unfortunately, it is not obvious to me from the analytical form of the function that it is periodic. But if several periods fall into the training interval, even a human can predict its behaviour. That is, it is not interesting to take such a learning interval at all.
It is much more indicative to take an interval, for example, two times smaller than the period, but without restrictions on the number of training points.
1. sin(x)/4 - periodic with period 2π
2. cos(x²)/4 - not periodic, since x² grows quadratically
3. 1/2 is a constant (periodic with any period)
If you look closely at the blue and green ones, there is periodicity with noise (they are not completely identical). Nevertheless, the model handles some error. Haven't done any tuning on it.
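Assuming the series under discussion is the sum of the three components listed above, y(x) = sin(x)/4 + cos(x²)/4 + 1/2 (an inference from this exchange, not stated explicitly in it), a quick numerical check illustrates why only part of it repeats:

```python
import math

def y(x):
    # hypothetical target: sum of the three components listed above
    return math.sin(x) / 4 + math.cos(x * x) / 4 + 0.5

# sin(x) repeats with period 2*pi ...
print(abs(math.sin(1.0) - math.sin(1.0 + 2 * math.pi)))  # ~0
# ... but cos(x^2) does not: shifting x by 2*pi changes the value
print(math.cos(1.0 ** 2), math.cos((1.0 + 2 * math.pi) ** 2))
```

This matches the observation that a model catches the periodic component and misses the non-periodic one.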
For the general development of Dick, who is too lazy to ask ChatGPT (which is itself a machine learning algorithm):
it is possible to predict an analytic function using ML
For the general development of Fomenko, who finds it difficult to think independently without ChatGPT, I will explain in simple terms: in the general case, with respect to CVR, the analytic form of a series is not known. The problem with the analytic function is given as an example (so that ML methods can be checked): on the first segment the points of the function are plotted, and from the points of the first segment the second segment is reconstructed. A third segment is also given, which can likewise be reconstructed if the approximation of the first segment is valid for the process.
If you do not understand even such simple points, then it is not at all clear what you are doing in ML.
Your arrogant tone does not speak of knowledge of ML, but only of a lack of good education.
In the simplest case predictions will be like this, because there is not enough data for training (there are no similar examples in the training sample). The periodic component is caught, the non-periodic component is not.
We can play with features, let Dick do it. And show everyone the mother of pearls.
If you know the analytic function, you can simply enter it into the features, in full or in parts.
That is, without knowing the function, the task in this case reduces either to searching over features or to increasing the training sample.
But since the function is stationary, after validation there is a high chance that the features are chosen correctly and the predictions on new data will be good too.
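A sketch of "entering the function into the features": if sin(x) and cos(x²) are supplied as features, a plain linear model recovers the hypothetical target y = sin(x)/4 + cos(x²)/4 + 1/2 exactly (the data and target are assumptions taken from this exchange):

```python
import math

def features(x):
    # hand-crafted features built from the known analytic components
    return [math.sin(x), math.cos(x * x), 1.0]

xs = [0.1 * i for i in range(50)]
ys = [math.sin(x) / 4 + math.cos(x * x) / 4 + 0.5 for x in xs]

# ordinary least squares via the normal equations A^T A w = A^T y
A = [features(x) for x in xs]
n = 3
AtA = [[sum(A[k][i] * A[k][j] for k in range(len(A))) for j in range(n)]
       for i in range(n)]
Aty = [sum(A[k][i] * ys[k] for k in range(len(A))) for i in range(n)]
for i in range(n):                     # forward elimination
    for r in range(i + 1, n):
        f = AtA[r][i] / AtA[i][i]
        AtA[r] = [a - f * b for a, b in zip(AtA[r], AtA[i])]
        Aty[r] -= f * Aty[i]
w = [0.0] * n
for i in range(n - 1, -1, -1):         # back substitution
    w[i] = (Aty[i] - sum(AtA[i][j] * w[j] for j in range(i + 1, n))) / AtA[i][i]
print(w)  # ~[0.25, 0.25, 0.5] -- the true coefficients
```

With the right features the problem is linear and trivial; without them, as the post says, the work shifts to feature search or to more training data.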
I would like to see examples of trading systems built on these principles, rather than meaningless hats from optimisers.
the function is stationary
The function he came up with is non-stationary. Neither from a purely formal approach, nor from an informal one.
Even within the framework of amateur radio theory it will not be quasi-stationary.
Well he wrote that it is stationary ) I thought he was good at it. Then I don't know what exactly is being discussed at all.