Machine learning in trading: theory, models, practice and algo-trading - page 3637

 
Aleksey Vyazmikin #:

I think you've realised that the members of the thread are well aware of this?

So what method do you suggest for a reliable solution of the problem you have set?

No, I realised that not all participants of the thread understand it; you are one of those who apparently do.

I have no motivation to give ready-made solutions on robustness issues, and I have no desire to prove anything either. You understand, saber understands, a few more people will read it and think about it - that's already good.

 
Andrey Dik #:
I have no motivation to give ready-made solutions on robustness issues, and no desire to prove anything either.

I see. I'd better occupy my day with more productive things...

 

For general development, for those who are too lazy to put the question to ChatGPT themselves (ChatGPT is also a machine learning algorithm):



Is it possible to predict an analytic function using ML?

The following steps can be followed to predict an analytic function using the least squares method (LSM):

  1. Data collection:

    • Collect a data set of x values and their corresponding y values.

  2. Model Selection:

    • Determine which model best fits your data. This can be linear, quadratic, or a more complex function.

  3. Data approximation:

    • For a linear function:

      y = a * x + b

    • For a quadratic function:

      y = a * x^2 + b * x + c

  4. Least squares method:

    • For a linear function:

      a = \frac{\sum(x_i - \bar{x})(y_i - \bar{y})}{\sum(x_i - \bar{x})^2}

      b = \bar{y} - a \cdot \bar{x}

    • For the quadratic function:

      the coefficients a, b and c are found by solving the system of normal equations:

      \sum y_i = a \sum x_i^2 + b \sum x_i + n \cdot c

      \sum x_i y_i = a \sum x_i^3 + b \sum x_i^2 + c \sum x_i

      \sum x_i^2 y_i = a \sum x_i^4 + b \sum x_i^3 + c \sum x_i^2

  5. Estimation of accuracy:

    • Use the relative approximation error and the mean squared error to evaluate the accuracy of the model.

  6. Prediction:

    • Use the resulting analytic function to predict future values based on known data.
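The steps above can be sketched in Python with NumPy (the data points below are made up for illustration; `np.polyfit` performs the least-squares fit):

```python
import numpy as np

# Made-up sample (illustration only): noisy points around y = 2x + 1
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# Steps 3-4: least-squares fit of the linear model y = a*x + b
a, b = np.polyfit(x, y, deg=1)

# The same call with deg=2 fits the quadratic model y = a*x^2 + b*x + c
qa, qb, qc = np.polyfit(x, y, deg=2)

# Step 5: accuracy estimate via the mean squared error of the linear fit
mse = np.mean((a * x + b - y) ** 2)

# Step 6: prediction for a new point
y_pred = a * 5.0 + b
```

For this toy data the linear fit comes out close to a = 2, b = 1.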


Clarification:

I am interested in machine learning, not least squares methods.


Machine learning is a class of artificial intelligence methods that do not solve a problem directly, but learn by applying solutions to a set of similar problems. To build such methods, tools of mathematical statistics, numerical methods, mathematical analysis, optimisation methods, probability theory, graph theory, and various techniques for working with data in digital form are used.

General statement of the problem of learning by precedents

There is a set of objects (situations) and a set of possible answers (responses, reactions). Some dependence exists between the objects and the answers, but it is unknown; only a finite set of precedents, pairs "object, answer" called the training sample, is known. From these data we need to recover the implicit dependence, i.e. to construct an algorithm capable of producing a sufficiently accurate answer for any possible input object. This dependence is not necessarily expressed analytically; neural networks, for example, implement the principle of an empirically formed decision. An important feature is the ability of the trained system to generalise, i.e. to respond adequately to data beyond the limits of the available training sample. To measure the accuracy of the answers, a quality functional is introduced.
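The statement above can be illustrated with a minimal sketch: a toy 1-D problem and a trivial 1-nearest-neighbour learner (both assumed purely for illustration), with accuracy on unseen objects as the quality functional:

```python
import numpy as np

# A toy training sample of precedents ("object, answer" pairs):
# objects are 1-D points, the unknown dependence is the sign of x.
rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=100)
y_train = (X_train > 0).astype(int)

# A trivial learner: answer with the response of the nearest precedent
def predict(x, X, y):
    return y[np.argmin(np.abs(X - x))]

# Quality functional: fraction of correct answers on objects
# beyond the training sample (generalisation)
X_test = rng.uniform(-1, 1, size=50)
y_test = (X_test > 0).astype(int)
accuracy = np.mean([predict(x, X_train, y_train) == t
                    for x, t in zip(X_test, y_test)])
```

Because the dependence is simple and the precedents cover the object space densely, the learner generalises well here; with sparse or noisy precedents the same quality functional would expose the failure.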

Methods of machine learning

  1. Learning with a teacher

    • For each precedent, a "situation, required solution" pair is given.

    • Examples: artificial neural networks, deep learning, error-correction learning, backpropagation, support vector machines.

  2. Learning without a teacher

    • Used when objects need to be grouped into clusters from pairwise similarity data, and/or the dimensionality of the data needs to be reduced.

    • Examples: alpha reinforcement system, gamma reinforcement system, nearest neighbour method.

  3. Reinforcement Learning

    • For each precedent, there is a "situation, decision made" pair.

    • Examples: genetic algorithm, active learning.

  4. Transductive learning

    • Learning with partial teacher involvement, where predictions are expected to be made only for precedents from a test sample.

  5. Multitask learning

    • Simultaneous learning of a group of interrelated tasks, each of which is given its own "situation, required solution" pairs.

  6. Multiple-instance learning

    • Learning when precedents can be combined into groups, in each of which every precedent has a "situation", but only one of them (and it is not known which one) has a "situation, required solution" pair.

  7. Boosting

    • A procedure of sequentially building a composition of machine learning algorithms, when each next algorithm tends to compensate for the shortcomings of the composition of all previous algorithms.
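The boosting idea from point 7 can be sketched from scratch with depth-1 "stumps" as the base algorithms (a minimal illustrative sketch, not a production implementation; all names and data are made up):

```python
import numpy as np

def fit_stump(x, residual):
    """Find the threshold split minimising squared error on the residual."""
    best = None
    for t in np.unique(x):
        left, right = residual[x <= t], residual[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        err = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, t, left.mean(), right.mean())
    return best[1:]  # threshold, left value, right value

def boost(x, y, n_rounds=50, lr=0.1):
    """Sequentially add stumps, each fitted to the current residuals."""
    pred = np.full_like(y, y.mean())
    stumps = []
    for _ in range(n_rounds):
        t, lv, rv = fit_stump(x, y - pred)           # compensate current shortcomings
        pred = pred + lr * np.where(x <= t, lv, rv)  # shrunken update
        stumps.append((t, lv, rv))
    return y.mean(), stumps

def predict(x, base, stumps, lr=0.1):
    out = np.full_like(x, base, dtype=float)
    for t, lv, rv in stumps:
        out += lr * np.where(x <= t, lv, rv)
    return out

# Toy target: a sine wave no single stump can fit, but the composition can
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x)
base, stumps = boost(x, y)
mse = np.mean((predict(x, base, stumps) - y) ** 2)
```

Each stump alone is a poor model; the sequentially built composition drives the training error down, which is exactly the compensation principle described above.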

Classical problems solved with the help of machine learning

  1. Classification

    • Performed using learning with a teacher (supervised learning).

  2. Clustering

    • Performed by training without a teacher.

  3. Regression

    • Performed using learning with a teacher; a special case of prediction tasks.

  4. Data dimensionality reduction and visualisation

    • Performed using unsupervised learning.

  5. Reconstructing the probability distribution density function from a set of data

    • Performed using unsupervised learning.

  6. Single class classification and novelty detection

    • Performed using unsupervised learning.

  7. Constructing rank relationships

    • Performed using unsupervised learning.

  8. Anomaly detection

    • Performed using unsupervised learning.
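As a sketch of the clustering task from point 2, a few iterations of k-means on made-up data (illustrative only; no labels are used, which is what makes it unsupervised):

```python
import numpy as np

# Made-up data: two well-separated 2-D blobs
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0.0, 0.3, (50, 2)),
                 rng.normal(3.0, 0.3, (50, 2))])

# k-means with k = 2, initialised from two random points
centers = pts[rng.choice(len(pts), size=2, replace=False)]
for _ in range(10):
    # assign each point to its nearest centre
    d2 = ((pts[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    labels = d2.argmin(axis=1)
    # move each centre to the mean of its points (keep the old centre if empty)
    centers = np.array([pts[labels == k].mean(axis=0) if np.any(labels == k)
                        else centers[k] for k in range(2)])
```

After a few iterations the two centres settle near the two blobs, far apart from each other.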

Practical applications

Machine learning has a wide range of applications:

  • Speech recognition

  • Gesture recognition

  • Handwriting recognition

  • Pattern recognition

  • Technical diagnostics

  • Medical Diagnostics

  • Time Series Forecasting

  • Bioinformatics

  • Fraud Detection

  • Spam Detection

  • Document Categorisation

  • Exchange Technical Analysis

  • Financial Supervision

  • Credit Scoring

  • Customer churn prediction

  • Chemoinformatics

  • Learning to rank in information retrieval

Conclusion

Machine learning is a powerful tool that allows you to automate the solution of complex professional tasks in a wide variety of areas of human activity. It is constantly expanding and adapting to new tasks and data, which makes it an indispensable tool in the modern world.


 
fxsaber #:

Unfortunately, it is not obvious to me from the analytic form of the function that it is periodic. But if several periods fall into the training interval, even a human can predict its behaviour. That is, it is not interesting to take such a training interval at all.

It is much more indicative to take an interval, for example, two times smaller than the period, but without restrictions on the number of training points.

1. sin(x)/4 - periodic with period 2π

2. cos(x²)/4 - not periodic, since x² grows quadratically

3. 1/2 is a constant (periodic with any period)


If you look closely at the blue and green lines, there is periodicity with noise (they are not completely identical). Nevertheless, the model handles some error. I haven't done any tuning on it.

 
СанСаныч Фоменко #:

For general development of Dick, who is too lazy to ask ChatGPT (which is also a machine learning algorithm):

is it possible to predict an analytic function using ML?

For the general development of Fomenko, who finds it difficult to think independently without ChatGPT, I will explain on my fingers: in the general case, for a real time series, the analytic form of the series is not known. The problem with the analytic function is given as an example (so that ML methods can be checked): on the first segment of the function you lay out the points, and from the points of the first segment you reconstruct the second segment. A third segment is also given, which can likewise be reconstructed if the approximation of the first segment remains valid for the process.

If you do not understand even such simple points, then it is not at all clear what you are doing in ML.

Your arrogant tone does not speak of knowledge of ML, but only of a lack of good education.

 
To predict by small chunks, you need to do additional preprocessing, as I wrote at the beginning, because you need to follow certain rules, for example, keep the features within the training range. I don't have the time or desire to do that, because it's pointless.

In the simplest case the predictions will look like this, because there is not enough data for training (there are no similar examples in the training sample). The periodic component is caught; the non-periodic component is not.
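A minimal sketch of the periodic vs non-periodic split, assuming the example function discussed above is f(x) = sin(x)/4 + cos(x²)/4 + 1/2: a model given the feature sin(x) on the first segment recovers the periodic part on the next segment, while the cos(x²) part remains as an unexplained residual.

```python
import numpy as np

# The example function discussed in the thread (assumed form):
# periodic part sin(x)/4, non-periodic part cos(x^2)/4, constant 1/2
def f(x):
    return np.sin(x) / 4 + np.cos(x ** 2) / 4 + 0.5

# Train on the first segment, predict the following one
x_tr = np.linspace(0, 4 * np.pi, 500)
x_te = np.linspace(4 * np.pi, 8 * np.pi, 500)

# One feature that stays inside its training range: sin(x)
A_tr = np.column_stack([np.sin(x_tr), np.ones_like(x_tr)])
coef, *_ = np.linalg.lstsq(A_tr, f(x_tr), rcond=None)

# On the new segment the periodic component is recovered,
# while cos(x^2)/4 remains as an unexplained residual
A_te = np.column_stack([np.sin(x_te), np.ones_like(x_te)])
resid = f(x_te) - A_te @ coef
mse = np.mean(resid ** 2)
```

The fitted coefficients land near 0.25 and 0.5, i.e. the periodic component and the constant are caught, and the remaining error on the new segment is roughly the variance of the cos(x²)/4 term.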

We can play with features; let Dick do it and show everyone his pearls.



 

If you know the analytical function, you can simply enter it into the features, in full or in parts; then


 

That is, without knowing the function, in this case the task reduces either to enumerating features or to increasing the training sample.

But since the function is stationary, after validation there is a high chance that the features are selected correctly and the predictions on the new data will be good too.

I would like to see examples of trading systems based on these principles, rather than meaningless trinkets from optimisers.
 
Maxim Dmitrievsky #:
the function is stationary.

The function he came up with is non-stationary. Neither from a purely formal approach, nor from an informal one.

Even within the framework of amateur radio theory it will not be quasi-stationary.

 
Aleksey Nikolayev #:

The function he came up with is non-stationary. Neither from the point of view of a purely formal approach, nor from the point of view of an informal one.

Even within the framework of amateur radio theory it will not be quasi-stationary.

Well, he wrote that it is stationary ) I thought he was good at this. Then I don't know what exactly is being discussed at all.