Articles on machine learning in trading

Creating AI-based trading robots: native integration with Python, matrices and vectors, math and statistics libraries and much more.

Find out how to use machine learning in trading. Neurons, perceptrons, convolutional and recurrent networks, predictive models — start with the basics and work your way up to developing your own AI. You will learn how to train and apply neural networks for algorithmic trading in financial markets.

Neural networks made easy (Part 29): Advantage Actor-Critic algorithm

In the previous articles of this series, we have seen two reinforcement learning algorithms, each with its own advantages and disadvantages. As often happens in such cases, the next idea is to combine both methods into a single algorithm that uses the best of the two and compensates for the shortcomings of each. One such method is discussed in this article.

Experiments with neural networks (Part 1): Revisiting geometry

In this article, I will use experimentation and non-standard approaches to develop a profitable trading system and check whether neural networks can be of any help to traders.

Neural networks made easy (Part 6): Experimenting with the neural network learning rate

We have previously considered various types of neural networks along with their implementations. In all cases, the neural networks were trained using the gradient descent method, for which we need to choose a learning rate. In this article, I want to use examples to show the importance of a correctly selected learning rate and its impact on neural network training.
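
The article itself works with the MQL5 implementation; as a quick illustration of why the choice matters, here is a minimal gradient descent sketch in Python (the toy loss and the three rates are illustrative assumptions, not values from the article):

```python
def gradient_descent(grad, w0, learning_rate, steps=50):
    """Plain gradient descent: w <- w - learning_rate * grad(w)."""
    w = w0
    for _ in range(steps):
        w -= learning_rate * grad(w)
    return w

# Toy convex loss L(w) = (w - 3)^2 with gradient 2 * (w - 3); the optimum is w = 3.
grad = lambda w: 2.0 * (w - 3.0)

for lr in (0.01, 0.1, 1.1):  # too small: slow; reasonable: converges; too large: diverges
    print(f"lr={lr}: w after 50 steps = {gradient_descent(grad, 0.0, lr):.4f}")
```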

How to use ONNX models in MQL5

ONNX (Open Neural Network Exchange) is an open format built to represent machine learning models. In this article, we will consider how to create a CNN-LSTM model to forecast financial time series. We will also show how to use the created ONNX model in an MQL5 Expert Advisor.

Neural networks made easy (Part 5): Multithreaded calculations in OpenCL

We have previously discussed some types of neural network implementations. In the networks considered, the same operations are repeated for each neuron. A logical further step is to utilize the multithreaded computing capabilities provided by modern technology in an effort to speed up the neural network learning process. One possible implementation is described in this article.

Evaluating ONNX models using regression metrics

Regression is a task of predicting a real value from an unlabeled example. The so-called regression metrics are used to assess the accuracy of regression model predictions.
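
As a reminder of what these metrics actually compute, here is a small NumPy sketch (the sample arrays are made up; the article itself evaluates ONNX models from MQL5):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Return MAE, MSE and R-squared for a set of real-valued predictions."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    mae = np.mean(np.abs(y_true - y_pred))
    mse = np.mean((y_true - y_pred) ** 2)
    r2 = 1.0 - np.sum((y_true - y_pred) ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    return mae, mse, r2

mae, mse, r2 = regression_metrics([1.10, 1.12, 1.15], [1.11, 1.10, 1.16])
print(f"MAE={mae:.4f}  MSE={mse:.6f}  R2={r2:.4f}")
```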

Data Science and Machine Learning (Part 11): Naïve Bayes, Probability theory in Trading

Trading with probability is like walking a tightrope: it requires precision, balance, and a keen understanding of risk. In the world of trading, probability is everything. It's the difference between success and failure, profit and loss. By leveraging the power of probability, traders can make informed decisions, manage risk effectively, and achieve their financial goals. So, whether you're a seasoned investor or a novice trader, understanding probability is the key to unlocking your trading potential. In this article, we'll explore the exciting world of trading with probability and show you how to take your trading game to the next level.

Gradient boosting in transductive and active machine learning

In this article, we will consider active machine learning methods utilizing real data, as well as discuss their pros and cons. Perhaps you will find these methods useful and will include them in your arsenal of machine learning models. Transduction was introduced by Vladimir Vapnik, the co-inventor of the Support Vector Machine (SVM).

Neural networks made easy (Part 13): Batch Normalization

In the previous article, we started considering methods aimed at improving neural network training quality. In this article, we will continue this topic and will consider another approach — batch data normalization.
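
For reference, the standard mini-batch normalization transform (textbook notation, not copied from the article) for a batch of m activations is:

```latex
\mu_B = \frac{1}{m}\sum_{i=1}^{m} x_i, \qquad
\sigma_B^2 = \frac{1}{m}\sum_{i=1}^{m} (x_i - \mu_B)^2, \qquad
\hat{x}_i = \frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}, \qquad
y_i = \gamma \hat{x}_i + \beta
```

where gamma and beta are trainable scale and shift parameters and epsilon is a small constant for numerical stability.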

Population optimization algorithms: Particle swarm (PSO)

In this article, I will consider the popular Particle Swarm Optimization (PSO) algorithm. Previously, we discussed important characteristics of optimization algorithms such as convergence, convergence rate, stability and scalability, and also developed a test stand and considered the simplest RNG algorithm.
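
For readers new to PSO, the heart of the algorithm is the per-particle velocity and position update; a minimal NumPy sketch (the sphere test function, bounds and coefficients are illustrative assumptions, not the article's test stand):

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(f, dim=2, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Minimal particle swarm optimization minimizing f over a box."""
    x = rng.uniform(lo, hi, (n_particles, dim))      # positions
    v = np.zeros_like(x)                             # velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best, best_val = pso(lambda p: np.sum(p ** 2))       # sphere test function
print(best, best_val)
```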

Data Science and Machine Learning (Part 04): Predicting Current Stock Market Crash

In this article, I am going to attempt to use our logistic model to predict a stock market crash based on the fundamentals of the US economy, focusing on the NETFLIX and APPLE stocks. Using the previous market crashes of 2019 and 2020, let's see how our model performs in the current doom and gloom.

Neural networks made easy (Part 36): Relational Reinforcement Learning

In the reinforcement learning models we discussed in the previous article, we used various variants of convolutional networks that are able to identify various objects in the original data. The main advantage of convolutional networks is their ability to identify objects regardless of their location. At the same time, convolutional networks do not always perform well when there are various deformations of objects and noise. These are the issues that the relational model can solve.

Data Science and Machine Learning (Part 09): The K-Nearest Neighbors Algorithm (KNN)

This is a lazy algorithm that doesn't learn from the training dataset; instead, it stores the dataset and acts only when it's given a new sample. As simple as it is, it is used in a variety of real-world applications.
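
A minimal illustration of the "lazy" idea in NumPy (the features and labels below are made up; the article builds its own KNN in MQL5):

```python
import numpy as np

def knn_predict(X_train, y_train, x_new, k=3):
    """'Training' is just storing X_train/y_train; the work happens at prediction time."""
    dist = np.linalg.norm(X_train - x_new, axis=1)   # Euclidean distance to every stored sample
    nearest = np.argsort(dist)[:k]                   # indices of the k closest samples
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[counts.argmax()]                   # majority vote

X = np.array([[1.0, 2.0], [1.5, 1.8], [5.0, 8.0], [6.0, 9.0]])
y = np.array([0, 0, 1, 1])
print(knn_predict(X, y, np.array([1.2, 1.9])))       # -> 0
```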

Category Theory in MQL5 (Part 16): Functors with Multi-Layer Perceptrons

This article, the 16th in our series, continues with a look at Functors and how they can be implemented using artificial neural networks. We depart from our approach so far in the series, which has involved forecasting volatility, and try to implement a custom signal class for setting position entry and exit signals.

Neural networks made easy (Part 27): Deep Q-Learning (DQN)

We continue to study reinforcement learning. In this article, we will get acquainted with the Deep Q-Learning method. The use of this method has enabled the DeepMind team to create a model that can outperform a human when playing Atari computer games. I think it will be useful to evaluate the possibilities of the technology for solving trading problems.

Neural networks made easy (Part 30): Genetic algorithms

Today I want to introduce you to a slightly different learning method. We can say that it is borrowed from Darwin's theory of evolution. It is probably less controllable than the previously discussed methods but it allows training non-differentiable models.
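
To make the evolutionary idea concrete, here is a bare-bones selection-and-mutation loop in Python (the fitness function below is a made-up stand-in for a non-differentiable trading metric; the article's genetic algorithm differs in its details):

```python
import numpy as np

rng = np.random.default_rng(1)

def evolve(fitness, dim=8, pop_size=40, generations=100, elite=10, sigma=0.3):
    """Keep the fittest genomes and mutate copies of them; no gradients are required."""
    pop = rng.normal(0.0, 1.0, (pop_size, dim))
    for _ in range(generations):
        scores = np.array([fitness(g) for g in pop])
        parents = pop[np.argsort(scores)[-elite:]]                 # best 'elite' genomes survive
        children = parents[rng.integers(0, elite, pop_size - elite)]
        pop = np.vstack([parents, children + rng.normal(0.0, sigma, children.shape)])
    scores = np.array([fitness(g) for g in pop])
    return pop[scores.argmax()]

# Non-differentiable fitness: count of genome values above a threshold.
best = evolve(lambda g: float(np.sum(g > 0.5)))
print(best)
```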

Neural networks made easy (Part 26): Reinforcement Learning

We continue to study machine learning methods. With this article, we begin another big topic, Reinforcement Learning. This approach allows models to develop certain strategies for solving problems. We can expect that this property of reinforcement learning will open up new horizons for building trading strategies.

Experiments with neural networks (Part 6): Perceptron as a self-sufficient tool for price forecast

The article provides an example of using a perceptron as a self-sufficient price prediction tool by showcasing general concepts and the simplest ready-made Expert Advisor followed by the results of its optimization.

Population optimization algorithms: Ant Colony Optimization (ACO)

This time I will analyze the Ant Colony Optimization algorithm. The algorithm is very interesting and complex. In the article, I make an attempt to create a new type of ACO.

Data Science and Machine Learning (Part 10): Ridge Regression

Ridge regression is a simple technique for reducing model complexity and preventing the over-fitting that may result from simple linear regression.
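
The shrinkage that does this can be written in one line from the closed-form solution; a minimal sketch (the data and the alpha value are illustrative):

```python
import numpy as np

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge: w = (X'X + alpha*I)^-1 X'y, with the intercept left unpenalized."""
    X = np.column_stack([np.ones(len(X)), X])        # add an intercept column
    penalty = alpha * np.eye(X.shape[1])
    penalty[0, 0] = 0.0                              # don't shrink the intercept
    return np.linalg.solve(X.T @ X + penalty, X.T @ y)

X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([1.1, 1.9, 3.2, 3.9])
print(ridge_fit(X, y, alpha=0.5))                    # [intercept, slope], slightly shrunk vs OLS
```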

Data Science and Machine Learning (Part 02): Logistic Regression

Data classification is crucial for an algo trader and a programmer. In this article, we are going to focus on one of the classification algorithms, logistic regression, which can help us identify the Yes's or No's, the Ups and Downs, the Buys and Sells.
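
At its core, logistic classification passes a linear score through the sigmoid to get a probability of the positive class; a tiny sketch with hypothetical feature values and weights (not the model trained in the article):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict_proba(features, weights, bias):
    """Probability of the positive class (e.g. 'price goes up') for one sample."""
    return sigmoid(np.dot(features, weights) + bias)

p_up = predict_proba(np.array([0.8, -0.2]), weights=np.array([1.5, 0.7]), bias=-0.1)
print(f"P(up) = {p_up:.3f}, signal = {'Buy' if p_up > 0.5 else 'Sell'}")
```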

Neural networks made easy (Part 14): Data clustering

It has been more than a year since I published my last article. This is quite a lot of time to revise ideas and to develop new approaches. In the new article, I would like to move away from the previously used supervised learning method. This time we will dive into unsupervised learning algorithms. In particular, we will consider one of the clustering algorithms, k-means.

Data Science and Machine Learning (Part 05): Decision Trees

Decision trees imitate the way humans think to classify data. Let's see how to build trees and use them to classify and predict some data. The main goal of the decision tree algorithm is to split data containing impurity into pure, or close to pure, nodes.
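
The impurity a split tries to remove is commonly measured with the Gini index; a short sketch (the label sets are made up):

```python
import numpy as np

def gini(labels):
    """Gini impurity: 0 for a pure node, up to 0.5 for a balanced binary node."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

print(gini([1, 1, 1, 1]))        # 0.0 -> pure node
print(gini([1, 0, 1, 0]))        # 0.5 -> maximally impure binary node
```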

Metamodels in machine learning and trading: Original timing of trading orders

Metamodels in machine learning: automatic creation of trading systems with little or no human intervention — the model decides when and how to trade on its own.

Neural networks made easy (Part 19): Association rules using MQL5

We continue considering association rules. In the previous article, we discussed the theoretical aspects of this type of problem. In this article, I will show the implementation of the FP Growth method using MQL5. We will also test the implemented solution using real data.

Population optimization algorithms: Bacterial Foraging Optimization (BFO)

The foraging strategy of the E. coli bacterium inspired scientists to create the BFO optimization algorithm. The algorithm contains original ideas and promising approaches to optimization and is worthy of further study.

Neural networks made easy (Part 28): Policy gradient algorithm

We continue to study reinforcement learning methods. In the previous article, we got acquainted with the Deep Q-Learning method, in which the model is trained to predict the upcoming reward depending on the action taken in a particular situation. Then, an action is performed in accordance with the policy and the expected reward. But it is not always possible to approximate the Q-function, and sometimes its approximation does not generate the desired result. In such cases, approximation methods are applied not to the utility function but directly to the policy (the strategy of actions). One such method is Policy Gradient.
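
For reference, the two quantities being approximated can be put side by side in standard textbook notation (not taken from the article): the Q-learning update and the policy-gradient estimate, where G_t is the return that follows time t.

```latex
Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \bigl[ r_{t+1} + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \bigr]

\nabla_\theta J(\theta) = \mathbb{E}_{\pi_\theta}\!\left[ \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, G_t \right]
```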

Deep Learning Forecast and ordering with Python and MetaTrader5 python package and ONNX model file

The project involves using Python for deep learning-based forecasting in financial markets. We will explore the intricacies of testing the model's performance using key metrics such as Mean Absolute Error (MAE), Mean Squared Error (MSE) and R-squared (R2), and we will learn how to wrap everything into an executable. We will also make an ONNX model file and its EA.
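
A minimal Python sketch of the inference step described above, assuming a model already exported to a file named model.onnx with a single input tensor (the file name and the placeholder test arrays are assumptions for illustration; the article covers the full workflow):

```python
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")          # hypothetical exported model
input_name = session.get_inputs()[0].name             # avoid hard-coding the tensor name

x_test = np.random.rand(32, 10).astype(np.float32)    # placeholder test features
y_test = np.random.rand(32).astype(np.float32)        # placeholder targets

y_pred = session.run(None, {input_name: x_test})[0].ravel()
mae = np.mean(np.abs(y_test - y_pred))                # MSE and R2 are computed analogously
print(f"MAE = {mae:.4f}")
```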

Category Theory in MQL5 (Part 14): Functors with Linear-Orders

This article, which is part of a broader series on Category Theory implementation in MQL5, delves into Functors. We examine how a Linear Order can be mapped to a set, thanks to Functors, by considering two sets of data that one would typically dismiss as having no connection.

Neural networks made easy (Part 24): Improving the tool for Transfer Learning

In the previous article, we created a tool for creating and editing the architecture of neural networks. Today we will continue working on this tool and try to make it more user-friendly. This may seem to be a step away from our topic, but don't you think that a well-organized workspace plays an important role in achieving the result?

Category Theory in MQL5 (Part 1)

Category Theory is a diverse and expanding branch of mathematics that is as yet relatively uncovered in the MQL community. This series of articles looks to introduce and examine some of its concepts, with the overall goal of establishing an open library that attracts comments and discussion while hopefully furthering the use of this remarkable field in traders' strategy development.

Advanced resampling and selection of CatBoost models by brute-force method

This article describes one of the possible approaches to data transformation aimed at improving the generalizability of the model, and also discusses sampling and selection of CatBoost models.

Population optimization algorithms: Harmony Search (HS)

In this article, I will study and test the most powerful optimization algorithm, harmony search (HS), inspired by the process of finding the perfect sound harmony. So which algorithm is now the leader in our rating?

Data label for time series mining (Part 1): Make a dataset with trend markers through the EA operation chart

This series of articles introduces several time series labeling methods that can create data suitable for most artificial intelligence models. Targeted data labeling according to your needs can make the trained artificial intelligence model better match the intended design, improve the model's accuracy, and even help the model make a qualitative leap!
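
As a toy illustration of trend labeling in Python (note this is a generic future-return rule with an assumed horizon and threshold, not the EA-operation-chart method the article develops):

```python
import numpy as np

def label_trend(close, horizon=5, threshold=0.001):
    """1 = up-trend, -1 = down-trend, 0 = flat/unknown, judged by the return 'horizon' bars ahead."""
    close = np.asarray(close, float)
    future_ret = np.full_like(close, np.nan)
    future_ret[:-horizon] = close[horizon:] / close[:-horizon] - 1.0
    labels = np.zeros(len(close), dtype=int)
    labels[future_ret > threshold] = 1
    labels[future_ret < -threshold] = -1
    return labels  # the last 'horizon' bars stay 0 because their future is unknown

prices = [1.1000, 1.1012, 1.1025, 1.1010, 1.0995, 1.1040, 1.1060, 1.1020, 1.1005, 1.0990]
print(label_trend(prices, horizon=3))
```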

Category Theory (Part 9): Monoid-Actions

This article continues the series on category theory implementation in MQL5. Here we continue with monoid actions as a means of transforming the monoids covered in the previous article, leading to increased applications.

Experiments with neural networks (Part 5): Normalizing inputs for passing to a neural network

Neural networks are considered the ultimate tool in a trader's toolkit. Let's check whether this assumption is true. MetaTrader 5 is approached as a self-sufficient medium for using neural networks in trading. A simple explanation is provided.

Introduction to MQL5 (Part 1): A Beginner's Guide into Algorithmic Trading

Dive into the fascinating realm of algorithmic trading with our beginner-friendly guide to MQL5 programming. Discover the essentials of MQL5, the language powering MetaTrader 5, as we demystify the world of automated trading. From understanding the basics to taking your first steps in coding, this article is your key to unlocking the potential of algorithmic trading even without a programming background. Join us on a journey where simplicity meets sophistication in the exciting universe of MQL5.

Data Science and Machine Learning (Part 12): Can Self-Training Neural Networks Help You Outsmart the Stock Market?

Are you tired of constantly trying to predict the stock market? Do you wish you had a crystal ball to help you make more informed investment decisions? Self-training neural networks might be the solution you've been looking for. In this article, we explore whether these powerful algorithms can help you "ride the wave" and outsmart the stock market. By analyzing vast amounts of data and identifying patterns, self-training neural networks can make predictions that are often more accurate than those of human traders. Discover how you can use this cutting-edge technology to maximize your profits and make smarter investment decisions.

MQL5 Wizard techniques you should know (Part 03): Shannon's Entropy

Today's trader is a philomath who is almost always looking up new ideas, trying them out, and choosing to modify or discard them; an exploratory process that requires a fair amount of diligence. This series of articles proposes that the MQL5 Wizard should be a mainstay for traders.

Neural networks made easy (Part 15): Data clustering using MQL5

We continue to consider the clustering method. In this article, we will create a new CKmeans class to implement k-means, one of the most common clustering methods. During tests, the model managed to identify about 500 patterns.
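
The article implements this in MQL5 as a CKmeans class; for orientation, the underlying loop is just "assign each point to its nearest centroid, then recompute the centroids". A compact NumPy sketch (the random two-dimensional data and k=3 are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

def kmeans(X, k=3, iters=100):
    """Lloyd's algorithm: alternate nearest-centroid assignment and centroid update."""
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        dist = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dist.argmin(axis=1)                  # assign each point to its nearest centroid
        new_centroids = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                  else centroids[j] for j in range(k)])
        if np.allclose(new_centroids, centroids):     # stop when centroids no longer move
            break
        centroids = new_centroids
    return centroids, labels

X = rng.normal(size=(300, 2)) + rng.integers(0, 3, size=(300, 1)) * 4.0
centroids, labels = kmeans(X, k=3)
print(centroids)
```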