Articles on machine learning in trading

Creating AI-based trading robots: native integration with Python, matrices and vectors, math and statistics libraries and much more.

Find out how to use machine learning in trading. Neurons, perceptrons, convolutional and recurrent networks, predictive models — start with the basics and work your way up to developing your own AI. You will learn how to train and apply neural networks for algorithmic trading in financial markets.

Data Science and Machine Learning (Part 09): The K-Nearest Neighbors Algorithm (KNN)

This is a lazy algorithm: it doesn't learn from the training dataset but stores it instead and acts only when it's given a new sample. Simple as it is, it is used in a variety of real-world applications.
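
As a rough illustration of the idea, here is a minimal MQL5 sketch (the function names, the flattened data layout and the default k are my own choices, not the article's): measure the Euclidean distance from the new sample to every stored sample and let the k closest ones vote.

```mql5
//--- Euclidean distance between two feature vectors
double Distance(const double &a[], const double &b[])
  {
   double sum = 0.0;
   for(int i = 0; i < ArraySize(a); i++)
      sum += (a[i] - b[i]) * (a[i] - b[i]);
   return MathSqrt(sum);
  }

//--- classify a sample by a majority vote of its k nearest neighbours
//--- data[] holds rows*cols training features, labels[] holds 0/1 class labels
int KnnPredict(const double &data[], const int &labels[], const double &sample[],
               const int rows, const int cols, const int k = 5)
  {
   double dist[]; ArrayResize(dist, rows);
   int    idx[];  ArrayResize(idx, rows);
   for(int r = 0; r < rows; r++)
     {
      double row[]; ArrayResize(row, cols);
      for(int c = 0; c < cols; c++) row[c] = data[r * cols + c];
      dist[r] = Distance(row, sample);
      idx[r]  = r;
     }
   //--- partial selection sort: bring the k smallest distances to the front
   int votes = 0;
   for(int n = 0; n < k && n < rows; n++)
     {
      int best = n;
      for(int m = n + 1; m < rows; m++)
         if(dist[m] < dist[best]) best = m;
      double td = dist[n]; dist[n] = dist[best]; dist[best] = td;
      int    ti = idx[n];  idx[n]  = idx[best];  idx[best]  = ti;
      votes += labels[idx[n]];   // count class-1 neighbours
     }
   return (2 * votes > MathMin(k, rows)) ? 1 : 0;   // majority class
  }
```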

Neural networks made easy (Part 27): Deep Q-Learning (DQN)

We continue to study reinforcement learning. In this article, we will get acquainted with the Deep Q-Learning method, which enabled the DeepMind team to create a model that can outperform a human at Atari computer games. I think it will be useful to evaluate the potential of this technology for solving trading problems.
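
At the core of Q-learning, and hence of DQN, is the standard value update:

$$Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \left[ r_{t+1} + \gamma \max_{a} Q(s_{t+1}, a) - Q(s_t, a_t) \right]$$

where $\alpha$ is the learning rate and $\gamma$ the discount factor; in DQN the Q-table is replaced by a neural network trained to minimize the squared difference between the two sides of this update.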

Neural networks made easy (Part 26): Reinforcement Learning

We continue to study machine learning methods. With this article, we begin another big topic: reinforcement learning. This approach allows models to work out strategies for solving the problems on their own. We can expect this property of reinforcement learning to open up new horizons for building trading strategies.

Neural networks made easy (Part 25): Practicing Transfer Learning

In the last two articles, we developed a tool for creating and editing neural network models. Now it is time to evaluate the potential use of Transfer Learning technology using practical examples.

Neural networks made easy (Part 24): Improving the tool for Transfer Learning

In the previous article, we created a tool for creating and editing the architecture of neural networks. Today we will continue working on this tool and try to make it more user-friendly. This may seem to be a step away from our topic, but don't you think that a well-organized workspace plays an important role in achieving results?

Data Science and Machine Learning (Part 08): K-Means Clustering in plain MQL5

Data mining is crucial for a data scientist and a trader because the data is often not as straightforward as we think. The human eye cannot grasp the subtle underlying patterns and relationships in a dataset; maybe the k-means algorithm can help us with that. Let's find out...
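
As a reminder of what the algorithm actually optimizes, k-means looks for $k$ centroids $\mu_1, \dots, \mu_k$ that minimize the within-cluster sum of squared distances:

$$J = \sum_{j=1}^{k} \sum_{x_i \in C_j} \lVert x_i - \mu_j \rVert^2$$

alternating between assigning each point to its nearest centroid and recomputing each centroid as the mean of its assigned points.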

Neural networks made easy (Part 23): Building a tool for Transfer Learning

In this series of articles, we have already mentioned Transfer Learning more than once. However, these were only brief mentions. In this article, I suggest filling this gap and taking a closer look at Transfer Learning.

Neural networks made easy (Part 22): Unsupervised learning of recurrent models

We continue to study unsupervised learning algorithms. This time I suggest that we discuss the features of autoencoders when applied to recurrent model training.

Neural networks made easy (Part 21): Variational autoencoders (VAE)

In the last article, we got acquainted with the Autoencoder algorithm. Like any other algorithm, it has its advantages and disadvantages. In its original implementation, the autoencoder is used to separate the objects of the training sample as much as possible. This time we will talk about how to deal with some of its disadvantages.

Data Science and Machine Learning (Part 07): Polynomial Regression

Unlike linear regression, polynomial regression is a flexible model aimed at performing better on tasks that a linear model cannot handle. Let's find out how to build polynomial models in MQL5 and make something positive out of them.
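
For reference, a polynomial regression of degree $n$ simply extends the linear model with powers of the input:

$$y = \beta_0 + \beta_1 x + \beta_2 x^2 + \dots + \beta_n x^n + \varepsilon$$

which is still linear in the coefficients $\beta$, so it can be fitted with the same least-squares machinery as ordinary linear regression.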

Experiments with neural networks (Part 2): Smart neural network optimization

In this article, I will use experimentation and non-standard approaches to develop a profitable trading system and check whether neural networks can be of any help to traders. We will use MetaTrader 5 as a self-sufficient tool for applying neural networks in trading.

Neural networks made easy (Part 20): Autoencoders

We continue to study unsupervised learning algorithms. Some readers might have questions regarding the relevance of recent publications to the topic of neural networks. In this new article, we get back to studying neural networks.

MQL5 Wizard techniques you should know (Part 03): Shannon's Entropy

Today's trader is a philomath who is almost always looking up new ideas, trying them out, and choosing to modify or discard them; an exploratory process that requires a fair amount of diligence. This series of articles proposes that the MQL5 Wizard should be a mainstay for traders.
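
The quantity the article revolves around is Shannon's entropy, which for a discrete distribution with outcome probabilities $p_i$ is

$$H = -\sum_{i} p_i \log_2 p_i$$

measured in bits; it is largest when all outcomes are equally likely and zero when one outcome is certain.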

Neural networks made easy (Part 19): Association rules using MQL5

We continue considering association rules. In the previous article, we discussed the theoretical aspects of this type of problem. In this article, I will show the implementation of the FP-Growth method using MQL5. We will also test the implemented solution using real data.

Neural networks made easy (Part 18): Association rules

As a continuation of this series of articles, let's consider another type of problem within unsupervised learning methods: mining association rules. This problem type was first applied in retail, namely supermarkets, to analyze market baskets. In this article, we will talk about the applicability of such algorithms in trading.
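
Two standard measures used to judge a rule of the form $A \Rightarrow B$ are its support and confidence:

$$\mathrm{support}(A \Rightarrow B) = P(A \cap B), \qquad \mathrm{confidence}(A \Rightarrow B) = \frac{P(A \cap B)}{P(A)}$$

and mining algorithms keep only the rules whose support and confidence exceed chosen thresholds.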

Data Science and Machine Learning — Neural Network (Part 02): Feed forward NN Architectures Design

There are a few remaining things to cover on the feed-forward neural network before we are through, the design being one of them. Let's see how we can build and design a neural network that is flexible with respect to our inputs, the number of hidden layers, and the nodes of each layer.

Metamodels in machine learning and trading: Original timing of trading orders

Metamodels in machine learning: automated creation of trading systems with little or no human intervention, where the model decides on its own when and how to trade.

Data Science and Machine Learning — Neural Network (Part 01): Feed Forward Neural Network demystified

Many people love them, but few understand the operations going on behind neural networks. In this article, I will try to explain everything that happens behind the closed doors of a feed-forward multilayer perceptron in plain English.

Experiments with neural networks (Part 1): Revisiting geometry

In this article, I will use experimentation and non-standard approaches to develop a profitable trading system and check whether neural networks can be of any help for traders.

Neural networks made easy (Part 17): Dimensionality reduction

In this part, we continue discussing Artificial Intelligence models. Namely, we study unsupervised learning algorithms. We have already discussed one of the clustering algorithms. In this article, I am sharing a way of solving problems related to dimensionality reduction.

Data Science and Machine Learning (Part 06): Gradient Descent

Gradient descent plays a significant role in training neural networks and many other machine learning algorithms. It is a quick and intelligent algorithm, yet despite its impressive work it is still misunderstood by a lot of data scientists. Let's see what it is all about.
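
The whole method boils down to repeating one update: take a small step against the gradient of the loss,

$$\theta \leftarrow \theta - \alpha \, \nabla_{\theta} J(\theta)$$

where $\theta$ are the model parameters, $J(\theta)$ is the loss being minimized and $\alpha$ is the learning rate.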

Neural networks made easy (Part 16): Practical use of clustering

In the previous article, we created a class for data clustering. In this article, I want to share possible ways of applying the obtained results to practical trading tasks.

Neural networks made easy (Part 15): Data clustering using MQL5

We continue to consider clustering. In this article, we will create a new CKmeans class to implement one of the most common clustering methods, k-means. During tests, the model managed to identify about 500 patterns.

Neural networks made easy (Part 14): Data clustering

It has been more than a year since I published my last article. This is quite a lot of time to revise ideas and develop new approaches. In the new article, I would like to move away from the previously used supervised learning method. This time we will dip into unsupervised learning algorithms. In particular, we will consider one of the clustering algorithms, k-means.

Data Science and Machine Learning (Part 05): Decision Trees

Decision trees imitate the way humans think when classifying data. Let's see how to build trees and use them to classify and predict data. The main goal of the decision tree algorithm is to split impure data into pure, or nearly pure, nodes.
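
A common way to measure how impure a node is, and hence where to split, is the Gini impurity over the class proportions $p_i$ at that node:

$$G = 1 - \sum_{i} p_i^2$$

which equals zero for a pure node and grows as the classes become more mixed.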

How to master Machine Learning

Check out this selection of useful materials which can assist traders in improving their algorithmic trading knowledge. The era of simple algorithms is passing, and it is becoming harder to succeed without the use of Machine Learning techniques and Neural Networks.

Data Science and Machine Learning (Part 04): Predicting Current Stock Market Crash

In this article, I am going to attempt to use our logistic model to predict a stock market crash based on the fundamentals of the US economy. Netflix and Apple are the stocks we are going to focus on. Using the previous market crashes of 2019 and 2020, let's see how our model performs amid the current doom and gloom.

Data Science and Machine Learning (Part 03): Matrix Regressions

This time our models are built with matrices, which gives us flexibility and lets us create powerful models that can handle not just five independent variables but many more, as long as we stay within the calculation limits of a computer. This article is going to be an interesting read, that's for sure.
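
In matrix form, the least-squares coefficients of the model $y = X\beta$ come out of a single expression, the normal equation:

$$\hat{\beta} = (X^{\top}X)^{-1} X^{\top} y$$

which is what allows the same code to handle any number of independent variables simply by adding columns to $X$.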

Data Science and Machine Learning (Part 02): Logistic Regression

Data classification is crucial for an algo trader and a programmer. In this article, we are going to focus on one of the classification algorithms, logistic regression, which can probabilistically help us identify the Yeses and Nos, the Ups and Downs, the Buys and Sells.
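
Logistic regression maps a linear combination of the inputs to a probability through the sigmoid function:

$$p = \frac{1}{1 + e^{-(\beta_0 + \beta_1 x_1 + \dots + \beta_n x_n)}}$$

and the class (Yes or No, Buy or Sell) is then chosen by comparing $p$ to a threshold such as 0.5.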

Data Science and Machine Learning (Part 01): Linear Regression

It's time for us as traders to train our systems and ourselves to make decisions based on what the numbers say, not on what our eyes see and what our guts make us believe. This is where the world is heading, so let us move perpendicular to the direction of the wave.
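
The simplest case covered here is the straight line $y = \beta_0 + \beta_1 x$, whose least-squares coefficients are

$$\beta_1 = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sum_i (x_i - \bar{x})^2}, \qquad \beta_0 = \bar{y} - \beta_1 \bar{x}$$

where $\bar{x}$ and $\bar{y}$ are the sample means of the independent and dependent variables.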

Matrices and vectors in MQL5

By using the special data types 'matrix' and 'vector', it is possible to write code that is very close to mathematical notation. These methods remove the need to create nested loops or to worry about correct array indexing in calculations. Therefore, the use of matrix and vector methods increases the reliability and speed of developing complex programs.
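
As a small illustration (a sketch assuming the matrix type and the MatMul/Transpose methods described in the MQL5 documentation; check them against your build), here is a matrix product and a transpose without a single explicit loop:

```mql5
void OnStart()
  {
   //--- two small matrices initialized directly, close to mathematical notation
   matrix A = {{1.0, 2.0}, {3.0, 4.0}};
   matrix B = {{5.0, 6.0}, {7.0, 8.0}};

   //--- one method call replaces three nested loops
   matrix C = A.MatMul(B);
   matrix T = C.Transpose();

   Print("C[0][0]=", C[0][0], "  C[1][1]=", C[1][1]);
   Print("T[0][1]=", T[0][1]);
  }
```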

Multilayer perceptron and backpropagation algorithm (Part II): Implementation in Python and integration with MQL5

There is a Python package available for developing integrations with MQL, which enables a plethora of opportunities such as data exploration and the creation and use of machine learning models. The built-in Python integration in MQL5 enables the creation of various solutions, from simple linear regression to deep learning models. Let's take a look at how to set up and prepare a development environment and how to use some of the machine learning libraries.

Programming a Deep Neural Network from Scratch using MQL Language

This article aims to teach the reader how to make a Deep Neural Network from scratch using the MQL4/5 language.
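
As a taste of what "from scratch" means, here is a minimal sketch (the function name and the flat weight layout are my own, not the article's) of one fully connected layer's forward pass with a tanh activation:

```mql5
//--- forward pass of one dense layer with tanh activation
//--- weights[] holds n_out rows of (n_in + 1) values, the last one being the bias
void LayerForward(const double &inputs[], const double &weights[],
                  double &outputs[], const int n_in, const int n_out)
  {
   ArrayResize(outputs, n_out);
   for(int j = 0; j < n_out; j++)
     {
      double sum = weights[j * (n_in + 1) + n_in];        // bias term
      for(int i = 0; i < n_in; i++)
         sum += weights[j * (n_in + 1) + i] * inputs[i];  // weighted inputs
      outputs[j] = MathTanh(sum);                         // nonlinearity
     }
  }
```

Stacking several such layers, and adding backpropagation to adjust the weights, is essentially what the article builds up.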

Neural networks made easy (Part 13): Batch Normalization

In the previous article, we started considering methods aimed at improving neural network training quality. In this article, we will continue this topic and will consider another approach — batch data normalization.
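
For reference, batch normalization standardizes each activation over the mini-batch and then rescales it with two trainable parameters:

$$\hat{x}_i = \frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}, \qquad y_i = \gamma \hat{x}_i + \beta$$

where $\mu_B$ and $\sigma_B^2$ are the batch mean and variance and $\epsilon$ is a small constant for numerical stability.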

Neural networks made easy (Part 12): Dropout

As the next step in studying neural networks, I suggest considering methods that improve convergence during neural network training. There are several such methods. In this article, we will consider one of them, called Dropout.
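
The idea behind Dropout is simple: during training, each neuron is kept with probability $p$ and zeroed otherwise, with the output rescaled so that the expected signal stays the same (the inverted-dropout formulation):

$$y_i = \frac{m_i \, x_i}{p}, \qquad m_i \sim \mathrm{Bernoulli}(p)$$

at inference time no masking is applied and all neurons are active.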

Machine learning in Grid and Martingale trading systems. Would you bet on it?

This article describes a machine learning technique applied to grid and martingale trading. Surprisingly, this approach has little to no coverage on the web. After reading the article, you will be able to create your own trading bots.

Neural networks made easy (Part 11): A take on GPT

Perhaps one of the most advanced language models currently in existence is GPT-3, whose largest variant contains 175 billion parameters. Of course, we are not going to create such a monster on our home PCs. However, we can look at which of its architectural solutions can be used in our work and how we can benefit from them.

Multilayer perceptron and backpropagation algorithm

As the popularity of these two methods grows, many libraries have been developed for Matlab, R, Python, C++ and other languages that receive a training set as input and automatically create an appropriate network for the problem. Let us try to understand how the basic neural network types work (including the single-neuron perceptron and the multilayer perceptron). We will consider an exciting algorithm responsible for network training: gradient descent with backpropagation. Existing complex models are often based on such simple network models.

Practical application of neural networks in trading (Part 2). Computer vision

The use of computer vision allows training neural networks on the visual representation of the price chart and indicators. This method enables broader use of the whole set of technical indicators, since there is no need to feed them to the neural network numerically.

Neural networks made easy (Part 10): Multi-Head Attention

We have previously considered the mechanism of self-attention in neural networks. In practice, modern neural network architectures use several parallel self-attention heads to find various dependencies between the elements of a sequence. Let us consider the implementation of such an approach and evaluate its impact on the overall network performance.
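
Each head computes the standard scaled dot-product attention

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V$$

with its own learned projections of the queries $Q$, keys $K$ and values $V$; the heads' outputs are then concatenated and projected, which is what lets different heads capture different dependencies in the sequence.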