Articles on machine learning in trading

Creating AI-based trading robots: native integration with Python, matrices and vectors, math and statistics libraries and much more.

Find out how to use machine learning in trading. Neurons, perceptrons, convolutional and recurrent networks, predictive models — start with the basics and work your way up to developing your own AI. You will learn how to train and apply neural networks for algorithmic trading in financial markets.

Reimagining Classic Strategies (Part IV): SP500 and US Treasury Notes

In this series of articles, we analyze classical trading strategies using modern algorithms to determine whether we can improve them with AI. In today's article, we revisit a classical approach to trading the SP500 using its relationship with US Treasury Notes.

Population optimization algorithms: Boids Algorithm

The article considers the Boids algorithm, which models the flocking behavior of animals. The Boids algorithm, in turn, serves as the basis for a whole class of algorithms grouped under the name "Swarm Intelligence".

Developing a robot in Python and MQL5 (Part 1): Data preprocessing

Developing a trading robot based on machine learning: A detailed guide. The first article in the series deals with collecting and preparing data and features. The project is implemented using the Python programming language and libraries, as well as the MetaTrader 5 platform.

Example of Auto Optimized Take Profits and Indicator Parameters with SMA and EMA

This article presents a sophisticated Expert Advisor for forex trading, combining machine learning with technical analysis. It focuses on trading Apple stock, featuring adaptive optimization, risk management, and multiple strategies. Backtesting shows promising results with high profitability but also significant drawdowns, indicating potential for further refinement.

MQL5 Wizard Techniques you should know (Part 31): Selecting the Loss Function

The loss function is the key metric of machine learning algorithms: it provides feedback to the training process by quantifying how well a given set of parameters is performing relative to its intended target. We explore the various formats of this function in an MQL5 custom wizard class.
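As a quick, hedged illustration of the idea in this preview (in Python rather than the article's MQL5 wizard class), the sketch below contrasts two common loss formats; the function names and sample values are purely illustrative.

```python
# A minimal sketch (not the article's MQL5 code) of two common loss formats:
# mean squared error for regression targets and binary cross-entropy for
# classification targets. All names and numbers here are illustrative.
import numpy as np

def mse_loss(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Mean squared error: penalizes large deviations quadratically."""
    return float(np.mean((y_true - y_pred) ** 2))

def binary_cross_entropy(y_true: np.ndarray, y_pred: np.ndarray, eps: float = 1e-12) -> float:
    """Binary cross-entropy: compares predicted probabilities with 0/1 labels."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    return float(-np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred)))

if __name__ == "__main__":
    y = np.array([1.0, 0.0, 1.0, 1.0])
    p = np.array([0.9, 0.2, 0.7, 0.4])
    print("MSE:", mse_loss(y, p))
    print("BCE:", binary_cross_entropy(y, p))
```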

Integrate Your Own LLM into EA (Part 5): Develop and Test Trading Strategy with LLMs(I)-Fine-tuning

With the rapid development of artificial intelligence today, large language models (LLMs) are an important part of it, so we should think about how to integrate powerful LLMs into our algorithmic trading. For most people, it is difficult to fine-tune these powerful models to their own needs, deploy them locally, and then apply them to algorithmic trading. This series of articles takes a step-by-step approach to achieving this goal.

Data Science and ML (Part 29): Essential Tips for Selecting the Best Forex Data for AI Training Purposes

In this article, we dive deep into the crucial aspects of choosing the most relevant and high-quality Forex data to enhance the performance of AI models.

Time series clustering in causal inference

Clustering algorithms are important unsupervised machine learning methods that divide the original data into groups of similar observations. By using these groups, you can analyze the market for a specific cluster, search for the most stable clusters using new data, and make causal inferences. The article proposes an original method for time series clustering in Python.
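To illustrate the general idea (this is not the article's original clustering method), here is a minimal Python sketch that clusters fixed-length windows of synthetic returns with k-means; the window length, cluster count, and data are assumptions for demonstration only.

```python
# A minimal sketch, not the article's original method: clustering fixed-length
# windows of returns with k-means on a synthetic series.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 1000)))  # synthetic prices
returns = np.diff(np.log(prices))

window = 20
segments = np.array([returns[i:i + window] for i in range(len(returns) - window)])

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(segments)
labels = kmeans.labels_

# Each label marks a regime-like group of similar 20-bar return patterns,
# which can then be analyzed separately (e.g., for causal inference).
print(np.bincount(labels))
```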

Neural networks made easy (Part 82): Ordinary Differential Equation models (NeuralODE)

In this article, we will discuss another type of model aimed at studying the dynamics of the environment state.

MQL5 Wizard Techniques you should know (Part 30): Spotlight on Batch-Normalization in Machine Learning

Batch normalization is the pre-processing of data before it is fed into a machine learning algorithm, such as a neural network. This is always done while being mindful of the type of activation to be used by the algorithm. We therefore explore the different approaches one can take to reap its benefits, with the help of a wizard-assembled Expert Advisor.
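As a rough, language-agnostic illustration (the article itself works with a wizard-assembled Expert Advisor in MQL5), the Python sketch below standardizes a batch of features before it is fed to a network; the sample numbers are illustrative.

```python
# A minimal sketch of batch normalization as data pre-processing: each feature
# in a batch is rescaled to zero mean and unit variance before being fed to
# the network. The figures are illustrative, not from the article.
import numpy as np

def batch_normalize(x: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Normalize each column (feature) of the batch to mean 0, variance 1."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    return (x - mean) / np.sqrt(var + eps)

batch = np.array([[1.2, 250.0],
                  [0.8, 310.0],
                  [1.0, 290.0]])
normalized = batch_normalize(batch)

# With inputs centered around zero, activations such as tanh or sigmoid stay
# in their sensitive range rather than saturating.
print(normalized)
```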

Integrating MQL5 with data processing packages (Part 1): Advanced Data analysis and Statistical Processing

Integration enables a seamless workflow in which raw financial data from MQL5 can be imported into data processing packages such as Jupyter Lab for advanced analysis, including statistical testing.

Build Self Optimizing Expert Advisors With MQL5 And Python (Part II): Tuning Deep Neural Networks

Machine learning models come with various adjustable parameters. In this series of articles, we will explore how to customize your AI models to fit your specific market using the SciPy library.

Data Science and ML (Part 28): Predicting Multiple Futures for EURUSD, Using AI

It is a common practice for many Artificial Intelligence models to predict a single future value. However, in this article, we will delve into the powerful technique of using machine learning models to predict multiple future values. This approach, known as multistep forecasting, allows us to predict not only tomorrow's closing price but also the day after tomorrow's and beyond. By mastering multistep forecasting, traders and data scientists can gain deeper insights and make more informed decisions, significantly enhancing their predictive capabilities and strategic planning.
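To make the idea of multistep forecasting concrete, here is a minimal Python sketch using lagged features and a multi-output regressor on synthetic data; the lag depth, horizon, and model choice are illustrative assumptions, not the article's EURUSD setup.

```python
# A minimal sketch of multistep (multi-output) forecasting: lagged values as
# features, several future steps as targets. Data and horizon are illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(1)
series = np.cumsum(rng.normal(0, 1, 500))  # synthetic price-like series

lags, horizon = 10, 3   # use 10 past values to predict 3 future values
X, Y = [], []
for i in range(lags, len(series) - horizon):
    X.append(series[i - lags:i])
    Y.append(series[i:i + horizon])
X, Y = np.array(X), np.array(Y)

# Fit on all but the last 50 samples, then predict the next 3 steps
model = MultiOutputRegressor(LinearRegression()).fit(X[:-50], Y[:-50])
pred = model.predict(X[-1:])
print("Next 3 steps:", pred.round(3))
```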

Role of random number generator quality in the efficiency of optimization algorithms

In this article, we will look at the Mersenne Twister random number generator and compare it with the standard one in MQL5. We will also assess the influence of random number generator quality on the results of optimization algorithms.
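For intuition only (the article's comparison is done in MQL5), the Python sketch below contrasts the Mersenne Twister with a deliberately weak linear congruential generator on a simple bucket-uniformity check; the constants and sample size are illustrative.

```python
# A minimal sketch, not the article's MQL5 comparison: the Mersenne Twister
# (used by NumPy's legacy generator) versus a weak linear congruential
# generator (LCG) on a simple bucket-uniformity check.
import numpy as np

def lcg(seed: int, n: int, m: int = 2**31, a: int = 1103515245, c: int = 12345):
    """Classic weak LCG returning n floats in [0, 1)."""
    out = np.empty(n)
    state = seed
    for i in range(n):
        state = (a * state + c) % m
        out[i] = state / m
    return out

n, buckets = 100_000, 10
mt = np.random.RandomState(42).random_sample(n)   # Mersenne Twister
weak = lcg(42, n)

for name, sample in [("Mersenne Twister", mt), ("LCG", weak)]:
    counts, _ = np.histogram(sample, bins=buckets, range=(0, 1))
    print(f"{name}: bucket spread = {counts.max() - counts.min()}")
```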

Reimagining Classic Strategies (Part III): Forecasting Higher Highs And Lower Lows

In this series of articles, we empirically analyze classic trading strategies to see if we can improve them using AI. In today's discussion, we try to predict higher highs and lower lows using the Linear Discriminant Analysis model.

Neural Networks Made Easy (Part 81): Context-Guided Motion Analysis (CCMR)

In previous works, we always assessed the current state of the environment, while the dynamics of changes in indicators remained "behind the scenes". In this article, I want to introduce you to an algorithm that evaluates the direct change in data between two successive environment states.

MQL5 Wizard Techniques you should know (Part 29): Continuation on Learning Rates with MLPs

We wrap up our look at the sensitivity of Expert Advisor performance to the learning rate by focusing on adaptive learning rates. These learning rates are customized for each parameter in a layer during training, so we weigh the potential benefits against the expected performance toll.

Population optimization algorithms: Whale Optimization Algorithm (WOA)

Whale Optimization Algorithm (WOA) is a metaheuristic algorithm inspired by the behavior and hunting strategies of humpback whales. The main idea of WOA is to mimic the so-called "bubble-net" feeding method, in which whales create bubbles around prey and then attack it in a spiral motion.
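As a toy illustration of the spiral "bubble-net" update only (a full WOA alternates this with encircling and random search, and the article's implementation is in MQL5), here is a minimal Python sketch on a synthetic objective; all parameters are illustrative.

```python
# A minimal sketch of WOA's spiral position update for a small population on a
# toy objective. Parameters (b, iterations, bounds) are illustrative; this is
# only the spiral branch of the full algorithm.
import numpy as np

def objective(x: np.ndarray) -> float:
    return float(np.sum(x ** 2))          # toy function to minimize

rng = np.random.default_rng(3)
dim, n_whales, iters, b = 2, 10, 50, 1.0
whales = rng.uniform(-10, 10, size=(n_whales, dim))

for _ in range(iters):
    fitness = np.array([objective(w) for w in whales])
    best = whales[fitness.argmin()].copy()   # current prey estimate
    for i in range(n_whales):
        l = rng.uniform(-1, 1)               # random spiral parameter
        distance = np.abs(best - whales[i])
        # spiral update: move toward the best solution along a log spiral
        whales[i] = distance * np.exp(b * l) * np.cos(2 * np.pi * l) + best

print("Best found:", whales[np.array([objective(w) for w in whales]).argmin()])
```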

Neural networks made easy (Part 80): Graph Transformer Generative Adversarial Model (GTGAN)

In this article, we will get acquainted with the GTGAN algorithm, which was introduced in January 2024 to solve complex problems of generating architectural layouts with graph constraints.

Hybridization of population algorithms. Sequential and parallel structures

Here we will dive into the world of hybridization of optimization algorithms by looking at three key types: strategy mixing, sequential and parallel hybridization. We will conduct a series of experiments combining and testing relevant optimization algorithms.

Data Science and ML (Part 27): Convolutional Neural Networks (CNNs) in MetaTrader 5 Trading Bots — Are They Worth It?

Convolutional Neural Networks (CNNs) are renowned for their prowess in detecting patterns in images and videos, with applications spanning diverse fields. In this article, we explore the potential of CNNs to identify valuable patterns in financial markets and generate effective trading signals for MetaTrader 5 trading bots. Let us discover how this deep machine learning technique can be leveraged for smarter trading decisions.

MQL5 Wizard Techniques you should know (Part 28): GANs Revisited with a Primer on Learning Rates

The learning rate is the step size toward a training target in many machine learning algorithms' training processes. We examine the impact its many schedules and formats can have on the performance of a Generative Adversarial Network, a type of neural network that we examined in an earlier article.
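For a concrete feel of what a learning-rate schedule looks like (independent of the article's GAN experiments), the short Python sketch below compares step decay and exponential decay; the base rate and decay factors are illustrative.

```python
# A minimal sketch of two common learning-rate schedules: step decay and
# exponential decay. The base rate and decay constants are illustrative.
import math

def step_decay(base_lr: float, epoch: int, drop: float = 0.5, every: int = 10) -> float:
    """Halve the learning rate every `every` epochs."""
    return base_lr * (drop ** (epoch // every))

def exponential_decay(base_lr: float, epoch: int, k: float = 0.05) -> float:
    """Smoothly shrink the learning rate each epoch."""
    return base_lr * math.exp(-k * epoch)

for epoch in (0, 10, 20, 30):
    print(epoch, round(step_decay(0.01, epoch), 5), round(exponential_decay(0.01, epoch), 5))
```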

Population optimization algorithms: Resistance to getting stuck in local extrema (Part II)

We continue our experiment that aims to examine the behavior of population optimization algorithms in the context of their ability to efficiently escape local minima when population diversity is low and reach global maxima. Research results are provided.

Data Science and ML (Part 26): The Ultimate Battle in Time Series Forecasting — LSTM vs GRU Neural Networks

In the previous article, we discussed a simple RNN which, despite its inability to capture long-term dependencies in the data, was able to produce a profitable strategy. In this article, we discuss both the Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU). These two were introduced to overcome the shortcomings of a simple RNN and to outperform it.
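As a minimal, hedged illustration of how the two cells are swapped in practice (using Keras on synthetic data, not the article's forex dataset), the sketch below trains a tiny LSTM model and a tiny GRU model with an otherwise identical architecture.

```python
# A minimal sketch comparing LSTM and GRU cells in otherwise identical Keras
# models on synthetic sequences. Shapes, labels, and epochs are illustrative.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 30, 1)).astype("float32")   # 200 sequences of 30 bars
y = (X[:, -1, 0] > 0).astype("float32")               # toy up/down label

def build(cell):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(30, 1)),
        cell(16),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

for name, cell in [("LSTM", tf.keras.layers.LSTM), ("GRU", tf.keras.layers.GRU)]:
    hist = build(cell).fit(X, y, epochs=3, batch_size=32, verbose=0)
    print(name, "final accuracy:", round(hist.history["accuracy"][-1], 3))
```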

SP500 Trading Strategy in MQL5 For Beginners

Discover how to leverage MQL5 to forecast the S&P 500 with precision, blending in classical technical analysis for added stability and combining algorithms with time-tested principles for robust market insights.

Using PatchTST Machine Learning Algorithm for Predicting Next 24 Hours of Price Action

In this article, we apply a relatively complex neural network algorithm released in 2023 called PatchTST for predicting the price action for the next 24 hours. We will use the official repository, make slight modifications, train a model for EURUSD, and apply it to making future predictions both in Python and MQL5.

Eigenvectors and eigenvalues: Exploratory data analysis in MetaTrader 5

In this article, we explore different ways in which eigenvectors and eigenvalues can be applied in exploratory data analysis to reveal unique relationships in the data.
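To sketch the core computation (on synthetic data, not the MetaTrader 5 data used in the article), the Python example below eigen-decomposes a covariance matrix of correlated returns; the data-generating process is an illustrative assumption.

```python
# A minimal sketch of eigen-decomposition in exploratory data analysis: the
# eigenvectors of the covariance matrix give the principal directions of
# correlated returns, the eigenvalues their variance. Data are synthetic.
import numpy as np

rng = np.random.default_rng(4)
common = rng.normal(0, 1, 500)
returns = np.column_stack([
    common + rng.normal(0, 0.3, 500),   # two correlated "symbols"
    common + rng.normal(0, 0.3, 500),
    rng.normal(0, 1, 500),              # one independent "symbol"
])

cov = np.cov(returns, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(cov)   # ascending order

# The largest eigenvalue's eigenvector captures the shared movement of the
# two correlated series.
print("Eigenvalues:", eigenvalues.round(3))
print("Leading eigenvector:", eigenvectors[:, -1].round(3))
```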

Neural networks made easy (Part 79): Feature Aggregated Queries (FAQ) in the context of state

In the previous article, we got acquainted with one of the methods for detecting objects in an image. However, processing a static image is somewhat different from working with dynamic time series, such as the price dynamics we analyze. In this article, we will consider a method for detecting objects in video, which is somewhat closer to the problem we are solving.

Neural networks made easy (Part 78): Decoder-free Object Detector with Transformer (DFFT)

In this article, I propose to look at the issue of building a trading strategy from a different angle. We will not predict future price movements, but will try to build a trading system based on the analysis of historical data.

Reimagining Classic Strategies in Python: MA Crossovers

In this article, we revisit the classic moving average crossover strategy to assess its current effectiveness. Given the amount of time since its inception, we explore the potential enhancements that AI can bring to this traditional trading strategy. By incorporating AI techniques, we aim to leverage advanced predictive capabilities to potentially optimize trade entry and exit points, adapt to varying market conditions, and enhance overall performance compared to conventional approaches.
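As a baseline for the classic signal itself (before any AI enhancement, and with illustrative window lengths and synthetic prices), here is a minimal pandas sketch of the moving-average crossover.

```python
# A minimal sketch of the moving-average crossover signal in Python using
# pandas. Window lengths and the synthetic series are illustrative; the
# article layers AI on top of signals like these.
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
close = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 500))), name="close")

fast = close.rolling(20).mean()
slow = close.rolling(50).mean()

# +1 when the fast MA is above the slow MA, -1 when below
signal = np.sign(fast - slow)
crossovers = signal.diff().fillna(0) != 0   # bars where the regime flips

print("Number of crossovers:", int(crossovers.sum()))
print(signal.tail())
```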

Neural networks made easy (Part 77): Cross-Covariance Transformer (XCiT)

In our models, we often use various attention algorithms, and most often we probably use Transformers. Their main disadvantage is their resource requirements. In this article, we will consider a new algorithm that can help reduce computing costs without losing quality.

Data Science and Machine Learning (Part 25): Forex Timeseries Forecasting Using a Recurrent Neural Network (RNN)

Recurrent neural networks (RNNs) excel at leveraging past information to predict future events. Their remarkable predictive capabilities have been applied across various domains with great success. In this article, we will deploy RNN models to predict trends in the forex market, demonstrating their potential to enhance forecasting accuracy in forex trading.

Propensity score in causal inference

The article examines the topic of matching in causal inference. Matching is used to compare similar observations in a data set. This is necessary to correctly determine causal effects and get rid of bias. The author explains how this helps in building trading systems based on machine learning, which become more stable on new data they were not trained on. The propensity score plays a central role and is widely used in causal inference.
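To make the mechanics concrete (on synthetic data, with an illustrative data-generating process rather than the article's trading setup), the Python sketch below estimates propensity scores with logistic regression and performs simple nearest-neighbour matching.

```python
# A minimal sketch of propensity-score estimation and nearest-neighbour
# matching on synthetic data. Variable names and the data-generating process
# are illustrative, not the article's setup.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n = 1000
x = rng.normal(size=(n, 3))                                  # covariates
treated = (x[:, 0] + rng.normal(0, 1, n) > 0).astype(int)    # non-random "treatment"
outcome = 2.0 * treated + x[:, 0] + rng.normal(0, 1, n)      # true effect = 2.0

# 1) Estimate the propensity score: P(treated | covariates)
score = LogisticRegression().fit(x, treated).predict_proba(x)[:, 1]

# 2) Match each treated unit to the control unit with the closest score
t_idx, c_idx = np.where(treated == 1)[0], np.where(treated == 0)[0]
matches = c_idx[np.abs(score[c_idx][None, :] - score[t_idx][:, None]).argmin(axis=1)]

# 3) Compare outcomes of treated units and their matched controls
att = (outcome[t_idx] - outcome[matches]).mean()
print("Estimated treatment effect on the treated:", round(att, 3))
```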

Creating Time Series Predictions using LSTM Neural Networks: Normalizing Price and Tokenizing Time

This article outlines a simple strategy for normalizing the market data using the daily range and training a neural network to enhance market predictions. The developed models may be used in conjunction with existing technical analysis frameworks or on a standalone basis to assist in predicting the overall market direction. The framework outlined in this article may be further refined by any technical analyst to develop models suitable for both manual and automated trading strategies.
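As a rough sketch of one way such a daily-range normalization can look (the grouping and scaling choices here are assumptions, not necessarily the article's exact scheme), consider the following Python example.

```python
# A minimal sketch of normalizing intraday prices by the daily range: each
# bar's close is rescaled to its position within that day's high-low range.
# The synthetic data and 24-bar "day" are illustrative assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
bars_per_day, days = 24, 5
close = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.001, bars_per_day * days))))
day = close.index // bars_per_day            # which trading day each bar belongs to

daily_high = close.groupby(day).transform("max")
daily_low = close.groupby(day).transform("min")

# Price position within that day's range, scaled to [0, 1]
normalized = (close - daily_low) / (daily_high - daily_low)
print(normalized.describe().round(3))
```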

Integrate Your Own LLM into EA (Part 4): Training Your Own LLM with GPU

With the rapid development of artificial intelligence today, large language models (LLMs) are an important part of it, so we should think about how to integrate powerful LLMs into our algorithmic trading. For most people, it is difficult to fine-tune these powerful models to their own needs, deploy them locally, and then apply them to algorithmic trading. This series of articles takes a step-by-step approach to achieving this goal.

The base class of population algorithms as the backbone of efficient optimization

The article represents a unique research attempt to combine a variety of population algorithms into a single class to simplify the application of optimization methods. This approach not only opens up opportunities for the development of new algorithms, including hybrid variants, but also creates a universal basic test bench. This bench becomes a key tool for choosing the optimal algorithm for a specific task.

Data Science and Machine Learning (Part 24): Forex Time series Forecasting Using Regular AI Models

In the forex markets, it is very challenging to predict the future trend without an understanding of the past. Very few machine learning models are capable of making future predictions by considering past values. In this article, we discuss how we can use classical (non-time series) artificial intelligence models to beat the market.

MQL5 Wizard Techniques you should know (Part 23): CNNs

Convolutional Neural Networks are another machine learning algorithm that tends to specialize in decomposing multi-dimensional data sets into key constituent parts. We look at how this is typically achieved and explore a possible application for traders in another MQL5 wizard signal class.

Neural networks made easy (Part 76): Exploring diverse interaction patterns with Multi-future Transformer

This article continues the topic of predicting the upcoming price movement. I invite you to get acquainted with the Multi-future Transformer architecture. Its main idea is to decompose the multimodal distribution of the future into several unimodal distributions, which makes it possible to effectively model various patterns of interaction between agents in the scene.

Neural networks made easy (Part 75): Improving the performance of trajectory prediction models

The models we create are becoming larger and more complex. This increases the costs of not only their training, but also their operation. However, the time required to make a decision is often critical. In this regard, let us consider methods for optimizing model performance without loss of quality.