Population optimization algorithms: Micro Artificial Immune System (Micro-AIS)
The article considers an optimization method based on the principles of the body's immune system - Micro Artificial Immune System (Micro-AIS) - a modification of AIS. Micro-AIS uses a simpler model of the immune system and simple immune information processing operations. The article also discusses the advantages and disadvantages of Micro-AIS compared to conventional AIS.
Population optimization algorithms: Bacterial Foraging Optimization - Genetic Algorithm (BFO-GA)
The article presents a new approach to solving optimization problems by combining ideas from bacterial foraging optimization (BFO) algorithms and techniques used in the genetic algorithm (GA) into a hybrid BFO-GA algorithm. It uses bacterial swarming to globally search for an optimal solution and genetic operators to refine local optima. Unlike in the original BFO, bacteria can now mutate and inherit genes.
Developing an MQL5 Reinforcement Learning agent with RestAPI integration (Part 1): How to use RestAPIs in MQL5
In this article we will talk about the importance of APIs (Application Programming Interfaces) for interaction between different applications and software systems. We will see the role of APIs in simplifying interactions between applications, allowing them to efficiently share data and functionality.
Population optimization algorithms: Evolution Strategies, (μ,λ)-ES and (μ+λ)-ES
The article considers a group of optimization algorithms known as Evolution Strategies (ES). They are among the very first population algorithms to use evolutionary principles for finding optimal solutions. We will implement changes to the conventional ES variants and revise the test function and test stand methodology for the algorithms.
Overcoming ONNX Integration Challenges
ONNX is a great tool for integrating complex AI code across different platforms, but it comes with some challenges that must be addressed to get the most out of it. In this article, we discuss the common issues you might face and how to mitigate them.
MQL5 Wizard Techniques you should know (Part 16): Principal Component Analysis with Eigen Vectors
This article looks at Principal Component Analysis, a dimensionality reduction technique in data analysis, and how it can be implemented with eigenvalues and eigenvectors. As always, we aim to develop a prototype expert signal class usable in the MQL5 Wizard.
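For readers who want to see the core idea in isolation, the following is a minimal NumPy sketch of PCA via eigen decomposition of a covariance matrix; it is not the article's MQL5 code, and the toy data and the choice of keeping two components are arbitrary assumptions.

```python
import numpy as np

# Toy data: 100 samples, 5 features (an arbitrary stand-in for price-derived features).
X = np.random.randn(100, 5)

# Center the data and compute the covariance matrix.
X_centered = X - X.mean(axis=0)
cov = np.cov(X_centered, rowvar=False)

# Eigen decomposition; eigh is used because the covariance matrix is symmetric.
eigenvalues, eigenvectors = np.linalg.eigh(cov)

# Sort components by descending eigenvalue and keep the top two.
order = np.argsort(eigenvalues)[::-1]
components = eigenvectors[:, order[:2]]

# Project the centered data onto the retained principal components.
X_reduced = X_centered @ components
print(X_reduced.shape)  # (100, 2)
```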
Population optimization algorithms: Changing shape, shifting probability distributions and testing on Smart Cephalopod (SC)
The article examines the impact of changing the shape of probability distributions on the performance of optimization algorithms. We will conduct experiments using the Smart Cephalopod (SC) test algorithm to evaluate the efficiency of various probability distributions in the context of optimization problems.
Population optimization algorithms: Simulated Isotropic Annealing (SIA) algorithm. Part II
The first part was devoted to the well-known and popular algorithm - simulated annealing. We have thoroughly considered its pros and cons. The second part of the article is devoted to the radical transformation of the algorithm, which turns it into a new optimization algorithm - Simulated Isotropic Annealing (SIA).
Population optimization algorithms: Simulated Annealing (SA) algorithm. Part I
The Simulated Annealing algorithm is a metaheuristic inspired by the metal annealing process. In the article, we will conduct a thorough analysis of the algorithm and debunk a number of common beliefs and myths surrounding this widely known optimization method. The second part of the article will consider the custom Simulated Isotropic Annealing (SIA) algorithm.
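As a reference point for the discussion, a generic simulated annealing loop (not the article's implementation, and not the SIA variant) might look like this in Python; the objective function, cooling schedule and step size are arbitrary assumptions.

```python
import math
import random

def objective(x):
    # Arbitrary 1-D test function with several local minima.
    return x * x + 10.0 * math.sin(x)

def simulated_annealing(t_start=10.0, t_end=1e-3, alpha=0.95, step=0.5):
    x = random.uniform(-10.0, 10.0)        # random initial solution
    best_x, best_f = x, objective(x)
    t = t_start
    while t > t_end:
        candidate = x + random.uniform(-step, step)  # perturb the current solution
        delta = objective(candidate) - objective(x)
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = candidate
            if objective(x) < best_f:
                best_x, best_f = x, objective(x)
        t *= alpha                                   # geometric cooling schedule
    return best_x, best_f

print(simulated_annealing())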
MQL5 Wizard Techniques You Should Know (Part 15): Support Vector Machines with Newton's Polynomial
Support Vector Machines classify data into predefined classes by exploring the effects of increasing the data's dimensionality. It is a supervised learning method that is fairly complex given its potential to deal with multi-dimensional data. In this article, we consider how its very basic implementation on 2-dimensional data can be done more efficiently with Newton's Polynomial when classifying price action.
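For context, a generic kernel-SVM classification of 2-dimensional data in scikit-learn is sketched below; this is an assumed illustration of the basic idea, not the article's Newton's Polynomial approach or its MQL5 signal class.

```python
import numpy as np
from sklearn.svm import SVC

# Two-dimensional toy data: two ring-shaped classes that are not linearly separable.
rng = np.random.default_rng(0)
radius = np.concatenate([rng.uniform(0, 1, 100), rng.uniform(2, 3, 100)])
angle = rng.uniform(0, 2 * np.pi, 200)
X = np.column_stack([radius * np.cos(angle), radius * np.sin(angle)])
y = np.concatenate([np.zeros(100), np.ones(100)])

# The RBF kernel implicitly maps the 2-D points into a higher-dimensional space
# where a separating hyperplane exists.
clf = SVC(kernel="rbf", C=1.0).fit(X, y)
print(clf.score(X, y))
```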
Neural networks made easy (Part 67): Using past experience to solve new tasks
In this article, we continue discussing methods for collecting data into a training set. Obviously, the learning process requires constant interaction with the environment. However, situations can be different.
Neural networks made easy (Part 66): Exploration problems in offline learning
Models are trained offline using data from a prepared training dataset. While this provides certain advantages, the downside is that information about the environment is greatly compressed to the size of the training dataset, which in turn limits the possibilities for exploration. In this article, we will consider a method that enables filling a training dataset with the most diverse data possible.
Introduction to MQL5 (Part 6): A Beginner's Guide to Array Functions in MQL5 (II)
Embark on the next phase of our MQL5 journey. In this insightful and beginner-friendly article, we'll look into the remaining array functions, demystifying complex concepts to empower you to craft efficient trading strategies. We’ll be discussing ArrayPrint, ArrayInsert, ArraySize, ArrayRange, ArrayRemove, ArraySwap, ArrayReverse, and ArraySort. Elevate your algorithmic trading expertise with these essential array functions. Join us on the path to MQL5 mastery!
The Group Method of Data Handling: Implementing the Multilayered Iterative Algorithm in MQL5
In this article we describe the implementation of the Multilayered Iterative Algorithm of the Group Method of Data Handling in MQL5.
Population optimization algorithms: Nelder–Mead, or simplex search (NM) method
The article presents a complete exploration of the Nelder-Mead method, explaining how the simplex (function parameter space) is modified and rearranged at each iteration to achieve an optimal solution, and describes how the method can be improved.
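As a quick point of reference, SciPy ships a Nelder-Mead simplex search; the sketch below, which minimizes the Rosenbrock function, is an assumed example and is separate from the article's own implementation.

```python
import numpy as np
from scipy.optimize import minimize

def rosenbrock(p):
    # Classic 2-D test function with a curved valley; minimum at (1, 1).
    x, y = p
    return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2

# Nelder-Mead builds a simplex of three points in 2-D and reflects,
# expands and contracts it on each iteration.
result = minimize(rosenbrock, x0=np.array([-1.5, 2.0]), method="Nelder-Mead")
print(result.x, result.fun)
```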
Population optimization algorithms: Differential Evolution (DE)
In this article, we will consider the algorithm that demonstrates the most controversial results of all those discussed previously - the differential evolution (DE) algorithm.
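For orientation, a ready-made DE implementation is available in SciPy; the sketch below is an assumed illustration of the standard algorithm (sphere objective, illustrative F and CR settings), not the modified version discussed in the article.

```python
import numpy as np
from scipy.optimize import differential_evolution

def sphere(x):
    # Simple convex test function; global minimum at the origin.
    return np.sum(x ** 2)

bounds = [(-5.0, 5.0)] * 3  # three parameters, each searched in [-5, 5]

# mutation corresponds to the DE scale factor F, recombination to the crossover rate CR.
result = differential_evolution(sphere, bounds, mutation=(0.5, 1.0),
                                recombination=0.7, seed=1)
print(result.x, result.fun)
```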
Python, ONNX and MetaTrader 5: Creating a RandomForest model with RobustScaler and PolynomialFeatures data preprocessing
In this article, we will create a random forest model in Python, train the model, and save it as an ONNX pipeline with data preprocessing. After that we will use the model in the MetaTrader 5 terminal.
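A condensed sketch of that workflow, with synthetic data and illustrative hyperparameters rather than the article's exact code, could look like this using scikit-learn and skl2onnx:

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import RobustScaler, PolynomialFeatures
from sklearn.ensemble import RandomForestRegressor
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType

# Synthetic stand-in for price-derived features and a target column.
X = np.random.randn(500, 4).astype(np.float32)
y = X[:, 0] * 2.0 - X[:, 1] + np.random.randn(500) * 0.1

# Preprocessing and the model are combined into one pipeline,
# so the exported ONNX file carries the whole transformation chain.
pipeline = Pipeline([
    ("scaler", RobustScaler()),
    ("poly", PolynomialFeatures(degree=2)),
    ("forest", RandomForestRegressor(n_estimators=100, random_state=1)),
])
pipeline.fit(X, y)

# Convert the fitted pipeline to ONNX; the input shape must be declared.
onnx_model = convert_sklearn(
    pipeline, initial_types=[("input", FloatTensorType([None, 4]))])
with open("rf_pipeline.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())
```

The resulting file contains the scaler and the polynomial expansion together with the forest, so the terminal-side code only needs to feed raw features.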
Neural networks made easy (Part 65): Distance Weighted Supervised Learning (DWSL)
In this article, we will get acquainted with an interesting algorithm that is built at the intersection of supervised and reinforcement learning methods.
MQL5 Wizard Techniques you should know (Part 14): Multi Objective Timeseries Forecasting with STF
Spatial Temporal Fusion, which uses both ‘space’ and time metrics in modelling data, is primarily useful in remote sensing and a host of other visual-based activities for gaining a better understanding of our surroundings. Drawing on a published paper, we take a novel approach to it by examining its potential for traders.
Population optimization algorithms: Spiral Dynamics Optimization (SDO) algorithm
The article presents an optimization algorithm based on the patterns of constructing spiral trajectories in nature, such as mollusk shells - the spiral dynamics optimization (SDO) algorithm. I have thoroughly revised and modified the algorithm proposed by the authors. The article will consider the necessity of these changes.
Data Science and Machine Learning (Part 21): Unlocking Neural Networks, Optimization algorithms demystified
Dive into the heart of neural networks as we demystify the optimization algorithms used to train them. In this article, discover the key techniques that unlock the full potential of neural networks, propelling your models to new heights of accuracy and efficiency.
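As a minimal illustration of what such optimizers do under the hood, here is a plain gradient-descent-with-momentum update for a one-parameter linear model in NumPy; it is a generic sketch, not code from the article, and the learning rate and momentum values are arbitrary assumptions.

```python
import numpy as np

# Toy regression data: y = 3x + noise.
X = np.random.randn(200, 1)
y = 3.0 * X[:, 0] + np.random.randn(200) * 0.1

w, b = 0.0, 0.0
velocity_w, velocity_b = 0.0, 0.0
lr, momentum = 0.1, 0.9        # assumed hyperparameters

for epoch in range(100):
    pred = w * X[:, 0] + b
    error = pred - y
    # Gradients of the mean squared error with respect to w and b.
    grad_w = 2.0 * np.mean(error * X[:, 0])
    grad_b = 2.0 * np.mean(error)
    # Momentum accumulates an exponentially decayed sum of past gradients.
    velocity_w = momentum * velocity_w - lr * grad_w
    velocity_b = momentum * velocity_b - lr * grad_b
    w += velocity_w
    b += velocity_b

print(w, b)  # should approach 3.0 and 0.0
```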
Seasonality Filtering and time period for Deep Learning ONNX models with python for EA
Can we benefit from seasonality when creating models for Deep Learning with Python? Does filtering data for the ONNX models help to get better results? What time period should we use? We will cover all of this over the course of this article.
Population optimization algorithms: Intelligent Water Drops (IWD) algorithm
The article considers an interesting algorithm derived from inanimate nature - intelligent water drops (IWD) simulating the process of river bed formation. The ideas of this algorithm made it possible to significantly improve the previous leader of the rating - SDS. As usual, the new leader (modified SDSm) can be found in the attachment.
Neural networks made easy (Part 64): ConserWeightive Behavioral Cloning (CWBC) method
As a result of tests performed in previous articles, we came to the conclusion that the optimality of the trained strategy largely depends on the training set used. In this article, we will get acquainted with a fairly simple yet effective method for selecting trajectories to train models.
MQL5 Wizard Techniques you should know (Part 13): DBSCAN for Expert Signal Class
Density-Based Spatial Clustering of Applications with Noise is an unsupervised form of grouping data that hardly requires any input parameters, save for just two, which, when compared to other approaches like k-means, is a boon. We delve into how this could be constructive for testing and eventually trading with Wizard-assembled Expert Advisors.
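To make the "just two parameters" point concrete, here is a minimal scikit-learn sketch (not the Wizard signal class itself); the eps and min_samples values and the toy data are assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Two blobs plus a few stray points that should be labelled as noise.
rng = np.random.default_rng(1)
blob_a = rng.normal(loc=0.0, scale=0.3, size=(50, 2))
blob_b = rng.normal(loc=5.0, scale=0.3, size=(50, 2))
noise = rng.uniform(low=-2.0, high=7.0, size=(5, 2))
X = np.vstack([blob_a, blob_b, noise])

# Only two parameters: eps (neighbourhood radius) and min_samples (core-point density).
labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(X)

print(set(labels))  # cluster ids plus -1 for points considered noise
```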
Neural networks made easy (Part 63): Unsupervised Pretraining for Decision Transformer (PDT)
We continue to discuss the family of Decision Transformer methods. From the previous article, we have already noticed that training the transformer underlying the architecture of these methods is a rather complex task that requires a large labeled dataset. In this article, we will look at an algorithm for using unlabeled trajectories for preliminary model training.
Introduction to MQL5 (Part 5): A Beginner's Guide to Array Functions in MQL5
Explore the world of MQL5 arrays in Part 5, designed for absolute beginners. Simplifying complex coding concepts, this article focuses on clarity and inclusivity. Join our community of learners, where questions are embraced, and knowledge is shared!
Quantization in machine learning (Part 2): Data preprocessing, table selection, training CatBoost models
The article considers the practical application of quantization in the construction of tree models. The methods for selecting quantum tables and data preprocessing are considered. No complex mathematical equations are used.
Neural networks made easy (Part 62): Using Decision Transformer in hierarchical models
In recent articles, we have seen several options for using the Decision Transformer method. The method allows analyzing not only the current state, but also the trajectory of previous states and actions performed in them. In this article, we will focus on using this method in hierarchical models.
The Disagreement Problem: Diving Deeper into the Complexity of Explainability in AI
In this article, we explore the challenge of understanding how AI works. AI models often make decisions in ways that are hard to explain, leading to what's known as the "disagreement problem". This issue is key to making AI more transparent and trustworthy.
Population optimization algorithms: Charged System Search (CSS) algorithm
In this article, we will consider another optimization algorithm inspired by inanimate nature - the Charged System Search (CSS) algorithm. The purpose of the article is to present a new optimization algorithm based on the principles of physics and mechanics.
Deep Learning GRU model with Python to ONNX with EA, and GRU vs LSTM models
We will guide you through the entire process of deep learning with Python to build a GRU ONNX model, culminating in the creation of an Expert Advisor (EA) designed for trading, and subsequently comparing the GRU model with an LSTM model.
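A compressed sketch of that pipeline, with illustrative window sizes and layer widths rather than the article's exact model, might look like this using Keras and tf2onnx:

```python
import numpy as np
import tensorflow as tf
import tf2onnx

# Synthetic stand-in for windows of 30 past closes predicting the next close.
X = np.random.randn(1000, 30, 1).astype(np.float32)
y = np.random.randn(1000, 1).astype(np.float32)

# A small GRU regressor; an LSTM version would just swap the recurrent layer.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(30, 1)),
    tf.keras.layers.GRU(64),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)

# Export to ONNX so the model can later be loaded from an Expert Advisor.
spec = (tf.TensorSpec((None, 30, 1), tf.float32, name="input"),)
tf2onnx.convert.from_keras(model, input_signature=spec, output_path="gru_model.onnx")
```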
Integrating ML models with the Strategy Tester (Conclusion): Implementing a regression model for price prediction
This article describes the implementation of a regression model based on a decision tree. The model should predict prices of financial assets. We have already prepared the data, trained and evaluated the model, as well as adjusted and optimized it. However, it is important to note that this model is intended for study purposes only and should not be used in real trading.
Neural networks made easy (Part 61): Optimism issue in offline reinforcement learning
During offline learning, we optimize the Agent's policy based on the training sample data. The resulting strategy gives the Agent confidence in its actions. However, such optimism is not always justified and can cause increased risks during model operation. Today we will look at one of the methods to reduce these risks.
Quantization in machine learning (Part 1): Theory, sample code, analysis of implementation in CatBoost
The article considers the theoretical application of quantization in the construction of tree models and showcases the implemented quantization methods in CatBoost. No complex mathematical equations are used.
Working with ONNX models in float16 and float8 formats
Data formats used to represent machine learning models play a crucial role in their effectiveness. In recent years, several new types of data have emerged, specifically designed for working with deep learning models. In this article, we will focus on two new data formats that have become widely adopted in modern models.
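As one concrete example of such a conversion (float16 only; float8 handling depends on the tooling version), a float32 ONNX model can be down-cast with onnxconverter-common; the file names below are assumptions.

```python
import onnx
from onnxconverter_common import float16

# Load an existing float32 ONNX model (file name is illustrative).
model_fp32 = onnx.load("model_fp32.onnx")

# Convert weights and activations to float16 to roughly halve the model size;
# a block list of ops that do not tolerate float16 stays in float32.
model_fp16 = float16.convert_float_to_float16(model_fp32)

onnx.save(model_fp16, "model_fp16.onnx")
```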
Experiments with neural networks (Part 7): Passing indicators
Examples of passing indicators to a perceptron. The article describes general concepts and showcases the simplest ready-made Expert Advisor followed by the results of its optimization and forward test.
Neural networks made easy (Part 60): Online Decision Transformer (ODT)
The last two articles were devoted to the Decision Transformer method, which models action sequences in the context of an autoregressive model of desired rewards. In this article, we will look at another optimization algorithm for this method.
Data Science and Machine Learning (Part 20): Algorithmic Trading Insights, A Faceoff Between LDA and PCA in MQL5
Uncover the secrets behind these powerful dimensionality reduction techniques as we dissect their applications within the MQL5 trading environment. Delve into the nuances of Linear Discriminant Analysis (LDA) and Principal Component Analysis (PCA), gaining a profound understanding of their impact on strategy development and market analysis.
Neural networks made easy (Part 59): Dichotomy of Control (DoC)
In the previous article, we got acquainted with the Decision Transformer. But the complex stochastic environment of the foreign exchange market did not allow us to fully realize the potential of the presented method. In this article, I will introduce an algorithm aimed at improving the performance of algorithms in stochastic environments.