Better NN EA

 

Neural Networks Made Easy (Part 87): Time Series Patching

Forecasting plays an important role in time series analysis. Deep models have brought significant improvement in this area. In addition to successfully predicting future values, they also extract abstract representations that can be applied to other tasks such as classification and anomaly detection.

The Transformer architecture, which originated in the field of natural language processing (NLP), demonstrated its advantages in computer vision (CV) and is successfully applied in time series analysis. Its Self-Attention mechanism, which can automatically identify relationships between elements of a time series, has become the basis for creating effective forecasting models.
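To make the patching idea concrete, here is a minimal NumPy sketch (toy parameters of my own choosing, not the article's MQL5 implementation): the series is cut into overlapping patches, and each patch becomes one token for the attention mechanism.

```python
import numpy as np

def make_patches(series, patch_len=16, stride=8):
    """Split a 1-D series into overlapping patches.

    Each patch later serves as one "token" for Self-Attention,
    shrinking the sequence the Transformer sees from
    len(series) elements to a handful of patches.
    """
    n_patches = (len(series) - patch_len) // stride + 1
    return np.stack([series[i * stride : i * stride + patch_len]
                     for i in range(n_patches)])

prices = np.cumsum(np.random.randn(128))   # toy price series
tokens = make_patches(prices)
print(tokens.shape)                        # (15, 16): 15 tokens instead of 128
```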

Neural Networks Made Easy (Part 87): Time Series Patching
  • www.mql5.com
Forecasting plays an important role in time series analysis. In the new article, we will talk about the benefits of time series patching.
 

Neural Networks Made Easy (Part 88): Time-Series Dense Encoder (TiDE)

Probably all known neural network architectures have been studied for their ability to solve time series forecasting problems, including recurrent, convolutional and graph models. The most notable results are demonstrated by models based on the Transformer architecture. Several such algorithms have also been presented in this series of articles. However, recent research has shown that Transformer-based architectures might be less powerful than expected: on some time series forecasting benchmarks, simple linear models show comparable or even better performance. Unfortunately, such linear models have a shortcoming: they are not suited to modeling nonlinear relationships between the time series sequence and time-independent covariates.
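As a structural sketch of the idea (random untrained weights, hypothetical sizes; not the TiDE paper's exact architecture): an MLP path models nonlinearities, while a parallel linear skip path preserves the strength of a plain linear model.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def tide_like_forecast(lookback, W_enc, W_dec, W_skip):
    """Minimal dense encoder-decoder with a linear skip connection.

    The MLP path handles nonlinearities; the skip path keeps the
    strength of a plain linear model, echoing the TiDE design idea.
    (Untrained random weights here -- a structural sketch only.)
    """
    hidden = relu(lookback @ W_enc)               # dense encoder
    return hidden @ W_dec + lookback @ W_skip     # decoder + linear skip

rng = np.random.default_rng(0)
L, H, D = 96, 24, 32                              # lookback, horizon, hidden size
x = rng.standard_normal(L)
y_hat = tide_like_forecast(x,
                           rng.standard_normal((L, D)) * 0.1,
                           rng.standard_normal((D, H)) * 0.1,
                           rng.standard_normal((L, H)) * 0.1)
print(y_hat.shape)                                # (24,) forecast horizon
```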
Neural Networks Made Easy (Part 88): Time-Series Dense Encoder (TiDE)
  • www.mql5.com
In an attempt to obtain the most accurate forecasts, researchers often complicate forecasting models, which in turn increases model training and maintenance costs. Is such an increase always justified? This article introduces an algorithm that exploits the simplicity and speed of linear models while demonstrating results on par with the best models of more complex architecture.
 

Neural Network in Practice: Least Squares

In the previous article Neural Network in Practice: Secant Line, we began to discuss applied mathematics in practice. However, that was only a short and quick introduction to the topic. We saw that the basic mathematical tool involved is a trigonometric function, and, contrary to what many think, it is the secant rather than the tangent. Although this may all seem quite confusing at first, you will soon find that everything is much simpler than it seems. Unlike many treatments that only add confusion, here everything develops quite naturally.
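For reference, the closed-form least-squares line that this article builds toward fits in a few lines (toy data of my own; the article itself works in MQL5):

```python
import numpy as np

# Toy data: noisy points around y = 2x + 1
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# Least squares: minimize sum((y - (a*x + b))^2).
# Closed form: a = cov(x, y) / var(x), b = mean(y) - a * mean(x)
a = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b = y.mean() - a * x.mean()
print(f"y = {a:.3f} * x + {b:.3f}")   # close to y = 2x + 1
```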

Neural Network in Practice: Least Squares
  • www.mql5.com
In this article, we'll look at several ideas, including how mathematical formulas can look more complex than their code implementation. In addition, we will consider how to set up a chart quadrant, as well as an interesting problem that may arise in your MQL5 code. Although, to be honest, I still don't quite know how to explain it, I will show you how to fix it in code.
 

Neural networks made easy (Part 89): Frequency Enhanced Decomposition Transformer (FEDformer)

Long-term time series forecasting is a long-standing challenge in a variety of applied tasks. Transformer-based models show promising results; however, their high computational complexity and memory requirements make it difficult to use the Transformer for modeling long sequences. This has given rise to numerous studies devoted to reducing the computational costs of the Transformer algorithm.

Despite the progress made by Transformer-based time series forecasting methods, in some cases they fail to capture the overall distribution characteristics of a time series. The authors of the paper "FEDformer: Frequency Enhanced Decomposed Transformer for Long-term Series Forecasting" attempt to solve this problem. They compare the actual values of a time series with the predictions produced by the vanilla Transformer; a figure in that paper illustrates the mismatch.
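A rough sketch of the paper's two ingredients, seasonal-trend decomposition by moving average and a sparse set of Fourier modes (here I keep the strongest modes for simplicity; the paper actually uses a randomized mode selection):

```python
import numpy as np

def decompose(series, window=25):
    """Seasonal-trend decomposition via a moving average."""
    pad = window // 2
    padded = np.pad(series, pad, mode="edge")
    trend = np.convolve(padded, np.ones(window) / window, mode="valid")
    return trend, series - trend            # (trend, seasonal remainder)

def keep_sparse_modes(seasonal, n_modes=8):
    """Keep only a few Fourier modes -- the 'frequency enhanced' idea."""
    spec = np.fft.rfft(seasonal)
    weak = np.argsort(np.abs(spec))[:-n_modes]   # indices of weak modes
    spec[weak] = 0.0
    return np.fft.irfft(spec, n=len(seasonal))

t = np.arange(256)
series = 0.01 * t + np.sin(2 * np.pi * t / 32) + 0.2 * np.random.randn(256)
trend, seasonal = decompose(series)
smooth_seasonal = keep_sparse_modes(seasonal)
```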

FEDformer: Frequency Enhanced Decomposed Transformer for Long-term Series Forecasting
  • arxiv.org
Although Transformer-based methods have significantly improved state-of-the-art results for long-term series forecasting, they are not only computationally expensive but more importantly, are unable to capture the global view of time series (e.g. overall trend). To address these problems, we propose to combine Transformer with the seasonal-trend decomposition method, in which the decomposition method captures the global profile of time series while Transformers capture more detailed structures. To further enhance the performance of Transformer for long-term prediction, we exploit the fact that most time series tend to have a sparse representation in well-known basis such as Fourier transform, and develop a frequency enhanced Transformer. Besides being more effective, the proposed method, termed as Frequency Enhanced Decomposed Transformer (FEDformer), is more efficient than standard Transformer with a linear complexity to the sequence length. Our empirical studies with six...
 

Neural Network in Practice: Straight Line Function

In the previous article "Neural Network in Practice: Least Squares", we looked at how, in very simple cases, we can find an equation that best describes the data set we are using. The equation that was formed in this system was very simple, it used only one variable. We've already shown how to do the calculation, so we'll get straight to the point here. This is because the mathematics used to create an equation based on the values available in the database requires significant knowledge of analytical mathematics and algebraic computation. In addition to this, of course, it is necessary to know what type of data is in the database we are using.
Neural Network in Practice: Least Squares
  • www.mql5.com
In this article, we'll look at several ideas, including how mathematical formulas can look more complex than their code implementation. In addition, we will consider how to set up a chart quadrant, as well as an interesting problem that may arise in your MQL5 code. Although, to be honest, I still don't quite know how to explain it, I will show you how to fix it in code.
 

Neural Networks Made Easy (Part 90): Frequency Interpolation of Time Series (FITS)

In the previous articles, we discussed the FEDformer method that uses the frequency domain to find patterns in a time series. However, the Transformer used in that method can hardly be referred to as a lightweight model. Instead of complex models that require large computational costs, the paper "FITS: Modeling Time Series with 10k Parameters" proposes a method for the frequency interpolation of time series (Frequency Interpolation Time Series - FITS). It is a compact and efficient solution for time series analysis and forecasting. FITS uses frequency domain interpolation to expand the window of the analyzed time segment, thus enabling the efficient extraction of temporal features without significant computational overhead.
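A structural sketch of that pipeline (rFFT, low-pass cut, spectrum resize, inverse rFFT). Note that the learned complex-valued linear layer that performs the actual extrapolation in FITS is replaced here by naive zero-padding, so this toy version merely resamples the window rather than genuinely forecasting it:

```python
import numpy as np

def fits_like_extend(series, horizon, cutoff_ratio=0.25):
    """Frequency-domain interpolation sketch in the spirit of FITS.

    1. rFFT the lookback window.
    2. Discard high-frequency bins (low-pass filter).
    3. Stretch the spectrum to the longer output length
       (FITS learns this step with a complex linear layer;
       rescaled zero-padding stands in for it here).
    4. Inverse rFFT yields lookback + horizon in one pass.
    """
    n_in = len(series)
    n_out = n_in + horizon
    spec = np.fft.rfft(series)
    keep = max(2, int(len(spec) * cutoff_ratio))     # low-pass filter
    stretched = np.zeros(n_out // 2 + 1, dtype=complex)
    stretched[:keep] = spec[:keep] * (n_out / n_in)  # rescale energy
    return np.fft.irfft(stretched, n=n_out)

x = np.sin(2 * np.pi * np.arange(96) / 24)           # toy periodic series
extended = fits_like_extend(x, horizon=24)
print(extended.shape)                                # (120,) = 96 + 24
```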
FITS: Modeling Time Series with 10k Parameters
  • arxiv.org
In this paper, we introduce FITS, a lightweight yet powerful model for time series analysis. Unlike existing models that directly process raw time-domain data, FITS operates on the principle that time series can be manipulated through interpolation in the complex frequency domain. By discarding high-frequency components with negligible impact on time series data, FITS achieves performance comparable to state-of-the-art models for time series forecasting and anomaly detection tasks, while having a remarkably compact size of only approximately 10k parameters. Such a lightweight model can be easily trained and deployed in edge devices, creating opportunities for various applications. The code is available in: https://github.com/VEWOXIC/FITS
 

Neural Networks Made Easy (Part 91): Frequency Domain Forecasting (FreDF)

Forecasting future price series is critical in many financial market scenarios. Most existing methods rely on a certain autocorrelation in the data. In other words, they exploit the correlation between time steps that is present both in the input data and in the predicted values.

Among the models gaining popularity are those based on the Transformer architecture, which use Self-Attention mechanisms for dynamic autocorrelation estimation. We also see increasing interest in applying frequency analysis in forecasting models. Representing the input sequence in the frequency domain helps avoid the complexity of describing autocorrelation and improves the efficiency of various models.
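One way to picture the frequency-domain idea is a combined loss that scores forecasts in both domains (my own sketch with an assumed weighting hyperparameter alpha, in the spirit of FreDF rather than its exact formulation):

```python
import numpy as np

def fredf_like_loss(pred, target, alpha=0.5):
    """Combined time- and frequency-domain loss in the spirit of FreDF.

    The frequency term compares rFFT coefficients, which sidesteps
    the step-by-step autocorrelation in the residuals; alpha is the
    weighting hyperparameter between the two domains.
    """
    time_term = np.mean((pred - target) ** 2)
    freq_term = np.mean(np.abs(np.fft.rfft(pred) - np.fft.rfft(target)))
    return alpha * freq_term + (1.0 - alpha) * time_term

rng = np.random.default_rng(1)
target = np.sin(np.linspace(0.0, 6.28, 64))
pred = target + 0.1 * rng.standard_normal(64)
print(fredf_like_loss(pred, target))
```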

Neural Networks Made Easy (Part 91): Frequency Domain Forecasting (FreDF)
  • www.mql5.com
We continue to explore the analysis and forecasting of time series in the frequency domain. In this article, we will get acquainted with a new method to forecast data in the frequency domain, which can be added to many of the algorithms we have studied previously.
 

Neural Networks Made Easy (Part 92): Adaptive Forecasting in Frequency and Time Domains

The article presents the results of experiments with ATFNet on eight real-world datasets, in which the method shows promising results and outperforms other state-of-the-art time series forecasting approaches on many of them.
Neural Networks Made Easy (Part 92): Adaptive Forecasting in Frequency and Time Domains
  • www.mql5.com
The authors of the FreDF method experimentally confirmed the advantage of combined forecasting in the frequency and time domains. However, a fixed weight hyperparameter is not optimal for non-stationary time series. In this article, we will get acquainted with a method for adaptively combining forecasts in the frequency and time domains.
 

Neural Networks Made Easy (Part 93): Adaptive Forecasting in Frequency and Time Domains (Final Part)

In the previous article, we got acquainted with the ATFNet algorithm, an ensemble of two time series forecasting models. One of them works in the time domain and constructs predicted values of the studied time series based on the analysis of signal amplitudes. The second works with the frequency characteristics of the analyzed time series and captures its global dependencies, periodicity and spectrum. According to the authors of the method, the adaptive merging of the two independent forecasts produces impressive results.
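A toy sketch of the adaptive-merging idea (my own crude energy-based weighting, not the paper's exact metric): the more the lookback's spectrum is concentrated in one dominant harmonic, the more weight goes to the frequency-domain branch.

```python
import numpy as np

def periodicity_weight(series):
    """Crude stand-in for an energy-based weighting: the more the
    spectrum is concentrated in one dominant harmonic, the more
    trust goes to the frequency-domain branch."""
    power = np.abs(np.fft.rfft(series - series.mean())) ** 2
    return float(power.max() / power.sum())      # in (0, 1]

def combine(time_forecast, freq_forecast, lookback):
    w = periodicity_weight(lookback)
    return w * freq_forecast + (1.0 - w) * time_forecast

t = np.arange(96)
lookback = np.sin(2 * np.pi * t / 24)            # strongly periodic input
f_time = np.zeros(24)                            # dummy branch outputs
f_freq = np.ones(24)
print(combine(f_time, f_freq, lookback))         # leans toward f_freq
```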
Neural Networks Made Easy (Part 92): Adaptive Forecasting in Frequency and Time Domains
  • www.mql5.com
The authors of the FreDF method experimentally confirmed the advantage of combined forecasting in the frequency and time domains. However, a fixed weight hyperparameter is not optimal for non-stationary time series. In this article, we will get acquainted with a method for adaptively combining forecasts in the frequency and time domains.
 

Neural Networks Made Easy (Part 94): Optimizing the Input Sequence

A common approach when processing time series is to keep the original arrangement of the time steps intact, on the assumption that the historical order is optimal. However, most existing models lack explicit mechanisms for exploring the relationships between distant segments within each time series, which may in fact have strong dependencies. For example, models based on convolutional networks (CNNs) used for time series learning can only capture patterns within a limited time window. As a result, when analyzing time series in which important patterns span longer time windows, such models have difficulty capturing this information effectively. Using deeper networks increases the size of the receptive field and partially solves the problem. However, the number of convolutional layers required to cover the entire sequence may be too large, and oversizing the model leads to the vanishing gradient problem.
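The linear growth of the receptive field is easy to verify (standard receptive-field arithmetic, not code from the article):

```python
def receptive_field(n_layers, kernel=3, stride=1):
    """Receptive field of stacked 1-D convolutions:
    RF = 1 + sum over layers of (kernel - 1) * product of earlier strides."""
    rf, jump = 1, 1
    for _ in range(n_layers):
        rf += (kernel - 1) * jump
        jump *= stride
    return rf

# With stride-1, kernel-3 layers the receptive field grows only
# linearly: covering a 1000-step sequence needs about 500 layers.
for n in (4, 16, 64, 500):
    print(n, receptive_field(n))        # 9, 33, 129, 1001
```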
Neural networks made easy (Part 3): Convolutional networks
  • www.mql5.com
As a continuation of the neural network topic, I propose considering convolutional neural networks. This type of neural network is usually applied to analyzing visual imagery. In this article, we will consider the application of these networks in the financial markets.