Neural Networks Made Easy (Part 87): Time Series Patching
Forecasting plays an important role in time series analysis. Deep models have brought significant improvements in this area. In addition to successfully predicting future values, they also extract abstract representations that can be applied to other tasks such as classification and anomaly detection.
The Transformer architecture, which originated in natural language processing (NLP), has demonstrated its advantages in computer vision (CV) and is now successfully applied to time series analysis. Its Self-Attention mechanism, which can automatically identify relationships between the elements of a time series, has become the basis for effective forecasting models.
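The patching and attention ideas mentioned above can be sketched in a few lines. Below is a minimal, hedged illustration, not the article's actual model: it splits a univariate series into fixed-length patches and runs plain scaled dot-product self-attention over them. The helper names (to_patches, self_attention), the patch length, the random weights, and the toy data are all assumptions made for this example.

```python
import numpy as np

def to_patches(series, patch_len):
    """Split a 1-D series into non-overlapping patches (tokens)."""
    usable = len(series) // patch_len * patch_len
    return series[:usable].reshape(-1, patch_len)

def self_attention(x, w_q, w_k, w_v):
    """Plain scaled dot-product self-attention over patch tokens."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ v

rng = np.random.default_rng(0)
series = rng.normal(size=96)                 # toy series of 96 steps
patches = to_patches(series, patch_len=16)   # 6 tokens of length 16
d = patches.shape[1]
w_q, w_k, w_v = (0.1 * rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(patches, w_q, w_k, w_v)
print(out.shape)                             # (6, 16): one vector per patch
```

Treating each patch, rather than each time step, as a token is what keeps the attention matrix small for long input sequences.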
Neural Networks Made Easy (Part 88): Time-Series Dense Encoder (TiDE)
Neural Network in Practice: Least Squares
In the previous article, Neural Network in Practice: Secant Line, we began discussing applied mathematics in practice, although that was only a brief introduction to the topic. We saw that the basic mathematical tool involved is a trigonometric function and, contrary to what many assume, it is the secant rather than the tangent. This may all seem confusing at first, but you will soon find that everything is much simpler than it appears: rather than adding to the confusion that often surrounds the mathematics, here everything develops quite naturally.
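Before turning to the article itself, here is a minimal sketch of the least-squares idea named in the title, assuming a simple straight-line model y = a·x + b and toy data invented for the example; it uses the closed-form solution via numpy, not anything taken from the article.

```python
import numpy as np

# Toy data: points roughly on y = 2x + 1 with some noise.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# Closed-form least squares for a line y = a*x + b:
# stack the design matrix [x, 1] and solve in one call.
A = np.column_stack([x, np.ones_like(x)])
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
print(f"slope a = {a:.3f}, intercept b = {b:.3f}")

# The fitted line minimizes the sum of squared vertical errors.
print("SSE:", float(((A @ np.array([a, b]) - y) ** 2).sum()))
```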
Neural Networks Made Easy (Part 89): Frequency Enhanced Decomposition Transformer (FEDformer)
Long-term time series forecasting is a long-standing challenge in many applied domains. Transformer-based models show promising results; however, their high computational complexity and memory requirements make it difficult to apply the Transformer to long sequences. This has given rise to numerous studies devoted to reducing the computational cost of the Transformer algorithm.
Despite the progress made by Transformer-based time series forecasting methods, in some cases they fail to capture the global features of the time series distribution. The authors of the paper "FEDformer: Frequency Enhanced Decomposed Transformer for Long-term Series Forecasting" attempt to solve this problem. They compare the actual values of a time series with the forecasts obtained from the vanilla Transformer, illustrated with a screenshot in that paper.
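FEDformer combines a seasonal-trend decomposition with frequency-enhanced attention blocks. As a hedged illustration of the decomposition half only, the sketch below separates a toy series into a moving-average trend and a seasonal residual; the decompose helper, window length, and data are assumptions made for this example, not the paper's settings.

```python
import numpy as np

def decompose(series, window):
    """Split a series into a moving-average trend and a seasonal
    residual -- the basic decomposition such models build on."""
    pad = window // 2
    padded = np.pad(series, (pad, window - 1 - pad), mode="edge")
    kernel = np.ones(window) / window
    trend = np.convolve(padded, kernel, mode="valid")
    return trend, series - trend

t = np.arange(200)
series = 0.05 * t + np.sin(2 * np.pi * t / 24)  # trend + daily cycle
trend, seasonal = decompose(series, window=25)
print(trend.shape, seasonal.shape)              # both (200,)
```

Modeling the smooth trend and the oscillating residual separately is what lets the frequency-domain machinery focus on the periodic part of the signal.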
Neural Network in Practice: Straight Line Function
Neural Networks Made Easy (Part 90): Frequency Interpolation of Time Series (FITS)
Neural Networks Made Easy (Part 91): Frequency Domain Forecasting (FreDF)
Forecasting future price series is critical in various financial market scenarios. Most existing methods rely on autocorrelation in the data; in other words, they exploit the correlation between time steps that is present both in the input data and in the predicted values.
Among the models gaining popularity are those based on the Transformer architecture, which use Self-Attention mechanisms to estimate autocorrelation dynamically. There is also growing interest in applying frequency analysis to forecasting models: representing the input sequence in the frequency domain sidesteps the complexity of modeling autocorrelation directly and improves the efficiency of various models, as the sketch below illustrates.
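As a minimal sketch of the frequency-domain idea (not FreDF itself), the code below moves a toy series into the frequency domain with a real FFT, keeps only its strongest components, and reconstructs a compact approximation. The series, the noise level, and the number of retained frequencies are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(256)
series = np.sin(2 * np.pi * t / 32) + 0.3 * rng.normal(size=t.size)

# Real FFT: one complex coefficient per frequency bin.
spectrum = np.fft.rfft(series)

# Keep only the k strongest frequencies, zero out the rest.
k = 3
keep = np.argsort(np.abs(spectrum))[-k:]
compressed = np.zeros_like(spectrum)
compressed[keep] = spectrum[keep]

# Back to the time domain: a denoised, compact representation.
approx = np.fft.irfft(compressed, n=series.size)
print("reconstruction error:", float(np.mean((series - approx) ** 2)))
```

A handful of frequency coefficients here summarizes dependencies that would otherwise have to be described step by step through autocorrelation.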
Neural Networks Made Easy (Part 92): Adaptive Forecasting in Frequency and Time Domains
Neural Networks Made Easy (Part 93): Adaptive Forecasting in Frequency and Time Domains (Final Part)
Neural Networks Made Easy (Part 94): Optimizing the Input Sequence