Better NN EA - page 14

 

Neural networks made easy (Part 78): Decoder-free Object Detector with Transformer (DFFT)

In previous articles, we mainly focused on predicting upcoming price movements and analyzing historical data. Based on this analysis, we tried to predict the most likely upcoming price movement in various ways. Some strategies constructed a whole range of predicted movements and tried to estimate the probability of each of the forecasts. Naturally, training and operating such models require significant computing resources.

But do we really need to predict the upcoming price movement? Moreover, the accuracy of the forecasts obtained is far from desired.

Our ultimate goal is to generate a profit, which we expect to receive from the successful trading of our Agent. The Agent, in turn, selects the optimal actions based on the obtained predicted price trajectories.

In this article, I propose to look at the issue of building a trading strategy from a different angle. We will not predict future price movements, but will try to build a trading system based on the analysis of historical data.
 
Neural networks made easy (Part 79): Feature Aggregated Queries (FAQ) in the context of state


Object detection in video has a number of specific characteristics: it must handle changes in object features caused by motion, which are not encountered in the still-image domain. One of the solutions is to use temporal information and combine features from adjacent frames. The paper "FAQ: Feature Aggregated Queries for Transformer-based Video Object Detectors" proposes a new approach to detecting objects in video. The authors improve the quality of queries for Transformer-based models by aggregating them. To achieve this goal, they propose a practical method to generate and aggregate queries according to the features of the input frames. Extensive experimental results provided in the paper validate the effectiveness of the proposed method. The proposed approaches can be extended to a wide range of methods for detecting objects in images and videos to improve their efficiency.
In the previous article, we got acquainted with one of the methods for detecting objects in an image. However, processing a static image is somewhat different from working with dynamic time series, such as the dynamics of the prices we analyze. In this article, we will consider the method of detecting objects in video, which is somewhat closer to the problem we are solving.
 

Neural networks made easy (Part 80): Graph Transformer Generative Adversarial Model (GTGAN)

The recently published paper "Graph Transformer GANs with Graph Masked Modeling for Architectural Layout Generation" introduces the graph transformer generative adversarial model (GTGAN) algorithm, which succinctly combines graph-based and Transformer-based approaches within a generative adversarial model. The authors of the GTGAN algorithm address the problem of creating a realistic architectural design of a house from an input graph. The generator model they present consists of three components: a message-passing convolutional neural network (Conv-MPN), a Graph Transformer encoder (GTE), and a generation head.

Qualitative and quantitative experiments on three challenging graph-constrained architectural layout generation tasks, conducted on the three datasets presented in the paper, demonstrate that the proposed method generates results superior to previously presented algorithms.

In this article, we will get acquainted with the GTGAN algorithm, which was introduced in January 2024 to solve complex problems of generating architectural layouts with graph constraints.
 

Neural Networks Made Easy (Part 81): Context-Guided Motion Analysis (CCMR)

A particularly interesting method entitled CCMR was presented in the paper "CCMR: High Resolution Optical Flow Estimation via Coarse-to-Fine Context-Guided Motion Reasoning". It is an approach to optical flow estimation that combines the advantages of attention-based motion aggregation concepts and high-resolution multi-scale approaches. The CCMR method consistently integrates context-based motion grouping concepts into a high-resolution coarse-to-fine estimation framework. This allows for detailed flow fields that also provide high accuracy in occluded areas. In this context, the authors of the method propose a two-stage motion grouping strategy in which global self-attention-based contextual features are first computed and then used to guide motion features iteratively across all scales. Thus, XCiT-based context-guided motion reasoning enables processing at all coarse-to-fine scales. Experiments conducted by the authors demonstrate the strong performance of the proposed approach and the advantages of its basic concepts.
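To make the grouping idea above more tangible, here is a loose, hedged sketch in Python (illustrative names and shapes of my own, not the authors' implementation) of how contextual features can guide the aggregation of motion features: the similarity of context vectors decides which positions' motion features are mixed together.

```python
import numpy as np

# Toy illustration of "context-guided motion grouping": grouping weights are
# computed from contextual features only and then applied to motion features.

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def context_guided_aggregation(context, motion):
    """context, motion: (num_positions, dim). Context similarity decides
    which positions' motion features are aggregated together."""
    sim = context @ context.T / np.sqrt(context.shape[-1])
    weights = softmax(sim, axis=-1)        # grouping weights from context only
    return weights @ motion                # motion features mixed per group

rng = np.random.default_rng(2)
ctx = rng.normal(size=(64, 32))            # global contextual features
mot = rng.normal(size=(64, 32))            # per-position motion features
refined = context_guided_aggregation(ctx, mot)
print(refined.shape)                       # (64, 32)
```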
In previous works, we always assessed the current state of the environment, while the dynamics of changes in indicators remained "behind the scenes". In this article, I want to introduce you to an algorithm that allows you to evaluate the direct change in data between two successive environmental states.
 

Neural networks made easy (Part 82): Ordinary Differential Equation models (NeuralODE) 

Let's get acquainted with a new model family: Ordinary Differential Equations. Instead of specifying a discrete sequence of hidden layers, they parameterize the derivative of the hidden state using a neural network. The results of the model are calculated using a "black box", that is, the Differential Equation Solver. These continuous-depth models use a constant amount of memory and adapt their estimation strategy to each input signal. Such models were first introduced in the paper "Neural Ordinary Differential Equations". In this paper, the authors of the method demonstrate the ability to scale backpropagation using any Ordinary Differential Equation (ODE) solver without access to its internal operations. This enables end-to-end training of ODEs within larger models.
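As a rough illustration of this idea, here is a minimal Python sketch (the names, sizes and fixed-step Euler solver are my own simplifications; the paper relies on adaptive solvers and the adjoint method for backpropagation): a small network parameterizes the derivative of the hidden state, and the output is obtained by integrating it.

```python
import numpy as np

# Minimal sketch of the NeuralODE idea: instead of a stack of discrete layers,
# a small network f(h, t) parameterizes the derivative dh/dt of the hidden
# state, and the result is obtained by integrating that derivative with an
# ODE solver. A fixed-step Euler solver stands in for the "black box" solver.

rng = np.random.default_rng(0)
W1, b1 = rng.normal(scale=0.1, size=(8, 8)), np.zeros(8)
W2, b2 = rng.normal(scale=0.1, size=(8, 8)), np.zeros(8)

def f(h, t):
    """Network that returns the derivative of the hidden state at time t."""
    z = np.tanh(h @ W1 + b1 + t)      # time enters as a simple additive signal
    return z @ W2 + b2

def odeint_euler(h0, t0=0.0, t1=1.0, steps=20):
    """Integrate dh/dt = f(h, t) from t0 to t1 with explicit Euler steps."""
    h, t = h0, t0
    dt = (t1 - t0) / steps
    for _ in range(steps):
        h = h + dt * f(h, t)
        t += dt
    return h

h0 = rng.normal(size=8)               # hidden state produced by an encoder
h1 = odeint_euler(h0)                 # "continuous-depth" transformation
print(h1.shape)                       # (8,)
```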
In this article, we will discuss another type of models that are aimed at studying the dynamics of the environmental state.
 

Neural Network in Practice: Secant Line

Although many may think that it would be better to release a series of articles on the topic of artificial intelligence, I cannot imagine how this could be done. Most people have no idea about the true purpose of neural networks and, accordingly, about the so-called artificial intelligence.

So, we will not go into this topic in detail here. Instead, we will focus on other aspects.

As already explained in the theoretical part, when working with neural networks we need to use linear regressions and derivatives. Why? The reason is that linear regression is one of the simplest formulas in existence. Essentially, linear regression is just an affine function. However, when we talk about neural networks, we are not interested in the regression line itself. We are interested in the equation that generates this line. Do you know the main equation that we need to understand? If not, I recommend reading this article to understand it.
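As a small numeric illustration of the point above (my own toy example, not code from the article): the secant line through two nearby points of a function is itself an affine function y = a*x + b, and its slope approaches the derivative as the points move closer together.

```python
# The secant line through (x0, f(x0)) and (x0+h, f(x0+h)) is the affine
# function y = a*x + b. As h shrinks, its slope approaches the derivative.

def f(x):
    return x ** 2          # toy function

def secant_slope(f, x0, x1):
    return (f(x1) - f(x0)) / (x1 - x0)

x0 = 1.0
for h in (1.0, 0.1, 0.01, 0.001):
    a = secant_slope(f, x0, x0 + h)        # slope of the secant line
    b = f(x0) - a * x0                     # intercept: the line passes through (x0, f(x0))
    print(f"h={h:<6} secant: y = {a:.4f}*x + {b:+.4f}")
# The slopes approach f'(1) = 2, the slope of the tangent at x0.
```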
 

Neural Networks Made Easy (Part 83): The "Conformer" Spatio-Temporal Continuous Attention Transformer Algorithm

The unpredictability of financial market behavior can probably be compared to the volatility of the weather. However, humanity has achieved quite a lot in the field of weather forecasting, and today we can largely trust the forecasts provided by meteorologists. Can we use their developments to forecast the "weather" in financial markets? In this article, we will get acquainted with the complex algorithm of the "Conformer" Spatio-Temporal Continuous Attention Transformer, which was developed for weather forecasting and is presented in the paper "Conformer: Embedding Continuous Attention in Vision Transformer for Weather Forecasting". In their work, the authors propose the Continuous Attention algorithm and combine it with the Neural ODE approaches we discussed in the previous article.
This article introduces the Conformer algorithm originally developed for the purpose of weather forecasting, which in terms of variability and capriciousness can be compared to financial markets. Conformer is a complex method. It combines the advantages of attention models and ordinary differential equations.
 

Neural Networks Made Easy (Part 84): Reversible Normalization (RevIN)

In the previous article, we discussed the Conformer method, which was originally developed for weather forecasting. It is quite an interesting method. When testing the trained model, we got a pretty good result. But did we do everything right? Is it possible to get a better result? Let's look at the learning process. We are clearly not using the model, which forecasts the most probable next timeseries values, for its intended purpose. While feeding the model timeseries input data, we trained it by propagating the error gradient from the models that use its prediction results, starting with the Critic's results.

RevIN is a flexible, trainable layer that can be applied to arbitrarily chosen layers, effectively suppressing non-stationary information (the mean and variance of an instance) in one layer and restoring it in another layer at a nearly symmetric position, such as the input and output layers.
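A minimal Python sketch of this idea (an illustration under my own simplifying assumptions, not the article's implementation): per-window statistics are removed before the model and restored on its output, with a learnable affine transform applied in between.

```python
import numpy as np

# Toy sketch of reversible instance normalization: suppress the instance's
# mean and variance at the input and restore them on the output.

class RevIN:
    def __init__(self, num_features, eps=1e-5):
        self.eps = eps
        self.gamma = np.ones(num_features)   # learnable scale
        self.beta = np.zeros(num_features)   # learnable shift

    def normalize(self, x):
        # x: (time_steps, num_features) for a single instance (series window)
        self.mean = x.mean(axis=0, keepdims=True)
        self.std = x.std(axis=0, keepdims=True) + self.eps
        return (x - self.mean) / self.std * self.gamma + self.beta

    def denormalize(self, y):
        # restore the suppressed non-stationary statistics on the model output
        return (y - self.beta) / self.gamma * self.std + self.mean

window = np.cumsum(np.random.randn(64, 4), axis=0)   # non-stationary toy series
revin = RevIN(num_features=4)
x_norm = revin.normalize(window)       # stationary input for the model
y = x_norm[-16:]                       # stand-in for the model's predictions
forecast = revin.denormalize(y)        # predictions back on the original scale
```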
 
We already know that pre-processing of the input data plays a major role in the stability of model training. To process "raw" input data online, we often use a batch normalization layer. But sometimes we need a reverse procedure. In this article, we discuss one of the possible approaches to solving this problem.
 

Neural Networks Made Easy (Part 85): Multivariate Time Series Forecasting

Forecasting timeseries is one of the most important elements in building an effective trading strategy. When performing a trading operation in one direction or another, we proceed from our own vision (forecast) of the upcoming price movement. Recent advances in deep learning models, especially models based on the Transformer architecture, have demonstrated significant progress in this area, offering great potential for solving the multifaceted problems associated with long-term timeseries forecasting.
In this article, I would like to introduce you to a new complex timeseries forecasting method, which harmoniously combines the advantages of linear models and transformers.
 

Neural Networks Made Easy (Part 86): U-Shaped Transformer

Long-term timeseries forecasting is of particular importance for trading. The Transformer architecture, which was introduced in 2017, has demonstrated impressive performance in Natural Language Processing (NLP) and Computer Vision (CV). The use of Self-Attention mechanisms makes it possible to effectively capture dependencies over long time intervals and extract key information from the context. Naturally, a large number of algorithms based on this mechanism were quickly proposed for solving timeseries-related problems.
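For readers new to the mechanism, here is a minimal sketch of scaled dot-product Self-Attention in Python (shapes and parameter names are illustrative, not taken from the article): every position of the sequence attends to every other position, which is what allows long-range dependencies to be captured in a single step.

```python
import numpy as np

# Minimal scaled dot-product Self-Attention over a sequence of feature vectors.

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, Wq, Wk, Wv):
    """x: (seq_len, d_model); W*: (d_model, d_k) projection matrices."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])   # pairwise similarity of positions
    weights = softmax(scores, axis=-1)        # attention distribution per query
    return weights @ v                        # context-aware representations

rng = np.random.default_rng(1)
seq = rng.normal(size=(32, 16))               # e.g. 32 bars, 16 features each
Wq, Wk, Wv = (rng.normal(scale=0.1, size=(16, 16)) for _ in range(3))
out = self_attention(seq, Wq, Wk, Wv)
print(out.shape)                              # (32, 16)
```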
We continue to study timeseries forecasting algorithms. In this article, we will discuss another method: the U-shaped Transformer.