Taking Neural Networks to the next level

 

Forum on trading, automated trading systems and testing trading strategies

Better NN EA

Sergey Golubev, 2024.06.01 09:22

Neural networks made easy (Part 71): Goal-Conditioned Predictive Coding (GCPC)

Goal-Conditioned Behavior Cloning (BC) is a promising approach to solving various offline reinforcement learning problems. Instead of assessing the value of states and actions, BC directly trains the Agent's behavior policy, building dependencies between the specified goal, the analyzed environment state, and the Agent's action. This is achieved using supervised learning methods on pre-collected offline trajectories. The familiar Decision Transformer method and its derivative algorithms have demonstrated the effectiveness of sequence modeling for offline reinforcement learning.
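
To make the idea more concrete, here is a minimal sketch of goal-conditioned behavior cloning in PyTorch (not the GCPC model from the article; all names and dimensions are hypothetical): the policy receives the goal together with the current state and is trained with a plain supervised loss to reproduce the actions stored in the offline trajectories, with no value function involved.

import torch
import torch.nn as nn

# Hypothetical dimensions of the offline dataset
STATE_DIM, GOAL_DIM, ACTION_DIM = 32, 8, 4

# Policy: maps (goal, state) to an action, no value function involved
policy = nn.Sequential(
    nn.Linear(STATE_DIM + GOAL_DIM, 128),
    nn.ReLU(),
    nn.Linear(128, ACTION_DIM),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def bc_step(states, goals, actions):
    """One supervised step on a batch sampled from pre-collected trajectories."""
    pred = policy(torch.cat([states, goals], dim=-1))
    loss = nn.functional.mse_loss(pred, actions)   # clone the recorded actions
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage with random stand-in data
batch = (torch.randn(64, STATE_DIM), torch.randn(64, GOAL_DIM), torch.randn(64, ACTION_DIM))
print(bc_step(*batch))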

 

Forum on trading, automated trading systems and testing trading strategies

Better NN EA

Sergey Golubev, 2024.06.06 11:09

Neural networks made easy (Part 72): Trajectory prediction in noisy environments 

The noise prediction module solves the auxiliary task of identifying noise in the analyzed trajectories. This helps the movement prediction model better capture potential spatial diversity and improves its understanding of the underlying representation, thereby improving future predictions.

The authors of the method conducted additional experiments to empirically demonstrate the critical importance of the spatial consistency and noise prediction modules for SSWNP. When only the spatial consistency module is used to solve the movement prediction problem, the trained model performs suboptimally. Therefore, both modules are integrated in their work.
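
As a rough sketch of how the two auxiliary objectives described above can be combined (this is not the authors' exact SSWNP architecture; the encoder, heads and noise level are illustrative assumptions): a clean and a noise-perturbed view of the past trajectory share one encoder, a consistency term ties their predictions together, and a separate head tries to recover the injected noise.

import torch
import torch.nn as nn

# Hypothetical setup: past trajectory of 10 2D points -> future of 5 2D points
encoder = nn.GRU(input_size=2, hidden_size=64, batch_first=True)
future_head = nn.Linear(64, 5 * 2)     # movement prediction
noise_head = nn.Linear(64, 10 * 2)     # auxiliary: reconstruct the injected noise

def losses(past, future, noise_std=0.05):
    noise = noise_std * torch.randn_like(past)
    _, h_clean = encoder(past)             # clean view
    _, h_noisy = encoder(past + noise)     # noisy view
    h_clean, h_noisy = h_clean[-1], h_noisy[-1]

    pred = future_head(h_clean).view(-1, 5, 2)
    l_pred = nn.functional.mse_loss(pred, future)      # main movement prediction task
    # spatial consistency: the noisy view should predict the same future as the clean view
    l_consist = nn.functional.mse_loss(future_head(h_noisy).view(-1, 5, 2), pred.detach())
    # noise prediction module: recover the noise added to the trajectory
    l_noise = nn.functional.mse_loss(noise_head(h_noisy).view(-1, 10, 2), noise)
    return l_pred + l_consist + l_noise

past, future = torch.randn(32, 10, 2), torch.randn(32, 5, 2)
print(losses(past, future))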


 

Forum on trading, automated trading systems and testing trading strategies

Better NN EA

Sergey Golubev, 2024.06.06 11:12

Neural networks made easy (Part 73): AutoBots for predicting price movements 

The proposed method is based on the Encoder-Decoder architecture. It was developed to solve problems of safe control of robotic systems and allows generating scene-consistent trajectory sequences for multiple agents. AutoBots can predict the trajectory of a single ego-agent or the distribution of future trajectories for all agents in the scene. In our case, we will try to apply the proposed model to generate sequences of currency pair price movements consistent with market dynamics.
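
Below is a simplified Encoder-Decoder sketch in the spirit of this description, applied to a price-feature sequence (an illustration only, not the AutoBots implementation from the article; the class name, sizes and learnable query sets are assumptions): the observed sequence is encoded, and several candidate future sequences are decoded from learnable queries.

import torch
import torch.nn as nn

SEQ_HIST, SEQ_FUT, FEAT, MODES = 20, 5, 16, 3   # hypothetical sizes

class TinyAutoBots(nn.Module):
    """Encoder-Decoder sketch: encode the observed sequence, decode several candidate futures."""
    def __init__(self):
        super().__init__()
        self.transformer = nn.Transformer(d_model=FEAT, nhead=4,
                                          num_encoder_layers=2, num_decoder_layers=2,
                                          batch_first=True)
        # one learnable query set per candidate future
        self.queries = nn.Parameter(torch.randn(MODES, SEQ_FUT, FEAT))
        self.out = nn.Linear(FEAT, 1)             # predicted price change per future step

    def forward(self, history):                   # history: (batch, SEQ_HIST, FEAT)
        futures = []
        for m in range(MODES):
            tgt = self.queries[m].expand(history.size(0), -1, -1)
            dec = self.transformer(history, tgt)  # encoder is re-run here for simplicity
            futures.append(self.out(dec))         # (batch, SEQ_FUT, 1)
        return torch.stack(futures, dim=1)        # (batch, MODES, SEQ_FUT, 1)

model = TinyAutoBots()
print(model(torch.randn(8, SEQ_HIST, FEAT)).shape)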

 

Forum on trading, automated trading systems and testing trading strategies

Better NN EA

Sergey Golubev, 2024.06.15 08:56

Neural networks made easy (Part 74): Trajectory prediction with adaptation

Building a trading strategy is inseparable from analyzing the market situation and forecasting the most likely movement of a financial instrument. This movement is often correlated with other financial assets and macroeconomic indicators. It can be compared to road traffic, where each vehicle heads toward its own individual destination. Yet their actions on the road are interconnected to a certain extent and are strictly regulated by traffic rules. Moreover, because drivers perceive the road situation individually, a share of stochasticity remains on the roads.

In this article, I want to introduce you to ADAPT, a method for efficient joint prediction of the trajectories of all agents in the scene with dynamic weight learning, which was proposed to solve problems in autonomous vehicle navigation. The method was first presented in the paper "ADAPT: Efficient Multi-Agent Trajectory Prediction with Adaptation".
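
As a loose illustration of the "dynamic weight learning" idea (not the exact ADAPT architecture; the adaptive head below is a hypothetical sketch), the final trajectory projection can use weights generated on the fly from each agent's features, so the trajectories of all agents in the scene are predicted jointly in one pass.

import torch
import torch.nn as nn

FEAT, FUT_STEPS = 32, 5    # hypothetical sizes

class AdaptiveHead(nn.Module):
    """Sketch: a small network produces per-agent weights for the final trajectory projection."""
    def __init__(self):
        super().__init__()
        self.weight_gen = nn.Linear(FEAT, FEAT * FUT_STEPS * 2)  # dynamically generated weights
        self.bias_gen = nn.Linear(FEAT, FUT_STEPS * 2)

    def forward(self, agent_feat):                 # (batch, agents, FEAT)
        w = self.weight_gen(agent_feat).view(*agent_feat.shape[:2], FEAT, FUT_STEPS * 2)
        b = self.bias_gen(agent_feat)
        traj = torch.einsum('baf,bafo->bao', agent_feat, w) + b
        return traj.view(*agent_feat.shape[:2], FUT_STEPS, 2)   # (batch, agents, steps, xy)

head = AdaptiveHead()
print(head(torch.randn(4, 6, FEAT)).shape)   # joint prediction for all agents at once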

 

Forum on trading, automated trading systems and testing trading strategies

Better NN EA

Sergey Golubev, 2024.06.15 08:57

Neural networks made easy (Part 75): Improving the performance of trajectory prediction models

Forecasting the trajectory of the upcoming price movement arguably plays one of the key roles in constructing trading plans for the desired planning horizon. The accuracy of such forecasts is critical. In an attempt to improve the quality of trajectory forecasting, we make our trajectory forecasting models more complex.

 

Forum on trading, automated trading systems and testing trading strategies

Better NN EA

Sergey Golubev, 2024.06.25 06:29

Neural networks made easy (Part 76): Exploring diverse interaction patterns with Multi-future Transformer

The authors of the paper "Multi-future Transformer: Learning diverse interaction modes for behavior prediction in autonomous driving" suggest using the Multi-future Transformer (MFT) method to solve such problems. Its main idea is to decompose the multimodal distribution of the future into several unimodal distributions, which makes it possible to effectively model diverse modes of interaction between agents in the scene.

In MFT, forecasts are generated by a neural network with fixed parameters in a single feed-forward pass, without the need to stochastically sample latent variables, pre-determine anchors, or run an iterative post-processing algorithm. This allows the model to operate in a deterministic, repeatable manner.
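
A minimal sketch of this decomposition (illustrative only, not the MFT implementation from the paper or the article; sizes and names are assumptions): several parallel branches each produce one unimodal trajectory plus a score for the corresponding interaction mode, all in a single deterministic forward pass.

import torch
import torch.nn as nn

FEAT, MODES, FUT_STEPS = 32, 4, 5   # hypothetical sizes

class MultiFutureSketch(nn.Module):
    """Sketch: the multimodal future is split into MODES parallel unimodal branches."""
    def __init__(self):
        super().__init__()
        self.branches = nn.ModuleList([nn.Linear(FEAT, FUT_STEPS) for _ in range(MODES)])
        self.mode_score = nn.Linear(FEAT, MODES)   # probability of each interaction mode

    def forward(self, scene_feat):                 # (batch, FEAT)
        trajs = torch.stack([b(scene_feat) for b in self.branches], dim=1)  # (batch, MODES, FUT_STEPS)
        probs = self.mode_score(scene_feat).softmax(dim=-1)                 # (batch, MODES)
        return trajs, probs

model = MultiFutureSketch()
t, p = model(torch.randn(8, FEAT))   # one deterministic pass, no sampling or post-processing
print(t.shape, p.shape)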


 


Forum on trading, automated trading systems and testing trading strategies

Better NN EA

Sergey Golubev, 2024.07.09 11:53

Neural networks made easy (Part 77): Cross-Covariance Transformer (XCiT)

Transformers show great potential in solving problems of analyzing various sequences. The Self-Attention operation, which underlies Transformers, provides global interactions between all tokens in the sequence. This makes it possible to evaluate interdependencies within the entire analyzed sequence. However, it comes with quadratic complexity in computation time and memory usage, which makes it difficult to apply the algorithm to long sequences.
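
The following sketch illustrates the cost difference (a simplified single-head example with a fixed temperature; the cross-covariance variant follows the idea of the XCiT paper rather than the article's exact code): standard Self-Attention builds an N x N token-to-token map, while cross-covariance attention builds a d x d channel-to-channel map, so its cost grows linearly with sequence length.

import torch
import torch.nn.functional as F

N, d = 1024, 64                       # sequence length and feature dimension
Q, K, V = torch.randn(N, d), torch.randn(N, d), torch.randn(N, d)

# Standard Self-Attention: the attention map is N x N,
# so time and memory grow quadratically with sequence length.
attn = F.softmax(Q @ K.t() / d ** 0.5, dim=-1)     # (N, N)
out_sa = attn @ V                                  # (N, d)

# Cross-covariance attention (the XCiT idea, simplified, fixed temperature):
# attention is computed between feature channels, giving a d x d map.
Qn, Kn = F.normalize(Q, dim=0), F.normalize(K, dim=0)   # L2-normalize each channel over tokens
xc_attn = F.softmax(Kn.t() @ Qn, dim=-1)           # (d, d)
out_xca = V @ xc_attn                              # (N, d)

print(attn.shape, xc_attn.shape)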

 

Forum on trading, automated trading systems and testing trading strategies

Better NN EA

Sergey Golubev, 2024.07.09 11:57

Neural networks made easy (Part 78): Decoder-free Object Detector with Transformer (DFFT)

In previous articles, we mainly focused on predicting upcoming price movements and analyzing historical data. Based on this analysis, we tried to predict the most likely upcoming price movement in various ways. Some strategies constructed a whole range of predicted movements and tried to estimate the probability of each of the forecasts. Naturally, training and operating such models require significant computing resources.

But do we really need to predict the upcoming price movement? Moreover, the accuracy of the obtained forecasts leaves much to be desired.

Our ultimate goal is to generate a profit, which we expect to receive from the successful trading of our Agent. The Agent, in turn, selects the optimal actions based on the obtained predicted price trajectories.


 

Forum on trading, automated trading systems and testing trading strategies

Better NN EA

Sergey Golubev, 2024.07.14 13:37

Neural networks made easy (Part 79): Feature Aggregated Queries (FAQ) in the context of state


Object detection in video has a number of specific characteristics and must deal with changes in object features caused by motion, which are not encountered in the image domain. One of the solutions is to use temporal information and combine features from adjacent frames. The paper "FAQ: Feature Aggregated Queries for Transformer-based Video Object Detectors" proposes a new approach to detecting objects in video. Its authors improve the quality of queries for Transformer-based models by aggregating them. To achieve this goal, a practical method is proposed for generating and aggregating queries according to the features of the input frames. Extensive experimental results provided in the paper validate the effectiveness of the proposed method. The proposed approaches can be extended to a wide range of methods for detecting objects in images and videos to improve their efficiency.
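
As a very rough sketch of the "aggregate queries according to the features of the input frames" idea (a generic illustration, not the module proposed in the paper; all sizes and names are assumptions): per-frame query sets are mixed with weights derived from the frame features before being passed to the detector's decoder.

import torch
import torch.nn as nn

FRAMES, QUERIES, DIM = 3, 10, 64      # hypothetical sizes

class QueryAggregation(nn.Module):
    """Sketch: per-frame queries are mixed with weights derived from the frame features."""
    def __init__(self):
        super().__init__()
        self.scorer = nn.Linear(DIM, 1)   # relevance of each frame for the current detection

    def forward(self, frame_queries, frame_feats):
        # frame_queries: (FRAMES, QUERIES, DIM), frame_feats: (FRAMES, DIM)
        w = self.scorer(frame_feats).softmax(dim=0)          # (FRAMES, 1) weights over frames
        agg = (frame_queries * w.unsqueeze(-1)).sum(dim=0)   # (QUERIES, DIM) aggregated queries
        return agg                        # would replace raw queries in the Transformer decoder

agg = QueryAggregation()
print(agg(torch.randn(FRAMES, QUERIES, DIM), torch.randn(FRAMES, DIM)).shape)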