Discussing the article: "Neural Networks in Trading: Exploring the Local Structure of Data"

 

Check out the new article: Neural Networks in Trading: Exploring the Local Structure of Data.

Effectively identifying and preserving the local structure of market data under noisy conditions is a critical task in trading. The Self-Attention mechanism has shown promising results on such data; however, the classical approach does not account for the local characteristics of the underlying structure. In this article, I introduce an algorithm capable of incorporating these structural dependencies.

The Transformer has proven its effectiveness across a wide range of tasks. Compared to convolution, the Self-Attention mechanism can adaptively filter out noisy or irrelevant points. Nevertheless, the vanilla Transformer applies the same transformation function to every element of a sequence. This isotropic approach disregards spatial relationships and local structure information, such as the direction and distance from a central point to its neighbors. Without positional information, rearranging the points merely rearranges the Transformer's output in the same way: the set of per-point outputs is unchanged. This creates challenges in recognizing the directionality of objects, which is crucial for detecting price patterns.
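This insensitivity to point order is easy to verify directly. Below is a minimal NumPy sketch (my own illustration, not code from the article; the function name `self_attention` is made up) showing that a single Self-Attention head without positional encoding only shuffles its output rows when the input points are shuffled:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head Self-Attention with no positional encoding."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    w = np.exp(scores - scores.max(axis=1, keepdims=True))  # row-wise softmax
    w /= w.sum(axis=1, keepdims=True)
    return w @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))            # 5 points, 8 features each
Wq = rng.normal(size=(8, 8))
Wk = rng.normal(size=(8, 8))
Wv = rng.normal(size=(8, 8))

perm = rng.permutation(5)              # rearrange the points
out = self_attention(X, Wq, Wk, Wv)
out_perm = self_attention(X[perm], Wq, Wk, Wv)

# The rows are only rearranged -- each point's output is identical ...
assert np.allclose(out[perm], out_perm)
# ... so any order-insensitive pooling over the points cannot tell the difference.
assert np.allclose(out.sum(axis=0), out_perm.sum(axis=0))
```

This is exactly why spatial structure such as direction and distance is invisible to the vanilla architecture unless it is injected explicitly.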

The authors of the paper "SEFormer: Structure Embedding Transformer for 3D Object Detection" aimed to combine the strengths of both approaches by developing a new Transformer architecture, the Structure-Embedding transFormer (SEFormer), capable of encoding local structure with attention to direction and distance. The proposed SEFormer learns distinct transformations for the Value of points coming from different directions and distances. Consequently, changes in the local spatial structure are reflected in the model's output, providing a key to accurately recognizing object directionality.
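To make the idea of direction-dependent Value transformations concrete, here is a toy 2D NumPy sketch under my own assumptions (it is not the paper's implementation; names like `structure_values` and the angular-bucket scheme are illustrative). Each neighbor's direction from the center point selects its own Value matrix, so moving a point to a different direction changes its contribution even when its features stay the same, which a single shared `Wv` would not detect:

```python
import numpy as np

def direction_bucket(center, pos, num_buckets):
    """Quantize the direction from the center to each neighbor into angular buckets (2D toy)."""
    off = pos - center
    angle = np.arctan2(off[:, 1], off[:, 0])                        # in [-pi, pi]
    return ((angle + np.pi) / (2 * np.pi) * num_buckets).astype(int) % num_buckets

def structure_values(center, pos, feats, Wv_buckets):
    """Each neighbor's Value projection is chosen by its direction bucket."""
    b = direction_bucket(center, pos, len(Wv_buckets))
    return np.stack([feats[i] @ Wv_buckets[b[i]] for i in range(len(feats))])

rng = np.random.default_rng(1)
num_buckets, dim = 4, 8
Wv_buckets = rng.normal(size=(num_buckets, dim, dim))   # one Value matrix per direction

center = np.zeros(2)
feats = rng.normal(size=(3, dim))                       # point features (unchanged below)
pos_a = np.array([[1.0, 0.1], [0.0, 1.0], [-1.0, 0.0]])
pos_b = pos_a.copy()
pos_b[0] = [-0.1, 1.0]                                  # move neighbor 0 to another direction

va = structure_values(center, pos_a, feats, Wv_buckets)
vb = structure_values(center, pos_b, feats, Wv_buckets)
# The moved neighbor's Value changes; with a single shared Wv it would not,
# because the features themselves did not change.
assert not np.allclose(va[0], vb[0])
assert np.allclose(va[1:], vb[1:])
```

A real implementation would also condition on distance (e.g. radial bins) and learn the projections jointly with the attention weights, but the sketch shows how spatial structure can reach the model's output at all.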


Author: Dmitriy Gizlyk

 
Nice article