Dmitriy Gizlyk
4.4 (49)
  • Information
11+ years
experience
0
products
0
demo versions
134
jobs
0
signals
0
subscribers
Professional development of programs of any complexity for MT4, MT5, C#.
Dmitriy Gizlyk
published the article "Neural networks made easy (Part 37): Sparse Attention"
Neural networks made easy (Part 37): Sparse Attention

In the previous article, we discussed relational models which use attention mechanisms in their architecture. One of the specific features of these models is the intensive utilization of computing resources. In this article, we will consider one of the mechanisms for reducing the number of computational operations inside the Self-Attention block. This will increase the general performance of the model.
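The idea can be illustrated with a minimal sketch (in Python, not the article's MQL5 code; the windowed scheme below is one common sparsity pattern, assumed here for illustration): each position attends only to a local band of `2*w+1` neighbours, so the score matrix shrinks from O(n²) to O(n·w).

```python
import math

def local_attention(q, k, v, w):
    """Banded self-attention sketch: position i attends only to [i-w, i+w]."""
    n, d, dv = len(q), len(q[0]), len(v[0])
    out = []
    for i in range(n):
        lo, hi = max(0, i - w), min(n, i + w + 1)
        # scaled dot-product scores over the local window only
        scores = [sum(q[i][t] * k[j][t] for t in range(d)) / math.sqrt(d)
                  for j in range(lo, hi)]
        m = max(scores)
        e = [math.exp(s - m) for s in scores]
        z = sum(e)
        # softmax-weighted sum of the window's value vectors
        out.append([sum(e[j - lo] / z * v[j][t] for j in range(lo, hi))
                    for t in range(dv)])
    return out
```

With identical queries and keys the weights are uniform over each window, so every output is simply the average of the window's values.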

Dmitriy Gizlyk
published the article "Neural networks made easy (Part 36): Relational Reinforcement Learning"
Neural networks made easy (Part 36): Relational Reinforcement Learning

In the reinforcement learning models discussed in the previous article, we used various variants of convolutional networks that are able to identify objects in the original data. The main advantage of convolutional networks is their ability to identify objects regardless of location. At the same time, convolutional networks do not always perform well when objects are deformed or noisy. These are issues that the relational model can solve.

Dmitriy Gizlyk
published the article "Neural networks made easy (Part 35): Intrinsic Curiosity Module"
Neural networks made easy (Part 35): Intrinsic Curiosity Module

We continue to study reinforcement learning algorithms. All the algorithms we have considered so far required the creation of a reward policy to enable the agent to evaluate each of its actions at each transition from one system state to another. However, this approach is rather artificial. In practice, there is some time lag between an action and a reward. In this article, we will get acquainted with a model training algorithm which can work with various time delays from the action to the reward.

Dmitriy Gizlyk
published the article "Neural networks made easy (Part 34): Fully Parameterized Quantile Function"
Neural networks made easy (Part 34): Fully Parameterized Quantile Function

We continue studying distributed Q-learning algorithms. In previous articles, we considered the distributed and quantile Q-learning algorithms. In the first, we trained the probabilities of given ranges of values; in the second, we trained the ranges for given probabilities. In both, we used a priori knowledge of one distribution and trained the other. In this article, we will consider an algorithm which allows the model to learn both distributions.

Dmitriy Gizlyk
published the article "Neural networks made easy (Part 33): Quantile regression in distributed Q-learning"
Neural networks made easy (Part 33): Quantile regression in distributed Q-learning

We continue studying distributed Q-learning. Today we will look at this approach from the other side. We will consider the possibility of using quantile regression to solve price prediction tasks.
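The core of quantile regression can be sketched in a few lines (Python for illustration, not the article's MQL5 code): the quantile, or "pinball", loss penalises over- and under-prediction asymmetrically, and its minimiser over a sample is the tau-th quantile of that sample.

```python
def quantile_loss(tau, target, prediction):
    """Pinball loss: its minimiser over a sample is the tau-th quantile."""
    err = target - prediction
    return max(tau * err, (tau - 1.0) * err)

def fit_quantile(tau, samples):
    """Toy fit: pick the sample value minimising the average pinball loss."""
    return min(samples,
               key=lambda c: sum(quantile_loss(tau, s, c) for s in samples))
```

For tau = 0.5 the loss reduces to half the absolute error, so the fitted value is the median, which is robust to outliers in the sample.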

Dmitriy Gizlyk
published the article "Neural networks made easy (Part 32): Distributed Q-Learning"
Neural networks made easy (Part 32): Distributed Q-Learning

We got acquainted with the Q-learning method in one of the earlier articles in this series. The method averages the reward for each action. In 2017, two papers were presented which showed greater success from studying the reward distribution function instead. Let's consider the possibility of using such technology to solve our problems.
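The difference can be sketched as follows (illustrative Python, not the article's MQL5 code; the fixed-atom categorical representation is an assumption in the style of C51-like approaches): instead of a single scalar Q(s, a), the model keeps a probability mass over fixed return "atoms", from which the scalar value is read off as an expectation while the spread of returns is retained.

```python
def expected_q(atoms, probs):
    """Scalar Q-value read off a categorical return distribution."""
    return sum(z * p for z, p in zip(atoms, probs))

def q_variance(atoms, probs):
    """Spread of returns -- information a single averaged Q-value discards."""
    mu = expected_q(atoms, probs)
    return sum(p * (z - mu) ** 2 for z, p in zip(atoms, probs))
```

Two actions with the same expected Q can then still be distinguished by their risk, which a plain averaged value cannot express.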

Abdulrahman F 2023.01.20
Mm am hmm mm
Dmitriy Gizlyk
published the article "Neural networks made easy (Part 31): Evolutionary algorithms"
Neural networks made easy (Part 31): Evolutionary algorithms

In the previous article, we started exploring non-gradient optimization methods. We got acquainted with the genetic algorithm. Today, we will continue this topic and will consider another class of evolutionary algorithms.

Dmitriy Gizlyk
published the article "Neural networks made easy (Part 30): Genetic algorithms"
Neural networks made easy (Part 30): Genetic algorithms

Today I want to introduce you to a slightly different learning method. We can say that it is borrowed from Darwin's theory of evolution. It is probably less controllable than the previously discussed methods but it allows training non-differentiable models.

Dmitriy Gizlyk
published the article "Neural networks made easy (Part 29): Advantage Actor-Critic algorithm"
Neural networks made easy (Part 29): Advantage Actor-Critic algorithm

In the previous articles of this series, we have seen two reinforcement learning algorithms. Each has its own advantages and disadvantages. As often happens in such cases, the next idea is to combine both methods into one algorithm that uses the best of the two, compensating for the shortcomings of each. One such method will be discussed in this article.

Darius Sadauskas 2022.09.21
Hello, what I'm doing wrong ? I get error on compiling : 'vae' - undeclared identifier on NeuroNet.mqh line: 4130
xuebutayan 2023.02.03
666
Dmitriy Gizlyk
published the article "Neural networks made easy (Part 28): Policy gradient algorithm"
Neural networks made easy (Part 28): Policy gradient algorithm

We continue to study reinforcement learning methods. In the previous article, we got acquainted with the Deep Q-Learning method. In this method, the model is trained to predict the upcoming reward depending on the action taken in a particular situation. Then, an action is performed in accordance with the policy and the expected reward. But it is not always possible to approximate the Q-function. Sometimes its approximation does not generate the desired result. In such cases, approximation methods are applied not to the utility function, but to the direct policy (strategy) of actions. One such method is Policy Gradient.
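A minimal sketch of the idea (illustrative Python in the REINFORCE style, not the article's MQL5 code): a softmax policy over action preferences is updated directly along the gradient of the log-probability of the taken action, scaled by the reward, with no Q-function involved.

```python
import math

def softmax(prefs):
    """Turn action preferences into a probability distribution."""
    m = max(prefs)
    e = [math.exp(p - m) for p in prefs]
    z = sum(e)
    return [x / z for x in e]

def reinforce_step(prefs, action, reward, lr=0.5):
    """One policy-gradient step: grad log pi(a) = 1[i==a] - pi[i] for softmax."""
    pi = softmax(prefs)
    return [p + lr * reward * ((1.0 if i == action else 0.0) - pi[i])
            for i, p in enumerate(prefs)]
```

Repeatedly rewarding one action drives its probability toward 1, which is exactly the behaviour a direct policy method is after.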

Dmitriy Gizlyk
published the article "Neural networks made easy (Part 27): Deep Q-Learning (DQN)"
Neural networks made easy (Part 27): Deep Q-Learning (DQN)

We continue to study reinforcement learning. In this article, we will get acquainted with the Deep Q-Learning method. The use of this method has enabled the DeepMind team to create a model that can outperform a human when playing Atari computer games. I think it will be useful to evaluate the possibilities of the technology for solving trading problems.

mi ya 2022.09.05
I really appreciate you for your publishing articles series of machine learning on MQL5.
Dmitriy Gizlyk
published the article "Neural networks made easy (Part 26): Reinforcement Learning"
Neural networks made easy (Part 26): Reinforcement Learning

We continue to study machine learning methods. With this article, we begin another big topic, Reinforcement Learning. This approach allows the models to set up certain strategies for solving the problems. We can expect that this property of reinforcement learning will open up new horizons for building trading strategies.

Dmitriy Gizlyk
published the article "Neural networks made easy (Part 25): Practicing Transfer Learning"
Neural networks made easy (Part 25): Practicing Transfer Learning

In the last two articles, we developed a tool for creating and editing neural network models. Now it is time to evaluate the potential use of Transfer Learning technology using practical examples.

Dmitriy Gizlyk
published the article "Neural networks made easy (Part 24): Improving the tool for Transfer Learning"
Neural networks made easy (Part 24): Improving the tool for Transfer Learning

In the previous article, we created a tool for creating and editing the architecture of neural networks. Today we will continue working on this tool. We will try to make it more user friendly. This may seem to be a step away from our topic, but don't you think that a well-organized workspace plays an important role in achieving the result?

Dmitriy Gizlyk
published the article "Neural networks made easy (Part 23): Building a tool for Transfer Learning"
Neural networks made easy (Part 23): Building a tool for Transfer Learning

In this series of articles, we have already mentioned Transfer Learning more than once. However, those were only passing mentions. In this article, I suggest filling this gap and taking a closer look at Transfer Learning.

Dmitriy Gizlyk
published the article "Neural networks made easy (Part 22): Unsupervised learning of recurrent models"
Neural networks made easy (Part 22): Unsupervised learning of recurrent models

We continue to study unsupervised learning algorithms. This time I suggest that we discuss the features of autoencoders when applied to recurrent model training.

Dmitriy Gizlyk
published the article "Neural networks made easy (Part 21): Variational autoencoders (VAE)"
Neural networks made easy (Part 21): Variational autoencoders (VAE)

In the last article, we got acquainted with the Autoencoder algorithm. Like any other algorithm, it has its advantages and disadvantages. In its original implementation, the autoencoder is used to separate the objects of the training sample as much as possible. This time we will talk about how to deal with some of its disadvantages.

Dmitriy Gizlyk
published the article "Neural networks made easy (Part 20): Autoencoders"
Neural networks made easy (Part 20): Autoencoders

We continue to study unsupervised learning algorithms. Some readers might have questions regarding the relevance of recent publications to the topic of neural networks. In this new article, we get back to studying neural networks.

Dmitriy Gizlyk
published the article "Neural networks made easy (Part 19): Association rules using MQL5"
Neural networks made easy (Part 19): Association rules using MQL5

We continue considering association rules. In the previous article, we discussed the theoretical aspects of this type of problem. In this article, I will show an implementation of the FP-Growth method using MQL5. We will also test the implemented solution using real data.

Dmitriy Gizlyk
published the article "Neural networks made easy (Part 18): Association rules"
Neural networks made easy (Part 18): Association rules

As a continuation of this series of articles, let's consider another type of problems within unsupervised learning methods: mining association rules. This problem type was first used in retail, namely supermarkets, to analyze market baskets. In this article, we will talk about the applicability of such algorithms in trading.
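The two metrics at the heart of market-basket analysis can be sketched in a few lines (illustrative Python with made-up data, not the article's MQL5 code): support is the fraction of baskets containing an itemset, and confidence estimates how often the consequent appears when the antecedent does.

```python
def support(baskets, items):
    """Fraction of baskets containing every item in `items`."""
    items = set(items)
    return sum(items <= set(b) for b in baskets) / len(baskets)

def confidence(baskets, antecedent, consequent):
    """Estimated P(consequent | antecedent) over the baskets."""
    both = set(antecedent) | set(consequent)
    return support(baskets, both) / support(baskets, antecedent)
```

In trading, the "baskets" could be sets of discretised market events per bar, with rules mined between them in the same way.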
