Dmitriy Gizlyk
4.4 (49)
  • Information
10+ years experience
0 products
0 demo versions
134 jobs
0 signals
0 subscribers
Professional development of programs of any complexity for MT4, MT5, and C#.
Dmitriy Gizlyk
Published article Neural networks made easy (Part 41): Hierarchical models
Neural networks made easy (Part 41): Hierarchical models

The article describes hierarchical learning models, which offer an effective approach to solving complex machine learning problems. Hierarchical models consist of several levels, each of which is responsible for a different aspect of the task.
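A minimal sketch of the two-level idea, under simplifying assumptions and independent of the article's implementation: a high-level policy picks a skill every few steps, and a low-level policy picks primitive actions conditioned on that skill. Both policies and the environment transition are placeholders.

```python
# Hypothetical two-level hierarchy: the high-level policy chooses a skill
# every `horizon` steps; the low-level policy acts conditioned on that skill.
import numpy as np

rng = np.random.default_rng(0)
n_skills, n_actions, horizon = 4, 3, 5

def high_level_policy(state):
    # placeholder: pick a skill index from the state
    return rng.integers(n_skills)

def low_level_policy(state, skill):
    # placeholder: pick a primitive action given state and active skill
    return rng.integers(n_actions)

state, skill = np.zeros(8), None
for t in range(20):
    if t % horizon == 0:                              # high level re-decides periodically
        skill = high_level_policy(state)
    action = low_level_policy(state, skill)
    state = state + rng.normal(scale=0.1, size=8)     # dummy environment transition

print("final state:", np.round(state, 2))
```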

Dmitriy Gizlyk
Published article Neural networks made easy (Part 40): Using Go-Explore on large amounts of data
Neural networks made easy (Part 40): Using Go-Explore on large amounts of data

This article discusses the use of the Go-Explore algorithm over a long training period, since the random action selection strategy may not lead to a profitable pass as training time increases.

Dmitriy Gizlyk
Published article Neural networks made easy (Part 39): Go-Explore, a different approach to exploration
Neural networks made easy (Part 39): Go-Explore, a different approach to exploration

We continue studying environment exploration in reinforcement learning models. In this article, we will look at another algorithm, Go-Explore, which enables effective exploration of the environment during the model training stage.
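A toy sketch of the Go-Explore loop under strong simplifying assumptions: states are discretised into "cells", the environment can be restored to any stored state, and exploration from a cell is purely random. The environment step and discretisation are placeholders, not the article's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def to_cell(state):
    return tuple(np.round(state, 1))          # coarse discretisation of the state

def step(state, action):
    return state + 0.1 * action + rng.normal(scale=0.02, size=state.shape)

start = np.zeros(2)
archive = {to_cell(start): start}             # cell -> exact state to restore

for _ in range(200):
    cell = list(archive)[rng.integers(len(archive))]   # select a cell (uniformly here)
    state = archive[cell].copy()                       # "return" by restoring the state
    for _ in range(10):                                # "explore" with random actions
        state = step(state, rng.uniform(-1, 1, size=2))
        c = to_cell(state)
        if c not in archive:                           # remember every newly reached cell
            archive[c] = state.copy()

print("cells discovered:", len(archive))
```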

Dmitriy Gizlyk
Published article Neural networks made easy (Part 38): Self-Supervised Exploration via Disagreement
Neural networks made easy (Part 38): Self-Supervised Exploration via Disagreement

One of the key problems in reinforcement learning is environment exploration. We have previously looked at an exploration method based on Intrinsic Curiosity. Today I propose to consider another algorithm: Exploration via Disagreement.
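A compact sketch of the disagreement idea: an ensemble of forward dynamics models predicts the next state, and the variance of their predictions serves as an intrinsic reward. Linear models stand in for neural networks here; this is only an illustration of the principle, not the article's code.

```python
import numpy as np

rng = np.random.default_rng(2)
state_dim, action_dim, n_models = 4, 2, 5

# each ensemble member: next_state ≈ [state, action] @ W  (randomly initialised)
ensemble = [rng.normal(scale=0.5, size=(state_dim + action_dim, state_dim))
            for _ in range(n_models)]

def intrinsic_reward(state, action):
    x = np.concatenate([state, action])
    preds = np.stack([x @ W for W in ensemble])     # (n_models, state_dim)
    return preds.var(axis=0).mean()                 # disagreement across the ensemble

r_int = intrinsic_reward(rng.normal(size=state_dim), rng.normal(size=action_dim))
print(f"intrinsic reward: {r_int:.4f}")
```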

Dmitriy Gizlyk
Published article Neural networks made easy (Part 37): Sparse Attention
Neural networks made easy (Part 37): Sparse Attention

In the previous article, we discussed relational models which use attention mechanisms in their architecture. One of the specific features of these models is their intensive use of computing resources. In this article, we will consider one of the mechanisms for reducing the number of computational operations inside the Self-Attention block. This will improve the overall performance of the model.
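A minimal sketch of masked (sparse) self-attention: each query attends only to keys inside a local window, so the score matrix has O(n·w) useful entries instead of O(n²). The banded mask below is a generic example, not necessarily the sparsity pattern used in the article.

```python
import numpy as np

def sparse_attention(Q, K, V, window=4):
    n, d = Q.shape
    scores = Q @ K.T / np.sqrt(d)                        # (n, n) raw attention scores
    idx = np.arange(n)
    mask = np.abs(idx[:, None] - idx[None, :]) > window  # True = blocked position
    scores[mask] = -1e9                                  # exclude blocked positions
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)                    # softmax over allowed keys only
    return w @ V

rng = np.random.default_rng(3)
X = rng.normal(size=(16, 8))
out = sparse_attention(X, X, X, window=4)
print(out.shape)                                         # (16, 8)
```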

Dmitriy Gizlyk
Published article Neural networks made easy (Part 36): Relational Reinforcement Learning
Neural networks made easy (Part 36): Relational Reinforcement Learning

In the reinforcement learning models discussed in the previous article, we used various types of convolutional networks that are able to identify different objects in the original data. The main advantage of convolutional networks is their ability to identify objects regardless of location. At the same time, convolutional networks do not always perform well when objects are deformed or noisy. These are issues that the relational model can address.

Dmitriy Gizlyk
Published article Neural networks made easy (Part 35): Intrinsic Curiosity Module
Neural networks made easy (Part 35): Intrinsic Curiosity Module

We continue to study reinforcement learning algorithms. All the algorithms we have considered so far required the creation of a reward policy to enable the agent to evaluate each of its actions at each transition from one system state to another. However, this approach is rather artificial. In practice, there is some time lag between an action and a reward. In this article, we will get acquainted with a model training algorithm which can work with various time delays from the action to the reward.
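A simplified sketch of the curiosity signal: a forward model predicts the next state's features, and its prediction error is added to the external reward as an intrinsic bonus. The feature encoder and forward model are stand-in linear maps, not the networks from the article.

```python
import numpy as np

rng = np.random.default_rng(4)
state_dim, action_dim, feat_dim = 6, 2, 4

encoder = rng.normal(scale=0.3, size=(state_dim, feat_dim))            # phi(s) = s @ encoder
forward = rng.normal(scale=0.3, size=(feat_dim + action_dim, feat_dim))

def curiosity_bonus(state, action, next_state, scale=0.5):
    phi, phi_next = state @ encoder, next_state @ encoder
    pred = np.concatenate([phi, action]) @ forward                      # predicted phi(s')
    return scale * np.sum((pred - phi_next) ** 2)                       # prediction error

s = rng.normal(size=state_dim)
a = rng.normal(size=action_dim)
s_next = rng.normal(size=state_dim)
total_reward = 0.0 + curiosity_bonus(s, a, s_next)                      # external + intrinsic
print(f"intrinsic bonus: {curiosity_bonus(s, a, s_next):.4f}")
```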

Dmitriy Gizlyk
Published article Neural networks made easy (Part 34): Fully Parameterized Quantile Function
Neural networks made easy (Part 34): Fully Parameterized Quantile Function

We continue studying distributed Q-learning algorithms. In previous articles, we have considered distributed and quantile Q-learning algorithms. In the first algorithm, we trained the probabilities of given value ranges. In the second, we trained the value ranges for given probabilities. In both, one side of the distribution was fixed a priori and the other was trained. In this article, we will consider an algorithm which allows the model to learn both.
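A toy illustration of the fully parameterized idea, under simplifying assumptions: one head proposes the quantile fractions themselves (a softmax turned into a cumulative sum), another head returns the quantile values at those fractions, and the Q-value is the weighted sum over the proposed intervals. Both heads are random linear maps standing in for trained networks; this is not the article's MQL5 implementation.

```python
import numpy as np

rng = np.random.default_rng(5)
state_dim, n_fractions = 8, 8

W_frac = rng.normal(scale=0.3, size=(state_dim, n_fractions))
W_val  = rng.normal(scale=0.3, size=(state_dim + 1, 1))

def q_value(state):
    logits = state @ W_frac
    p = np.exp(logits - logits.max()); p /= p.sum()
    tau = np.concatenate([[0.0], np.cumsum(p)])           # learned fractions 0=τ0<…<τN=1
    tau_hat = (tau[:-1] + tau[1:]) / 2                     # interval midpoints
    values = np.array([np.concatenate([state, [t]]) @ W_val for t in tau_hat]).ravel()
    return np.sum((tau[1:] - tau[:-1]) * values)           # Q = Σ (τ_{i+1}-τ_i)·Z(τ̂_i)

print(f"Q(s) = {q_value(rng.normal(size=state_dim)):.4f}")
```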

Dmitriy Gizlyk
Published article Neural networks made easy (Part 33): Quantile regression in distributed Q-learning
Neural networks made easy (Part 33): Quantile regression in distributed Q-learning

We continue studying distributed Q-learning. Today we will look at this approach from the other side. We will consider the possibility of using quantile regression to solve price prediction tasks.
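A small sketch of the quantile (pinball) loss that underlies quantile regression: under- and over-predictions are weighted asymmetrically by τ, so minimising the loss drives the estimate toward the τ-quantile of the targets. The data below are synthetic, purely for illustration.

```python
import numpy as np

def quantile_loss(predictions, targets, tau):
    diff = targets - predictions
    return np.mean(np.maximum(tau * diff, (tau - 1.0) * diff))

rng = np.random.default_rng(6)
targets = rng.normal(loc=0.0, scale=1.0, size=10_000)

# brute-force search: the value minimising the loss approximates the 0.9-quantile
grid = np.linspace(-3, 3, 601)
losses = [quantile_loss(np.full_like(targets, v), targets, tau=0.9) for v in grid]
print("estimated 0.9-quantile:", grid[int(np.argmin(losses))])
print("empirical 0.9-quantile:", np.quantile(targets, 0.9))
```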

Dmitriy Gizlyk
Published article Neural networks made easy (Part 32): Distributed Q-Learning
Neural networks made easy (Part 32): Distributed Q-Learning

We got acquainted with the Q-learning method in one of the earlier articles in this series. That method averages the reward for each action. In 2017, two papers were presented which showed greater success by studying the reward distribution function instead. Let's consider the possibility of using this technique to solve our problems.
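A condensed sketch of the categorical (fixed-atom) variant of this idea: the return is modelled as a probability distribution over a fixed support, and the Bellman-updated support r + γ·z is projected back onto the original atoms. The next-state distribution and reward below are placeholders.

```python
import numpy as np

n_atoms, v_min, v_max, gamma = 11, -1.0, 1.0, 0.99
z = np.linspace(v_min, v_max, n_atoms)                 # fixed support (atoms)
dz = z[1] - z[0]

def project(next_probs, reward):
    tz = np.clip(reward + gamma * z, v_min, v_max)     # shifted and clipped support
    b = (tz - v_min) / dz                              # fractional atom positions
    lower, upper = np.floor(b).astype(int), np.ceil(b).astype(int)
    proj = np.zeros(n_atoms)
    np.add.at(proj, lower, next_probs * (upper - b + (lower == upper)))
    np.add.at(proj, upper, next_probs * (b - lower))
    return proj

next_probs = np.full(n_atoms, 1.0 / n_atoms)           # dummy next-state distribution
target = project(next_probs, reward=0.3)
print("expected Q:", float(np.sum(target * z)))        # mean of the projected distribution
print("probabilities sum to", target.sum())
```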

Abdulrahman F
Abdulrahman F 2023.01.20
Mm am hmm mm
Dmitriy Gizlyk
Published article Neural networks made easy (Part 31): Evolutionary algorithms
Neural networks made easy (Part 31): Evolutionary algorithms

In the previous article, we started exploring non-gradient optimization methods. We got acquainted with the genetic algorithm. Today, we will continue this topic and will consider another class of evolutionary algorithms.
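A bare-bones evolution-strategies sketch: the parameter vector is perturbed with Gaussian noise, the perturbations are scored, and the parameters move along the fitness-weighted average of the noise. The fitness function is a toy placeholder, not a trading model.

```python
import numpy as np

rng = np.random.default_rng(7)

def fitness(params):
    return -np.sum((params - 0.5) ** 2)        # toy objective: maximum at 0.5

params, sigma, lr, population = np.zeros(10), 0.1, 0.05, 50
for generation in range(200):
    noise = rng.normal(size=(population, params.size))
    scores = np.array([fitness(params + sigma * n) for n in noise])
    scores = (scores - scores.mean()) / (scores.std() + 1e-8)   # normalise the scores
    params += lr / (population * sigma) * noise.T @ scores      # gradient-estimate step

print("learned parameters ≈", np.round(params, 2))
```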

Dmitriy Gizlyk
Published article Neural networks made easy (Part 30): Genetic algorithms
Neural networks made easy (Part 30): Genetic algorithms

Today I want to introduce you to a slightly different learning method. We can say that it is borrowed from Darwin's theory of evolution. It is probably less controllable than the previously discussed methods but it allows training non-differentiable models.
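A toy genetic algorithm to make the loop concrete: a population of parameter vectors is scored, the fittest are kept as parents, and children are produced by crossover and mutation. The fitness function is a stand-in for whatever the model actually optimises.

```python
import numpy as np

rng = np.random.default_rng(8)

def fitness(genome):
    return -np.sum((genome - 0.5) ** 2)                   # toy objective

pop_size, genome_len, elite = 40, 10, 10
population = rng.uniform(-1, 1, size=(pop_size, genome_len))

for generation in range(100):
    scores = np.array([fitness(g) for g in population])
    parents = population[np.argsort(scores)[-elite:]]      # keep the best `elite` genomes
    children = []
    for _ in range(pop_size - elite):
        a, b = parents[rng.integers(elite, size=2)]
        mask = rng.random(genome_len) < 0.5                # uniform crossover
        child = np.where(mask, a, b)
        child += rng.normal(scale=0.05, size=genome_len)   # mutation
        children.append(child)
    population = np.vstack([parents, np.array(children)])

best = population[np.argmax([fitness(g) for g in population])]
print("best genome ≈", np.round(best, 2))
```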

Dmitriy Gizlyk
Published article Neural networks made easy (Part 29): Advantage Actor-Critic algorithm
Neural networks made easy (Part 29): Advantage Actor-Critic algorithm

In the previous articles of this series, we have seen two reinforcement learning algorithms. Each of them has its own advantages and disadvantages. As often happens in such cases, the next idea is to combine both methods into one algorithm that uses the best of the two, compensating for the shortcomings of each. One such method is discussed in this article.
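A minimal illustration of the Advantage Actor-Critic objective: the critic's value estimate turns the observed return into an advantage, which scales the actor's log-probability term, while the critic itself is regressed toward the return. Networks and gradients are omitted; all the numbers below are placeholders.

```python
import numpy as np

gamma = 0.99
rewards = np.array([0.1, -0.05, 0.2, 0.0, 0.3])           # one short episode
values  = np.array([0.15, 0.10, 0.12, 0.08, 0.05])        # critic estimates V(s_t)
log_probs = np.log(np.array([0.5, 0.4, 0.6, 0.3, 0.7]))   # log π(a_t|s_t) for taken actions

# discounted returns G_t = r_t + γ·G_{t+1}
returns = np.zeros_like(rewards)
acc = 0.0
for t in reversed(range(len(rewards))):
    acc = rewards[t] + gamma * acc
    returns[t] = acc

advantages = returns - values                              # A_t = G_t - V(s_t)
actor_loss  = -np.mean(log_probs * advantages)             # policy-gradient term
critic_loss = np.mean(advantages ** 2)                     # value-regression term
print(f"actor loss {actor_loss:.4f}, critic loss {critic_loss:.4f}")
```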

Darius Sadauskas
Darius Sadauskas 2022.09.21
Hello, what I'm doing wrong ? I get error on compiling : 'vae' - undeclared identifier on NeuroNet.mqh line: 4130
xuebutayan
xuebutayan 2023.02.03
666
Dmitriy Gizlyk
Published article Neural networks made easy (Part 28): Policy gradient algorithm
Neural networks made easy (Part 28): Policy gradient algorithm

We continue to study reinforcement learning methods. In the previous article, we got acquainted with the Deep Q-Learning method, in which the model is trained to predict the upcoming reward depending on the action taken in a particular situation. An action is then performed in accordance with the policy and the expected reward. But it is not always possible to approximate the Q-function, and sometimes its approximation does not produce the desired result. In such cases, approximation methods are applied not to the utility function but directly to the policy (strategy) of actions. One such method is Policy Gradient.
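A compact REINFORCE-style sketch: a softmax policy over discrete actions is updated along ∇log π(a)·G. The "environment" is a trivial bandit whose reward means are placeholders; it only illustrates the update rule, not the article's trading model.

```python
import numpy as np

rng = np.random.default_rng(9)
n_actions = 3
true_rewards = np.array([0.1, 0.5, 0.9])     # hidden mean reward of each action
theta = np.zeros(n_actions)                  # policy parameters (logits)

def policy(theta):
    p = np.exp(theta - theta.max())
    return p / p.sum()

lr = 0.1
for episode in range(2000):
    probs = policy(theta)
    a = rng.choice(n_actions, p=probs)
    G = true_rewards[a] + rng.normal(scale=0.1)          # sampled return
    grad_log_pi = -probs
    grad_log_pi[a] += 1.0                                # ∇_θ log softmax(θ)[a]
    theta += lr * G * grad_log_pi                        # policy-gradient update

print("final action probabilities:", np.round(policy(theta), 3))
```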

Dmitriy Gizlyk
Published article Neural networks made easy (Part 27): Deep Q-Learning (DQN)
Neural networks made easy (Part 27): Deep Q-Learning (DQN)

We continue to study reinforcement learning. In this article, we will get acquainted with the Deep Q-Learning method. The use of this method has enabled the DeepMind team to create a model that can outperform a human when playing Atari computer games. I think it will be useful to evaluate the possibilities of the technology for solving trading problems.
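A minimal sketch of the DQN target computation: the online network scores the current state-action pair, a separate target network evaluates the next state, and the squared TD error is what a training step would minimise. Linear "networks" and random data keep the example self-contained; it is not the article's implementation.

```python
import numpy as np

rng = np.random.default_rng(10)
state_dim, n_actions, gamma = 6, 4, 0.99

W_online = rng.normal(scale=0.3, size=(state_dim, n_actions))
W_target = W_online.copy()                               # periodically synced copy

def td_error(state, action, reward, next_state, done):
    q_sa = (state @ W_online)[action]
    q_next = 0.0 if done else np.max(next_state @ W_target)
    target = reward + gamma * q_next                     # Bellman target
    return target - q_sa

s, s2 = rng.normal(size=state_dim), rng.normal(size=state_dim)
print(f"TD error: {td_error(s, action=1, reward=0.05, next_state=s2, done=False):.4f}")
```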

mi ya
mi ya 2022.09.05
I really appreciate you publishing this series of machine learning articles on MQL5.
Dmitriy Gizlyk
Published article Neural networks made easy (Part 26): Reinforcement Learning
Neural networks made easy (Part 26): Reinforcement Learning

We continue to study machine learning methods. With this article, we begin another big topic: Reinforcement Learning. This approach allows models to develop strategies for solving the given problems. We can expect that this property of reinforcement learning will open up new horizons for building trading strategies.

Dmitriy Gizlyk
Published article Neural networks made easy (Part 25): Practicing Transfer Learning
Neural networks made easy (Part 25): Practicing Transfer Learning

In the last two articles, we developed a tool for creating and editing neural network models. Now it is time to evaluate the potential use of Transfer Learning technology using practical examples.

Dmitriy Gizlyk
Published article Neural networks made easy (Part 24): Improving the tool for Transfer Learning
Neural networks made easy (Part 24): Improving the tool for Transfer Learning

In the previous article, we created a tool for creating and editing the architecture of neural networks. Today we will continue working on this tool and try to make it more user-friendly. This may seem to be a step away from our topic, but don't you agree that a well-organized workspace plays an important role in achieving results?

Dmitriy Gizlyk
Published article Neural networks made easy (Part 23): Building a tool for Transfer Learning
Neural networks made easy (Part 23): Building a tool for Transfer Learning

In this series of articles, we have already mentioned Transfer Learning more than once. However, these were only passing mentions. In this article, I suggest filling this gap and taking a closer look at Transfer Learning.

Dmitriy Gizlyk
Published article Neural networks made easy (Part 22): Unsupervised learning of recurrent models
Neural networks made easy (Part 22): Unsupervised learning of recurrent models

We continue to study unsupervised learning algorithms. This time I suggest that we discuss the features of autoencoders when applied to recurrent model training.