Discussing the article: "Neural networks made easy (Part 54): Using random encoder for efficient research (RE3)"

Check out the new article: Neural networks made easy (Part 54): Using random encoder for efficient research (RE3).

Whenever we consider reinforcement learning methods, we face the issue of efficiently exploring the environment. Solving it often complicates the algorithm and requires training additional models. In this article, we will look at an alternative approach to this problem.

The main goal of the Random Encoders for Efficient Exploration (RE3) method is to minimize the number of trained models. In their work, the authors of RE3 point out that in image processing it is convolutional networks that excel at identifying individual object features and characteristics. Convolutional networks help reduce the dimensionality of a multidimensional space, extract characteristic features, and handle scaling of the original object.

A reasonable question arises: what kind of minimization of trained models are we talking about if we additionally turn to convolutional networks?

Here, the key word is "trained". The authors of the method observed that even a convolutional encoder initialized with random parameters, and never trained, effectively captures information about the proximity of two states. Below is a visualization from the article of the k-nearest states found by measuring distances in the representation space of a randomly initialized encoder (Random Encoder) and in the space of the True State.
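The core idea can be sketched in a few lines: embed states with a fixed, randomly initialized encoder, and use the distance to the k-th nearest neighbor in that embedding space as an intrinsic exploration reward. The sketch below is a simplified illustration (a random linear-ReLU encoder in NumPy instead of a convolutional one; the dimensions, `k`, and the `log(1 + distance)` reward shape are assumptions for the example, not the article's exact implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM = 32   # assumed raw state dimension
EMBED_DIM = 8    # assumed embedding dimension
K = 3            # assumed k for the k-nearest-neighbor distance

# Random encoder: weights are drawn once and never trained.
W = rng.standard_normal((STATE_DIM, EMBED_DIM))

def encode(states: np.ndarray) -> np.ndarray:
    """Project states into the random representation space (linear + ReLU)."""
    return np.maximum(states @ W, 0.0)

def intrinsic_rewards(states: np.ndarray, k: int = K) -> np.ndarray:
    """RE3-style reward: log(1 + distance to the k-th nearest neighbor)."""
    y = encode(states)
    # Pairwise Euclidean distances between all embedded states.
    diff = y[:, None, :] - y[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    # After sorting, column 0 is the zero distance to the state itself,
    # so column k holds the distance to the k-th nearest neighbor.
    knn_dist = np.sort(dist, axis=1)[:, k]
    return np.log1p(knn_dist)

batch = rng.standard_normal((16, STATE_DIM))
rewards = intrinsic_rewards(batch)
print(rewards.shape)  # one intrinsic reward per state in the batch
```

States that lie far from everything seen so far get a larger reward, which pushes the agent toward under-explored regions, while the encoder itself contributes no trainable parameters.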

[Figure: Visualization of k-nearest states]

Author: Dmitriy Gizlyk