Neural networks made easy (Part 37): Sparse Attention
However, we recognize that the resources available for improving the model are limited. Therefore, the model needs to be optimized with minimal loss of quality.
Neural networks made easy (Part 38): Self-Supervised Exploration via Disagreement
Neural networks made easy (Part 39): Go-Explore, a different approach to exploration
Neural networks made easy (Part 40): Using Go-Explore on large amounts of data
In the previous article "Neural networks made easy (Part 39): Go-Explore, a different approach to exploration", we familiarized ourselves with the Go-Explore algorithm and its ability to explore the environment.
In this article, we will take a closer look at possible optimization methods for the Go-Explore algorithm to improve its efficiency over longer training periods.
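For readers who have not seen the earlier part, the core of Go-Explore is an archive of visited "cells" and a loop of selecting a cell, returning to it, and exploring from it. Below is a minimal, self-contained sketch of that loop on a toy random-walk task; the cell discretization and the purely random exploration are illustrative assumptions, not code from the article.

```python
import random

# A minimal sketch of the Go-Explore select-return-explore-archive loop
# on a toy 1D random walk. Purely illustrative, not the article's code.

def cell_of(state):
    """Discretize a state into a coarse 'cell' description."""
    return state // 5

def explore_from(state, steps=20):
    """Take random actions from a state; return visited states and a score."""
    trajectory = [state]
    for _ in range(steps):
        state += random.choice([-1, 1])
        trajectory.append(state)
    return trajectory, max(trajectory)          # score: farthest point reached

archive = {0: (0, [0])}                         # cell -> (best score, trajectory)

for _ in range(1000):
    # 1. Select a promising cell (uniformly at random, for simplicity)
    cell = random.choice(list(archive))
    _, trajectory = archive[cell]
    # 2. "Return" to that cell by restoring the end of its stored trajectory
    state = trajectory[-1]
    # 3. Explore from it and record any newly discovered or improved cells
    new_traj, new_score = explore_from(state)
    for s in new_traj:
        c = cell_of(s)
        if c not in archive or new_score > archive[c][0]:
            archive[c] = (new_score, trajectory + new_traj)

print(f"cells discovered: {len(archive)}")
```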
Neural networks made easy (Part 41): Hierarchical models
In this article, we will explore the application of hierarchical reinforcement learning in trading. We propose using this approach to create a hierarchical trading model that can make optimal decisions at different levels and adapt to changing market conditions.
In this article, we will consider the architecture of the hierarchical model, including the various levels of decision making, such as determining entry and exit points for trades. We also present training methods for the hierarchical model that combine reinforcement learning at the global and local levels.
Hierarchical learning makes it possible to model complex decision-making structures and to use knowledge effectively at different levels. This helps improve the model's generalization ability and its adaptability to changing market conditions.
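To make the idea of decision making at several levels more concrete, here is a minimal sketch of a two-level policy: a high-level network selects a trading regime, and a low-level network chooses the concrete entry/exit action conditioned on that regime. The layer sizes, the number of regimes, and the action set are illustrative assumptions, not the architecture used in the article.

```python
import torch
import torch.nn as nn

# A minimal sketch of a two-level (hierarchical) trading policy.
# Sizes and the "regime"/"action" split are illustrative assumptions.

class HighLevelPolicy(nn.Module):
    """Global level: looks at market features and selects a trading regime."""
    def __init__(self, state_dim=32, n_regimes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, n_regimes)
        )

    def forward(self, state):
        return torch.softmax(self.net(state), dim=-1)    # regime probabilities

class LowLevelPolicy(nn.Module):
    """Local level: given state and chosen regime, picks buy/sell/hold/close."""
    def __init__(self, state_dim=32, n_regimes=4, n_actions=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + n_regimes, 64), nn.ReLU(),
            nn.Linear(64, n_actions)
        )

    def forward(self, state, regime_one_hot):
        x = torch.cat([state, regime_one_hot], dim=-1)
        return torch.softmax(self.net(x), dim=-1)         # action probabilities

# Usage: the high-level policy commits to a regime, the low-level policy
# then decides concrete entry/exit actions conditioned on that regime.
state = torch.randn(1, 32)
high, low = HighLevelPolicy(), LowLevelPolicy()
regime = torch.distributions.Categorical(high(state)).sample()
regime_one_hot = torch.nn.functional.one_hot(regime, num_classes=4).float()
action_probs = low(state, regime_one_hot)
```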
Neural networks made easy (Part 42): Model procrastination, reasons and solutions
Neural networks made easy (Part 43): Mastering skills without the reward function
In this article, we introduce the "Diversity is All You Need" concept, which makes it possible to teach a model skills without an explicit reward function. Diversity of actions, exploration of the environment, and maximizing the variability of interactions with it are the key factors for training an agent to behave effectively.
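At the heart of this approach is an intrinsic reward computed from a skill discriminator: the agent is rewarded when the state it reaches makes its current skill easy to recognize, r = log q(z|s) − log p(z). The sketch below illustrates that pseudo-reward; the discriminator network and its sizes are assumptions made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# A sketch of the DIAYN-style intrinsic reward r = log q(z|s) - log p(z):
# the discriminator tries to recognize the active skill from the state,
# and the agent is rewarded for states that make its skill recognizable.
# The discriminator architecture here is an illustrative assumption.

n_skills, state_dim = 8, 32
discriminator = nn.Sequential(
    nn.Linear(state_dim, 64), nn.ReLU(),
    nn.Linear(64, n_skills)
)

def intrinsic_reward(state, skill_id):
    """Pseudo-reward for one (state, skill) pair; p(z) is uniform over skills."""
    log_q = F.log_softmax(discriminator(state), dim=-1)    # log q(z | s)
    log_p_z = -torch.log(torch.tensor(float(n_skills)))    # log of uniform prior
    return log_q[..., skill_id] - log_p_z

state = torch.randn(1, state_dim)
reward = intrinsic_reward(state, skill_id=3)
# The discriminator itself is trained with cross-entropy to predict the skill
# that actually generated the state, so the two objectives reinforce diversity.
```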
Neural networks made easy (Part 44): Learning skills with dynamics in mind
In the previous article, we got acquainted with the DIAYN method, which makes it possible to train separable skills. This allows us to build a model that can change the agent's behavior depending on the current state.
Within this paradigm, the question arises of how to learn skills whose behavior is easily predictable. At the same time, we are not ready to sacrifice the diversity of their behavior. This problem is addressed by the authors of the Dynamics-Aware Discovery of Skills (DADS) method, presented in 2020. Unlike DIAYN, DADS seeks to learn skills that are not only diverse in behavior, but also predictable.
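DADS expresses "diverse yet predictable" through a skill-conditioned dynamics model q(s'|s, z): the intrinsic reward is high when the next state is well predicted under the active skill but poorly predicted under skills sampled from the prior. The sketch below illustrates such a reward; the Gaussian dynamics model and the sampling scheme are simplifying assumptions, not the article's implementation.

```python
import torch
import torch.nn as nn

# A sketch of a DADS-style intrinsic reward built from a skill-conditioned
# dynamics model q(s'|s,z). The unit-variance Gaussian model below is a
# simplified assumption used only for illustration.

state_dim, skill_dim = 32, 4

class SkillDynamics(nn.Module):
    """Predicts the mean of the next state given the current state and a skill."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + skill_dim, 64), nn.ReLU(),
            nn.Linear(64, state_dim)
        )

    def log_prob(self, state, skill, next_state):
        mean = self.net(torch.cat([state, skill], dim=-1))
        # Unit-variance Gaussian log-density of the observed next state
        return torch.distributions.Normal(mean, 1.0).log_prob(next_state).sum(-1)

def dads_reward(model, state, skill, next_state, n_prior_samples=16):
    """r = log q(s'|s,z) - log( mean_i q(s'|s,z_i) ), with z_i from the prior."""
    log_q = model.log_prob(state, skill, next_state)
    prior_skills = torch.randn(n_prior_samples, skill_dim)   # assumed Gaussian prior
    log_q_alt = torch.stack([
        model.log_prob(state, z.expand_as(skill), next_state) for z in prior_skills
    ])
    log_mean_alt = torch.logsumexp(log_q_alt, dim=0) - torch.log(torch.tensor(float(n_prior_samples)))
    return log_q - log_mean_alt

model = SkillDynamics()
r = dads_reward(model, torch.randn(1, state_dim), torch.randn(1, skill_dim), torch.randn(1, state_dim))
```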
Neural networks made easy (Part 45): Training state exploration skills
In this article, I suggest getting acquainted with an alternative skill-learning method: Explore, Discover and Learn (EDL). EDL approaches the problem from a different angle, which allows it to overcome the issue of limited state coverage and offer more flexible and adaptive agent behavior.
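As the name suggests, EDL splits the task into three sequential stages: explore the environment to collect a broad set of states, discover skills by fitting a latent-variable model over those states, and learn skill policies against a reward derived from the frozen decoder. The outline below is a rough sketch of that pipeline under simplifying assumptions (a plain autoencoder without the KL term and a reconstruction-based reward), not the method's full implementation.

```python
import torch
import torch.nn as nn

# A hedged outline of the three EDL stages; the networks and the
# reconstruction-based reward are simplifying assumptions for illustration.

state_dim, skill_dim = 32, 4

encoder = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, skill_dim))
decoder = nn.Sequential(nn.Linear(skill_dim, 64), nn.ReLU(), nn.Linear(64, state_dim))

# Stage 1 (Explore): collect a broad set of states with an exploration policy.
explored_states = torch.randn(1024, state_dim)        # stand-in for collected data

# Stage 2 (Discover): fit the latent model so that the code z describes a skill
# (shown here as a plain autoencoder reconstruction loss, without the KL term).
recon = decoder(encoder(explored_states))
discovery_loss = ((recon - explored_states) ** 2).mean()

# Stage 3 (Learn): train a skill-conditioned policy with a reward that measures
# how well the (now frozen) decoder explains the visited state for skill z.
def skill_reward(state, skill):
    with torch.no_grad():
        predicted_state = decoder(skill)
    return -((predicted_state - state) ** 2).sum(-1)   # higher when state matches skill
```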