File:Challenges and Tricks of Deep RL.jpg

From Wikimedia Commons, the free media repository

Original file(1,215 × 1,093 pixels, file size: 90 KB, MIME type: image/jpeg)


Summary

Description
English: This figure lists 11 popular tricks for deep reinforcement learning algorithms and the challenges each trick mainly addresses. The tricks are experience replay (ExR), parallel exploration (PEx), separated target network (STN), delayed policy update (DPU), constrained policy update (CPU), clipped actor criterion (CAC), double Q-functions (DQF), bounded double Q-functions (BDQ), distributional return function (DRF), entropy regularization (EnR), and soft value function (SVF).
Date:
Source: Own work
Author: Tsesea
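To illustrate one of the tricks named in the figure, here is a minimal sketch of an experience replay (ExR) buffer. This is a hypothetical illustration for context, not the implementation behind the figure; the class and parameter names are assumptions.

```python
# Minimal sketch of experience replay (ExR), one trick listed in the figure.
# Hypothetical example; names and capacity are illustrative assumptions.
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size buffer that stores transitions and samples them uniformly,
    breaking the temporal correlation between consecutive environment steps."""

    def __init__(self, capacity=10000):
        # deque with maxlen evicts the oldest transition once full
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        """Store one transition tuple."""
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        """Draw a uniform random minibatch of stored transitions."""
        batch = random.sample(self.buffer, batch_size)
        states, actions, rewards, next_states, dones = zip(*batch)
        return states, actions, rewards, next_states, dones

    def __len__(self):
        return len(self.buffer)
```

Sampling uniformly from a large buffer of past transitions is what lets value-based methods such as DQN reuse data and train on roughly independent minibatches.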

Licensing

I, the copyright holder of this work, hereby publish it under the following license:
This file is licensed under the Creative Commons Attribution-Share Alike 4.0 International license.
You are free:
  • to share – to copy, distribute and transmit the work
  • to remix – to adapt the work
Under the following conditions:
  • attribution – You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
  • share alike – If you remix, transform, or build upon the material, you must distribute your contributions under the same or compatible license as the original.

File history

Click on a date/time to view the file as it appeared at that time.

Date/Time: 00:50, 6 December 2023 (current version)
Dimensions: 1,215 × 1,093 (90 KB)
User: Tsesea (talk | contribs)
Comment: Uploaded while editing "Deep reinforcement learning" on en.wikipedia.org

There are no pages that use this file.