Temporal abstraction allows reinforcement learning agents to represent knowledge and develop strategies over different temporal scales. The option-critic framework has been demonstrated to learn temporally extended actions, represented as options, end-to-end in a model-free setting. However, the practicality of option-critic remains limited by two major challenges: multiple options adopting very similar behavior, or a shrinking set of options that remain relevant to the task. These degeneracies not only void the need for temporal abstraction, they also hurt performance. In this paper, we tackle these problems by learning a *diverse set of options* online. We introduce an information-theoretic intrinsic reward, which augments the task reward, as well as a novel termination objective, in order to encourage diversity in the option set. We show empirically that our proposed method achieves state-of-the-art performance in learning options end-to-end on several discrete and continuous control tasks, outperforming option-critic by a wide margin. Furthermore, we show that our approach consistently generates robust, reusable, reliable, and interpretable options, in contrast to option-critic.
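For reference, the sketch below illustrates the kind of information-theoretic diversity bonus the abstract describes. It is a minimal, hypothetical example rather than the paper's exact objective: it augments the task reward with the mean pairwise Jensen-Shannon divergence between the options' action distributions at the current state, so options that behave alike earn a smaller bonus. The function names and the weighting coefficient `beta` are illustrative assumptions.

```python
# Minimal sketch of a diversity-enriched reward (illustrative, not the paper's exact objective).
# Assumption: each option's policy yields a discrete action distribution at the current state.
import numpy as np


def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions."""
    p = np.asarray(p, dtype=np.float64) + eps
    q = np.asarray(q, dtype=np.float64) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)


def diversity_bonus(action_dists):
    """Mean pairwise JS divergence across all options' action distributions."""
    n = len(action_dists)
    if n < 2:
        return 0.0
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            total += js_divergence(action_dists[i], action_dists[j])
            pairs += 1
    return total / pairs


def augmented_reward(task_reward, action_dists, beta=0.05):
    """Task reward plus a weighted diversity bonus (beta is a hypothetical coefficient)."""
    return task_reward + beta * diversity_bonus(action_dists)


# Usage: two nearly identical options and one distinct option over 4 actions.
dists = [
    [0.70, 0.10, 0.10, 0.10],
    [0.68, 0.12, 0.10, 0.10],
    [0.10, 0.10, 0.10, 0.70],
]
print(augmented_reward(task_reward=1.0, action_dists=dists))
```

In this toy setting, a set of near-duplicate options yields a small bonus while a behaviorally distinct set yields a larger one, which is the qualitative effect the intrinsic reward is meant to have.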
anandkamat05/TDEOC
Original code for the paper "Diversity Enriched Option-Critic".