
Long-Term Visitation Value for Deep Exploration in Sparse-Reward Reinforcement Learning

Parisi, Simone ; Tateo, Davide ; Hensel, Maximilian ; D’Eramo, Carlo ; Peters, Jan ; Pajarinen, Joni (2022)
Long-Term Visitation Value for Deep Exploration in Sparse-Reward Reinforcement Learning.
In: Algorithms, 2022, 15 (3)
doi: 10.26083/tuprints-00021017
Article, Secondary publication, Publisher's Version

algorithms-15-00081-v3.pdf
Copyright Information: CC BY 4.0 International - Creative Commons, Attribution.

Item Type: Article
Type of entry: Secondary publication
Title: Long-Term Visitation Value for Deep Exploration in Sparse-Reward Reinforcement Learning
Language: English
Date: 11 April 2022
Place of Publication: Darmstadt
Year of primary publication: 2022
Publisher: MDPI
Journal or Publication Title: Algorithms
Volume of the journal: 15
Issue Number: 3
Collation: 44 pages
DOI: 10.26083/tuprints-00021017
Corresponding Links:
Origin: Secondary publication DeepGreen
Abstract:

Reinforcement learning with sparse rewards is still an open challenge. Classic methods rely on feedback from extrinsic rewards to train the agent, and in situations where such feedback occurs very rarely the agent learns slowly or cannot learn at all. Similarly, if the agent also receives rewards that create suboptimal modes of the objective function, it is likely to stop exploring prematurely. More recent methods add auxiliary intrinsic rewards to encourage exploration. However, auxiliary rewards lead to a non-stationary target for the Q-function. In this paper, we present a novel approach that (1) plans exploration actions far into the future by using a long-term visitation count, and (2) decouples exploration and exploitation by learning a separate function assessing the exploration value of the actions. Contrary to existing methods that use models of reward and dynamics, our approach is off-policy and model-free. We further propose new tabular environments for benchmarking exploration in reinforcement learning. Empirical results on classic and novel benchmarks show that the proposed approach outperforms existing methods in environments with sparse rewards, especially in the presence of rewards that create suboptimal modes of the objective function. Results also suggest that our approach scales gracefully with the size of the environment.
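
The core idea described in the abstract, a separate off-policy exploration value learned from long-term visitation counts, can be illustrated with a minimal tabular sketch. The code below is not the published implementation: the environment interface (env.reset(), env.step(a) returning next state, reward, done flag), the count-based bonus beta / sqrt(N), and the greedy-on-W behavior policy are illustrative assumptions chosen only to show how an exploration value W can be bootstrapped alongside the usual Q-function.

    import numpy as np

    def train(env, n_states, n_actions, episodes=500, horizon=100,
              gamma=0.99, gamma_w=0.99, alpha=0.1, beta=1.0, seed=0):
        """Tabular Q-learning plus a separate visitation value W (illustrative sketch)."""
        rng = np.random.default_rng(seed)
        Q = np.zeros((n_states, n_actions))  # exploitation value (extrinsic reward)
        W = np.zeros((n_states, n_actions))  # exploration value (long-term visitation)
        N = np.zeros((n_states, n_actions))  # state-action visitation counts

        for _ in range(episodes):
            s = env.reset()
            for _ in range(horizon):
                # Behavior policy: act greedily on the exploration value W
                # (decouples exploration from exploitation; Q is learned off-policy).
                a = int(np.argmax(W[s] + 1e-8 * rng.random(n_actions)))  # random tie-break
                s_next, r, done = env.step(a)  # assumed interface, not from the paper
                N[s, a] += 1

                # Standard off-policy Q-learning target on the extrinsic reward.
                q_target = r + gamma * (0.0 if done else np.max(Q[s_next]))
                Q[s, a] += alpha * (q_target - Q[s, a])

                # Count-based exploration bonus, bootstrapped through W so that the
                # value of reaching rarely visited regions propagates many steps back.
                bonus = beta / np.sqrt(N[s, a])
                w_target = bonus + gamma_w * (0.0 if done else np.max(W[s_next]))
                W[s, a] += alpha * (w_target - W[s, a])

                if done:
                    break
                s = s_next
        return Q, W

Because the bonus is bootstrapped through W, a state-action pair that leads toward rarely visited regions keeps a high exploration value even if it has itself been visited often, which is what allows exploration to be planned many steps ahead rather than only one step at a time.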

Uncontrolled Keywords: reinforcement learning, sparse reward, exploration, upper confidence bound, off-policy
Status: Publisher's Version
URN: urn:nbn:de:tuda-tuprints-210175
Classification DDC: 000 Generalities, computers, information > 004 Computer science
600 Technology, medicine, applied sciences > 620 Engineering and machine engineering
Divisions: 20 Department of Computer Science > Intelligent Autonomous Systems
Date Deposited: 11 Apr 2022 11:11
Last Modified: 14 Nov 2023 19:04
SWORD Depositor: Deep Green
URI: https://tuprints.ulb.tu-darmstadt.de/id/eprint/21017
PPN: 500751579