
Correlation Priors for Reinforcement Learning

Alt, Bastian ; Šošić, Adrian ; Koeppl, Heinz
eds.: Wallach, H. ; Larochelle, H. ; Beygelzimer, A. ; d'Alché-Buc, F. ; Fox, E. ; Garnett, R. (2025)
Correlation Priors for Reinforcement Learning.
33rd Conference on Neural Information Processing Systems (NeurIPS 2019). Vancouver, Canada (08.12.2019 - 14.12.2019)
doi: 10.26083/tuprints-00028993
Conference or Workshop Item, Secondary publication, Publisher's Version

Text: NeurIPS-2019-correlation-priors-for-reinforcement-learning-Paper.pdf
Copyright Information: CC BY 4.0 International - Creative Commons, Attribution.
Download (2MB)

Text (Supplement): Correlation_Priors_for_Reinforcement_Learning_Camera_Ready_Supplement.pdf
Copyright Information: CC BY 4.0 International - Creative Commons, Attribution.
Download (826kB)
Item Type: Conference or Workshop Item
Type of entry: Secondary publication
Title: Correlation Priors for Reinforcement Learning
Language: English
Date: 15 January 2025
Place of Publication: Darmstadt
Year of primary publication: 2019
Place of primary publication: San Diego, CA
Publisher: NeurIPS
Book Title: Advances in Neural Information Processing Systems 32
Collation: 11 Seiten
Event Title: 33rd Conference on Neural Information Processing Systems (NeurIPS 2019)
Event Location: Vancouver, Canada
Event Dates: 08.12.2019 - 14.12.2019
DOI: 10.26083/tuprints-00028993
Origin: Secondary publication service
Abstract:

Many decision-making problems naturally exhibit pronounced structures inherited from the characteristics of the underlying environment. In a Markov decision process model, for example, two distinct states can have inherently related semantics or encode resembling physical state configurations. This often implies locally correlated transition dynamics among the states. To complete a certain task in such environments, the operating agent usually needs to execute a series of temporally and spatially correlated actions. Though a variety of approaches exists to capture these correlations in continuous state-action domains, a principled solution for discrete environments is missing. In this work, we present a Bayesian learning framework based on Pólya-Gamma augmentation that enables analogous reasoning in such cases. We demonstrate the framework on a number of common decision-making-related problems, such as imitation learning, subgoal extraction, system identification and Bayesian reinforcement learning. By explicitly modeling the underlying correlation structures of these problems, the proposed approach yields superior predictive performance compared to correlation-agnostic models, even when trained on data sets that are an order of magnitude smaller.
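
The core idea sketched in the abstract — correlated latent variables mapped through a link function to obtain transition distributions over a discrete state space — can be illustrated with a small prior-sampling example. The sketch below is an assumption-laden illustration, not the paper's exact model: it assumes a one-dimensional embedding of the states, a squared-exponential kernel, and a logistic stick-breaking link, and it only draws from the prior (the Pólya-Gamma augmentation that makes posterior inference tractable in the paper is omitted here). The function name sample_correlated_transitions is hypothetical.

import numpy as np
from scipy.special import expit  # logistic sigmoid

def se_kernel(xs, length_scale=1.0, variance=1.0):
    # Squared-exponential kernel over a (toy) 1-D embedding of the states.
    d = xs[:, None] - xs[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def sample_correlated_transitions(n_states, length_scale=2.0, seed=0):
    # Draw one transition matrix from a correlation prior: Gaussian logits
    # that vary smoothly across states, mapped to the probability simplex
    # by logistic stick-breaking.
    rng = np.random.default_rng(seed)
    xs = np.arange(n_states, dtype=float)   # assumed 1-D state embedding
    K = se_kernel(xs, length_scale) + 1e-6 * np.eye(n_states)
    L = np.linalg.cholesky(K)
    # Each column F[:, k] ~ N(0, K) is the k-th stick logit; it is smooth
    # across current states, so nearby states get similar dynamics.
    F = L @ rng.standard_normal((n_states, n_states - 1))
    P = np.empty((n_states, n_states))
    remaining = np.ones(n_states)
    for k in range(n_states - 1):
        v = expit(F[:, k])                  # stick fraction via logistic link
        P[:, k] = remaining * v
        remaining *= 1.0 - v
    P[:, -1] = remaining                    # leftover mass to the last state
    return P

P = sample_correlated_transitions(20)
assert np.allclose(P.sum(axis=1), 1.0)      # rows are valid distributions
# Rows of nearby states are far more similar than rows of distant states:
print(np.abs(P[0] - P[1]).sum(), np.abs(P[0] - P[10]).sum())

Drawing several such matrices and comparing rows shows the intended effect: nearby states place their transition mass on similar successors, whereas a correlation-agnostic prior (e.g., independent Dirichlet rows) would treat every state in isolation.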

Status: Publisher's Version
URN: urn:nbn:de:tuda-tuprints-289935
Additional Information:

Part of Advances in Neural Information Processing Systems 32 (NeurIPS 2019)

Classification DDC: 500 Science and mathematics > 570 Life sciences, biology
600 Technology, medicine, applied sciences > 621.3 Electrical engineering, electronics
Divisions: 18 Department of Electrical Engineering and Information Technology > Institute for Telecommunications > Bioinspired Communication Systems
18 Department of Electrical Engineering and Information Technology > Self-Organizing Systems Lab
Date Deposited: 15 Jan 2025 09:27
Last Modified: 15 Jan 2025 09:27
URI: https://tuprints.ulb.tu-darmstadt.de/id/eprint/28993