POMDPs in Continuous Time and Discrete Spaces

Alt, Bastian ; Schultheis, Matthias ; Koeppl, Heinz (2023)
POMDPs in Continuous Time and Discrete Spaces.
34th Conference on Neural Information Processing Systems. virtual (06.-12.12.2020)
doi: 10.26083/tuprints-00023309
Conference or Workshop Item, Secondary publication, Publisher's Version

Text: NeurIPS-2020-pomdps-in-continuous-time-and-discrete-spaces.pdf
Copyright Information: CC BY 4.0 International - Creative Commons, Attribution.

Item Type: Conference or Workshop Item
Type of entry: Secondary publication
Title: POMDPs in Continuous Time and Discrete Spaces
Language: English
Date: 2023
Place of Publication: Darmstadt
Year of primary publication: 2020
Publisher: Neural Information Processing Systems Foundation, Inc. (NeurIPS)
Book Title: Advances in Neural Information Processing Systems (NeurIPS 2020)
Series Volume: 33
Collation: 21 pages
Event Title: 34th Conference on Neural Information Processing Systems
Event Location: virtual
Event Dates: 06.-12.12.2020
DOI: 10.26083/tuprints-00023309
Origin: Secondary publication service
Abstract:

Many processes, such as discrete event systems in engineering or population dynamics in biology, evolve in discrete space and continuous time. We consider the problem of optimal decision making in such discrete state and action space systems under partial observability. This places our work at the intersection of optimal filtering and optimal control. At the current state of research, a mathematical description of simultaneous decision making and filtering in continuous time with finite state and action spaces is still missing. In this paper, we give a mathematical description of a continuous-time partially observable Markov decision process (POMDP). By leveraging optimal filtering theory, we derive a Hamilton-Jacobi-Bellman (HJB) type equation that characterizes the optimal solution. Using techniques from deep learning, we approximately solve the resulting partial integro-differential equation. We present (i) an approach that solves the decision problem offline by learning an approximation of the value function and (ii) an online algorithm that provides a solution in belief space using deep reinforcement learning. We demonstrate the applicability of our approach on a set of toy examples, which pave the way for future methods providing solutions to high-dimensional problems.
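The abstract describes combining optimal filtering with control in continuous time over discrete states. As an illustration of the filtering side, the following is a minimal sketch (not the paper's implementation) of belief propagation for a continuous-time Markov chain with noisy discrete observations: between observations the belief follows the master equation, and an arriving observation triggers a Bayesian update. The rate matrix, likelihoods, and step size below are illustrative assumptions.

```python
import numpy as np

def propagate_belief(belief, Q, dt):
    """Drift of the belief between observations: an Euler step of the
    master equation d(pi)/dt = Q^T pi, followed by renormalization."""
    belief = belief + dt * (Q.T @ belief)
    belief = np.clip(belief, 0.0, None)
    return belief / belief.sum()

def update_belief(belief, likelihood):
    """Bayesian correction when an observation y arrives:
    pi_i <- L(y | state i) * pi_i / normalizer."""
    posterior = likelihood * belief
    return posterior / posterior.sum()

# Two-state toy example; rates chosen arbitrarily for illustration.
Q = np.array([[-1.0,  1.0],
              [ 2.0, -2.0]])
belief = np.array([0.5, 0.5])

# Propagate the belief for one time unit in small steps, then fold in
# an observation favoring state 0 with likelihoods L(y|0)=0.9, L(y|1)=0.2.
for _ in range(1000):
    belief = propagate_belief(belief, Q, dt=1e-3)
belief = update_belief(belief, np.array([0.9, 0.2]))
print(belief)  # posterior belief over the two states, sums to 1
```

In the paper's setting, a value function defined over such beliefs is what the HJB-type equation characterizes; the deep-learning components approximate that function rather than the filter itself.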

Status: Publisher's Version
URN: urn:nbn:de:tuda-tuprints-233092
Classification DDC: 000 Generalities, computers, information > 004 Computer science
500 Science and mathematics > 510 Mathematics
Divisions: 18 Department of Electrical Engineering and Information Technology > Institute for Telecommunications > Bioinspired Communication Systems
Central Facilities > Centre for Cognitive Science (CCS)
DFG-Collaborative Research Centres (incl. Transregio) > Collaborative Research Centres > CRC 1053: MAKI – Multi-Mechanisms Adaptation for the Future Internet > B: Adaptation Mechanisms > Subproject B4: Planning
Date Deposited: 31 Mar 2023 08:31
Last Modified: 25 May 2023 12:54
URI: https://tuprints.ulb.tu-darmstadt.de/id/eprint/23309
PPN: 507478398