Learning Decentralized Partially Observable Mean Field Control for Artificial Collective Behavior

Cui, Kai; Hauck, Sascha; Fabian, Christian; Koeppl, Heinz (2024)
Learning Decentralized Partially Observable Mean Field Control for Artificial Collective Behavior.
International Conference on Learning Representations. Vienna, Austria (07.05.2024 - 11.05.2024)
doi: 10.26083/tuprints-00028689
Conference or Workshop Item, Secondary publication, Publisher's Version

Text: Cui_et_al_2024_Learning_Decentralized_Partially_Observable_Mean_Field_Control_for_Artificial_Collective_Behavior.pdf
Copyright Information: CC BY 4.0 International - Creative Commons, Attribution
Download (7 MB)
Item Type: Conference or Workshop Item
Type of entry: Secondary publication
Title: Learning Decentralized Partially Observable Mean Field Control for Artificial Collective Behavior
Language: English
Date: 25 November 2024
Place of Publication: Darmstadt
Date of primary publication: 8 May 2024
Publisher: ICLR
Book Title: ICLR 2024: The Twelfth International Conference on Learning Representations
Collation: 40 pages
Event Title: International Conference on Learning Representations
Event Location: Vienna, Austria
Event Dates: 07.05.2024 - 11.05.2024
DOI: 10.26083/tuprints-00028689
Origin: Secondary publication service
Abstract:

Recent reinforcement learning (RL) methods have achieved success in various domains. However, multi-agent RL (MARL) remains a challenge in terms of decentralization, partial observability, and scalability to many agents. Meanwhile, collective behavior requires resolving exactly these challenges and remains important for many state-of-the-art applications such as active matter physics, self-organizing systems, opinion dynamics, and biological or robotic swarms. Here, MARL via mean field control (MFC) offers a potential solution to scalability, but fails to consider decentralized and partially observable systems. In this paper, we enable decentralized behavior of agents under partial information by proposing novel models for decentralized partially observable MFC (Dec-POMFC), a broad class of problems with permutation-invariant agents that allows reduction to a tractable single-agent Markov decision process (MDP) solvable by single-agent RL. We provide rigorous theoretical results, including a dynamic programming principle, together with optimality guarantees for Dec-POMFC solutions applied to finite swarms of interest. Algorithmically, we propose Dec-POMFC-based policy gradient methods for MARL via centralized training and decentralized execution, together with policy gradient approximation guarantees. In addition, we improve upon state-of-the-art histogram-based MFC by kernel methods, which is also of separate interest for fully observable MFC. We evaluate numerically on representative collective behavior tasks such as adapted Kuramoto and Vicsek swarming models, and find our method on par with state-of-the-art MARL. Overall, our framework takes a step towards RL-based engineering of artificial collective behavior via MFC.
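To make the kernel-based mean field estimation mentioned in the abstract concrete, the following is a minimal Python sketch, not the authors' implementation: it contrasts a histogram estimate of the empirical agent state distribution with a Gaussian-kernel (KDE-style) estimate. All function names, the bandwidth, the grid size, and the beta-distributed toy agent states are illustrative assumptions.

import numpy as np

def histogram_mean_field(states, n_bins=20, low=0.0, high=1.0):
    # Histogram estimate of the empirical agent state distribution on [low, high].
    counts, _ = np.histogram(states, bins=n_bins, range=(low, high))
    return counts / len(states)  # probability mass per bin

def kernel_mean_field(states, grid, bandwidth=0.05):
    # Gaussian-kernel estimate evaluated on a fixed grid and renormalized
    # to a probability vector; smoother than the fixed-bin histogram.
    diffs = grid[:, None] - states[None, :]            # grid-to-agent differences
    weights = np.exp(-0.5 * (diffs / bandwidth) ** 2)  # Gaussian kernel weights
    density = weights.sum(axis=1)
    return density / density.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    agent_states = rng.beta(2, 5, size=200)            # 200 agents, scalar states in [0, 1]
    grid = np.linspace(0.0, 1.0, 20)
    print(histogram_mean_field(agent_states).round(3))
    print(kernel_mean_field(agent_states, grid).round(3))

The kernel estimate varies smoothly with the underlying agent states, which is the usual motivation for preferring kernels over fixed-bin histograms when the estimated mean field is fed into a learned policy.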

Status: Publisher's Version
URN: urn:nbn:de:tuda-tuprints-286893
Classification DDC: 000 Generalities, computers, information > 004 Computer science
600 Technology, medicine, applied sciences > 621.3 Electrical engineering, electronics
Divisions: 18 Department of Electrical Engineering and Information Technology > Institute for Telecommunications > Bioinspired Communication Systems
18 Department of Electrical Engineering and Information Technology > Self-Organizing Systems Lab
Date Deposited: 25 Nov 2024 10:49
Last Modified: 26 Nov 2024 15:01
URI: https://tuprints.ulb.tu-darmstadt.de/id/eprint/28689
PPN: 524107386