
Discrete-Time Mean Field Control with Environment States

Cui, Kai; Tahir, Anam; Sinzger, Mark; Koeppl, Heinz (2024)
Discrete-Time Mean Field Control with Environment States.
2021 60th IEEE Conference on Decision and Control (CDC). Austin, TX, USA (14.12.2021-17.12.2021)
doi: 10.26083/tuprints-00028856
Conference or Workshop Item, Secondary publication, Postprint

File: Cui_et_al_2021_Discrete-Time_Mean_Field_Control_with_Environment_States.pdf (2 MB)
Copyright Information: In Copyright.
Item Type: Conference or Workshop Item
Type of entry: Secondary publication
Title: Discrete-Time Mean Field Control with Environment States
Language: English
Date: 17 December 2024
Place of Publication: Darmstadt
Year of primary publication: 2021
Place of primary publication: New York, NY
Publisher: IEEE
Book Title: 2021 60th IEEE Conference on Decision and Control (CDC)
Collation: 8 pages
Event Title: 2021 60th IEEE Conference on Decision and Control (CDC)
Event Location: Austin, TX, USA
Event Dates: 14.12.2021-17.12.2021
DOI: 10.26083/tuprints-00028856
Origin: Secondary publication service
Abstract:

Multi-agent reinforcement learning methods have shown remarkable potential in solving complex multi-agent problems, but they mostly lack theoretical guarantees. Recently, mean field control and mean field games have been established as tractable solutions for large-scale multi-agent problems with many agents. In this work, driven by a motivating scheduling problem, we consider a discrete-time mean field control model with common environment states. We rigorously establish approximate optimality in the finite-agent case as the number of agents grows, and we show that a dynamic programming principle holds, implying the existence of an optimal stationary policy. Since exact solutions are difficult in general due to the continuous action space of the limiting mean field Markov decision process, we apply established deep reinforcement learning methods to solve the associated mean field control problem. The performance of the learned mean field control policy is compared to typical multi-agent reinforcement learning approaches and is found to converge to the mean field performance for sufficiently many agents, verifying the theoretical results and reaching competitive solutions.
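
To make the model described in the abstract concrete, the following minimal Python sketch illustrates one step of a limiting mean field MDP with a common environment state. All names, dimensions, dynamics, and rewards below are illustrative assumptions, not the paper's actual model or code: the MDP state is the pair (environment state e, mean field mu), and the MDP action is a decision rule h assigning each agent state a distribution over agent actions, which is why the limiting action space is continuous.

import numpy as np

N_STATES, N_ACTIONS, N_ENV = 3, 2, 2  # illustrative problem sizes
rng = np.random.default_rng(0)

# Representative-agent transition kernel P[e, s, a, s'], which may
# depend on the common environment state e (random, for illustration).
P = rng.random((N_ENV, N_STATES, N_ACTIONS, N_STATES))
P /= P.sum(axis=-1, keepdims=True)

# Common environment state dynamics Q[e, e'] (illustrative).
Q = rng.random((N_ENV, N_ENV))
Q /= Q.sum(axis=-1, keepdims=True)

# Per-agent reward r[e, s, a] (illustrative).
r = rng.random((N_ENV, N_STATES, N_ACTIONS))

def step(e, mu, h, rng):
    """One transition of the limiting mean field MDP.

    e  : common environment state (int)
    mu : mean field, a distribution over agent states, shape (N_STATES,)
    h  : decision rule h[s, a], a distribution over actions for each
         agent state; this is the continuous "action" of the MDP.
    """
    # Mean reward over agents: sum_{s,a} mu(s) h(a|s) r(e, s, a)
    reward = float(np.einsum("s,sa,sa->", mu, h, r[e]))
    # Deterministic mean field update:
    # mu'(s') = sum_{s,a} mu(s) h(a|s) P(s' | e, s, a)
    mu_next = np.einsum("s,sa,sat->t", mu, h, P[e])
    # Environment state evolves stochastically (here independent of mu).
    e_next = int(rng.choice(N_ENV, p=Q[e]))
    return e_next, mu_next, reward

# Usage: roll out a uniform stationary decision rule for a few steps.
e, mu = 0, np.full(N_STATES, 1.0 / N_STATES)
h = np.full((N_STATES, N_ACTIONS), 1.0 / N_ACTIONS)
ret = 0.0
for _ in range(5):
    e, mu, rew = step(e, mu, h, rng)
    ret += rew
print(f"undiscounted return over 5 steps: {ret:.3f}")

In this form, a continuous-action deep reinforcement learning method can in principle be applied directly, since a policy only has to map the pair (e, mu) to a decision rule h.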

Uncontrolled Keywords: Manifolds, Limiting, Conferences, Process control, Reinforcement learning, Games, Aerospace electronics
Status: Postprint
URN: urn:nbn:de:tuda-tuprints-288560
Classification DDC: 000 Generalities, computers, information > 004 Computer science
600 Technology, medicine, applied sciences > 621.3 Electrical engineering, electronics
Divisions: 18 Department of Electrical Engineering and Information Technology > Institute for Telecommunications > Bioinspired Communication Systems
18 Department of Electrical Engineering and Information Technology > Self-Organizing Systems Lab
Date Deposited: 17 Dec 2024 09:53
Last Modified: 17 Dec 2024 09:54
URI: https://tuprints.ulb.tu-darmstadt.de/id/eprint/28856