Cui, Kai; Koeppl, Heinz (2025)
Learning Graphon Mean Field Games and Approximate Nash Equilibria.
The Tenth International Conference on Learning Representations. Virtual Conference (25.04.2022 - 25.04.2022)
doi: 10.26083/tuprints-00028932
Conference or Workshop Item, Secondary publication, Publisher's Version
Text: 1361_learning_graphon_mean_field_ga.pdf (3MB). Copyright Information: CC BY 4.0 International - Creative Commons, Attribution.
Item Type: | Conference or Workshop Item |
---|---|
Type of entry: | Secondary publication |
Title: | Learning Graphon Mean Field Games and Approximate Nash Equilibria |
Language: | English |
Date: | 15 January 2025 |
Place of Publication: | Darmstadt |
Date of primary publication: | 28 January 2022 |
Place of primary publication: | Appleton, WI |
Publisher: | ICLR |
Book Title: | The Tenth International Conference on Learning Representations |
Collation: | 31 pages |
Event Title: | The Tenth International Conference on Learning Representations |
Event Location: | Virtual Conference |
Event Dates: | 25.04.2022 - 25.04.2022 |
DOI: | 10.26083/tuprints-00028932 |
Origin: | Secondary publication service |
Abstract: | Recent advances at the intersection of dense large graph limits and mean field games have begun to enable the scalable analysis of a broad class of dynamical sequential games with large numbers of agents. So far, results have been largely limited to graphon mean field systems with continuous-time diffusive or jump dynamics, typically without control and with little focus on computational methods. We propose a novel discrete-time formulation for graphon mean field games as the limit of non-linear dense graph Markov games with weak interaction. On the theoretical side, we give extensive and rigorous existence and approximation properties of the graphon mean field solution in sufficiently large systems. On the practical side, we provide general learning schemes for graphon mean field equilibria by either introducing agent equivalence classes or reformulating the graphon mean field system as a classical mean field system. By repeatedly finding a regularized optimal control solution and its generated mean field, we successfully obtain plausible approximate Nash equilibria in otherwise infeasible large dense graph games with many agents. Empirically, we demonstrate on a number of examples that the finite-agent behavior comes increasingly close to the mean field behavior for our computed equilibria as the graph or system size grows, verifying our theory. More generally, we successfully apply policy gradient reinforcement learning in conjunction with sequential Monte Carlo methods. (A minimal illustrative sketch of this fixed-point scheme appears after the record below.) |
Uncontrolled Keywords: | Mean Field Games, Reinforcement Learning, Multi-Agent Systems |
Status: | Publisher's Version |
URN: | urn:nbn:de:tuda-tuprints-289328 |
Additional Information: | Supplements available via "Identisches Werk" (identical work). |
Classification DDC: | 000 Generalities, computers, information > 004 Computer science; 600 Technology, medicine, applied sciences > 621.3 Electrical engineering, electronics |
Divisions: | 18 Department of Electrical Engineering and Information Technology > Institute for Telecommunications > Bioinspired Communication Systems; 18 Department of Electrical Engineering and Information Technology > Self-Organizing Systems Lab |
Date Deposited: | 15 Jan 2025 09:08 |
Last Modified: | 15 Jan 2025 09:08 |
URI: | https://tuprints.ulb.tu-darmstadt.de/id/eprint/28932 |
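The abstract above describes a fixed-point loop: repeatedly compute a regularized optimal-control (best-response) solution against the current mean field, then propagate the mean field that policy generates. As a rough illustration only, here is a minimal NumPy sketch of such a loop on an assumed toy model with K graphon equivalence classes, two states, and two actions; the graphon kernel W, the reward, the transition kernel, and all parameters below are hypothetical placeholders, not the paper's model or code.

```python
import numpy as np

# Toy model sizes and solver knobs (all illustrative assumptions).
K, S, A, T = 8, 2, 2, 15            # graphon classes, states, actions, horizon
eta, damp, sweeps = 5.0, 0.5, 100   # inverse temperature, damping, iterations

alphas = (np.arange(K) + 0.5) / K                    # class midpoints in [0, 1]
W = 1.0 - np.abs(alphas[:, None] - alphas[None, :])  # example graphon kernel
G = W / K                                            # discretized integral weights

# Controlled transition kernel P[s, a, s']: action 1 pushes toward state 1,
# action 0 toward state 0 (hypothetical dynamics, independent of the field).
P = np.zeros((S, A, S))
P[:, 0, 0], P[:, 0, 1] = 0.9, 0.1
P[:, 1, 0], P[:, 1, 1] = 0.1, 0.9

def reward(s, a, nu):
    # Anti-coordination: state 1 pays less as the neighborhood crowds it;
    # action 1 carries a small switching cost.
    return s * (1.0 - nu) - 0.1 * a

pi = np.full((T, K, S, A), 1.0 / A)   # start from uniform policies
mu0 = np.tile([1.0, 0.0], (K, 1))     # every class starts in state 0

for _ in range(sweeps):
    # Forward pass: mean field generated by the current policy.
    mu = np.zeros((T + 1, K, S))
    mu[0] = mu0
    for t in range(T):
        Pbar = np.einsum('ksa,sap->ksp', pi[t], P)    # policy-averaged kernel
        mu[t + 1] = np.einsum('ks,ksp->kp', mu[t], Pbar)
    nu = mu[:, :, 1] @ G.T    # nu[t, k]: graphon-weighted neighbors in state 1

    # Backward pass: softmax-regularized best response against nu.
    V = np.zeros((K, S))
    new_pi = np.empty_like(pi)
    for t in reversed(range(T)):
        r = np.array([[[reward(s, a, nu[t, k]) for a in range(A)]
                       for s in range(S)] for k in range(K)])
        Q = r + np.einsum('sap,kp->ksa', P, V)        # soft Bellman backup
        m = Q.max(-1, keepdims=True)
        e = np.exp(eta * (Q - m))
        V = m[..., 0] + np.log(e.sum(-1)) / eta       # log-sum-exp soft value
        new_pi[t] = e / e.sum(-1, keepdims=True)      # softmax policy
    pi = (1 - damp) * pi + damp * new_pi              # damped fixed-point update

print("terminal neighborhood fields per class:", np.round(nu[-1], 3))
```

The damped policy update is one common way to stabilize such fixed-point iterations; the paper itself, per the abstract, solves the regularized control step with policy gradient reinforcement learning in conjunction with sequential Monte Carlo methods rather than the exact dynamic programming used in this sketch.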