Blüml, Jannis; Czech, Johannes; Kersting, Kristian (2023)
AlphaZe∗∗: AlphaZero-like baselines for imperfect information games are surprisingly strong.
In: Frontiers in Artificial Intelligence, 2023, 6
doi: 10.26083/tuprints-00024064
Article, Secondary publication, Publisher's Version
Text: frai-06-1014561.pdf (Copyright Information: CC BY 4.0 International - Creative Commons, Attribution)
Item Type: | Article |
---|---|
Type of entry: | Secondary publication |
Title: | AlphaZe∗∗: AlphaZero-like baselines for imperfect information games are surprisingly strong |
Language: | English |
Date: | 26 May 2023 |
Place of Publication: | Darmstadt |
Year of primary publication: | 2023 |
Publisher: | Frontiers Media S.A. |
Journal or Publication Title: | Frontiers in Artificial Intelligence |
Volume of the journal: | 6 |
Collation: | 18 pages |
DOI: | 10.26083/tuprints-00024064 |
Corresponding Links: | |
Origin: | Secondary publication DeepGreen |
Abstract: | In recent years, deep neural networks for strategy games have made significant progress. AlphaZero-like frameworks which combine Monte-Carlo tree search with reinforcement learning have been successfully applied to numerous games with perfect information. However, they have not been developed for domains where uncertainty and unknowns abound, and are therefore often considered unsuitable due to imperfect observations. Here, we challenge this view and argue that they are a viable alternative for games with imperfect information — a domain currently dominated by heuristic approaches or methods explicitly designed for hidden information, such as oracle-based techniques. To this end, we introduce a novel algorithm based solely on reinforcement learning, called AlphaZe∗∗, which is an AlphaZero-based framework for games with imperfect information. We examine its learning convergence on the games Stratego and DarkHex and show that it is a surprisingly strong baseline, while using a model-based approach: it achieves similar win rates against other Stratego bots like Pipeline Policy Space Response Oracle (P2SRO), while not winning in direct comparison against P2SRO or reaching the much stronger numbers of DeepNash. Compared to heuristics and oracle-based approaches, AlphaZe∗∗ can easily deal with rule changes, e.g., when more information than usual is given, and drastically outperforms other approaches in this respect. |
Uncontrolled Keywords: | imperfect information games, deep neural networks, reinforcement learning, AlphaZero, Monte-Carlo tree search, perfect information Monte-Carlo |
Status: | Publisher's Version |
URN: | urn:nbn:de:tuda-tuprints-240643 |
Classification DDC: | 000 Generalities, computers, information > 004 Computer science |
Divisions: | 20 Department of Computer Science > Artificial Intelligence and Machine Learning; Zentrale Einrichtungen > Centre for Cognitive Science (CCS); Zentrale Einrichtungen > hessian.AI - The Hessian Center for Artificial Intelligence |
Date Deposited: | 26 May 2023 11:40 |
Last Modified: | 02 Oct 2023 08:07 |
SWORD Depositor: | Deep Green |
URI: | https://tuprints.ulb.tu-darmstadt.de/id/eprint/24064 |
PPN: | 511988958 |
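The keywords above list perfect information Monte-Carlo alongside AlphaZero-style search. Purely as an illustration of that determinization idea, and not as the authors' AlphaZe∗∗ implementation, the following is a minimal Python sketch: sample fully observable "worlds" consistent with the current imperfect observation, search each as a perfect-information game, and aggregate the move statistics. The names pimc_move, sample_determinization, and run_mcts are hypothetical placeholders.

```python
# Minimal, purely illustrative sketch of perfect-information Monte-Carlo (PIMC)
# style move selection. This is NOT the AlphaZe** implementation; all names
# below are hypothetical placeholders for this sketch.
import random
from collections import defaultdict


def pimc_move(observation, sample_determinization, run_mcts, num_worlds=20):
    """Return the move with the highest total visit count across sampled worlds.

    sample_determinization(observation) -> a fully observable state consistent
        with the imperfect observation (a guess of the hidden information).
    run_mcts(state) -> dict mapping legal moves to visit counts, e.g. produced
        by an AlphaZero-style, policy/value-guided tree search on that state.
    """
    aggregated = defaultdict(int)
    for _ in range(num_worlds):
        world = sample_determinization(observation)   # guess the hidden info
        for move, visits in run_mcts(world).items():  # search the guessed world
            aggregated[move] += visits
    return max(aggregated, key=aggregated.get)


# Toy usage with stub functions, just to show the call pattern:
toy_observation = {"legal_moves": ["advance", "hold", "flank"]}
stub_sample = lambda obs: obs  # pretend the observation already fixes a world
stub_mcts = lambda state: {m: random.randint(0, 10) for m in state["legal_moves"]}
print(pimc_move(toy_observation, stub_sample, stub_mcts))
```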