2017
Secondary publication
Preprint

f-Divergence constrained policy improvement

File(s)
Main publication: 1801.00056.pdf
License: CC BY 4.0 International
Format: Adobe PDF
Size: 677.3 KB
TUDa URI
tuda/8085
URN
urn:nbn:de:tuda-tuprints-205534
DOI
10.26083/tuprints-00020553
Authors
Belousov, Boris ORCID 0000-0001-7172-9104
Peters, Jan ORCID 0000-0002-5266-8091
Abstract

To ensure stability of learning, state-of-the-art generalized policy iteration algorithms augment the policy improvement step with a trust region constraint bounding the information loss. The size of the trust region is commonly determined by the Kullback-Leibler (KL) divergence, which not only captures the notion of distance well but also yields closed-form solutions. In this paper, we consider a more general class of f-divergences and derive the corresponding policy update rules. The generic solution is expressed through the derivative of the convex conjugate function to f and includes the KL solution as a special case. Within the class of f-divergences, we further focus on a one-parameter family of α-divergences to study the effects of the choice of divergence on policy improvement. Previously known as well as new policy updates emerge for different values of α. We show that every type of policy update comes with a compatible policy evaluation resulting from the chosen f-divergence. Interestingly, mean-squared Bellman error minimization is closely related to policy evaluation with a Pearson χ²-divergence penalty, while the KL divergence results in the soft-max policy update and a log-sum-exp critic. We carry out asymptotic analysis of the solutions for different values of α and demonstrate the effects of using different divergence functions on a multi-armed bandit problem and on standard reinforcement learning problems.
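The KL special case mentioned in the abstract can be illustrated concretely. The following is a minimal sketch for a multi-armed bandit, not the paper's implementation: given per-arm values q and a reference policy pi_old, a KL trust region yields an exponentiated ("soft-max") update, and the corresponding critic takes a log-sum-exp form. The temperature eta stands in for the Lagrange multiplier of the trust-region constraint and is fixed by hand here rather than optimized; all function names are illustrative.

```python
import numpy as np

def softmax_policy_update(pi_old, q, eta):
    """Soft-max update: pi_new(a) proportional to pi_old(a) * exp(q(a) / eta)."""
    logits = np.log(pi_old) + q / eta
    logits -= logits.max()              # subtract max for numerical stability
    pi_new = np.exp(logits)
    return pi_new / pi_new.sum()        # normalize to a valid distribution

def log_sum_exp_value(pi_old, q, eta):
    """Log-sum-exp critic: V = eta * log sum_a pi_old(a) * exp(q(a) / eta)."""
    z = q / eta + np.log(pi_old)
    m = z.max()                         # stabilized log-sum-exp
    return eta * (m + np.log(np.exp(z - m).sum()))

# Toy 4-armed bandit with a uniform reference policy.
pi_old = np.array([0.25, 0.25, 0.25, 0.25])
q = np.array([1.0, 2.0, 0.5, 1.5])
pi_new = softmax_policy_update(pi_old, q, eta=1.0)
V = log_sum_exp_value(pi_old, q, eta=1.0)
```

The update shifts probability toward high-value arms without collapsing onto the greedy arm; as eta grows, pi_new stays close to pi_old (tight trust region), and as eta shrinks, it approaches the greedy policy. The log-sum-exp value V upper-bounds the expected reward under pi_old.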

Keywords

Reinforcement Learning

Policy Search

Bandit Problems

Language
English
Department/Field
20 Department of Computer Science > Intelligent Autonomous Systems
DDC
000 Generalities, computer science, information science > 004 Computer science
Institution
Universitäts- und Landesbibliothek Darmstadt
Location
Darmstadt
Year of first publication
2017
PPN
512614172
