Policy Optimization as Wasserstein Gradient Flows

Ruiyi Zhang, Changyou Chen, Chunyuan Li, Lawrence Carin

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

18 Scopus citations

Abstract

Policy optimization is a core component of reinforcement learning (RL), and most existing RL methods directly optimize the parameters of a policy by maximizing the expected total reward or a surrogate of it. Though these methods often achieve encouraging empirical success, the underlying mathematical principle of policy-distribution optimization remains unclear. We place policy optimization in the space of probability measures and interpret it as a Wasserstein gradient flow. On the probability-measure space, under specified circumstances, policy optimization becomes a convex problem in terms of distribution optimization. To make the optimization feasible, we develop efficient algorithms by numerically solving the corresponding discrete gradient flows. Our technique is applicable to several RL settings and is related to many state-of-the-art policy-optimization algorithms. Empirical results verify the effectiveness of our framework, which often obtains better performance than related algorithms.
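
A standard way to realize the "discrete gradient flows" mentioned in the abstract is the JKO (Jordan-Kinderlehrer-Otto) time discretization of a Wasserstein gradient flow. The sketch below is illustrative only; the specific energy functional F (negative expected return plus an entropy term) and the symbols J, lambda, and h are assumptions made for exposition, not notation taken from the paper.

$$
\mu_{k+1} \;=\; \operatorname*{arg\,min}_{\mu}\; \frac{1}{2h}\, W_2^2(\mu,\mu_k) \;+\; F(\mu),
\qquad
F(\mu) \;=\; -\,\mathbb{E}_{\theta\sim\mu}\!\left[J(\theta)\right] \;-\; \lambda\, \mathbb{H}(\mu)
$$

Here $W_2$ is the 2-Wasserstein distance, $h>0$ the step size, $J(\theta)$ the expected return of the policy parameterized by $\theta$, and $\mathbb{H}$ differential entropy; as $h \to 0$ the iterates trace the continuous-time gradient flow of $F$ on the space of probability measures.
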
Original language: English (US)
Title of host publication: 35th International Conference on Machine Learning, ICML 2018
Publisher: International Machine Learning Society (IMLS)
Pages: 9134-9143
Number of pages: 10
ISBN (Print): 9781510867963
State: Published - Jan 1 2018
Externally published: Yes

Bibliographical note

Generated from Scopus record by KAUST IRTS on 2021-02-09
