Scalable Thompson sampling via optimal transport

Ruiyi Zhang, Zheng Wen, Changyou Chen, Chen Fang, Tong Yu, Lawrence Carin

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

8 Scopus citations

Abstract

Thompson sampling (TS) is a class of algorithms for sequential decision making, in which a posterior distribution is maintained over a reward model. However, calculating exact posterior distributions is intractable for all but the simplest models. Developing computationally efficient approximate methods for the posterior distribution is consequently a crucial problem for scalable TS with complex models, such as neural networks. In this paper, we formulate posterior approximation as a distribution-optimization problem and solve it via Wasserstein gradient flows. Based on this framework, a principled particle-optimization algorithm is developed for TS to approximate the posterior efficiently. Our approach is scalable and makes no explicit distributional assumptions on the posterior approximation. Extensive experiments on both synthetic and real large-scale data demonstrate the superior performance of the proposed methods.
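
For readers unfamiliar with the setup, the following is a minimal Python sketch of Thompson sampling with a particle-approximated posterior on a toy Gaussian bandit. It is illustrative only and is not the paper's algorithm: a few stochastic-gradient Langevin steps stand in for the Wasserstein-gradient-flow particle update, and the bandit, prior, and step sizes are hypothetical choices made for this example.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-armed Gaussian bandit; true means are unknown to the learner.
true_means = np.array([0.1, 0.5, 0.3])
noise_std = 0.5          # known observation noise (assumption for this sketch)
prior_std = 1.0          # N(0, prior_std^2) prior on each arm's mean
n_arms, n_particles = len(true_means), 50
n_rounds, langevin_steps, step_size = 500, 20, 1e-2

# Particle approximation of the posterior over each arm's mean reward.
particles = rng.normal(0.0, prior_std, size=(n_arms, n_particles))
rewards = [[] for _ in range(n_arms)]

def langevin_update(p, obs):
    # A few Langevin steps on the log posterior of a Gaussian mean with known
    # noise; a stand-in for the paper's particle-optimization update.
    obs = np.asarray(obs)
    for _ in range(langevin_steps):
        grad_log_post = -p / prior_std**2 + (obs.sum() - len(obs) * p) / noise_std**2
        p = p + 0.5 * step_size * grad_log_post \
              + np.sqrt(step_size) * rng.normal(size=p.shape)
    return p

for t in range(n_rounds):
    # Thompson step: draw one particle per arm and act greedily on the draws.
    sampled_means = particles[np.arange(n_arms), rng.integers(n_particles, size=n_arms)]
    arm = int(np.argmax(sampled_means))
    reward = rng.normal(true_means[arm], noise_std)
    rewards[arm].append(reward)
    # Refresh the chosen arm's particles against the updated posterior.
    particles[arm] = langevin_update(particles[arm], rewards[arm])

print("posterior means:", particles.mean(axis=1).round(3))
print("pull counts:", [len(r) for r in rewards])

In this sketch the particles concentrate around the arms' true means as data accumulate, and sampling a single particle per arm plays the role of drawing from the posterior in exact Thompson sampling.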
Original language: English (US)
Title of host publication: AISTATS 2019 - 22nd International Conference on Artificial Intelligence and Statistics
Publisher: PMLR
State: Published - Jan 1 2020
Externally published: Yes

Bibliographical note

Generated from Scopus record by KAUST IRTS on 2021-02-09
