Learning to Sample with Adversarially Learned Likelihood-Ratio

Chunyuan Li, Jianqiao Li, Guoyin Wang, Lawrence Carin

Research output: Contribution to journal › Article › peer-review


We link the reverse KL divergence with adversarial learning. This insight enables learning to synthesize realistic samples in two settings: (i) Given a set of samples from the true distribution, an adversarially learned likelihood-ratio and a new entropy bound are used to learn a GAN model that improves synthesized sample quality relative to previous GAN variants. (ii) Given an unnormalized distribution, a reference-based framework is proposed to learn to draw samples, naturally yielding an adversarial scheme to amortize MCMC/SVGD samples. Experimental results show the improved performance of the derived algorithms.

1 BACKGROUND ON THE REVERSE KL DIVERGENCE

Target Distribution. Assume we are given a set of samples $\mathcal{D} = \{x_i\}_{i=1}^{N}$, with each sample assumed drawn i.i.d. from an unknown distribution $q(x)$. For $x \in \mathcal{X}$, let $\mathcal{S}_q \subset \mathcal{X}$ represent the support of $q$, implying that $\mathcal{S}_q$ is the smallest subset of $\mathcal{X}$ for which $\int_{\mathcal{S}_q} q(x)\,dx = 1$ (or $\int_{\mathcal{S}_q} q(x)\,dx = 1 - \epsilon$, for $\epsilon \to 0^{+}$). Let $\mathcal{S}_q^{o}$ represent the complement set of $\mathcal{S}_q$, i.e., $\mathcal{S}_q \cup \mathcal{S}_q^{o} = \mathcal{X}$ and $\mathcal{S}_q \cap \mathcal{S}_q^{o} = \emptyset$.
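An adversarially learned likelihood-ratio typically rests on the density-ratio trick: a binary classifier trained to separate samples of $q$ from samples of a second distribution $p$ has, at its optimum, a logit equal to $\log q(x)/p(x)$. The sketch below illustrates this with a 1-D example; the two Gaussians, the logistic model, and the training hyperparameters are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

# Density-ratio trick: train a logistic classifier to separate samples of
# q = N(+1, 1) (label 1) from p = N(-1, 1) (label 0). At the optimum the
# classifier logit w*x + b equals log q(x)/p(x), which here is exactly 2x.
rng = np.random.default_rng(0)
n = 5000
x_q = rng.normal(+1.0, 1.0, n)   # samples from the "true" distribution q
x_p = rng.normal(-1.0, 1.0, n)   # samples from the reference distribution p
x = np.concatenate([x_q, x_p])
y = np.concatenate([np.ones(n), np.zeros(n)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain gradient descent on the logistic loss (illustrative hyperparameters).
w, b = 0.0, 0.0
for _ in range(3000):
    p_hat = sigmoid(w * x + b)
    w -= 0.5 * np.mean((p_hat - y) * x)
    b -= 0.5 * np.mean(p_hat - y)

# The learned logit approximates the true log-ratio log q(x) - log p(x) = 2x.
log_ratio = lambda t: w * t + b
print(w, b)            # w should land near 2, b near 0
print(log_ratio(1.0))  # estimated log q(1)/p(1); the true value is 2
```

In the settings described in the abstract, such a learned log-ratio would stand in for the intractable $\log q(x)/p(x)$ term inside a reverse-KL-style objective, with the classifier (discriminator) updated adversarially alongside the sampler.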
Original language: English (US)
Pages (from-to): 1-6
Number of pages: 6
Journal: ICLR 2018
Issue number: 2
State: Published - 2018
Externally published: Yes


