Symmetric variational autoencoder and connections to adversarial learning

Liqun Chen, Shuyang Dai, Yunchen Pu, Erjin Zhou, Chunyuan Li, Qinliang Su, Changyou Chen, Lawrence Carin

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

30 Scopus citations


A new form of the variational autoencoder (VAE) is proposed, based on the symmetric Kullback-Leibler divergence. It is demonstrated that learning of the resulting symmetric VAE (sVAE) has close connections to previously developed adversarial-learning methods. This relationship helps unify the previously distinct techniques of VAEs and adversarial learning, and provides insights that allow us to ameliorate shortcomings of some previously developed adversarial methods. In addition to an analysis that motivates and explains the sVAE, an extensive set of experiments validates the utility of the approach.
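As a minimal illustration of the divergence named in the abstract (not the paper's actual training objective, which involves variational bounds on intractable densities), the symmetric Kullback-Leibler divergence can be sketched for discrete distributions; the function names here are illustrative, not from the paper:

```python
import numpy as np

def kl(p, q):
    """KL(p || q) for discrete distributions given as probability vectors.

    Terms where p_i == 0 contribute 0 by the convention 0 * log(0/q) = 0.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def symmetric_kl(p, q):
    """Symmetric KL divergence: KL(p || q) + KL(q || p).

    Unlike the standard KL divergence, this is symmetric in its
    arguments, which is the property the sVAE construction exploits.
    """
    return kl(p, q) + kl(q, p)

p = [0.5, 0.5]
q = [0.9, 0.1]
# symmetric_kl(p, q) == symmetric_kl(q, p), while in general
# kl(p, q) != kl(q, p).
```

Note that the symmetric KL is nonnegative and vanishes only when the two distributions coincide, which makes it a natural candidate for a learning objective that treats the data and model distributions on equal footing.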
Original language: English (US)
Title of host publication: International Conference on Artificial Intelligence and Statistics, AISTATS 2018
Number of pages: 9
State: Published - Jan 1 2018
Externally published: Yes

Bibliographical note

Generated from Scopus record by KAUST IRTS on 2021-02-09
