Latent Filter Scaling for Multimodal Unsupervised Image-To-Image Translation

Yazeed Alharbi, Neil Smith, Peter Wonka

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

In multimodal unsupervised image-to-image translation tasks, the goal is to translate an image from the source domain to many images in the target domain. We present a simple method that produces higher-quality images than the current state of the art while maintaining the same amount of multimodal diversity. Previous methods follow the unconditional approach of trying to map the latent code directly to a full-size image. This leads to complicated network architectures with several added hyperparameters to tune. By treating the latent code as a modifier of the convolutional filters, we produce multimodal output while maintaining the traditional Generative Adversarial Network (GAN) loss and without additional hyperparameters. The only tuning required by our method controls the tradeoff between the variability and the quality of the generated images. Furthermore, we achieve disentanglement between source-domain content and target-domain style for free as a by-product of our formulation. We perform qualitative and quantitative experiments showing the advantages of our method compared with the state of the art on multiple benchmark image-to-image translation datasets.
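
To make the central idea of the abstract concrete, the following is a minimal sketch of latent filter scaling in PyTorch: the latent code is mapped to one scalar per convolutional filter, and the layer's output channels are multiplied by those scalars. The mapping network, layer sizes, and names (LatentScaledConv, to_scale) are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch of latent filter scaling (illustrative, not the paper's exact design).
# Assumption: a small linear layer maps the latent code z to one multiplicative
# scale per output channel of a convolution.
import torch
import torch.nn as nn

class LatentScaledConv(nn.Module):
    def __init__(self, in_ch, out_ch, z_dim, kernel_size=3, padding=1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)
        # Maps the latent code to one scale per filter (hypothetical mapping network).
        self.to_scale = nn.Linear(z_dim, out_ch)

    def forward(self, x, z):
        scale = self.to_scale(z)                  # (batch, out_ch)
        scale = scale.view(z.size(0), -1, 1, 1)   # broadcast over spatial dims
        return self.conv(x) * scale               # channel-wise filter scaling

# Usage: different latent codes z produce different outputs for the same input,
# which is how multimodal translation arises in this formulation.
x = torch.randn(2, 64, 32, 32)   # feature maps from a source-domain image
z = torch.randn(2, 8)            # random latent code controlling target-domain style
layer = LatentScaledConv(64, 128, z_dim=8)
out = layer(x, z)                # (2, 128, 32, 32)
```

Because the latent code only rescales filters while the image content flows through the ordinary convolutional path, content and style remain separated, which is the disentanglement the abstract refers to.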
Original language: English (US)
Title of host publication: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Publisher: IEEE
Pages: 1458-1466
Number of pages: 9
ISBN (Print): 9781728132938
DOIs
State: Published - 2019

Bibliographical note

KAUST Repository Item: Exported on 2020-10-01
Acknowledged KAUST grant number(s): URF/1/3426-01-01
Acknowledgements: The project was funded in part by the KAUST Office of Sponsored Research (OSR) under Award No. URF/1/3426-01-01.
