AniGAN: Style-Guided Generative Adversarial Networks for Unsupervised Anime Face Generation

Bing Li, Yuanlue Zhu, Yitong Wang, Chia-Wen Lin, Bernard Ghanem, Linlin Shen

Research output: Contribution to journal › Article › peer-review

11 Scopus citations


In this paper, we propose a novel framework to translate a portrait photo-face into an anime appearance. Unlike existing translation methods, which do not designate specific styles, we aim to synthesize anime-faces that are style-consistent with a given reference anime-face. However, unlike typical translation tasks, such anime-face translation is particularly challenging due to the large and complex variations of appearance among anime-faces. Existing methods often fail to transfer the styles of reference anime-faces to the generated anime-faces, or introduce noticeable artifacts/distortions in the local shapes of their generated anime-faces. We propose a novel GAN-based anime-face translator, called AniGAN, to synthesize high-quality anime-faces. Specifically, a new generator architecture is proposed to simultaneously transfer color/texture styles and transform local facial shapes into anime-like counterparts based on the style of a reference anime-face, while preserving the global structure of the source photo-face. New normalization functions are designed for the generator to further improve local shape transformation and color/texture style transfer. In addition, we propose a double-branch discriminator that learns domain-specific distributions through individual branches and cross-domain shared distributions via shared layers, helping generate visually pleasing anime-faces and effectively mitigating artifacts/distortions. Extensive experiments on benchmark datasets qualitatively and quantitatively demonstrate the superiority of our method over state-of-the-art methods.
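The abstract does not spell out the proposed normalization functions, but style-conditioned normalizations of this kind typically build on adaptive instance normalization (AdaIN): content features are normalized to zero mean and unit variance, then rescaled to match the reference style's statistics. The sketch below illustrates that underlying mechanism on a single 1-D feature channel in pure Python; the function name, epsilon, and toy data are illustrative assumptions, not the paper's actual formulation.

```python
from statistics import mean, pstdev

def adain(content, style, eps=1e-5):
    """Classic AdaIN on one feature channel: whiten the content
    statistics, then re-color with the style's mean and std."""
    c_mu, c_sigma = mean(content), pstdev(content)
    s_mu, s_sigma = mean(style), pstdev(style)
    # Normalize content, then shift/scale toward the style statistics.
    return [s_sigma * (x - c_mu) / (c_sigma + eps) + s_mu for x in content]

content_feats = [1.0, 2.0, 3.0, 4.0]   # toy content activations
style_feats = [10.0, 12.0, 14.0, 16.0]  # toy reference-style activations
stylized = adain(content_feats, style_feats)
```

After the transform, `stylized` carries the style channel's mean and (up to the epsilon) its standard deviation, while preserving the relative structure of the content activations, which is the property such style-transfer normalizations exploit.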
Original language: English (US)
Pages (from-to): 1-1
Number of pages: 1
Journal: IEEE Transactions on Multimedia
State: Published - 2021

Bibliographical note

KAUST Repository Item: Exported on 2021-10-08
Acknowledgements: This work was supported in part by the King Abdullah University of Science and Technology (KAUST) Office of Sponsored Research through the Visual Computing Center (VCC) funding and in part by the Ministry of Science and Technology, Taiwan, under Grants MOST 110-2634-F-007-015. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Palaiahnakote Shivakumara. Bing Li and Yuanlue Zhu contributed equally to this work.

