Abstract
We introduce the first learning-based method for recovering shapes from Laplacian spectra. Our model consists of a cycle-consistent module that maps between the learned latent vectors of an auto-encoder and sequences of eigenvalues, providing an efficient and effective link between the Laplacian spectrum and the underlying geometry. This data-driven approach removes the need for the ad hoc regularizers required by prior methods, while producing more accurate results at a fraction of the computational cost. Our model applies without modification across dimensions (2D and 3D shapes alike), representations (meshes, contours, and point clouds), and shape classes, and it accepts an input spectrum of arbitrary resolution without affecting complexity. This flexibility lets us address notoriously difficult tasks in 3D vision and geometry processing within a unified framework, including shape generation from a spectrum, mesh super-resolution, shape exploration, style transfer, spectrum estimation from point clouds, segmentation transfer, and point-to-point matching.
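As a rough illustration of the coupling described in the abstract, the sketch below (PyTorch) pairs the latent space of a pre-trained shape auto-encoder with sequences of Laplacian eigenvalues through two small networks trained with cycle-consistency losses. The module and function names (`SpectrumLatentCycle`, `spec_to_latent`, `latent_to_spec`), layer sizes, fixed spectrum length, and unweighted loss terms are illustrative assumptions, not the architecture of the published model.

```python
# Minimal sketch of a cycle-consistent spectrum <-> latent coupling.
# Assumptions: dimensions, network depths, and loss weighting are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM = 256   # assumed size of the auto-encoder latent code
K_EIGS = 30        # assumed (fixed) number of Laplacian eigenvalues used here


def mlp(in_dim, out_dim, hidden=512):
    """Small fully connected network used for both directions of the cycle."""
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, out_dim),
    )


class SpectrumLatentCycle(nn.Module):
    """Couples auto-encoder latent codes with eigenvalue sequences via two
    maps trained to be (approximately) mutually inverse."""

    def __init__(self, latent_dim=LATENT_DIM, k_eigs=K_EIGS):
        super().__init__()
        self.spec_to_latent = mlp(k_eigs, latent_dim)   # spectrum -> latent
        self.latent_to_spec = mlp(latent_dim, k_eigs)   # latent -> spectrum

    def forward(self, eigenvalues, latent_codes):
        z_hat = self.spec_to_latent(eigenvalues)        # latent recovered from spectrum
        lam_hat = self.latent_to_spec(latent_codes)     # spectrum predicted from latent
        # Cycle terms: going around the loop should return the starting point.
        lam_cycle = self.latent_to_spec(z_hat)
        z_cycle = self.spec_to_latent(lam_hat)
        return z_hat, lam_hat, lam_cycle, z_cycle


def cycle_loss(model, eigenvalues, latent_codes):
    """Direct supervision in both directions plus cycle-consistency penalties."""
    z_hat, lam_hat, lam_cycle, z_cycle = model(eigenvalues, latent_codes)
    return (F.mse_loss(z_hat, latent_codes) + F.mse_loss(lam_hat, eigenvalues)
            + F.mse_loss(lam_cycle, eigenvalues) + F.mse_loss(z_cycle, latent_codes))


# Usage sketch: given the eigenvalues of an unseen shape, recover its geometry
# by mapping the spectrum to a latent code and decoding with the auto-encoder.
# decoder = ...  # pre-trained shape decoder (assumed available)
# shape = decoder(model.spec_to_latent(eigenvalues))
```

In this sketch, shape-from-spectrum at inference time reduces to one forward pass through `spec_to_latent` followed by the decoder, which is what makes a learned coupling of this kind cheap compared to iterative spectral optimization.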
| Original language | English (US) |
| --- | --- |
| Title of host publication | 2020 International Conference on 3D Vision (3DV) |
| Publisher | IEEE |
| Pages | 120-129 |
| Number of pages | 10 |
| ISBN (Print) | 9781728181288 |
| DOIs | |
| State | Published - Nov 2020 |
| Externally published | Yes |
Bibliographical note
KAUST Repository Item: Exported on 2021-03-30.
Acknowledgements: We gratefully acknowledge Luca Moschella and Silvia Casola for the technical support, and Nicholas Sharp for the useful suggestions about point cloud spectra. Parts of this work were supported by the KAUST OSR Award No. CRG-2017-3426, the ERC Starting Grant No. 758800 (EXPROTEA), the ERC Starting Grant No. 802554 (SPECGEO), the ANR AI Chair AIGRETTE, and the MIUR under the grant “Dipartimenti di eccellenza 2018-2022” of the Department of Computer Science of Sapienza University and the University of Verona.
This publication acknowledges KAUST support, but has no KAUST-affiliated authors.