Abstract
Point clouds obtained with 3D scanners or by image-based reconstruction techniques are often corrupted by a significant amount of noise and outliers. Traditional methods for point cloud denoising largely rely on local surface fitting (e.g., jets or MLS surfaces), on local or non-local averaging, or on statistical assumptions about the underlying noise model. In contrast, we develop a simple data-driven method for removing outliers and reducing noise in unordered point clouds. We base our approach on a deep learning architecture adapted from PCPNet, which was recently proposed for estimating local 3D shape properties in point clouds. Our method first classifies and discards outlier samples, and then estimates correction vectors that project noisy points onto the original clean surfaces. The approach is efficient and robust to varying amounts of noise and outliers, while being able to handle large, densely sampled point clouds. In our extensive evaluation, on both synthetic and real data, we show increased robustness to strong noise levels compared to various state-of-the-art methods, enabling accurate surface reconstruction from extremely noisy real data obtained by range scans. Finally, the simplicity and universality of our approach make it easy to integrate into any existing geometry processing pipeline. Both the code and pre-trained networks can be found on the project page (https://github.com/mrakotosaon/pointcleannet).
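The two-stage structure described in the abstract (outlier classification followed by estimation of per-point correction vectors) can be sketched with simple geometric heuristics standing in for the learned networks. The functions below are illustrative only: `classify_outliers` flags points by their mean k-nearest-neighbor distance, and `estimate_corrections` moves each point toward its neighborhood centroid; neither is the paper's learned model, and all names are hypothetical.

```python
import numpy as np

def knn_indices(points, k):
    """Indices of the k nearest neighbors of each point (brute force,
    fine for small clouds)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return np.argsort(d, axis=1)[:, 1:k + 1]  # skip self at column 0

def classify_outliers(points, k=8, std_factor=2.0):
    """Stage 1 stand-in: flag points whose mean k-NN distance is far
    above the cloud-wide average (the paper uses a learned classifier)."""
    idx = knn_indices(points, k)
    mean_d = np.linalg.norm(points[:, None, :] - points[idx], axis=-1).mean(axis=1)
    return mean_d > mean_d.mean() + std_factor * mean_d.std()

def estimate_corrections(points, k=8):
    """Stage 2 stand-in: correction vector toward the centroid of each
    point's k nearest neighbors (the paper regresses these vectors)."""
    idx = knn_indices(points, k)
    return points[idx].mean(axis=1) - points

def clean_point_cloud(points, k=8):
    """Discard outliers, then apply correction vectors to the inliers."""
    inliers = points[~classify_outliers(points, k)]
    return inliers + estimate_corrections(inliers, k)
```

A quick use: sampling a noisy plane plus one gross outlier, the outlier is discarded and the remaining points are pulled closer to the surface.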
Original language | English (US) |
---|---|
Pages (from-to) | 185-203 |
Number of pages | 19 |
Journal | Computer Graphics Forum |
Volume | 39 |
Issue number | 1 |
DOIs | |
State | Published - Jun 25 2019 |
Externally published | Yes |
Bibliographical note
KAUST Repository Item: Exported on 2021-02-23. Acknowledged KAUST grant number(s): CRG-2017-3426.
Acknowledgements: Parts of this work were supported by the KAUST OSR Award No. CRG-2017-3426, a gift from the NVIDIA Corporation, the ERC Starting Grants EXPROTEA (StG-2017-758800) and SmartGeometry (StG-2013-335373), a Google Faculty Award, and gifts from Adobe.
This publication acknowledges KAUST support, but has no KAUST affiliated authors.