Computational photography encompasses a diversity of imaging techniques, but one of the core operations performed by many of them is to compute image differences. An intuitive approach to computing such differences is to capture several images sequentially and then process them jointly. In this paper, we introduce a snapshot difference imaging approach that is directly implemented in the sensor hardware of emerging time-of-flight cameras. With a variety of examples, we demonstrate that the proposed snapshot difference imaging technique is useful for direct-global illumination separation, for direct imaging of spatial and temporal image gradients, for direct depth edge imaging, and more.
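The "intuitive approach" the abstract contrasts against can be illustrated with a minimal sketch. This is a hypothetical NumPy illustration of conventional multi-shot difference imaging, not the paper's method; the paper's contribution is precisely to avoid this sequential capture by computing the difference directly in the sensor hardware.

```python
import numpy as np

def sequential_difference(frame_a, frame_b):
    """Naive difference imaging: subtract two sequentially captured frames.

    Cast to float first so the subtraction does not wrap around in the
    unsigned integer domain. Any scene or camera motion between the two
    exposures produces artifacts, which is the limitation that snapshot
    (single-exposure) difference imaging sidesteps.
    """
    return frame_a.astype(np.float64) - frame_b.astype(np.float64)

# Toy example: two 2x2 "exposures" of a synthetic scene
a = np.array([[10, 20], [30, 40]], dtype=np.uint8)
b = np.array([[12, 18], [30, 45]], dtype=np.uint8)
print(sequential_difference(a, b))
```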
Original language: English (US)
Title of host publication: ACM Transactions on Graphics
Publisher: Association for Computing Machinery (ACM)
Number of pages: 11
State: Published - Nov 20 2017
Bibliographical note: KAUST Repository Item, exported on 2022-06-28
Acknowledgements: This work was supported by the German Research Foundation (HU-2273/2-1), the X-Rite Chair for Digital Material Appearance, a National Science Foundation CAREER award (IIS 1553333), a Terman Faculty Fellowship, and the KAUST Office of Sponsored Research through the Visual Computing Center CCF grant. We thank Nick Maggio for his help on early experiments.
This publication acknowledges KAUST support, but has no KAUST-affiliated authors.