How to image objects that are hidden from a camera’s view is a problem of fundamental importance to many fields of research [1–20], with applications in robotic vision, defence, remote sensing, medical imaging and autonomous vehicles. Non-line-of-sight (NLOS) imaging at macroscopic scales has been demonstrated by scanning a visible surface with a pulsed laser and a time-resolved detector [14–19]. Whereas light detection and ranging (LIDAR) systems use such measurements to recover the shape of visible objects from direct reflections [21–24], NLOS imaging reconstructs the shape and albedo of hidden objects from multiply scattered light. Despite recent advances, NLOS imaging has remained impractical owing to the prohibitive memory and processing requirements of existing reconstruction algorithms, and the extremely weak signal of multiply scattered light. Here we show that a confocal scanning procedure can address these challenges by facilitating the derivation of the light-cone transform to solve the NLOS reconstruction problem. This method requires much smaller computational and memory resources than previous reconstruction methods do and images hidden objects at unprecedented resolution. Confocal scanning also provides a sizeable increase in signal and range when imaging retroreflective objects. We quantify the resolution bounds of NLOS imaging, demonstrate its potential for real-time tracking and derive efficient algorithms that incorporate image priors and a physically accurate noise model. Additionally, we describe successful outdoor experiments of NLOS imaging under indirect sunlight.
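The computational advantage described above comes from the fact that, after the light-cone transform's change of variables, the confocal NLOS measurement becomes a 3D convolution of the hidden volume with a cone-shaped kernel, which can be inverted efficiently with a Fourier-domain Wiener filter. The sketch below illustrates only this core deconvolution step on synthetic data; the grid sizes, kernel discretization and signal-to-noise constant are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def lightcone_kernel(n):
    # Illustrative discretization: after the change of variables
    # (t -> t^2, z -> z^2), a point scatterer maps to a thin shell
    # along the surface of a cone, z ~ x^2 + y^2.
    x = np.linspace(-1.0, 1.0, n)
    gx, gy, gz = np.meshgrid(x, x, np.linspace(0.0, 2.0, n), indexing="ij")
    cone = np.abs(gz - (gx**2 + gy**2)) < (2.0 / n)
    k = cone.astype(float)
    return k / k.sum()

def wiener_deconvolve(meas, kernel, snr=1e2):
    # Invert meas = kernel * volume (3D circular convolution) in the
    # Fourier domain with Wiener regularization. Working with FFTs is
    # what keeps the memory and compute requirements modest.
    K = np.fft.fftn(kernel, meas.shape)
    M = np.fft.fftn(meas)
    V = np.conj(K) * M / (np.abs(K) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifftn(V))

# Plant a single hidden point scatterer, simulate its measurement in
# the transformed domain by forward convolution, then reconstruct.
n = 32
vol = np.zeros((n, n, n))
vol[16, 16, 10] = 1.0
k = lightcone_kernel(n)
meas = np.real(np.fft.ifftn(np.fft.fftn(vol) * np.fft.fftn(k, vol.shape)))
rec = wiener_deconvolve(meas, k)
print(tuple(map(int, np.unravel_index(np.argmax(rec), rec.shape))))  # → (16, 16, 10)
```

Because the forward model is a single shift-invariant convolution, reconstruction reduces to elementwise operations on FFTs, which is why this approach scales to much larger volumes than earlier backprojection-style NLOS solvers.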
Bibliographical note: KAUST Repository Item: Exported on 2020-10-01
Acknowledgements: We thank K. Zang for his expertise and advice on the SPAD sensor. We also thank B. A. Wandell, J. Chang, I. Kauvar and N. Padmanaban for reviewing the manuscript. M.O’T. is supported by the Government of Canada through the Banting Postdoctoral Fellowships programme. D.B.L. is supported by a Stanford Graduate Fellowship in Science and Engineering. G.W. is supported by a National Science Foundation CAREER award (IIS 1553333), a Terman Faculty Fellowship and by the KAUST Office of Sponsored Research through the Visual Computing Center CCF grant.
This publication acknowledges KAUST support, but has no KAUST-affiliated authors.