Images, while easy to acquire, view, publish, and share, lack critical depth information. This poses a serious bottleneck for many image manipulation, editing, and retrieval tasks. In this paper we consider the problem of adding depth to an image of an object, effectively 'lifting' it back to 3D, by exploiting a collection of aligned 3D models of related objects. Our key insight is that, even when the imaged object is not contained in the shape collection, the network of shapes implicitly characterizes a shape-specific deformation subspace that regularizes the problem and enables robust diffusion of depth information from the shape collection to the input image. We evaluate our fully automatic approach on diverse and challenging input images, validate the results against Kinect depth readings, and demonstrate several imaging applications including depth-enhanced image editing and image relighting.
ACM Transactions on Graphics
Published - 2014
41st International Conference and Exhibition on Computer Graphics and Interactive Techniques, ACM SIGGRAPH 2014 - Vancouver, BC, Canada
Duration: Aug 10 2014 → Aug 14 2014
Bibliographical note
Funding Information:
We thank the reviewers for their comments and suggestions on the paper. This work was supported in part by NSF grants IIS 1016324 and DMS 1228304, AFOSR grant FA9550-12-1-0372, NSFC grant 61202221, the Max Planck Center for Visual Computing and Communications, Google and Motorola research awards, a gift from HTC Corporation, the Marie Curie Career Integration Grant 303541, the ERC Starting Grant SmartGeometry (StG-2013-335373), and gifts from Adobe.
Keywords
- Data-driven shape analysis
- Depth estimation
- Image retrieval
- Pose estimation
- Shape collections
ASJC Scopus subject areas
- Computer Graphics and Computer-Aided Design