Flow-Guided Video Inpainting with Scene Templates

Majed A. Alzahrani, Peihao Zhu, Peter Wonka, Ganesh Sundaramoorthi

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

We consider the problem of filling in missing spatiotemporal regions of a video. We provide a novel flow-based solution by introducing a generative model of images in relation to the scene (without missing regions) and of mappings from the scene to the images. We use the model to jointly infer the scene template, a 2D representation of the scene, and the mappings. This ensures that the generated frame-to-frame flows are consistent with the underlying scene, reducing geometric distortions in flow-based inpainting. The template is mapped to the missing regions in the video by a new (L$^{2}$-L$^{1}$) interpolation scheme, creating crisp inpaintings and reducing common blur and distortion artifacts. We show on two benchmark datasets that our approach outperforms the state of the art quantitatively and in user studies.
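The pipeline described above (fit a single 2D scene template together with per-frame scene-to-image mappings, then render the template into the missing regions) can be sketched in a few lines. The sketch below is an illustration only, not the authors' implementation: the function names (charbonnier, warp_template, inpaint_frame, reconstruction_loss) are hypothetical, the mappings are assumed to be given as grid_sample-style coordinate grids, and the Charbonnier penalty, a standard smooth blend of L2 and L1 behavior, merely stands in for the paper's (L$^{2}$-L$^{1}$) interpolation scheme.

import torch
import torch.nn.functional as F

def charbonnier(x, eps=1e-3):
    # Smooth penalty: approximately quadratic (L2) near zero and
    # approximately linear (L1) for large residuals. Used here only as a
    # stand-in for the paper's (L2-L1) interpolation scheme.
    return torch.sqrt(x * x + eps * eps)

def warp_template(template, mapping):
    # template: (1, 3, Ht, Wt) learned 2D scene template.
    # mapping:  (1, H, W, 2) per-pixel template coordinates in [-1, 1],
    #           standing in for the paper's scene-to-image mapping.
    return F.grid_sample(template, mapping, align_corners=True)

def inpaint_frame(frame, mask, template, mapping):
    # Keep observed pixels; fill the missing region (mask == 1) with the
    # template rendered through this frame's mapping.
    rendered = warp_template(template, mapping)
    return frame * (1 - mask) + rendered * mask

def reconstruction_loss(frames, masks, template, mappings):
    # Jointly fitting the template and the mappings to the observed
    # (unmasked) pixels ties every frame to one underlying scene, which
    # is what keeps the generated frame-to-frame flows mutually
    # consistent.
    loss = 0.0
    for frame, mask, mapping in zip(frames, masks, mappings):
        residual = (frame - warp_template(template, mapping)) * (1 - mask)
        loss = loss + charbonnier(residual).mean()
    return loss

In this reading, the template and the mappings would be optimized jointly by gradient descent on reconstruction_loss, after which inpaint_frame fills each frame; the actual generative model, losses, and interpolation scheme are those defined in the paper.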
Original language: English (US)
Title of host publication: 2021 IEEE/CVF International Conference on Computer Vision (ICCV)
Publisher: IEEE
ISBN (Print): 978-1-6654-2813-2
DOIs
State: Published - 2021


