Modeling self-occlusions in dynamic shape and appearance tracking

Yanchao Yang, Ganesh Sundaramoorthi

Research output: Chapter in Book/Report/Conference proceeding - Conference contribution

13 Scopus citations

Abstract

We present a method to track the precise shape of a dynamic object in video. Joint dynamic shape and appearance models, in which a template of the object is propagated to match the object's shape and radiance in the next frame, are advantageous over methods employing global image statistics when the object radiance is complex and the background is cluttered. Under complex 3D object motion and relative viewpoint change, self-occlusions and disocclusions of the object become prominent, and current methods employing joint shape and appearance models cannot accurately adapt to the new shape and appearance information, leading to inaccurate shape detection. In this work, we model self-occlusions and disocclusions within a joint shape and appearance tracking framework. Experiments on video exhibiting occlusion and disocclusion, complex radiance, and cluttered background show that occlusion/disocclusion modeling yields superior shape accuracy compared to recent methods employing joint shape/appearance models or global statistics. © 2013 IEEE.
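The abstract describes propagating an object template (shape mask plus radiance) into the next frame and deciding which pixels are self-occluded or newly disoccluded. The sketch below is a minimal, hypothetical illustration of that idea, not the authors' algorithm: it assumes an externally supplied optical flow field, uses a simple photometric residual threshold (`residual_thresh`) to flag occluded pixels, and refreshes the template from the new frame wherever the old template no longer explains the observation. All function and parameter names are invented for this example.

```python
import numpy as np

def propagate_template(template, frame, mask, flow, residual_thresh=0.1):
    """Illustrative sketch: warp a radiance template into the next frame
    and flag pixels whose warped radiance disagrees with the observation
    as occluded; refresh the template from the frame elsewhere.

    template : (H, W) float array, object radiance from the previous frame
    frame    : (H, W) float array, current grayscale frame
    mask     : (H, W) bool array, object support in the previous frame
    flow     : (H, W, 2) float array, assumed-given optical flow (x, y)
    """
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]

    # Backward-warp coordinates with nearest-neighbor rounding (simplification).
    xw = np.clip(np.rint(xs - flow[..., 0]).astype(int), 0, w - 1)
    yw = np.clip(np.rint(ys - flow[..., 1]).astype(int), 0, h - 1)
    warped_template = template[yw, xw]
    warped_mask = mask[yw, xw]

    # Photometric residual between the propagated template and the new frame.
    residual = np.abs(frame - warped_template)

    # Pixels previously on the object that the template can no longer explain
    # are treated as occluded and excluded from the appearance update.
    occluded = warped_mask & (residual > residual_thresh)
    visible = warped_mask & ~occluded

    # Appearance update: keep template radiance where it remains valid,
    # take new observations where it does not (crude disocclusion handling).
    new_template = np.where(visible, warped_template, frame)
    return new_template, visible, occluded


if __name__ == "__main__":
    # Tiny synthetic usage example.
    rng = np.random.default_rng(0)
    frame_prev = rng.random((64, 64))
    frame_next = frame_prev.copy()
    mask = np.zeros((64, 64), dtype=bool)
    mask[20:40, 20:40] = True
    flow = np.zeros((64, 64, 2))
    tmpl, vis, occ = propagate_template(frame_prev, frame_next, mask, flow)
    print("visible pixels:", vis.sum(), "occluded pixels:", occ.sum())
```

In the paper's actual framework the shape, radiance, and occlusion estimates are optimized jointly rather than decided by a fixed threshold as above; the sketch only conveys the bookkeeping of visible, occluded, and disoccluded regions.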
Original language: English (US)
Title of host publication: 2013 IEEE International Conference on Computer Vision
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Pages: 201-208
Number of pages: 8
ISBN (Print): 9781479928392
DOIs
State: Published - Dec 2013

Bibliographical note

KAUST Repository Item: Exported on 2020-10-01
