TY - GEN
T1 - Object tracking by occlusion detection via structured sparse learning
AU - Zhang, Tianzhu
AU - Ghanem, Bernard
AU - Xu, Changsheng
AU - Ahuja, Narendra
N1 - KAUST Repository Item: Exported on 2020-10-01
PY - 2013/6
Y1 - 2013/6
N2 - Sparse representation based methods have recently drawn much attention in visual tracking due to their robustness to illumination variation and occlusion. They assume that the errors caused by image variations can be modeled as pixel-wise sparse. However, in many practical scenarios these errors are not truly pixel-wise sparse but rather sparsely distributed in a structured way; in fact, the erroneous pixels form contiguous regions within the object's track. This is the case when significant occlusion occurs. To account for such non-sparse occlusion in a given frame, we assume that occlusion detected in previous frames can be propagated to the current one. The propagated information determines which pixels contribute to the sparse representation of the current track; in other words, pixels detected as part of an occlusion in the previous frame are excluded from the target representation process. Accordingly, this paper proposes a novel tracking algorithm that models and detects occlusion through structured sparse learning. We test our tracker on challenging benchmark sequences, such as sports videos, which involve heavy occlusion, drastic illumination changes, and large pose variations. Experimental results show that our tracker consistently outperforms state-of-the-art trackers. © 2013 IEEE.
UR - http://hdl.handle.net/10754/564735
UR - http://ieeexplore.ieee.org/document/6595996/
UR - http://www.scopus.com/inward/record.url?scp=84884935427&partnerID=8YFLogxK
U2 - 10.1109/CVPRW.2013.150
DO - 10.1109/CVPRW.2013.150
M3 - Conference contribution
SN - 9780769549903
SP - 1033
EP - 1040
BT - 2013 IEEE Conference on Computer Vision and Pattern Recognition Workshops
PB - Institute of Electrical and Electronics Engineers (IEEE)
ER -