TY - CONF
T1 - Robust visual tracking via multi-task sparse learning
AU - Zhang, Tianzhu
AU - Ghanem, Bernard
AU - Liu, Si
AU - Ahuja, Narendra
N1 - KAUST Repository Item: Exported on 2020-10-01
PY - 2012/6
Y1 - 2012/6
N2 - In this paper, we formulate object tracking in a particle filter framework as a multi-task sparse learning problem, which we denote as Multi-Task Tracking (MTT). Since we model particles as linear combinations of dictionary templates that are updated dynamically, learning the representation of each particle is considered a single task in MTT. By employing popular sparsity-inducing ℓ_{p,q} mixed norms (p ≥ 1), we regularize the representation problem to enforce joint sparsity and learn the particle representations together. As compared to previous methods that handle particles independently, our results demonstrate that mining the interdependencies between particles improves tracking performance and overall computational complexity. Interestingly, we show that the popular L1 tracker [15] is a special case of our MTT formulation (denoted as the L11 tracker) when p = q = 1. The learning problem can be efficiently solved using an Accelerated Proximal Gradient (APG) method that yields a sequence of closed-form updates. As such, MTT is computationally attractive. We test our proposed approach on challenging sequences involving heavy occlusion, drastic illumination changes, and large pose variations. Experimental results show that MTT methods consistently outperform state-of-the-art trackers. © 2012 IEEE.
AB - In this paper, we formulate object tracking in a particle filter framework as a multi-task sparse learning problem, which we denote as Multi-Task Tracking (MTT). Since we model particles as linear combinations of dictionary templates that are updated dynamically, learning the representation of each particle is considered a single task in MTT. By employing popular sparsity-inducing ℓ_{p,q} mixed norms (p ≥ 1), we regularize the representation problem to enforce joint sparsity and learn the particle representations together. As compared to previous methods that handle particles independently, our results demonstrate that mining the interdependencies between particles improves tracking performance and overall computational complexity. Interestingly, we show that the popular L1 tracker [15] is a special case of our MTT formulation (denoted as the L11 tracker) when p = q = 1. The learning problem can be efficiently solved using an Accelerated Proximal Gradient (APG) method that yields a sequence of closed-form updates. As such, MTT is computationally attractive. We test our proposed approach on challenging sequences involving heavy occlusion, drastic illumination changes, and large pose variations. Experimental results show that MTT methods consistently outperform state-of-the-art trackers. © 2012 IEEE.
UR - http://hdl.handle.net/10754/564560
UR - http://ieeexplore.ieee.org/document/6247908/
UR - http://www.scopus.com/inward/record.url?scp=84866678444&partnerID=8YFLogxK
DO - 10.1109/CVPR.2012.6247908
M3 - Conference contribution
SN - 9781467312264
SP - 2042
EP - 2049
BT - 2012 IEEE Conference on Computer Vision and Pattern Recognition
PB - Institute of Electrical and Electronics Engineers (IEEE)
ER -