Abstract

Despite much success in applying sparse representation to object tracking, most existing sparse-representation-based trackers are still not robust enough to challenges such as pose variations, illumination changes, occlusions, and background distractions. In this paper, we propose a robust object-tracking algorithm based on local discriminative sparse representation. The key idea is to develop what we believe is a novel local discriminative sparse representation method for object appearance modeling, which helps the tracker overcome appearance variations and occlusions. A robust tracker built on this local discriminative sparse appearance model then tracks the object over time, and an online dictionary update strategy provides further robustness. Experimental results on challenging sequences demonstrate the effectiveness and robustness of the proposed method.

© 2017 Optical Society of America

OSA Recommended Articles
Object tracking based on incremental Bi-2DPCA learning with sparse structure

Bendu Bai, Ying Li, Jiulun Fan, Chris Price, and Qiang Shen
Appl. Opt. 54(10) 2897-2907 (2015)

Multi-class remote sensing object recognition based on discriminative sparse representation

Xin Wang, Siqiu Shen, Chen Ning, Fengchen Huang, and Hongmin Gao
Appl. Opt. 55(6) 1381-1394 (2016)

Real-time infrared target tracking based on ℓ1 minimization and compressive features

Ying Li, Pengcheng Li, and Qiang Shen
Appl. Opt. 53(28) 6518-6526 (2014)

References


  1. F. Dornaika and F. Chakik, “Efficient object detection and tracking in video sequences,” J. Opt. Soc. Am. A 29, 928–935 (2012).
    [Crossref]
  2. K. J. Cannons and R. P. Wildes, “The applicability of spatiotemporal oriented energy features to region tracking,” IEEE Trans. Pattern Anal. Mach. Intell. 36, 784–796 (2014).
    [Crossref]
  3. D. K. Prasad and M. S. Brown, “Online tracking of deformable objects under occlusion using dominant points,” J. Opt. Soc. Am. A 30, 1484–1491 (2013).
    [Crossref]
  4. K. Zhang, L. Zhang, and M. Yang, “Fast compressive tracking,” IEEE Trans. Pattern Anal. Mach. Intell. 36, 2002–2015 (2014).
    [Crossref]
  5. V. Mahadevan and N. Vasconcelos, “Biologically inspired object tracking using center-surround saliency mechanisms,” IEEE Trans. Pattern Anal. Mach. Intell. 35, 541–554 (2013).
    [Crossref]
  6. Y. Wu, B. Ma, M. Yang, J. Zhang, and Y. Jia, “Metric learning based structural appearance model for robust visual tracking,” IEEE Trans. Circuits Syst. Video Technol. 24, 865–877 (2014).
    [Crossref]
  7. X. Mei and H. Ling, “Robust visual tracking and vehicle classification via sparse representation,” IEEE Trans. Pattern Anal. Mach. Intell. 33, 2259–2272 (2011).
    [Crossref]
  8. B. Zhuang, H. Lu, Z. Xiao, and D. Wang, “Visual tracking via discriminative sparse similarity map,” IEEE Trans. Image Process. 23, 1872–1881 (2014).
    [Crossref]
  9. G. Han, X. Wang, J. Liu, N. Sun, and C. Wang, “Robust object tracking based on local region sparse appearance model,” Neurocomputing 184, 145–167 (2016).
    [Crossref]
  10. W. Hu, W. Li, X. Zhang, and S. Maybank, “Single and multiple object tracking using a multi-feature joint sparse representation,” IEEE Trans. Pattern Anal. Mach. Intell. 37, 816–833 (2015).
    [Crossref]
  11. M. Elad and M. Aharon, “Image denoising via sparse and redundant representations over learned dictionaries,” IEEE Trans. Image Process. 15, 3736–3745 (2006).
    [Crossref]
  12. X. Wang, S. Shen, C. Ning, M. Xu, and X. Yan, “A sparse representation-based method for infrared dim target detection under sea-sky background,” Infrared Phys. Technol. 71, 347–355 (2015).
    [Crossref]
  13. T. Guha and R. K. Ward, “Learning sparse representations for human action recognition,” IEEE Trans. Pattern Anal. Mach. Intell. 34, 1576–1588 (2012).
    [Crossref]
  14. X. Wang, S. Shen, C. Ning, F. Huang, and H. Gao, “Multi-class remote sensing object recognition based on discriminative sparse representation,” Appl. Opt. 55, 1381–1394 (2016).
    [Crossref]
  15. Y. Xie, W. Zhang, C. Li, S. Lin, Y. Qu, and Y. Zhang, “Discriminative object tracking via sparse representation and online dictionary learning,” IEEE Trans. Cybern. 44, 539–553 (2014).
    [Crossref]
  16. C. Xie, J. Tan, P. Chen, J. Zhang, and L. He, “Multiple instance learning tracking method with local sparse representation,” IET Comput. Vis. 7, 320–334 (2013).
    [Crossref]
  17. B. Liu, J. Huang, C. Kulikowski, and L. Yang, “Robust visual tracking using local sparse appearance model and K-selection,” IEEE Trans. Pattern Anal. Mach. Intell. 35, 2968–2981 (2013).
    [Crossref]
  18. J. Zhang, D. Zhao, and W. Gao, “Group-based sparse representation for image restoration,” IEEE Trans. Image Process. 23, 3336–3351 (2014).
    [Crossref]
  19. J. Mairal, F. Bach, and J. Ponce, “Task-driven dictionary learning,” IEEE Trans. Pattern Anal. Mach. Intell. 34, 791–804 (2012).
    [Crossref]
  20. M. Aharon, M. Elad, and A. Bruckstein, “K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation,” IEEE Trans. Signal Process. 54, 4311–4322 (2006).
    [Crossref]
  21. S. G. Mallat and Z. Zhang, “Matching pursuits with time-frequency dictionaries,” IEEE Trans. Signal Process. 41, 3397–3415 (1993).
    [Crossref]
  22. J. A. Tropp and A. C. Gilbert, “Signal recovery from random measurements via orthogonal matching pursuit,” IEEE Trans. Inf. Theory 53, 4655–4666 (2007).
    [Crossref]
  23. H. T. Niknejad, A. Takeuchi, S. Mita, and D. McAllester, “On-road multivehicle tracking using deformable object model and particle filter with improved likelihood estimation,” IEEE Trans. Intell. Transport. Syst. 13, 748–758 (2012).
    [Crossref]
  24. B. Wu, C. Kao, C. Jen, Y. Li, Y. Chen, and J. Juang, “A relative discriminative histogram of oriented gradients based particle filter approach to vehicle occlusion handling and tracking,” IEEE Trans. Ind. Electron. 61, 4228–4237 (2014).
    [Crossref]
  25. M. Kaaniche and F. Bremond, “Recognizing gestures by learning local motion signatures of HOG descriptors,” IEEE Trans. Pattern Anal. Mach. Intell. 34, 2247–2258 (2012).
    [Crossref]
  26. J. Winn, A. Criminisi, and T. Minka, “Object categorization by learned universal visual dictionary,” in Proceedings of IEEE International Conference on Computer Vision (ICCV) (2005), pp. 1800–1807.
  27. I. Ramirez, P. Sprechmann, and G. Sapiro, “Classification and clustering via dictionary learning with structured incoherence and shared features,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2010), pp. 3501–3508.
  28. M. Yang, L. Zhang, X. Feng, and D. Zhang, “Fisher discrimination dictionary learning for sparse representation,” in Proceedings of IEEE International Conference on Computer Vision (ICCV) (2011), pp. 543–550.
  29. Z. Jiang, Z. Lin, and L. S. Davis, “Label consistent K-SVD: learning a discriminative dictionary for recognition,” IEEE Trans. Pattern Anal. Mach. Intell. 35, 2651–2664 (2013).
    [Crossref]
  30. G. H. Golub, P. C. Hansen, and D. P. O’Leary, “Tikhonov regularization and total least squares,” SIAM J. Matrix Anal. Appl. 21, 185–194 (1999).
    [Crossref]
  31. D. Comaniciu and P. Meer, “Mean shift: a robust approach toward feature space analysis,” IEEE Trans. Pattern Anal. Mach. Intell. 24, 603–619 (2002).
    [Crossref]
  32. D. A. Ross, J. Lim, R. Lin, and M. Yang, “Incremental learning for robust visual tracking,” Int. J. Comput. Vis. 77, 125–141 (2008).
    [Crossref]
  33. B. Babenko, M. H. Yang, and S. Belongie, “Visual tracking with online multiple instance learning,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2009), pp. 983–990.
  34. J. Kwon and K. M. Lee, “Visual tracking decomposition,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2010), pp. 1269–1276.
  35. A. Adam, E. Rivlin, and I. Shimshoni, “Robust fragments-based tracking using the integral histogram,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2006), pp. 798–805.
  36. J. Kwon and K. M. Lee, “Visual Tracking Decomposition” (Seoul National University, 2009), http://cv.snu.ac.kr/research/~vtd/.
  37. D. A. Ross, J. Lim, R. Lin, and M. Yang, “Incremental Learning for Robust Visual Tracking” (University of Toronto, 2008), http://www.cs.toronto.edu/~dross/ivt/.
  38. A. Adam, E. Rivlin, and I. Shimshoni, “Fragtrack—Robust Fragments-Based Tracking Using the Integral Histogram” (Israel Institute of Technology, 2006), http://www.cs.technion.ac.il/~amita/fragtrack/fragtrack.htm.
  39. K. Zhang, L. Zhang, and M. Yang, “Fast Compressive Tracking” (The Hong Kong Polytechnic University, 2012), http://www4.comp.polyu.edu.hk/~cslzhang/FCT/FCT.htm.
  40. B. Babenko, M. H. Yang, and S. Belongie, “UCSD Computer Vision” (University of California, 2009), http://vision.ucsd.edu/~bbabenko/project_miltrack.shtml.



Figures (14)

Fig. 1. Example of the first image frame of a tracking sequence and a number of its corresponding object templates.

Fig. 2. Illustration of the generation of multiple object classes.

Fig. 3. Flowchart of the proposed tracking method.

Fig. 4. Illustration of the search region used to obtain the candidate objects. The red rectangle R_p denotes the object region extracted in the previous frame, l^* is the previous location of the object, and the yellow rectangle R_s denotes the search region used to obtain the candidate objects in the current frame.

Fig. 5. Different distributions of the elements of q for various image fragments.

Fig. 6. Online dictionary update.

Fig. 7. Angles between feat_cur and all column vectors in feat_all are larger than π/4.

Fig. 8. Comparison of the quadrature and in-phase components of f_ma at different angles to feat_cur.

Fig. 9. Evaluation of our online dictionary update approach.

Fig. 10. Tracking results of three sequences with pose, scale, and illumination variations.

Fig. 11. Tracking results of three sequences with background clutter and distractions.

Fig. 12. Tracking results of three sequences with occlusions.

Fig. 13. Tracking results of the Animal sequence with abrupt motion.

Fig. 14. Average CLEs and TSRs of the three trackers over all 12 video sequences.

Tables (3)

Table 1. Challenges of All 12 Video Sequences

Table 2. Center Location Errors (Pixels) of the Three Trackers over All 12 Video Sequences

Table 3. Tracking Success Rates (Percent) of the Three Trackers over All 12 Video Sequences

Equations (23)

Equations on this page are rendered with MathJax.

$$\langle D, X \rangle = \arg\min_{D,X} \|Y - DX\|_2^2 \quad \text{s.t.}\ \forall i,\ \|x_i\|_0 \le T, \tag{1}$$
$$x_i = x^*(y_i, D) \triangleq \arg\min_{x} \|y_i - Dx\|_2^2 \quad \text{s.t.}\ \|x\|_0 \le T, \tag{2}$$
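
For illustration, the sparse-coding problem of Eq. (2) can be solved greedily with orthogonal matching pursuit [21,22]. Below is a minimal sketch (not the authors' implementation) using scikit-learn's OMP solver; the dictionary `D`, signal `y`, and sparsity level `T` are toy placeholders.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def sparse_code(y, D, T):
    """Greedily solve min_x ||y - D x||_2^2  s.t.  ||x||_0 <= T (Eq. (2))."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=T, fit_intercept=False)
    omp.fit(D, y)        # columns of D act as dictionary atoms
    return omp.coef_     # sparse coefficient vector x

# Toy usage: 64-dim signal, 256-atom dictionary, at most 8 nonzero coefficients.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)   # unit-norm atoms, as K-SVD assumes
x = sparse_code(rng.standard_normal(64), D, T=8)
assert np.count_nonzero(x) <= 8
```
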
$$E_i = Y - \tilde{D}_i \tilde{X}_i, \tag{3}$$
$$t_i = [y_{i,1}, \ldots, y_{i,r}], \tag{4}$$
$$\mathrm{class}_{\mathrm{all}} = [\mathrm{class}_1, \ldots, \mathrm{class}_r], \tag{5}$$
$$\mathrm{feat}_j = [\mathrm{feature}_{1,j}, \ldots, \mathrm{feature}_{N,j}], \tag{6}$$
$$\langle D, A, X \rangle = \arg\min_{D,A,X} \|Y - DX\|_2^2 + \alpha \|Q - AX\|_2^2 \quad \text{s.t.}\ \forall i,\ \|x_i\|_0 \le T, \tag{7}$$
$$D_{\mathrm{init}} = [D_1, \ldots, D_r]. \tag{8}$$
$$A_{\mathrm{init}} = Q X^{\mathrm{T}} (X X^{\mathrm{T}} + \gamma I)^{-1}, \tag{9}$$
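
Eq. (9) is the closed-form Tikhonov-regularized least-squares solution (cf. [29,30]) for initializing the transform A that maps the sparse codes X to the discriminative codes Q. A minimal numpy sketch, assuming Q and X are given with one column per training sample:

```python
import numpy as np

def init_transform(Q, X, gamma=1e-4):
    """A_init = Q X^T (X X^T + gamma I)^(-1)  (Eq. (9)).

    Q : (m, n) target discriminative codes, one column per training sample
    X : (k, n) sparse codes of the same n samples
    """
    G = X @ X.T + gamma * np.eye(X.shape[0])   # (k, k), symmetric
    # Solve A G = Q X^T rather than forming the explicit inverse.
    return np.linalg.solve(G, (Q @ X.T).T).T
```
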
$$\langle D_{\mathrm{new}}, X \rangle = \arg\min_{D_{\mathrm{new}},X} \|Y_{\mathrm{new}} - D_{\mathrm{new}} X\|_2^2 + \beta \|X\|_1. \tag{10}$$
$$C = [c_1, \ldots, c_M], \tag{11}$$
$$q = A_{\mathrm{final}}\, x_{i,j}. \tag{12}$$
$$\mathrm{Right} = [\mathrm{right}_1, \ldots, \mathrm{right}_M], \tag{13}$$
$$\mathrm{sum}_i = \sum_{j=1}^{r} q_{i,j} = q_{i,1} + \cdots + q_{i,r}, \tag{14}$$
$$w_i = \frac{\mathrm{sum}_i}{\sum_{j=1}^{s} \mathrm{sum}_j}, \quad i = 1, \ldots, s. \tag{15}$$
$$l = \sum_{i=1}^{s} w_i \times l_i, \tag{16}$$
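
Eqs. (15) and (16) amount to a score-weighted average of the retained candidate locations. A minimal sketch, where `scores` stands for the per-candidate sums of Eq. (14) and `locs` for the candidate center locations (both hypothetical placeholders):

```python
import numpy as np

def fuse_locations(scores, locs):
    """l = sum_i w_i * l_i with w_i = score_i / sum_j score_j (Eqs. (15)-(16))."""
    w = np.asarray(scores, dtype=float)
    w /= w.sum()
    return w @ np.asarray(locs, dtype=float)

# Toy usage: three candidate centers, the second with the highest score.
print(fuse_locations([0.2, 0.6, 0.2], [[10, 10], [12, 11], [14, 12]]))
```
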
$$\mathrm{feat}_{\mathrm{cur}} = \begin{pmatrix} f_{\mathrm{cur}}^{1} \\ \vdots \\ f_{\mathrm{cur}}^{r} \end{pmatrix} \in \mathbb{R}^{d \times 1}, \tag{17}$$
$$\mathrm{feat}_{\mathrm{all}} = \begin{pmatrix} \mathrm{feature}_{1,1} & \cdots & \mathrm{feature}_{N,1} \\ \vdots & \ddots & \vdots \\ \mathrm{feature}_{1,r} & \cdots & \mathrm{feature}_{N,r} \end{pmatrix} \in \mathbb{R}^{d \times N}, \tag{18}$$
$$S = [\mathrm{sim}_1, \ldots, \mathrm{sim}_N]. \tag{19}$$
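
The similarity vector S of Eq. (19) compares the feature vector of the current object with every stored feature column. Consistent with the angle test of Figs. 7 and 8, a natural reading is cosine similarity; a minimal sketch under that assumption:

```python
import numpy as np

def column_similarities(feat_cur, feat_all):
    """Cosine similarity of feat_cur (d,) with each column of feat_all (d, N)."""
    num = feat_all.T @ feat_cur
    den = np.linalg.norm(feat_all, axis=0) * np.linalg.norm(feat_cur)
    return num / den      # S = [sim_1, ..., sim_N]

# Fig. 7's update condition (all angles larger than pi/4) would then read:
# update_needed = np.all(column_similarities(fc, fa) < np.cos(np.pi / 4))
```
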
$$\mathrm{CLE}_i = d_i(l, l_G), \tag{20}$$
$$\mathrm{CLE} = \frac{1}{M} \sum_{i=1}^{M} \mathrm{CLE}_i, \tag{21}$$
$$\mathrm{TSR} = \frac{M_s}{M}, \tag{22}$$
$$\frac{|\Omega_T \cap \Omega_G|}{|\Omega_T \cup \Omega_G|} \ge \alpha, \tag{23}$$
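
Eqs. (20)-(23) are the standard center-location-error and success-rate metrics. The sketch below assumes axis-aligned boxes given as (x, y, w, h); a typical overlap threshold in the tracking literature is alpha = 0.5.

```python
import numpy as np

def cle(centers_tracked, centers_truth):
    """Mean center location error (Eqs. (20)-(21))."""
    diff = np.asarray(centers_tracked) - np.asarray(centers_truth)
    return np.linalg.norm(diff, axis=1).mean()

def overlap(bt, bg):
    """Intersection-over-union of two (x, y, w, h) boxes (Eq. (23))."""
    x1, y1 = max(bt[0], bg[0]), max(bt[1], bg[1])
    x2 = min(bt[0] + bt[2], bg[0] + bg[2])
    y2 = min(bt[1] + bt[3], bg[1] + bg[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    return inter / (bt[2] * bt[3] + bg[2] * bg[3] - inter)

def tsr(boxes_tracked, boxes_truth, alpha=0.5):
    """Tracking success rate TSR = M_s / M (Eq. (22))."""
    hits = sum(overlap(bt, bg) >= alpha
               for bt, bg in zip(boxes_tracked, boxes_truth))
    return hits / len(boxes_tracked)
```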
