Abstract

An efficient algorithm for detecting moving targets in infrared imagery using spatiotemporal information is presented. The output of the spatial stage serves as input to the temporal stage in a layered manner. Spatial information is obtained using a joint space–spatial-frequency distribution and Rényi entropy; temporal information is incorporated through background subtraction. By exploiting both spatial and temporal information, the proposed method achieves a high detection rate together with a low false-alarm rate. The method is validated on experimentally generated data containing a variety of moving targets, and the experimental results demonstrate a high F-measure for the proposed algorithm.
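The layered scheme summarized above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the paper's spatial stage uses a pseudo-Wigner distribution, which is reduced here to a bare Rényi-entropy helper, the temporal stage is approximated by a simple running-average background model, and all parameter values (`alpha`, `thresh`) are assumptions.

```python
import numpy as np

def renyi_entropy(p, alpha=3.0):
    """Rényi entropy (in bits) of a normalized distribution p, alpha != 1."""
    p = p[p > 0]
    return np.log2(np.sum(p ** alpha)) / (1.0 - alpha)

def running_average_background(frames, alpha=0.05, thresh=25.0):
    """Temporal stage: running-average background subtraction.

    Returns one boolean foreground mask per input frame; pixels whose
    absolute deviation from the background model exceeds `thresh` are
    flagged as moving."""
    background = frames[0].astype(float)
    masks = []
    for frame in frames:
        masks.append(np.abs(frame - background) > thresh)
        # Slowly adapt the background toward the current frame.
        background = (1.0 - alpha) * background + alpha * frame
    return masks

def f_measure(detected, ground_truth):
    """Harmonic mean of precision and recall over binary masks."""
    tp = np.logical_and(detected, ground_truth).sum()
    precision = tp / max(detected.sum(), 1)
    recall = tp / max(ground_truth.sum(), 1)
    return 2.0 * precision * recall / max(precision + recall, 1e-12)
```

In the layered arrangement, the spatial saliency map would gate which foreground pixels survive, so that only regions that are both spatially salient and temporally changing are reported as targets.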

© 2013 Optical Society of America


References


  1. A. Treptow, G. Cielniak, and T. Duckett, “Real-time people tracking for mobile robots using thermal vision,” Robot. Auton. Syst. 54, 729–739 (2006).
  2. R. Manduchi, A. Castano, A. Talukder, and L. Matthies, “Obstacle detection and terrain classification for autonomous off-road navigation,” Auton. Robots 18, 81–102 (2005).
  3. R. Xin and R. Lei, “Search aid system based on machine vision and its visual attention model for rescue target detection,” in Second WRI Global Congress on Intelligent Systems (GCIS) (IEEE, 2010), pp. 149–152.
  4. S. Sun and H. Park, “Automatic target recognition using target boundary information in FLIR images,” in Proceedings of the IASTED International Conference on Signal and Image Processing (2000), pp. 405–410.
  5. A. Arora, P. Dutta, S. Bapat, V. Kulathumani, H. Zhang, V. Naik, V. Mittal, H. Cao, M. Demirbas, and M. Gouda, “A line in the sand: a wireless sensor network for target detection, classification, and tracking,” Comput. Netw. 46, 605–634 (2004).
  6. O. Firschein and T. M. Strat, Reconnaissance, Surveillance, and Target Acquisition for the Unmanned Ground Vehicle: Providing Surveillance “Eyes” for an Autonomous Vehicle (Morgan Kaufmann, 1997).
  7. H. Tong, X. Zhao, and C. Yu, “Multi-sensor intelligent transportation monitoring system based on information fusion technology,” in International Conference on Convergence Information Technology (IEEE, 2007), pp. 1726–1732.
  8. M. Kagesawa, S. Ueno, K. Ikeuchi, and H. Kashiwagi, “Recognizing vehicles in infrared images using IMAP parallel vision board,” IEEE Trans. Intell. Transp. Syst. 2, 10–17 (2001).
  9. A. Monnet, A. Mittal, N. Paragios, and V. Ramesh, “Background modeling and subtraction of dynamic scenes,” in Proceedings of the Ninth IEEE International Conference on Computer Vision (IEEE, 2003), pp. 1305–1312.
  10. J. Wang, H.-L. Eng, A. H. Kam, and W.-Y. Yau, “A framework for foreground detection in complex environments,” in Statistical Methods in Video Processing (Springer, 2004), pp. 129–140.
  11. L. Li and M. K. Leung, “Integrating intensity and texture differences for robust change detection,” IEEE Trans. Image Process. 11, 105–112 (2002).
  12. I. Haritaoglu, D. Harwood, and L. S. Davis, “W4: real-time surveillance of people and their activities,” IEEE Trans. Pattern Anal. Mach. Intell. 22, 809–830 (2000).
  13. B. Bhanu and I. Pavlidis, Computer Vision Beyond the Visible Spectrum (Springer, 2005).
  14. M. Vollmer and K. P. Möllmann, Infrared Thermal Imaging: Fundamentals, Research and Applications (Wiley-VCH, 2010).
  15. U. Braga-Neto, M. Choudhary, and J. Goutsias, “Automatic target detection and tracking in forward-looking infrared image sequences using morphological connected operators,” J. Electron. Imaging 13, 802–813 (2004).
  16. Z. Chaohui, D. Xiaohui, X. Shuoyu, S. Zheng, and L. Min, “An improved moving object detection algorithm based on frame difference and edge detection,” in Fourth International Conference on Image and Graphics (IEEE, 2007), pp. 519–523.
  17. R. J. Radke, S. Andra, O. Al-Kofahi, and B. Roysam, “Image change detection algorithms: a systematic survey,” IEEE Trans. Image Process. 14, 294–307 (2005).
  18. J. W. Davis and M. A. Keck, “A two-stage template approach to person detection in thermal imagery,” in Seventh IEEE Workshops on Application of Computer Vision, WACV/MOTIONS ’05 (IEEE, 2005), Vol. 1, pp. 364–369.
  19. C. Dai, Y. Zheng, and X. Li, “Pedestrian detection and tracking in infrared imagery using shape and appearance,” Comput. Vis. Image Underst. 106, 288–299 (2007).
  20. S. S. Beauchemin and J. L. Barron, “The computation of optical flow,” ACM Comput. Surv. 27, 433–466 (1995).
  21. Y. Motai, S. Kumar Jha, and D. Kruse, “Human tracking from a mobile agent: optical flow and Kalman filter arbitration,” Signal Process. Image Commun. 27, 83–95 (2012).
  22. A. Fernández-Caballero, J. C. Castillo, J. Martínez-Cantos, and R. Martínez-Tomás, “Optical flow or image subtraction in human detection from infrared camera on mobile robot,” Robot. Auton. Syst. 58, 1273–1281 (2010).
  23. J. Wang, G. Bebis, and R. Miller, “Robust video-based surveillance by integrating target detection with tracking,” in Computer Vision and Pattern Recognition Workshop (IEEE, 2006), pp. 137–142.
  24. M. Cristani, M. Farenzena, D. Bloisi, and V. Murino, “Background subtraction for automated multisensor surveillance: a comprehensive review,” EURASIP J. Adv. Signal Process. 2010, 43 (2010).
  25. M. Piccardi, “Background subtraction techniques: a review,” in 2004 IEEE International Conference on Systems, Man and Cybernetics (IEEE, 2004), pp. 3099–3104.
  26. S. Y. Elhabian, K. M. El-Sayed, and S. H. Ahmed, “Moving object detection in spatial domain using background removal techniques: state-of-art,” RPCS 1, 32–54 (2008).
  27. Y. Benezeth, P.-M. Jodoin, B. Emile, H. Laurent, and C. Rosenberger, “Review and evaluation of commonly-implemented background subtraction algorithms,” in 19th International Conference on Pattern Recognition (IEEE, 2008).
  28. A. M. McIvor, “Background subtraction techniques,” in Proceedings of Image and Vision Computing (Auckland, New Zealand, 2000).
  29. Y. B. Chin, L. W. Soong, L. H. Siong, and W. W. Kit, “Extended fuzzy background modeling for moving vehicle detection using infrared vision,” IEICE Electron. Exp. 8, 340–345 (2011).
  30. C. Stauffer and W. E. L. Grimson, “Adaptive background mixture models for real-time tracking,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE, 1999).
  31. B. White and M. Shah, “Automatically tuning background subtraction parameters using particle swarm optimization,” in 2007 IEEE International Conference on Multimedia and Expo (IEEE, 2007), pp. 1826–1829.
  32. M. H. Sigari, N. Mozayani, and H. R. Pourreza, “Fuzzy running average and fuzzy background subtraction: concepts and application,” Int. J. Comput. Sci. Netw. Secur. 8, 138–143 (2008).
  33. A. Strehl and J. Aggarwal, “Detecting moving objects in airborne forward looking infra-red sequences,” in Proceedings of the IEEE Workshop on Computer Vision Beyond the Visible Spectrum: Methods and Applications (IEEE, 1999), pp. 3–12.
  34. D. Davies, P. Palmer, and M. Mirmehdi, “Detection and tracking of very small low contrast objects,” in Proceedings of the British Machine Vision Conference, M. Nixon and J. Carter, eds. (BMVA, 1998).
  35. H. Shekarforoush and R. Chellappa, “A multi-fractal formalism for stabilization, object detection and tracking in FLIR sequences,” in Proceedings of the 2000 International Conference on Image Processing (IEEE, 2000), pp. 78–81.
  36. Y. Xu, Y. Zhao, C. Jin, Z. Qu, L. Liu, and X. Sun, “Salient target detection based on pseudo-Wigner-Ville distribution and Rényi entropy,” Opt. Lett. 35, 475–477 (2010).
  37. E. Wigner, “On the quantum correction for thermodynamic equilibrium,” Phys. Rev. 40, 749–759 (1932).
  38. L. D. Jacobson and H. Wechsler, “Joint spatial/spatial-frequency representation,” Signal Process. 14, 37–68 (1988).
  39. S. Gabarda Tébar, “New contributions to the multidimensional analysis of signals and images through the pseudo-Wigner distribution,” Ph.D. thesis (Universidad Nacional de Educación a Distancia, Spain, 2008).
  40. N. Wiener, Cybernetics (Hermann, 1948).
  41. C. E. Shannon and W. Weaver, “A mathematical theory of communication,” Bell Syst. Tech. J. 27, 379–423 (1948).
  42. A. Rényi, “On measures of entropy and information,” in Fourth Berkeley Symposium on Mathematical Statistics and Probability (University of California, 1961), Vol. 1, pp. 547–561.
  43. R. Eisberg, R. Resnick, and J. Brown, “Quantum physics of atoms, molecules, solids, nuclei, and particles,” Phys. Today 39(3), 110 (1986).
  44. P. Flandrin, R. G. Baraniuk, and O. Michel, “Time-frequency complexity and information,” in 1994 IEEE International Conference on Acoustics, Speech, and Signal Processing (IEEE, 1994), Vol. 3, pp. III/329–III/332.
  45. S. Benton, “Background subtraction,” http://www.sethbenton.com/background_subtraction.html.


R. Manduchi, A. Castano, A. Talukder, and L. Matthies, “Obstacle detection and terrain classification for autonomous off-road navigation,” Auton. Robots 18, 81–102 (2005).
[CrossRef]

Tong, H.

H. Tong, X. Zhao, and C. Yu, “Multi-sensor intelligent transportation monitoring system based on information fusion technology,” in International Conference on Convergence Information Technology (IEEE, 2007), pp. 1726–1732.

Treptow, A.

A. Treptow, G. Cielniak, and T. Duckett, “Real-time people tracking for mobile robots using thermal vision,” Robot. Auton. Syst. 54, 729–739 (2006).
[CrossRef]

Ueno, S.

M. Kagesawa, S. Ueno, K. Ikeuchi, and H. Kashiwagi, “Recognizing vehicles in infrared images using IMAP parallel vision board,” IEEE Trans. Intell. Transp. Syst. 2, 10–17 (2001).

Vollmer, M.

M. Vollmer and K. P. Möllmann, Infrared Thermal Imaging: Fundamentals, Research and Applications (Wiley-VCH, 2010).

Wang, J.

J. Wang, H.-L. Eng, A. H. Kam, and W.-Y. Yau, “A framework for foreground detection in complex environments,” in Statistical Methods in Video Processing (Springer, 2004), pp. 129–140.

J. Wang, G. Bebis, and R. Miller, “Robust video-based surveillance by integrating target detection with tracking,” in Computer Vision and Pattern Recognition Workshop (IEEE, 2006), p. 137–142.

Weaver, W.

C. E. Shannon and W. Weaver, “A mathematical theory of communication” Bell Syst. Tech. J. 27, 379–423 (1948).

Wechsler, H.

L. D. Jacobson and H. Wechsler, “Joint spatial/spatial-frequency representation,” Signal Process. 14, 37–68 (1988).

White, B.

B. White and M. Shah, “Automatically tuning background subtraction parameters using particle swarm optimization,” in 2007 IEEE International Conference on Multimedia and Expo (IEEE, 2007), pp. 1826–1829.

Wiener, N.

N. Wiener, Cybernetics (Hermann Paris, 1948).

Wigner, E.

E. Wigner, “On the quantum correction for thermodynamic equilibrium,” Phys. Rev. 40, 749–759 (1932).
[CrossRef]

Xiaohui, D.

Z. Chaohui, D. Xiaohui, X. Shuoyu, S. Zheng, and L. Min, “An improved moving object detection algorithm based on frame difference and edge detection,” in Fourth International Conference on Image and Graphics (IEEE, 2007), pp. 519–523.

Xin, R.

R. Xin and R. Lei, “Search aid system based on machine vision and its visual attention model for rescue target detection,” in Second WRI Global Congress on Intelligent Systems (GCIS) (IEEE, 2010), pp. 149–152.

Xu, Y.

Yau, W.-Y.

J. Wang, H.-L. Eng, A. H. Kam, and W.-Y. Yau, “A framework for foreground detection in complex environments,” in Statistical Methods in Video Processing (Springer, 2004), pp. 129–140.

Yu, C.

H. Tong, X. Zhao, and C. Yu, “Multi-sensor intelligent transportation monitoring system based on information fusion technology,” in International Conference on Convergence Information Technology (IEEE, 2007), pp. 1726–1732.

Zhang, H.

A. Arora, P. Dutta, S. Bapat, V. Kulathumani, H. Zhang, V. Naik, V. Mittal, H. Cao, M. Demirbas, and M. Gouda, “A line in the sand: a wireless sensor network for target detection, classification, and tracking,” Comput. Netw. 46, 605–634 (2004).
[CrossRef]

Zhao, X.

H. Tong, X. Zhao, and C. Yu, “Multi-sensor intelligent transportation monitoring system based on information fusion technology,” in International Conference on Convergence Information Technology (IEEE, 2007), pp. 1726–1732.

Zhao, Y.

Zheng, S.

Z. Chaohui, D. Xiaohui, X. Shuoyu, S. Zheng, and L. Min, “An improved moving object detection algorithm based on frame difference and edge detection,” in Fourth International Conference on Image and Graphics (IEEE, 2007), pp. 519–523.

Zheng, Y.

C. Dai, Y. Zheng, and X. Li, “Pedestrian detection and tracking in infrared imagery using shape and appearance,” Comput. vis. image underst. 106, 288–299 (2007).

ACM Comput. Surv.

S. S. Beauchemin and J. L. Barron, “The computation of optical flow,” ACM Comput. Surv. 27, 433–466 (1995).
[CrossRef]

Auton. Robots

R. Manduchi, A. Castano, A. Talukder, and L. Matthies, “Obstacle detection and terrain classification for autonomous off-road navigation,” Auton. Robots 18, 81–102 (2005).
[CrossRef]

Bell Syst. Tech. J.

C. E. Shannon and W. Weaver, “A mathematical theory of communication,” Bell Syst. Tech. J. 27, 379–423 (1948).

Comput. Netw.

A. Arora, P. Dutta, S. Bapat, V. Kulathumani, H. Zhang, V. Naik, V. Mittal, H. Cao, M. Demirbas, and M. Gouda, “A line in the sand: a wireless sensor network for target detection, classification, and tracking,” Comput. Netw. 46, 605–634 (2004).
[CrossRef]

Comput. Vis. Image Underst.

C. Dai, Y. Zheng, and X. Li, “Pedestrian detection and tracking in infrared imagery using shape and appearance,” Comput. Vis. Image Underst. 106, 288–299 (2007).

EURASIP J. Adv. Signal Process.

M. Cristani, M. Farenzena, D. Bloisi, and V. Murino, “Background subtraction for automated multisensor surveillance: a comprehensive review,” EURASIP J. Adv. Signal Process. 2010, 43 (2010).
[CrossRef]

IEEE Trans. Image Process.

R. J. Radke, S. Andra, O. Al-Kofahi, and B. Roysam, “Image change detection algorithms: a systematic survey,” IEEE Trans. Image Process. 14, 294–307 (2005).
[CrossRef]

L. Li and M. K. Leung, “Integrating intensity and texture differences for robust change detection,” IEEE Trans. Image Process. 11, 105–112 (2002).
[CrossRef]

IEEE Trans. Intell. Transp. Syst.

M. Kagesawa, S. Ueno, K. Ikeuchi, and H. Kashiwagi, “Recognizing vehicles in infrared images using IMAP parallel vision board,” IEEE Trans. Intell. Transp. Syst. 2, 10–17 (2001).

IEEE Trans. Pattern Anal. Machine Intell.

I. Haritaoglu, D. Harwood, and L. S. Davis, “W4: real-time surveillance of people and their activities,” IEEE Trans. Pattern Anal. Machine Intell. 22, 809–830 (2000).
[CrossRef]

IEICE Electron. Exp.

Y. B. Chin, L. W. Soong, L. H. Siong, and W. W. Kit, “Extended fuzzy background modeling for moving vehicle detection using infrared vision,” IEICE Electron. Exp. 8, 340–345 (2011).
[CrossRef]

Int. J. Comput. Sci. Netw. Secur.

M. H. Sigari, N. Mozayani, and H. R. Pourreza, “Fuzzy running average and fuzzy background subtraction: concepts and application,” Int. J. Comput. Sci. Netw. Secur. 8, 138–143 (2008).

J. Electron. Imaging

U. Braga-Neto, M. Choudhary, and J. Goutsias, “Automatic target detection and tracking in forward-looking infrared image sequences using morphological connected operators,” J. Electron. Imaging 13, 802–813 (2004).
[CrossRef]

Phys. Rev.

E. Wigner, “On the quantum correction for thermodynamic equilibrium,” Phys. Rev. 40, 749–759 (1932).
[CrossRef]

Phys. Today

R. Eisberg, R. Resnick, and J. Brown, “Quantum physics of atoms, molecules, solids, nuclei, and particles,” Phys. Today 39(3), 110 (1986).
[CrossRef]

Robot. Auton. Syst.

A. Fernández-Caballero, J. C. Castillo, J. Martínez-Cantos, and R. Martínez-Tomás, “Optical flow or image subtraction in human detection from infrared camera on mobile robot,” Robot. Auton. Syst. 58, 1273–1281 (2010).

A. Treptow, G. Cielniak, and T. Duckett, “Real-time people tracking for mobile robots using thermal vision,” Robot. Auton. Syst. 54, 729–739 (2006).
[CrossRef]

RPCS

S. Y. Elhabian, K. M. El-Sayed, and S. H. Ahmed, “Moving object detection in spatial domain using background removal techniques - state-of-art,” RPCS 1, 32–54 (2008).
[CrossRef]

Signal Process.

L. D. Jacobson and H. Wechsler, “Joint spatial/spatial-frequency representation,” Signal Process. 14, 37–68 (1988).

Signal Process. Image Commun.

Y. Motai, S. Kumar Jha, and D. Kruse, “Human tracking from a mobile agent: optical flow and Kalman filter arbitration,” Signal Process. Image Commun. 27, 83–95 (2012).

Other

J. Wang, G. Bebis, and R. Miller, “Robust video-based surveillance by integrating target detection with tracking,” in Computer Vision and Pattern Recognition Workshop (IEEE, 2006), pp. 137–142.

Y. Benezeth, P.-M. Jodoin, B. Emile, H. Laurent, and C. Rosenberger, “Review and evaluation of commonly-implemented background subtraction algorithms,” in 19th International Conference on Pattern Recognition (IEEE, 2008).

A. M. McIvor, “Background subtraction techniques,” in Proceedings of Image and Vision Computing (Auckland, New Zealand, 2000).

M. Piccardi, “Background subtraction techniques: a review,” in 2004 IEEE International Conference on Systems, Man and Cybernetics (IEEE, 2004), pp. 3099–3104.

S. Gabarda Tébar, “New contributions to the multidimensional analysis of signals and images through the pseudo-Wigner Distribution,” Ph.D. thesis (Universidad Nacional De Educación A Distancia, Spain, 2008).

N. Wiener, Cybernetics (Hermann Paris, 1948).

A. Strehl and J. Aggarwal, “Detecting moving objects in airborne forward looking infra-red sequences,” in Proceedings IEEE Workshop on Computer Vision Beyond the Visible Spectrum: Methods and Applications (IEEE, 1999), pp. 3–12.

D. Davies, P. Palmer, and M. Mirmehdi, “Detection and tracking of very small low contrast objects,” in Proceedings of the British Machine Conference, M. Nixon and J. Carter, eds. (BMVA, 1998).

H. Shekarforoush and R. Chellappa, “A multi-fractal formalism for stabilization, object detection and tracking in FLIR sequences,” in Proceedings 2000 International Conference on Image Processing (IEEE, 2000), pp. 78–81.

J. W. Davis and M. A. Keck, “A two-stage template approach to person detection in thermal imagery,” in Seventh IEEE Workshops on Application of Computer Vision, WACV/MOTIONS ’05 (IEEE, 2005), Vol. 1, pp. 364–369.

Z. Chaohui, D. Xiaohui, X. Shuoyu, S. Zheng, and L. Min, “An improved moving object detection algorithm based on frame difference and edge detection,” in Fourth International Conference on Image and Graphics (IEEE, 2007), pp. 519–523.

A. Monnet, A. Mittal, N. Paragios, and V. Ramesh, “Background modeling and subtraction of dynamic scenes,” in Proceedings Ninth IEEE International Conference on Computer Vision (IEEE, 2003), pp. 1305–1312.

J. Wang, H.-L. Eng, A. H. Kam, and W.-Y. Yau, “A framework for foreground detection in complex environments,” in Statistical Methods in Video Processing (Springer, 2004), pp. 129–140.

B. Bhanu and I. Pavlidis, Computer Vision Beyond the Visible Spectrum (Springer, 2005).

M. Vollmer and K. P. Möllmann, Infrared Thermal Imaging: Fundamentals, Research and Applications (Wiley-VCH, 2010).

O. Firschein and T. M. Strat, Reconnaissance, Surveillance, and Target Acquisition for the Unmanned Ground Vehicle: Providing Surveillance “Eyes” for an Autonomous Vehicle (Morgan Kaufmann, 1997).

H. Tong, X. Zhao, and C. Yu, “Multi-sensor intelligent transportation monitoring system based on information fusion technology,” in International Conference on Convergence Information Technology (IEEE, 2007), pp. 1726–1732.

R. Xin and R. Lei, “Search aid system based on machine vision and its visual attention model for rescue target detection,” in Second WRI Global Congress on Intelligent Systems (GCIS) (IEEE, 2010), pp. 149–152.

S. Sun and H. Park, “Automatic target recognition using target boundary information in FLIR images,” in Proceedings of the IASTED International Conference on Signal and Image Processing (2000), pp. 405–410.

P. Flandrin, R. G. Baraniuk, and O. Michel, “Time-frequency complexity and information,” in 1994 IEEE International Conference on Acoustics, Speech, and Signal Processing (IEEE, 1994), Vol. 3, pp. III/329–III/332.

S. Benton, “Background subtraction,” http://www.sethbenton.com/background_subtraction.html .

C. Stauffer and W. E. L. Grimson, “Adaptive background mixture models for real-time tracking,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE, 1999).

B. White and M. Shah, “Automatically tuning background subtraction parameters using particle swarm optimization,” in 2007 IEEE International Conference on Multimedia and Expo (IEEE, 2007), pp. 1826–1829.

A. Renyi, “On measures of entropy and information,” in Fourth Berkeley Symposium on Mathematical Statistics and Probability (University of California, 1961), Vol. 1, pp. 547–561.


Figures (15)

Fig. 1.

Block diagram of the proposed moving target detection method.

Fig. 2.

Stage-wise output of the proposed PWD-RE-based background subtraction method. (a) Background image (left) and frame 88 (right) of the thermal video. (b) Fourth, fifth, and sixth (left to right) Wigner coefficient maps of the background for PWD of N=8. (c) Fourth, fifth, and sixth (left to right) Wigner coefficient maps of frame 88 of N=8. (d) Saliency maps for the background image (left) and frame 88 (right). (e) Detected foreground before (left) and after (right) postprocessing. (f) Detected region of moving object shown by bounding boxes before (left) and after (right) blob fusion.
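The blob-fusion step shown in panel (f) merges fragmented detections of a single object into one bounding box. The following is a minimal sketch, assuming boxes are (x1, y1, x2, y2) tuples and a hypothetical `gap` tolerance in pixels; the paper's exact fusion rule is not specified here:

```python
def boxes_near(a, b, gap):
    """True if boxes a and b overlap once each is expanded by `gap` pixels."""
    return (a[0] - gap <= b[2] and b[0] - gap <= a[2] and
            a[1] - gap <= b[3] and b[1] - gap <= a[3])

def box_union(a, b):
    """Smallest box enclosing both a and b."""
    return (min(a[0], b[0]), min(a[1], b[1]),
            max(a[2], b[2]), max(a[3], b[3]))

def fuse_blobs(boxes, gap=5):
    """Repeatedly merge boxes that overlap or lie within `gap` pixels
    of each other, until no further fusion is possible."""
    boxes = [tuple(b) for b in boxes]
    changed = True
    while changed:
        changed = False
        result = []
        for box in boxes:
            for i, other in enumerate(result):
                if boxes_near(box, other, gap):
                    result[i] = box_union(box, other)
                    changed = True
                    break
            else:
                result.append(box)
        boxes = result
    return boxes
```

For example, two fragments of one target, (0, 0, 10, 10) and (12, 0, 20, 10), fuse into (0, 0, 20, 10), while a distant blob is left untouched.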

Fig. 3.

Uncooled microbolometer-type thermal imager.

Fig. 4.

Infrared image/video acquisition system.

Fig. 5.

Google Earth image of the experimental location (Manginapudi Beach, 11 km from Machilipatnam, Krishna District, Andhra Pradesh, India).

Fig. 6.

Electro-optical image of the experimental site where the “CSIR-CSIO moving object thermal infrared imagery dataset” was generated at two time intervals (left, 11:00 a.m.; right, 4:00 p.m.).

Fig. 7.

Thermal infrared image of a person walking.

Fig. 8.

Variability of the thermal infrared signature of a vehicle due to changes in atmospheric conditions (left, image captured during the day; right, image captured in the evening).

Fig. 9.

Single frame snapshot of “Ambassador_people_bird” sequence. Electro-optical image (left) and infrared image (right).

Fig. 10.

1–4 represent frames 1, 50, 100, and 137 of the “Innova” sequence. (a1)–(a4) show the output of GMM, (b1)–(b4) show the output of single reference frame, (c1)–(c4) show the output of moving average, and (d1)–(d4) show the output of the proposed method.
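Of the baseline methods compared in Fig. 10, the moving-average approach maintains a running background estimate B_t = (1 − α)·B_{t−1} + α·F_t and flags pixels whose deviation from it exceeds a threshold. A minimal sketch follows; `alpha` and `thresh` are illustrative values, not the settings used in the paper:

```python
import numpy as np

def moving_average_subtraction(frames, alpha=0.05, thresh=30):
    """Running-average background model: the background is updated as
    B_t = (1 - alpha)*B_{t-1} + alpha*F_t, and a pixel is foreground
    when |F_t - B_t| exceeds `thresh`. Returns one boolean mask per
    frame after the first (which seeds the background)."""
    background = frames[0].astype(float)
    masks = []
    for frame in frames[1:]:
        frame = frame.astype(float)
        diff = np.abs(frame - background)   # compare against the old model
        masks.append(diff > thresh)
        background = (1 - alpha) * background + alpha * frame
    return masks
```

A slowly adapting `alpha` keeps genuinely moving targets in the foreground while gradually absorbing illumination drift into the background model.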

Fig. 11.

Graph depicting the comparative analysis of the four methods in terms of correct detections.

Fig. 12.

Graph depicting the comparative analysis of the four methods in terms of false detections.

Fig. 13.

Output of the proposed method for “Auto_near” thermal sequence. (a) Detected target shown by the red bounding box in the input frame, (b) background frame, (c) output of the clutter rejection stage, and (d) output after morphological operations.

Fig. 14.

Output of the proposed method for “Motorcycle_Ambassador” thermal sequence. (a) Detected target shown by the red bounding box in the input frame, (b) background frame, (c) output of the clutter rejection stage, and (d) output after morphological operations.

Fig. 15.

Partial detections in the case of the “Auto_morning” thermal sequence: (a) Detected target shown by the red bounding box in the input frame, (b) background frame, (c) output of clutter rejection stage, and (d) output after morphological operations.

Tables (4)


Table 1. Design of Experiments


Table 2. Computational Time of the Methods Under Comparison


Table 3. Comparative Analysis of the Performance of the Existing Background Subtraction Methods and the Proposed Method


Table 4. Detection Results for CSIR-CSIO Moving Object Thermal Infrared Imagery Dataset (PPV—Positive Predictive Value)

Equations (8)


W_f[n,k] = 2\sum_{m=-N/2}^{N/2-1} f[n+m]\, f^{*}[n-m]\, e^{-2j\left(\frac{2\pi m}{N}\right)k},

R_{\alpha} = \frac{1}{1-\alpha}\log_{2}\!\left(\sum_{n}\sum_{k}\check{P}^{\alpha}[n,k]\right),

\check{P}[n,k] = P[n,k]\, P^{*}[n,k],

\sum_{n}\sum_{k}\check{P}[n,k] = 1,

S(f) = h_{g} * \Delta RE(f),

\Delta RE(f) = \left| R(f) - \bar{R}(f) \right|,

\bar{R}(f) = h_{a} * R(f),

O(f) = \begin{cases} 1, & S(f) > \text{threshold} \\ 0, & \text{otherwise}. \end{cases}
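The spatial stage defined by these equations can be sketched as follows: the discrete pseudo-Wigner distribution of an N = 8 window, followed by the order-α Rényi entropy of its coefficients. The unit-sum normalization of P̌ is reconstructed here as |W|² divided by its total, which is an assumption to be checked against the paper:

```python
import numpy as np

def pseudo_wigner_1d(f, N=8):
    """Discrete pseudo-Wigner distribution of a length-N window f:
    W[n, k] = 2 * sum_m f[n+m] f*[n-m] exp(-2j (2*pi*m/N) k),
    with m running from -N/2 to N/2 - 1; periodic indexing of the
    window is assumed."""
    W = np.zeros((N, N), dtype=complex)
    for n in range(N):
        for k in range(N):
            acc = 0j
            for m in range(-N // 2, N // 2):
                acc += (f[(n + m) % N] * np.conj(f[(n - m) % N]) *
                        np.exp(-2j * (2 * np.pi * m / N) * k))
            W[n, k] = 2 * acc
    return W

def renyi_entropy(W, alpha=3):
    """Order-alpha Rényi entropy of the PWD, after normalizing
    P_check = |W|^2 / sum |W|^2 so that it sums to one."""
    P = (W * np.conj(W)).real
    P = P / P.sum()
    # sum only over nonzero cells to avoid log-of-zero issues
    return (1.0 / (1.0 - alpha)) * np.log2((P[P > 0] ** alpha).sum())
```

High entropy marks a spectrally spread (cluttered or salient) window, low entropy a concentrated one; the per-pixel entropy map R(f) then feeds the smoothed saliency S(f) and the final threshold test O(f).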
