Abstract

Detecting and tracking moving objects in long-range horizontal imaging through the atmosphere is challenging because atmospheric turbulence causes time-varying image shifts and blur, which significantly increase miss and false detection rates. An efficient method is presented that discriminates true from false detections using novel criteria for the objects’ spatio-temporal properties, following an adaptive thresholding procedure for foreground detection and an activity-based masking of likely false alarms. The method is demonstrated on significantly distorted videos and compared with state-of-the-art methods, showing better false alarm and miss detection rates.

© 2014 Optical Society of America

References


  1. W. Hu, T. Tan, L. Wang, and S. Maybank, “A survey on visual surveillance of object motion and behaviors,” IEEE Trans. Syst., Man, Cybern. C Appl. Rev. 34, 334–352 (2004).
  2. Y. Dedeoglu, Moving Object Detection, Tracking and Classification for Smart Video Surveillance (Bilkent University, 2004).
  3. O. Haik and Y. Yitzhaky, “Effects of image restoration on automatic acquisition of moving objects in thermal video sequences degraded by the atmosphere,” Appl. Opt. 46, 8562–8572 (2007).
    [CrossRef]
  4. B. Fishbain, L. P. Yaroslavsky, and I. A. Ideses, “Real time stabilization of long range observation system turbulent video,” J. Real-Time Image Proc. 2, 11–22 (2007).
  5. O. Oreifej, L. Xin, and M. Shah, “Simultaneous video stabilization and moving object detection in turbulence,” IEEE Trans. Pattern Anal. Mach. Intell. 35, 450–462 (2013).
    [CrossRef]
  6. A. Elkabetz and Y. Yitzhaky, “Background modeling for moving object detection in long-distance imaging through turbulent medium,” Appl. Opt. (2014), to be published.
  7. G. Baldini, P. Campadelli, D. Cozzi, and R. Lanzarotti, “A simple and robust method for moving target tracking,” in Proceedings of IASTED International Conference on Signal Processing, Pattern Recognition and Applications (ACTA, 2012), pp. 108–112.
  8. A. Elgammal, D. Harwood, and L. Davis, “Non-parametric model for background subtraction,” in Proceedings of 6th European Conference on Computer Vision, Dublin, Ireland, Vol. 2 (Springer, 2000), pp. 751–767.
  9. O. Barnich and M. Van Droogenbroeck, “ViBe: a universal background subtraction algorithm for video sequences,” IEEE Trans. Image Process. 20, 1709–1724 (2011).
    [CrossRef]
  10. C. Stauffer and W. E. L. Grimson, “Adaptive background mixture models for real-time tracking,” in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE, 1999).
  11. E. Chen, O. Haik, and Y. Yitzhaky, “Classification of thermal moving objects in atmospherically degraded video,” Opt. Eng. 51, 101710 (2012).
    [CrossRef]
  12. S. Cheung and C. Kamath, “Robust techniques for background subtraction in urban traffic video,” Proc. SPIE 5308, 881–892 (2004).
    [CrossRef]
  13. N. Lu, J. Wang, Q. H. Wu, and L. Yang, “An improved motion detection method for real-time surveillance,” IAENG Int. J. Comput. Sci. 35, 1 (2008).
  14. L. Zhang and Y. Liang, “Motion human detection based on background subtraction,” in Second International Workshop on Education Technology and Computer Science (IEEE, 2010).
  15. R. C. Gonzalez and R. E. Woods, Digital Image Processing, 3rd ed. (Prentice-Hall, 2008), Chap. 9.
  16. N. S. Kopeika, A System Engineering Approach to Imaging, 2nd ed. (SPIE, 1998), Chap. 15.
  17. M. S. Belen’kii, J. M. Stewart, and P. Gillespie, “Turbulence-induced edge image waviness: theory and experiment,” Appl. Opt. 40, 1321–1328 (2001).
    [CrossRef]
  18. X. Zhu and J. M. Kahn, “Free-space optical communication through atmospheric turbulence channels,” IEEE Trans. Commun. 50, 1293–1300 (2002).
    [CrossRef]
  19. O. Oreifej, L. Xin, and M. Shah, “Simultaneous video stabilization and moving object detection in turbulence,” http://vision.eecs.ucf.edu/projects/Turbulence/TheeWayDec.zip .
  20. O. Barnich and M. Van Droogenbroeck, “ViBe: a universal background subtraction algorithm for video sequences,” http://hdl.handle.net/2268/145853 .
  21. Online Resource 1: http://www.ee.bgu.ac.il/~itzik/DetectTrackTurb/ .
  22. D. M. W. Powers, “Evaluation: from precision, recall and F-factor to ROC, informedness, markedness and correlation,” J. Mach. Learn. Technol. 2, 37–63 (2011).




Figures (4)

Fig. 1.

Tracking procedure (Section 2.B), based on Ref. [7] with the addition of a grace window.

Fig. 2.

Background activity mask operation (Section 2.C).

Fig. 3.

Sample frames from three video sequences (with medium turbulence effects), showing the detection and tracking results obtained by the proposed method compared with the methods of Refs. [5], [9], and [11]. Each column represents a different sequence (sequences 1–3 from left to right, respectively), while each row shows the results of a different method. The first row shows frames from the original recorded sequences (green bounding boxes mark the ground-truth moving targets), the second row shows the results of the proposed method, the third row shows the results obtained using [5], the fourth and fifth rows show the results obtained using [9] with high and low thresholds, respectively, and the last row shows the results obtained using [11]. The detected objects in each method are marked by red bounding boxes. All video files are given in Online Resource 1 [21].

Fig. 4.

Same as Fig. 3, but with different sequences (sequences 4–6 from left to right, respectively) that were degraded by stronger turbulence effects.

Tables (2)

Table 1. Comparison of the True Detections (TP), False Alarms (FP), and Miss Detections (FN) Between all Methods for the Video Sequences Sampled in Figs. 3 and 4

Table 2. Comparison of the Precision (Prec.), Recall, and F1 Parameters for all the Methods

Equations (12)

B_k(\vec{x}) = \alpha \cdot B_{k-1}(\vec{x}) + (1-\alpha) \cdot I_k(\vec{x}),
F_k(\vec{x}) = \begin{cases} 1 & \text{if } \mathrm{Diff}_k(\vec{x}) > T(\vec{x}), \\ 0 & \text{otherwise} \end{cases}
\mathrm{Diff}_k(\vec{x}) = \left| I_k(\vec{x}) - B_k(\vec{x}) \right|,
T(\vec{x}) = \mathrm{Gain} \cdot \operatorname{median}_k\!\left[ \mathrm{Diff}_k(\vec{x}) \right] + \mathrm{Offset}.
\mathrm{Diff}_k(\vec{x}) = (1-\alpha)\,\mathrm{Diff}_{k-1}(\vec{x}) + \alpha \left| I_k(\vec{x}) - B_k(\vec{x}) \right|,
T(\vec{x}) = \mathrm{Gain} \cdot \mathrm{Diff}_k(\vec{x}) + \mathrm{Offset}.
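A minimal NumPy sketch of this running-average background model with its adaptive per-pixel threshold; the parameter values (alpha, gain, offset) are illustrative placeholders, not the tuned settings used in the paper:

```python
import numpy as np

def update_background(bg, frame, alpha=0.95):
    """B_k = alpha * B_{k-1} + (1 - alpha) * I_k."""
    return alpha * bg + (1.0 - alpha) * frame

def update_diff(diff, frame, bg, alpha=0.05):
    """Running difference map: Diff_k = (1 - alpha) * Diff_{k-1} + alpha * |I_k - B_k|."""
    return (1.0 - alpha) * diff + alpha * np.abs(frame - bg)

def foreground_mask(frame, bg, diff, gain=3.0, offset=5.0):
    """F_k = 1 where |I_k - B_k| exceeds the per-pixel threshold T = gain * Diff + offset."""
    threshold = gain * diff + offset
    return (np.abs(frame - bg) > threshold).astype(np.uint8)
```

Pixels with persistently large turbulence-induced differences accumulate a large Diff map and hence a high local threshold, which is what suppresses false foreground detections in active background regions.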
\left| \sum_{t=k-N_{\min}}^{k} \operatorname{sign}(V_t) \right|.
\sigma_\alpha^2 = 2.914 \cdot D^{-1/3} \cdot C_n^2 \cdot L,
\tau \approx \sqrt{\lambda L}/\upsilon,
\mathrm{Precision} = \frac{TP}{TP+FP},
\mathrm{Recall} = \frac{TP}{TP+FN},
F_1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}.
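The precision, recall, and F1 definitions above map directly to a small helper; the TP/FP/FN counts in the example are arbitrary, not values from Tables 1 and 2:

```python
def detection_scores(tp, fp, fn):
    """Precision, recall, and F1 from true-detection, false-alarm, and miss counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Example with arbitrary counts: 8 true detections, 2 false alarms, 2 misses.
prec, rec, f1 = detection_scores(8, 2, 2)
```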
