Abstract

We examine the effect of image restoration (deblurring) on the automatic acquisition of moving objects from long-distance thermal video signals. The videos are first restored using a recently developed blind-deconvolution method, and the restoration's effect on the geometrical features of automatically detected moving objects is then examined. Results show that for modern (low-noise, high-resolution) thermal imaging devices, the geometrical features obtained from the restored videos better resemble the true properties of the objects. These results are consistent with a previous study, which demonstrated that image restoration can significantly improve the ability of human observers to acquire moving objects from long-range thermal videos.

© 2007 Optical Society of America


References


  1. W. Hu, T. Tan, L. Wang, and S. Maybank, "A survey on visual surveillance of object motion and behaviors," IEEE Trans. Syst. Man Cybern., Part C Appl. Rev. 34, 334-352 (2004).
  2. Y. Dedeoglu, "Moving object detection, tracking and classification for smart video surveillance," Master's thesis (Bilkent University, 2004), http://www.cs.bilkent.edu.tr/~yigithan/publications/MScThesis.pdf (accessed 28 October 2007).
  3. N. S. Kopeika, A System Engineering Approach to Imaging, 2nd ed. (SPIE Press, 1998).
  4. D. L. Fried, "Optical resolution through a randomly inhomogeneous medium for very long and very short exposures," J. Opt. Soc. Am. 56, 1372-1379 (1966).
  5. N. S. Kopeika, I. Dror, and D. Sadot, "The causes of atmospheric blur: comment on atmospheric scattering effect on spatial resolution of imaging systems," J. Opt. Soc. Am. A 15, 3097-3106 (1998).
  6. I. Dror and N. S. Kopeika, "Experimental comparison of turbulence MTF and aerosol MTF through the open atmosphere," J. Opt. Soc. Am. A 12, 970-980 (1995).
  7. Y. Yitzhaky, I. Dror, and N. S. Kopeika, "Restoration of atmospherically blurred images according to weather-predicted atmospheric modulation transfer function (MTF)," Opt. Eng. 36, 3064-3072 (1997).
  8. D. Li, R. M. Mersereau, D. H. Frakes, and M. J. T. Smith, "A new method for suppressing optical turbulence in video," in Proceedings of the European Signal Processing Conference (EUSIPCO, 2005).
  9. S. Cheung and C. Kamath, "Robust techniques for background subtraction in urban traffic video," Proc. SPIE 5308, 881-892 (2004).
  10. A. Strehl and J. K. Aggarwal, "Detecting moving objects in airborne forward looking infra-red sequences," in Proceedings of the IEEE Workshop on Computer Vision Beyond the Visible Spectrum: Methods and Applications (IEEE, 1999), pp. 3-12.
  11. E. Estalayo, L. Salgado, F. Jaureguizar, and N. Garcia, "Efficient image stabilization and automatic target detection in aerial FLIR sequence," Proc. SPIE 6234, 62340N (2006).
  12. O. Haik, Y. Lior, D. Nahmani, and Y. Yitzhaky, "Effects of image restoration on acquisition of moving objects from thermal video sequences degraded by the atmosphere," Opt. Eng. 45, 117006 (2006).
  13. O. Shacham, O. Haik, and Y. Yitzhaky, "Blind restoration of atmospherically degraded images by automatic best step-edge detection," Pattern Recogn. Lett. 28, 2094-2103 (2007).
  14. A. K. Jain, Fundamentals of Digital Image Processing (Prentice Hall, 1989).
  15. D. Kundur and D. Hatzinakos, "Blind image deconvolution," IEEE Signal Process. Mag. 13, 43-64 (1996).
  16. A. Jalobeanu, J. Zerubia, and L. Blanc-Feraud, "Bayesian estimation of blur and noise in remote sensing imaging," in Blind Image Deconvolution: Theory and Applications, P. Campisi and K. Egiazarian, eds. (CRC Press, 2007).
  17. A. E. Savakis and H. J. Trussell, "Blur identification by residual spectral matching," IEEE Trans. Image Process. 2, 141-151 (1993).
  18. G. Pavlovic and A. M. Tekalp, "Maximum likelihood parametric blur identification based on a continuous spatial domain model," IEEE Trans. Image Process. 1, 496-504 (1992).
  19. D. G. Sheppard, B. R. Hunt, and M. W. Marcellin, "Iterative multi-frame super-resolution algorithms for atmospheric-turbulence-degraded imagery," J. Opt. Soc. Am. A 15, 978-992 (1998).
  20. Q. Zhou and J. K. Aggarwal, "Object tracking in an outdoor environment using fusion of features and cameras," Image Vis. Comput. 24, 1244-1255 (2006).
  21. A. J. Lipton, H. Fujiyoshi, and R. S. Patil, "Moving target classification and tracking from real-time video," in Proceedings of the IEEE Image Understanding Workshop (IEEE, 1998), pp. 129-136.
  22. M. A. Ali, S. Indupalli, and B. Boufama, "Tracking multiple people for video surveillance," in Proceedings of the First International Workshop on Video Processing for Security (2006).
  23. A. Elgammal, D. Harwood, and L. Davis, "Non-parametric model for background subtraction," in Proceedings of the European Conference on Computer Vision, Lecture Notes in Computer Science 1843 (Springer, 2000), pp. 751-767.
  24. G. Baldini, P. Campadelli, D. Cozzi, and R. Lanzarotti, "A simple and robust method for moving target tracking," in Proceedings of the IASTED International Conference on Signal Processing, Pattern Recognition and Applications (SPPRA, 2002), pp. 108-112.
  25. A. Amer, "Voting-based simultaneous tracking of multiple video objects," Proc. SPIE 5022, 500-511 (2003).
  26. CONTROP Precision Technologies Ltd., "Innovative solutions for surveillance and reconnaissance," http://www.controp.co.il.
  27. H. He and L. P. Kondi, "A super-resolution technique with motion estimation considering atmospheric turbulence," Proc. SPIE 5789, 135-144 (2005).
  28. D. Li and R. M. Mersereau, "Blur identification based on kurtosis minimization," in Proceedings of the IEEE International Conference on Image Processing (IEEE, 2005), pp. 905-908.
  29. M. Lalonde, S. Foucher, L. Gagnon, E. Pronovost, M. Derenne, and A. Janelle, "A system to automatically track humans and vehicles with a PTZ camera," Proc. SPIE 6575, 657502 (2007).
  30. Y. Yitzhaky's home page, Ben-Gurion University, Israel, http://www.ee.bgu.ac.il/~itzik/VideosAO07/.
  31. B. Bose and E. Grimson, "Learning to use scene context for object classification in surveillance," in Proceedings of the IEEE International Workshop on VS-PETS (IEEE, 2003), pp. 94-101.
  32. Y. Bogomolov, G. Dror, S. Lapchev, E. Rivlin, and M. Rudzsky, "Classification of moving targets based on motion and appearance," in Proceedings of the British Machine Vision Conference (2003).



Figures (6)

Fig. 1

(a) Original (“ground-truth”) image used for creating simulated atmospherically degraded videos under various imaging conditions as described in Subsection 4A; (b) original ground-truth human target used in the simulation examples as described in Subsection 4A; (c) same as (b) but for a car target.

Fig. 2

Sample frames taken from four pairs of simulated-degraded and restored video sequences. Degraded frames are shown on the left side of the figure, and the corresponding restored frames on the right. Each row represents different imaging conditions, as described in Subsection 4A: the first row, low noise and high blur; the second, high noise and high blur; the third, low noise and low blur; and the fourth, high noise and low blur. The moving object (here, a human holding a thin pole) is marked by a circle.

Fig. 3

Sample frames taken from four pairs of real-degraded and restored video sequences. Recorded frames are shown on the left side of the figure, and the corresponding restored frames on the right. The moving objects (described in Table 4) are numbered and marked by circles. The restored and degraded videos are available on the web [30].

Fig. 4

(Color online) Results of applying the automatic MTF estimation method of Section 2 to the video frame shown in Fig. 3(g). (a) The pixel at the center of the best identified step-edge region is marked by a circle; (b) the extracted best step-edge region (enlarged); (c) the estimated step-edge profile (ESF); and (d) the MTF cross section calculated from the estimated ESF.
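The ESF-to-MTF computation illustrated in panels (c) and (d) follows a standard route: differentiate the edge-spread function to obtain the line-spread function, then take the normalized magnitude of its Fourier transform. The following is a minimal sketch of that generic step, assuming a uniformly sampled 1-D ESF has already been extracted; the function name is illustrative, and the authors' method additionally automates the best step-edge selection [13].

```python
import numpy as np

def mtf_from_esf(esf):
    """Estimate an MTF cross section from a 1-D edge-spread function (ESF).

    The line-spread function (LSF) is the spatial derivative of the ESF,
    and the MTF is the magnitude of the LSF's Fourier transform,
    normalized so that MTF(0) = 1.
    """
    esf = np.asarray(esf, dtype=float)
    lsf = np.gradient(esf)           # LSF = d(ESF)/dx
    mtf = np.abs(np.fft.rfft(lsf))   # one-sided spatial-frequency spectrum
    return mtf / mtf[0]              # normalize the zero-frequency value
```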

Fig. 5

Results of applying the motion detection algorithm (Subsection 3A) to the restored and nonrestored videos sampled in Fig. 3. In these binary maps, white indicates foreground pixels [moving objects that satisfy Eq. (4)], while black indicates background pixels. The restored versus the degraded binary videos are available on the web [30].
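As a rough illustration of the detection rule in Eq. (4) with a median-filter background estimate (Subsection 3A), here is a minimal sketch. It uses a single temporal median over the whole stack rather than whatever windowing the authors' implementation applies, and `thresh_motion` merely stands in for THRESH_MOTION.

```python
import numpy as np

def detect_motion(frames, thresh_motion):
    """Foreground/background segmentation in the spirit of Eq. (4).

    frames        : sequence of grayscale frames, shape (T, H, W)
    thresh_motion : gray-level threshold playing the role of THRESH_MOTION

    Returns a boolean array of the same shape; True marks foreground
    (moving-object) pixels and False marks background pixels.
    """
    stack = np.asarray(frames, dtype=float)
    background = np.median(stack, axis=0)  # median-filter background estimate
    return np.abs(stack - background) > thresh_motion
```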

Fig. 6

Results of applying the tracking algorithm (Subsection 3B) to the binary video signals sampled in Fig. 5. The values of the parameters used to create these results are presented in Table 1. The restored versus the degraded binary videos are also available on the web [30].
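The frame-to-frame correspondence test of Eqs. (5) and (6) can be written down directly. The snippet below is an illustrative predicate only; it omits the rest of the Subsection 3B tracking logic (object lists, voting, and handling of unmatched objects).

```python
import math

def is_match(c_prev, c_next, s_prev, s_next, thresh_cm, thresh_size):
    """Frame-to-frame correspondence test following Eqs. (5) and (6).

    c_prev, c_next : (x, y) centroids of an object in frames t and t+1
    s_prev, s_next : object sizes (pixel counts) in frames t and t+1
    thresh_cm      : maximum allowed centroid displacement (THRESH_CM)
    thresh_size    : size-similarity bound in (0, 1] (THRESH_SIZE)
    """
    dist = math.hypot(c_prev[0] - c_next[0], c_prev[1] - c_next[1])
    ratio = s_prev / s_next
    return dist < thresh_cm and thresh_size <= ratio <= 1.0 / thresh_size
```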

Tables (5)


Table 1 Values of the Parameters that were Used in the Tracking Stage (Subsection 3B); These Values were Used for Both the Degraded and Restored Video Sequences


Table 2 Average Precision Values [Eq. (8)], Averaged for All Sequence Frames, Obtained for the Human Target [Whose Ground-Truth is Shown in Fig. 1(b)] in Different Imaging Conditions and Video Types (Sample Frames of These Videos Are Shown in Fig. 2) Using the Median Filter for Background Estimation (Subsection 3A) and Using the Nonparametric Model [23, 24] for Background Estimation; The Average Recall Value [Eq. (7)] in All Cases Was Set to the Same Value (0.99 in Our Implementation)


Table 3 Average Precision Values, Averaged for All Sequence Frames, Obtained for the Car Target [Whose Ground-Truth is Shown in Fig. 1(c)] in Different Imaging Conditions and Video Types Using the Median Filter and Nonparametric Model for Background Estimation. The average recall value was set to 0.99.


Table 4 Description of the Five Moving Objects that Appear in the Recorded Video Sequences; These Objects are Numbered and Marked by Circles in the Sample Frames Shown in Fig. 3


Table 5 Dispersedness (Perimeter²/Area) Values for Each of the Five Moving Objects that are Numbered and Marked by Circles in the Sample Frames Shown in Fig. 3; It Can be Noted that the Restoration Improved the Dispersedness Values by an Average of ∼28%
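Since dispersedness is defined in Eq. (9) as perimeter²/area, a short sketch of its computation from a binary object mask may help. The 4-neighbor boundary count used to approximate the perimeter here is an assumption of this sketch, not necessarily the measurement the authors used.

```python
import numpy as np

def dispersedness(mask):
    """Compute perimeter^2 / area for one binary object mask (Eq. (9)).

    The area is the number of object pixels; the perimeter is approximated
    by counting object pixels that touch at least one background 4-neighbor.
    """
    mask = np.asarray(mask, dtype=bool)
    area = mask.sum()
    padded = np.pad(mask, 1, constant_values=False)
    # A pixel is interior when all four of its 4-neighbors are object pixels.
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = (mask & ~interior).sum()  # boundary pixels
    return perimeter ** 2 / area
```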

Equations (9)

Equations on this page are rendered with MathJax.

$$\frac{E\left[(x-\mu)^{4}\right]}{\left(E\left[(x-\mu)^{2}\right]\right)^{2}},\tag{1}$$
$$\hat{f}(m,n)=\mathcal{F}^{-1}\left[G(u,\nu)\,\mathrm{Wiener}(u,\nu)\right],\tag{2}$$
$$\mathrm{Wiener}(u,\nu)=\frac{H^{*}(u,\nu)}{\left|H(u,\nu)\right|^{2}+\gamma}.\tag{3}$$
$$\left|I_{t}(x,y)-B_{t}(x,y)\right|>\text{THRESH\_MOTION},\tag{4}$$
$$\mathrm{Dist}\left(c_{t}^{p},c_{t+1}^{i}\right)<\text{THRESH\_CM},\tag{5}$$
$$\text{THRESH\_SIZE}\leq\frac{s_{t}^{p}}{s_{t+1}^{i}}\leq\frac{1}{\text{THRESH\_SIZE}},\tag{6}$$
$$\text{Recall}=\frac{\text{Foreground pixels correctly identified}}{\text{Foreground pixels in the ground truth}},\tag{7}$$
$$\text{Precision}=\frac{\text{Foreground pixels correctly identified}}{\text{Foreground pixels detected}}.\tag{8}$$
$$\text{Dispersedness}=\frac{\text{Perimeter}^{2}}{\text{Area}},\tag{9}$$
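To make Eqs. (2) and (3) concrete, here is a minimal Wiener-restoration sketch. It assumes the transfer function H(u, ν) has already been built from the MTF estimated in Section 2; the names and conventions are illustrative rather than the authors' code.

```python
import numpy as np

def wiener_restore(degraded, h_otf, gamma):
    """Restore one frame with the Wiener filter of Eqs. (2) and (3).

    degraded : blurred, noisy frame g(m, n) as a 2-D array
    h_otf    : estimated transfer function H(u, v) sampled on the fft2 grid
               (e.g., derived from the MTF estimated in Section 2)
    gamma    : regularization constant trading resolution for noise gain
    """
    G = np.fft.fft2(degraded)                               # G(u, v)
    wiener = np.conj(h_otf) / (np.abs(h_otf) ** 2 + gamma)  # Eq. (3)
    f_hat = np.fft.ifft2(G * wiener)                        # Eq. (2)
    return np.real(f_hat)                                   # restored frame
```

Larger values of gamma suppress noise amplification at frequencies where |H| is small, at the cost of residual blur; gamma could, for instance, be selected by the kurtosis-minimization idea of Ref. [28].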
