Abstract

A basic step in automatic moving-object detection is often modeling the background (i.e., the scene excluding the moving objects). The background model describes the temporal intensity distribution expected at different image locations. Long-distance imaging through an atmospheric turbulent medium is affected mainly by blur and by spatiotemporal movements in the image, which have contradicting effects on the temporal intensity distribution, mainly at edge locations. This paper addresses this modeling problem theoretically and experimentally for various long-distance imaging conditions. Results show that a unimodal distribution is usually the more appropriate model; however, if image deblurring is performed, multimodal modeling might be more appropriate.

© 2014 Optical Society of America


References


  1. O. Haik, D. Nahmani, Y. Lior, and Y. Yitzhaky, “Effects of image restoration on acquisition of moving objects from thermal video sequences degraded by the atmosphere,” Opt. Eng. 45, 117006 (2006).
    [CrossRef]
  2. O. Haik and Y. Yitzhaky, “Effects of image restoration on automatic acquisition of moving objects in thermal video sequences degraded by the atmosphere,” Appl. Opt. 46, 8562–8572 (2007).
    [CrossRef]
  3. E. Chen, O. Haik, and Y. Yitzhaky, “Classification of thermal moving objects in atmospherically-degraded video,” Opt. Eng. 51, 1–14 (2012).
  4. O. Oreifej, X. Li, and M. Shah, “Simultaneous video stabilization and moving object detection in turbulence,” IEEE Trans. Pattern Anal. Mach. Intell. 35, 450–462 (2013).
    [CrossRef]
  5. B. Fishbain, L. Yaroslavsky, and I. Ideses, “Real-time stabilization of long range observation system turbulent video,” J. Real-Time Image Proc. 2, 11–22 (2007).
  6. S. Cheung and C. Kamath, “Robust techniques for background subtraction in urban traffic video,” Proc. SPIE 5308, 881–892 (2004).
    [CrossRef]
  7. C. Stauffer and W. E. L. Grimson, “Learning patterns of activity using real-time tracking,” IEEE Trans. Pattern Anal. Mach. Intell. 22, 747–757 (2000).
    [CrossRef]
  8. N. Friedman and S. Russell, “Image segmentation in video sequences: a probabilistic approach,” 13th Conference on Uncertainty in Artificial Intelligence (UAI) (1997).
  9. A. Elgammal, D. Harwood, and L. S. Davis, “Non-parametric model for background subtraction,” ECCV 2000, 751–767 (2000).
  10. B. Chan, V. Mahadevan, and N. Vasconcelos, “Generalized Stauffer–Grimson background subtraction for dynamic scenes,” Mach. Vis. Appl. 22, 751–766 (2011).
    [CrossRef]
  11. Y. Elhabian, M. El-Sayed, and H. Ahmed, “Moving object detection in spatial domain using background removal techniques - state-of-art,” in Recent Patents on Computer Science (Bentham Science, 2008), pp. 32–54.
  12. Y. Benezeth, P. M. Jodoin, B. Emile, H. Laurent, and C. Rosenberger, “Comparative study of background subtraction algorithms,” J. Electron. Imaging 19, 033003 (2010).
    [CrossRef]
  13. Q. Zhou and J. K. Aggarwal, “Object tracking in an outdoor environment using fusion of features and cameras,” Image Vis. Comput. 24, 1244–1255 (2006).
  14. Y. Yitzhaky’s website, video examples (last accessed 30/Dec/2013), http://www.ee.bgu.ac.il/~itzik/TurbImagEffects/ .
  15. N. S. Kopeika, A System Engineering Approach to Imaging, 2nd ed. (SPIE, 1998), pp. 458–475.
  16. D. L. Fried, “Optical resolution through a randomly inhomogeneous medium for very long and very short exposures,” J. Opt. Soc. Am. 56, 1372–1379 (1966).
  17. G. D. Boreman, Modulation Transfer Function in Optical and Electro-Optical Systems (SPIE, 2001), pp. 20–25, 31–35.
  18. O. Shacham, O. Haik, and Y. Yitzhaky, “Blind restoration of atmospherically degraded images by automatic best step-edge detection,” Pattern Recogn. Lett. 28, 2094–2103 (2007).
    [CrossRef]
  19. S. Zamek and Y. Yitzhaky, “Turbulence strength estimation from an arbitrary set of atmospherically degraded images,” J. Opt. Soc. Am. A 23, 3106–3113 (2006).
    [CrossRef]
  20. A. Barjatya, “Block matching algorithms for motion estimation,” Tech. Rep. (Utah State University, 2004).
  21. E. H. Barney-Smith, “PSF estimation by gradient descent fit to the ESF,” Proc. SPIE 6059, 60590E (2006).
    [CrossRef]
  22. A. R. Palmer and C. Strobeck, “Fluctuating asymmetry analyses revisited,” in Developmental Instability: Causes and Consequences, M. Polak, ed. (Oxford University, 2003), pp. 279–319.
  23. Y. Yitzhaky, E. Chen, and O. Haik, “Surveillance in long-distance turbulence-degraded videos,” Proc. SPIE 8897, 889704 (2013).
    [CrossRef]



Figures (10)

Fig. 1.

Bimodal background case. (a) A frame from a “moving tree” video (Vid1 in [14]) taken on a windy day. The white arrows point to the checked pixel locations, one at an edge of a tree leaf and the other at the sky. (b) A bimodal histogram, describing the edge pixel gray level over time. (c) A unimodal histogram, describing the fixed background (sky) pixel gray level over time.

Fig. 2.

Point movement (STD) at the NIR channel as a function of the object distance at the different turbulence strengths.

Fig. 3.

MTF analysis result at the NIR channel, including diffraction, detector, and turbulence effects, for a 5 km observation distance. (a) Moderate turbulence, Cn2 = 10^−15 [m^−2/3]. (b) Heavy-moderate turbulence, Cn2 = 10^−14 [m^−2/3].

Fig. 4.

Blur spot size at the VIS channel as a function of the object distance and the turbulence strength.

Fig. 5.

Comparison between the movements and the blurring at the different observation channels for different turbulence strengths.

Fig. 6.

(a) Undistorted frame used for simulation. (b) The simulated blurred frame for a 5 km distance under moderate turbulence. (c) The simulated blurred frame for a 15 km distance under heavy turbulence.

Fig. 7.

(a) One frame from a simulated blurred and movement-distorted movie under moderate turbulence (Vid3 in [14]). (b) The histogram of an edge pixel located at coordinates (459, 394) and marked by a red circle.

Fig. 8.

(a) One frame from a simulated “shaken” but not blurred (restored) movie, with detected motion vectors obtained from a real turbulent video using a 32×32 pixel block size (Vid4 in [14]). (b) The histogram of the same edge pixel as in Fig. 7 (marked by a red circle).

Fig. 9.

Real-degraded video frames and their edge pixel histograms. (a) A video frame at the NIR channel at a distance of about 4.5 km under light-moderate turbulence (Vid5 in [14]). The edge area is shown within a red circle. (b) The corresponding histogram of a pixel located at the edge of the car. (c) A video frame at the NIR channel at a distance of about 15 km under moderate turbulence (Vid6 in [14]). (d) The corresponding histogram of an edge pixel.

Fig. 10.

Real-degraded and deblurred video frames and their edge pixel histograms. (a) A frame from a real distorted video (Vid7 in [14]). The checked edge area is shown within a red circle. (b) Histogram of an edge pixel from the degraded video. (c) A frame from the deblurred video (Vid8 in [14]). The edge area is shown within a red circle. (d) The corresponding histogram of an edge pixel.

Tables (1)


Table 1. Optical Parameters of the Imaging Systems Used in this Study

Equations (10)

Equations on this page are rendered with MathJax.

\[ \mathrm{FG}_t(x,y)=\begin{cases}1, & \left|I_t(x,y)-\operatorname{median}\big(I_n(x,y)\big)\right|>T\\ 0, & \text{else}\end{cases}\qquad n=t-1,\,t-2,\ldots,\,t-N, \]
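The median-based background model above can be sketched in NumPy. This is a minimal illustration, not the paper's implementation; `frame`, `prev_frames`, and the threshold `T` are made-up inputs.

```python
import numpy as np

def median_foreground_mask(frame, prev_frames, T):
    # Per-pixel median over the previous N frames serves as the background model.
    background = np.median(prev_frames, axis=0)
    # A pixel whose current intensity deviates from the median by more than T
    # is declared foreground (1); otherwise background (0).
    return (np.abs(frame.astype(float) - background) > T).astype(np.uint8)

# Toy usage: a static 10-gray-level background with one bright "moving" pixel.
prev_frames = np.full((5, 4, 4), 10.0)
frame = np.full((4, 4), 10.0)
frame[2, 2] = 200.0
mask = median_foreground_mask(frame, prev_frames, T=25)
```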
\[ \mathrm{FG}_t(x,y)=\begin{cases}1, & \dfrac{1}{N}\displaystyle\sum_{i=t-N}^{t-1}\frac{1}{\sigma\sqrt{2\pi}}\exp\!\left\{-\frac{1}{2}\,\frac{\big(I_t(x,y)-I_i(x,y)\big)^2}{\sigma^2}\right\}<T\\ 0, & \text{else,}\end{cases} \]
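The kernel-density background model above can be sketched similarly: the background likelihood of the current intensity is a Gaussian-kernel average over the N previous samples, and a low likelihood flags foreground. The values of `sigma` and `T` below are made-up for illustration.

```python
import numpy as np

def kde_foreground_mask(frame, prev_frames, sigma, T):
    # Gaussian kernel centered at each of the N previous intensities.
    diff = frame.astype(float) - prev_frames.astype(float)  # shape (N, H, W)
    kernels = np.exp(-0.5 * (diff / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    density = kernels.mean(axis=0)  # the (1/N) sum in the equation
    # Low background likelihood => foreground.
    return (density < T).astype(np.uint8)

prev_frames = np.full((5, 4, 4), 10.0)
frame = np.full((4, 4), 10.0)
frame[2, 2] = 200.0
mask = kde_foreground_mask(frame, prev_frames, sigma=5.0, T=0.01)
```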
\[ P\big(I_t(x,y)\big)=\sum_{i=1}^{k}\omega_{i,t}\cdot\eta\big(I_t(x,y),\,\mu_{i,t},\,\sigma_{i,t}\big), \]
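The mixture-of-Gaussians model above evaluates the pixel intensity under k weighted normal components η. A minimal sketch with made-up component parameters (a bimodal edge pixel with modes near gray levels 50 and 200):

```python
import numpy as np

def gmm_likelihood(x, weights, means, sigmas):
    # Weighted sum of normal densities eta(x; mu_i, sigma_i), as in the equation.
    weights, means, sigmas = map(np.asarray, (weights, means, sigmas))
    etas = np.exp(-0.5 * ((x - means) / sigmas) ** 2) / (sigmas * np.sqrt(2.0 * np.pi))
    return float(np.sum(weights * etas))

# Likelihood at one of the modes vs. in the gap between them.
p_mode = gmm_likelihood(50.0, [0.5, 0.5], [50.0, 200.0], [5.0, 5.0])
p_gap = gmm_likelihood(125.0, [0.5, 0.5], [50.0, 200.0], [5.0, 5.0])
```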
\[ \alpha^2 = 2.914\cdot D^{-1/3}\cdot C_n^2\cdot L, \]
\[ C_n^2=10^{-13}\,[\mathrm{m}^{-2/3}]\ \text{(heavy turbulence)},\qquad C_n^2=10^{-15}\,[\mathrm{m}^{-2/3}]\ \text{(moderate turbulence)},\qquad C_n^2=10^{-17}\,[\mathrm{m}^{-2/3}]\ \text{(light turbulence)}. \]
\[ h \approx F\cdot\alpha, \]
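The two relations above (the tilt variance α² and the image-plane displacement h ≈ F·α) combine into a quick displacement estimate. The aperture diameter D, focal length F, and turbulence values below are illustrative assumptions, not the paper's system parameters.

```python
import numpy as np

def displacement_std(D, F, Cn2, L):
    # Angle-of-arrival (tilt) std: alpha = sqrt(2.914 * D^(-1/3) * Cn2 * L)  [rad]
    alpha = np.sqrt(2.914 * D ** (-1.0 / 3.0) * Cn2 * L)
    # Image-plane displacement: h ~ F * alpha  [same length units as F]
    return alpha, F * alpha

# Illustrative values: 10 cm aperture, 30 cm focal length, moderate turbulence, 5 km path.
alpha, h = displacement_std(D=0.1, F=0.3, Cn2=1e-15, L=5000.0)
```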
\[ \mathrm{MTF}_{\text{turbulence}}(\xi)=\exp\!\left(-\tfrac{3}{8}\cdot 57.53\cdot \xi^{5/3}\cdot \lambda^{-1/3}\cdot C_n^2\cdot L\right), \]
\[ \mathrm{MTF}_{\text{diff}}(\xi)=\frac{2}{\pi}\left\{\cos^{-1}\!\left(\frac{\xi}{\xi_{\text{cutoff}}}\right)-\frac{\xi}{\xi_{\text{cutoff}}}\left[1-\left(\frac{\xi}{\xi_{\text{cutoff}}}\right)^2\right]^{1/2}\right\}, \]
\[ \mathrm{MTF}_{\text{detector}}(\xi)=\left|\operatorname{sinc}(\xi w)\right|=\left|\frac{\sin(\pi\xi w)}{\pi\xi w}\right|, \]
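The turbulence, diffraction, and detector MTFs above multiply into an overall system MTF. A sketch under simplifying assumptions (scalar spatial frequency ξ and illustrative parameter values in consistent units; not the paper's system parameters):

```python
import numpy as np

def combined_mtf(xi, lam, Cn2, L, xi_cutoff, w):
    # Turbulence MTF, as written above.
    mtf_turb = np.exp(-(3.0 / 8.0) * 57.53 * xi ** (5.0 / 3.0)
                      * lam ** (-1.0 / 3.0) * Cn2 * L)
    # Diffraction-limited MTF of a circular aperture, zero beyond cutoff.
    r = np.clip(xi / xi_cutoff, 0.0, 1.0)
    mtf_diff = (2.0 / np.pi) * (np.arccos(r) - r * np.sqrt(1.0 - r ** 2))
    # Detector (pixel aperture) MTF; np.sinc(x) = sin(pi*x)/(pi*x).
    mtf_det = np.abs(np.sinc(xi * w))
    return mtf_turb * mtf_diff * mtf_det

# At zero frequency the combined MTF is 1; it falls off toward the cutoff.
mtf0 = combined_mtf(0.0, lam=0.8e-6, Cn2=1e-15, L=5000.0, xi_cutoff=100.0, w=1e-5)
mtf50 = combined_mtf(50.0, lam=0.8e-6, Cn2=1e-15, L=5000.0, xi_cutoff=100.0, w=1e-5)
```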
\[ \frac{E\{(g_l-\mu)^4\}}{E\{(g_l-\mu)^2\}^2}, \]
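The last expression is the kurtosis of a pixel's gray-level history g_l; it can serve as a simple unimodality indicator, since a symmetric two-level (bimodal) history gives the minimum value of 1, while a sharply peaked unimodal history gives a large value. A sketch with made-up pixel histories:

```python
import numpy as np

def kurtosis(samples):
    # E{(g - mu)^4} / (E{(g - mu)^2})^2, as in the equation above.
    g = np.asarray(samples, dtype=float)
    mu = g.mean()
    m2 = ((g - mu) ** 2).mean()
    m4 = ((g - mu) ** 4).mean()
    return m4 / m2 ** 2

# A pixel flipping between two gray levels (bimodal) vs. a mostly constant pixel.
k_bimodal = kurtosis([50.0] * 50 + [200.0] * 50)
k_peaked = kurtosis([100.0] * 98 + [50.0, 150.0])
```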
