Abstract

In this paper, a biologically inspired multilevel approach is proposed for simultaneously detecting multiple independently moving targets in airborne forward-looking infrared (FLIR) sequences. Owing to the moving platform, low-contrast infrared imagery, and the nonrepeatability of target signatures, moving target detection in FLIR sequences remains an open problem. Existing detection approaches cope with the moving infrared camera by estimating a six-parameter affine or eight-parameter planar projective transformation between adjacent frames, and this estimation has become the bottleneck to further improving detection performance. The proposed approach avoids such transformation estimation altogether and instead comprises three sequential modules: motion perception, which efficiently extracts motion cues; attended motion view extraction, which coarsely localizes moving targets; and appearance perception within the local attended motion views, which accurately detects the targets. Experimental results demonstrate that the proposed approach is efficient and outperforms the compared state-of-the-art approaches.
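The three-module pipeline described above can be sketched as a minimal skeleton. This is only an illustrative stand-in, not the paper's algorithm: it substitutes simple frame differencing for the motion-perception module, connected-component grouping for attended motion view extraction, and a hypothetical local-contrast criterion for appearance perception.

```python
import numpy as np

def motion_perception(prev_frame, curr_frame, thresh=25):
    """Extract a coarse motion-cue map by absolute frame differencing
    (a simplified stand-in for the motion-perception module)."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > thresh).astype(np.uint8)

def extract_attended_views(motion_map, margin=2):
    """Coarsely localize moving targets: group motion pixels into
    4-connected regions and return padded bounding boxes (views)."""
    views = []
    visited = np.zeros(motion_map.shape, dtype=bool)
    h, w = motion_map.shape
    for y in range(h):
        for x in range(w):
            if motion_map[y, x] and not visited[y, x]:
                # flood fill to collect one connected motion region
                stack, region = [(y, x)], []
                visited[y, x] = True
                while stack:
                    cy, cx = stack.pop()
                    region.append((cy, cx))
                    for ny, nx in ((cy+1, cx), (cy-1, cx), (cy, cx+1), (cy, cx-1)):
                        if 0 <= ny < h and 0 <= nx < w and motion_map[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            stack.append((ny, nx))
                ys, xs = zip(*region)
                views.append((max(min(ys) - margin, 0), max(min(xs) - margin, 0),
                              min(max(ys) + margin, h - 1), min(max(xs) + margin, w - 1)))
    return views

def appearance_perception(frame, view, contrast_thresh=10.0):
    """Accept a view only if its local contrast suggests a genuine
    target rather than background clutter (hypothetical criterion)."""
    y0, x0, y1, x1 = view
    patch = frame[y0:y1 + 1, x0:x1 + 1].astype(np.float64)
    return patch.std() > contrast_thresh
```

A typical call chains the three stages: compute the motion map from two consecutive frames, extract the attended views, and keep only the views that pass the appearance check. The thresholds here are illustrative assumptions, not values from the paper.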

© 2014 Optical Society of America




Wang, X.

T. Zhang, X. Wang, and Y. Wang, “Automatic threshold estimation for gradient image segmentation,” in Proceedings of Multispectral Image Processing and Pattern Recognition (SPIE, 2001), pp. 121–126.

Wang, Y.

T. Zhang, X. Wang, and Y. Wang, “Automatic threshold estimation for gradient image segmentation,” in Proceedings of Multispectral Image Processing and Pattern Recognition (SPIE, 2001), pp. 121–126.

Wu, Ch.

X. Cao, Ch. Wu, P. Yan, and X. Li, “Vehicle detection and motion analysis in low-altitude airborne video under urban environment,” IEEE Trans. Circuits Syst. Video Technol. 21, 1522–1533 (2011).
[CrossRef]

Wu, W.

Y. Li, B. Sheng, L. Ma, W. Wu, and Zh. Xie, “Temporally coherent video saliency using regional dynamic contrast,” IEEE Trans. Circuits Syst. Video Technol. 23, 2067–2076 (2013).
[CrossRef]

Xie, Zh.

Y. Li, B. Sheng, L. Ma, W. Wu, and Zh. Xie, “Temporally coherent video saliency using regional dynamic contrast,” IEEE Trans. Circuits Syst. Video Technol. 23, 2067–2076 (2013).
[CrossRef]

Yalcin, H.

H. Yalcin, M. Hebert, R. Collins, and M. Black, “A flow-based approach to vehicle detection and background mosaicking in airborne video,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2005), Vol. 2.

Yamamoto, K.

I. Ishii, T. Taniguchi, K. Yamamoto, and T. Takaki, “High-frame-rate optical flow system,” IEEE Trans. Circuits Syst. Video Technol. 22, 105–112 (2012).
[CrossRef]

Yan, P.

X. Cao, J. Lan, P. Yan, and X. Li, “Vehicle detection and tracking in airborne videos by multi-motion layer analysis,” Mach. Vis. Appl. 23, 921–935 (2012).
[CrossRef]

X. Cao, Ch. Wu, P. Yan, and X. Li, “Vehicle detection and motion analysis in low-altitude airborne video under urban environment,” IEEE Trans. Circuits Syst. Video Technol. 21, 1522–1533 (2011).
[CrossRef]

Yang, Ch.

Sh. Qi, J. Ma, Ch. Tao, Ch. Yang, and J. Tian, “A robust directional saliency-based method for infrared small-target detection under various complex backgrounds,” IEEE Geosci. Remote Sens. Lett. 10, 495–499 (2013).
[CrossRef]

Yao, F.

F. Yao, G. Shao, A. Sekmen, and M. Malkani, “Real-time multiple moving targets detection from airborne IR imagery by dynamic Gabor filter and dynamic Gaussian detector,” EURASIP J. Image Video Process. 3, 124681 (2010).
[CrossRef]

Yilmaz, A.

A. Yilmaz, K. Shafique, and M. Shah, “Target tracking in airborne forward looking infrared imagery,” Image Vis. Comput. 21, 623–635 (2003).
[CrossRef]

Yin, Z.

Z. Yin and R. Collins, “Moving object localization in thermal imagery by forward backward MHI,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition Workshop (IEEE, 2006), pp. 133.

Z. Yin and R. Collins, “Belief propagation in a 3D spatiotemporal MRF for moving object detection,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2007), pp. 1–8.

Yu, Q.

Q. Yu and G. Medioni, “Motion pattern interpretation and detection for tracking moving vehicles in airborne video,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2009), pp. 2671–2678.

Zeidler, J. R.

P. A. Ffrench, J. R. Zeidler, and W. H. Ku, “Enhanced detectability of small objects in correlated clutter using an improved 2-D adaptive lattice algorithm,” IEEE Trans. Image Process. 6, 383–397 (1997).
[CrossRef]

Zerubia, J.

C. Benedek, T. Sziranyi, Z. Kato, and J. Zerubia, “Detection of object motion regions in aerial image pairs with a multilayer markovian model,” IEEE Trans. Image Process. 18, 2303–2315 (2009).
[CrossRef]

Zhang, H.

Y. F. Ma and H. Zhang, “Contrast-based image attention analysis by using fuzzy growing,” in Proceedings of the 11th ACM International Conference on Multimedia (ACM, 2003), pp. 374–381.

Zhang, J.

H. Shen, Sh. Li, Ch. Zhu, H. Chang, and J. Zhang, “Moving object detection in aerial video based on spatiotemporal saliency,” Chin. J. Aeronaut. 26, 1211–1217 (2013).
[CrossRef]

Zhang, L.

Q. Wang, W. Zhu, and L. Zhang, “Moving object detection system with phase discrepancy,” in Proceedings of the 8th International Symposium on Neural Networks (Springer, 2011), pp. 402–411.

B. Zhou, X. Hou, and L. Zhang, “A phase discrepancy analysis of object motion,” in Proceedings of the 10th Asian Conference on Computer Vision (2010), pp. 225–238.

Zhang, T.

T. Zhang, X. Wang, and Y. Wang, “Automatic threshold estimation for gradient image segmentation,” in Proceedings of Multispectral Image Processing and Pattern Recognition (SPIE, 2001), pp. 121–126.

Zhou, B.

B. Zhou, X. Hou, and L. Zhang, “A phase discrepancy analysis of object motion,” in Proceedings of the 10th Asian Conference on Computer Vision (2010), pp. 225–238.

Zhu, Ch.

H. Shen, Sh. Li, Ch. Zhu, H. Chang, and J. Zhang, “Moving object detection in aerial video based on spatiotemporal saliency,” Chin. J. Aeronaut. 26, 1211–1217 (2013).
[CrossRef]

Zhu, W.

Q. Wang, W. Zhu, and L. Zhang, “Moving object detection system with phase discrepancy,” in Proceedings of the 8th International Symposium on Neural Networks (Springer, 2011), pp. 402–411.

Appl. Opt.

Behav. Brain Sci.

J. Norman, “Two visual systems and two theories of perception: an attempt to reconcile the constructivist and ecological approaches,” Behav. Brain Sci. 25, 73–96 (2002).
[CrossRef]

Chin. J. Aeronaut.

H. Shen, Sh. Li, Ch. Zhu, H. Chang, and J. Zhang, “Moving object detection in aerial video based on spatiotemporal saliency,” Chin. J. Aeronaut. 26, 1211–1217 (2013).
[CrossRef]

Electron. Lett.

Ch. Gao, J. Tian, and P. Wang, “Generalized structure tensor based infrared small target detection,” Electron. Lett. 44, 1349–1351 (2008).
[CrossRef]

EURASIP J. Image Video Process.

F. Yao, G. Shao, A. Sekmen, and M. Malkani, “Real-time multiple moving targets detection from airborne IR imagery by dynamic Gabor filter and dynamic Gaussian detector,” EURASIP J. Image Video Process. 3, 124681 (2010).
[CrossRef]

IEEE Geosci. Remote Sens. Lett.

Sh. Qi, J. Ma, Ch. Tao, Ch. Yang, and J. Tian, “A robust directional saliency-based method for infrared small-target detection under various complex backgrounds,” IEEE Geosci. Remote Sens. Lett. 10, 495–499 (2013).
[CrossRef]

IEEE Trans. Aerosp. Electron. Syst.

S. Leonov, “Nonparametric methods for clutter removal,” IEEE Trans. Aerosp. Electron. Syst. 37, 832–848 (2001).
[CrossRef]

IEEE Trans. Circuits Syst. Video Technol.

Y. Li, B. Sheng, L. Ma, W. Wu, and Zh. Xie, “Temporally coherent video saliency using regional dynamic contrast,” IEEE Trans. Circuits Syst. Video Technol. 23, 2067–2076 (2013).
[CrossRef]

I. Ishii, T. Taniguchi, K. Yamamoto, and T. Takaki, “High-frame-rate optical flow system,” IEEE Trans. Circuits Syst. Video Technol. 22, 105–112 (2012).
[CrossRef]

X. Cao, Ch. Wu, P. Yan, and X. Li, “Vehicle detection and motion analysis in low-altitude airborne video under urban environment,” IEEE Trans. Circuits Syst. Video Technol. 21, 1522–1533 (2011).
[CrossRef]

IEEE Trans. Image Process.

C. Benedek, T. Sziranyi, Z. Kato, and J. Zerubia, “Detection of object motion regions in aerial image pairs with a multilayer markovian model,” IEEE Trans. Image Process. 18, 2303–2315 (2009).
[CrossRef]

P. A. Ffrench, J. R. Zeidler, and W. H. Ku, “Enhanced detectability of small objects in correlated clutter using an improved 2-D adaptive lattice algorithm,” IEEE Trans. Image Process. 6, 383–397 (1997).
[CrossRef]

IEEE Trans. Pattern Anal. Mach. Intell.

L. Itti, C. Koch, and E. Niebur, “A model of saliency based visual attention for rapid scene analysis,” IEEE Trans. Pattern Anal. Mach. Intell. 20, 1254–1259 (1998).
[CrossRef]

P. Perona and J. Malik, “Scale-space and edge detection using anisotropic diffusion,” IEEE Trans. Pattern Anal. Mach. Intell. 12, 629–639 (1990).
[CrossRef]

Image Vis. Comput.

A. Yilmaz, K. Shafique, and M. Shah, “Target tracking in airborne forward looking infrared imagery,” Image Vis. Comput. 21, 623–635 (2003).
[CrossRef]

Infrared Phys. Technol.

Y. Chen, X. Liu, and Q. Huang, “Real-time detection of rapid moving infrared target on variation background,” Infrared Phys. Technol. 51, 146–151 (2008).
[CrossRef]

J. Appl. Remote Sens.

K. Liu, B. Ma, Q. Du, and G. Chen, “Fast motion detection from airborne videos using graphics processing unit,” J. Appl. Remote Sens. 6, 061505 (2012).
[CrossRef]

J. Electron. Imaging

U. Braga-Neto, M. Choudhary, and J. Goutsias, “Automatic target detection and tracking in forward-looking infrared image sequences using morphological connected operators,” J. Electron. Imaging 13, 802–813 (2004).
[CrossRef]

J. Opt. Soc. Am. A

J. Vis.

C. Kim and P. Milanfar, “Visual saliency in noisy images,” J. Vis. 13(4):5, 1–14 (2013).
[CrossRef]

Mach. Vis. Appl.

X. Cao, J. Lan, P. Yan, and X. Li, “Vehicle detection and tracking in airborne videos by multi-motion layer analysis,” Mach. Vis. Appl. 23, 921–935 (2012).
[CrossRef]

Nature

J. M. Hupe, A. C. James, B. R. Payne, S. G. Lomber, P. Girard, and J. Bullier, “Cortical feedback improves discrimination between figure and background by V1, V2 and V3 neurons,” Nature 394, 784–787 (1998).
[CrossRef]

Neuropsychologia

R. D. Mclntosh and T. Schenk, “Two visual streams for perception and action: current trends,” Neuropsychologia 47, 1391–1396 (2009).
[CrossRef]

Opt. Eng.

J. F. Khan and M. S. Alam, “Target detection in cluttered forward-looking infrared imagery,” Opt. Eng. 44, 076404 (2005).
[CrossRef]

Trends Neurosci.

D. C. Van Essen and J. H. R. Maunsell, “Hierarchical organization and functional streams in the visual cortex,” Trends Neurosci. 6, 370–375 (1983).
[CrossRef]

M. A. Goodale and A. D. Milner, “Separate visual pathways for perception and action,” Trends Neurosci. 15, 20–25 (1992).
[CrossRef]

Vis. Neurosci.

L. Nowak, M. Munk, P. Girard, and J. Bullier, “Visual latencies in areas v1 and v2 of the macaque monkey,” Vis. Neurosci. 12, 371–384 (1995).
[CrossRef]

Vis. Res.

L. Itti and P. Baldi, “Bayesian surprise attracts human attention,” Vis. Res. 49, 1295–1306 (2009).
[CrossRef]

Other

B. Zhou, X. Hou, and L. Zhang, “A phase discrepancy analysis of object motion,” in Proceedings of the 10th Asian Conference on Computer Vision (2010), pp. 225–238.

S. E. Palmer, Vision Science: Photons to Phenomenology (MIT, 1999).

L. Ungerleider and M. Mishkin, “Two cortical visual systems,” in Analysis of Visual Behavior (MIT, 1982), pp. 549–586.

G. Castellano, J. Boyce, and M. Sandler, “Moving target detection in infrared imagery using a regularized CDWT optical flow,” in Proceedings of IEEE Workshop on Computer Vision Beyond the Visible Spectrum: Methods and Applications (IEEE, 1999), pp. 13–22.

S. Bhattacharya, H. Idrees, I. Saleemi, S. Ali, and M. Shah, “Moving object detection and tracking in forward looking infra-red aerial imagery,” in Machine Vision Beyond Visible Spectrum (Springer, 2011), pp. 221–252.

T. R. S. Kalyan and M. Malathi, “Architectural implementation of high speed optical flow computation based on Lucas-Kanade algorithm,” in Proceedings of International Conference on Electronics Computer Technology (IEEE, 2011), pp. 192–195.

H. Yalcin, M. Hebert, R. Collins, and M. Black, “A flow-based approach to vehicle detection and background mosaicking in airborne video,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2005), Vol. 2.

Z. Yin and R. Collins, “Belief propagation in a 3D spatiotemporal MRF for moving object detection,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2007), pp. 1–8.

Q. Yu and G. Medioni, “Motion pattern interpretation and detection for tracking moving vehicles in airborne video,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2009), pp. 2671–2678.

Z. Yin and R. Collins, “Moving object localization in thermal imagery by forward backward MHI,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition Workshop (IEEE, 2006), pp. 133.

A. Strehl and J. K. Aggarwal, “Detecting moving objects in airborne forward looking infra-red sequences,” in Proceedings of IEEE Workshop on Computer Vision Beyond the Visible Spectrum: Methods and Applications (IEEE, 1999), pp. 3–12.

Q. Wang, W. Zhu, and L. Zhang, “Moving object detection system with phase discrepancy,” in Proceedings of the 8th International Symposium on Neural Networks (Springer, 2011), pp. 402–411.

L. Ren, Ch. Shi, and X. Ran, “Target detection of maritime search and rescue: saliency accumulation method,” in Proceedings of the 9th International Conference on Fuzzy Systems and Knowledge Discovery (IEEE, 2012), pp. 1972–1976.

Y. F. Ma and H. Zhang, “Contrast-based image attention analysis by using fuzzy growing,” in Proceedings of the 11th ACM International Conference on Multimedia (ACM, 2003), pp. 374–381.

T. Zhang, X. Wang, and Y. Wang, “Automatic threshold estimation for gradient image segmentation,” in Proceedings of Multispectral Image Processing and Pattern Recognition (SPIE, 2001), pp. 121–126.


Figures (10)

Fig. 1.

Moving object detection framework inspired by the two-streams hypothesis. From left to right are (a) original image sequences, (b) the attended motion views based on the motion perception result, and (c) the moving object detection results in the local attended motion views.

Fig. 2.

Illustration of adaptive upper-bound selection. (a) The motion perception result, (b) the local maximum map of the motion perception result, and (c) the quantization histogram of the local maximum map.
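The pipeline this caption describes (local maxima of the motion perception map, then a quantization histogram over their responses) might be sketched as below. This is our own illustration, not the authors' code; the 3×3 neighborhood and the bin count are assumptions.

```python
import numpy as np

def local_maximum_map(sal):
    """Keep only pixels that are strict maxima of their 3x3 neighborhood
    (an assumed definition of the 'local maximum map')."""
    h, w = sal.shape
    p = np.pad(sal, 1, mode="constant", constant_values=-np.inf)
    # Stack the eight shifted neighbor views and take their pixel-wise maximum.
    neighbors = np.stack([p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
                          for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                          if (dy, dx) != (0, 0)])
    is_max = sal > neighbors.max(axis=0)
    return np.where(is_max, sal, 0.0)

def quantization_histogram(local_max_map, num_bins=32):
    """Histogram of the nonzero local-maximum responses (bin count assumed)."""
    values = local_max_map[local_max_map > 0]
    counts, edges = np.histogram(values, bins=num_bins)
    return counts, edges
```

The upper bound on the number of attended foci would then be read off this histogram, e.g. by counting bins whose mass exceeds a threshold such as histTh.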

Fig. 3.

Intermediate results of our proposed BIMLMDF. The first to fourth rows show the moving targets detection situation in Sequences 1–4, respectively. From left to right are (a) the original image, (b) the motion perception result from short-term image sequences, (c) the initial attended motion focus set, (d) the refined attended motion focus set, (e) the attended motion view set, (f) the mask of the moving targets, (g) the saliency map of the moving targets in the mask, and (h) the moving targets detection result.

Fig. 4.

Detection performance under different upper-bound numbers M. From left to right are (a)–(d) the detection performance change curves on Sequences 1–4, respectively.

Fig. 5.

Detection performance under different histTh values. From left to right are (a)–(d) the detection performance change curves on Sequences 1–4, respectively.

Fig. 6.

Comparison with other state-of-the-art moving targets detection methods on the airborne forward-looking infrared (FLIR) sequences. The first to fourth rows show the moving targets detection result in Sequences 1–4, respectively. From left to right are (a) the original image, (b) the ground truth, (c) the moving targets detection result by [9], (d) the moving targets detection result by [11], and (e) the moving targets detection result by our proposed BIMLMDF.

Fig. 7.

Comparison with two state-of-the-art moving targets detection methods on the airborne FLIR sequences. From left to right are (a)–(d) the comparison results on Sequences 1–4, respectively.

Fig. 8.

More visual results of our proposed BIMLMDF. The first row shows the case in which the moving targets have a relatively low SNR, the second row the case in which the moving targets are densely distributed, and the third row the case in which the airborne platform gradually approaches the targets to be detected.

Fig. 9.

Intermediate results of our proposed BIMLMDF. Corresponding to Fig. 3, the first row, second row, third row, and fourth row denote the moving targets detection situation in Sequence 1, Sequence 2, Sequence 3, and Sequence 4, respectively. From left to right are (a) the original image, (b) the motion perception result from short-term image sequences, (c) the initial attended motion focus set, (d) the refined attended motion focus set and attended motion view set, and (e) the moving targets detection result.

Fig. 10.

Detection performance under different resolutions. Seq1(O) denotes the detection performance of BIMLMDF where the motion perception is implemented on the original images (resolution 320×256); Seq1(D) denotes the performance where the motion perception is implemented on the down-sampled images (resolution 160×128). Similarly, Seq2(O)–Seq4(O) denote the detection performance on the original images, and Seq2(D)–Seq4(D) the performance where the motion perception is implemented on the down-sampled images.

Tables (2)


Table 1. Three Testing Airborne FLIR Sequences


Table 2. Running Time of Different Algorithms (320×256)

Equations (12)


I(x,y,t) = \frac{I(x,y,t) - \overline{I(\cdot,t)}}{\mathrm{std}(I(\cdot,t))}, \quad (1)
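The per-frame normalization above (subtract the frame mean, divide by the frame standard deviation) can be sketched in a few lines of NumPy; this is our own illustration, not the authors' code:

```python
import numpy as np

def normalize_frame(frame):
    """Zero-mean, unit-variance normalization of one infrared frame."""
    frame = np.asarray(frame, dtype=np.float64)
    return (frame - frame.mean()) / frame.std()
```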
\mathrm{MSal}(x,y,t,t+\Delta) = \left\{ \mathcal{F}^{-1}\left[ \left( |F_{t+\Delta}(\omega)| - |F_{t}(\omega)| \right) \cdot e^{i \angle F_{t+\Delta}(\omega)} \right] \right\}^{2}, \quad (2)
\mathrm{PMSal}(x,y,t) = \min\left( \int_{+1}^{+\Delta} \mathrm{MSal}(x,y,t,t-k)\, dk, \; \int_{+1}^{+\Delta} \mathrm{MSal}(x,y,t,t+k)\, dk \right), \quad (3)
\mathrm{PMSal}(x,y,t) = \min\left( \mathrm{MSal}(x,y,t,t-\Delta), \; \mathrm{MSal}(x,y,t,t+\Delta) \right). \quad (4)
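A minimal NumPy sketch of the phase-based motion saliency defined above: the amplitude-spectrum difference of two frames is combined with the phase spectrum of the second frame and transformed back, and a forward/backward pixel-wise minimum suppresses one-sided responses. Interpreting the final squaring as the squared magnitude of the inverse transform is our reading of the formula, not stated in the excerpt.

```python
import numpy as np

def motion_saliency(ref_frame, other_frame):
    """Inverse FFT of the amplitude-spectrum difference, keeping the
    phase of the second frame; squared magnitude as the saliency map."""
    F_ref = np.fft.fft2(ref_frame)
    F_other = np.fft.fft2(other_frame)
    amp_diff = np.abs(F_other) - np.abs(F_ref)
    recon = np.fft.ifft2(amp_diff * np.exp(1j * np.angle(F_other)))
    return np.abs(recon) ** 2

def pincer_motion_saliency(prev_frame, cur_frame, next_frame):
    """Pixel-wise minimum of the backward and forward motion saliency,
    so only motion supported in both temporal directions survives."""
    return np.minimum(motion_saliency(cur_frame, prev_frame),
                      motion_saliency(cur_frame, next_frame))
```

For identical frames the amplitude difference vanishes, so the saliency map is exactly zero, which is the behavior the pincer construction relies on.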
\frac{\sum_{(x,y)\in\Omega_{1}(f)} \mathrm{PMSal}(x,y,t)}{R(\Omega_{1}(f))} \geq u_{b} + m \times \sigma_{b}, \quad (5)
\frac{\sum_{(x,y)\in\Omega_{1}(f)} \mathrm{PMSal}(x,y,t)}{\sum_{(x,y)\in\Omega_{2}(f)/\Omega_{1}(f)} \mathrm{PMSal}(x,y,t)} \geq n \times \frac{R(\Omega_{2}(f)) - R(\Omega_{1}(f))}{R(\Omega_{1}(f))}, \quad (6)
w_{0} = 2\alpha \frac{\sum_{(x,y)\in\Omega_{3}(f_{0})} \mathrm{PMSal}(x,y) \cdot |x-x_{0}|}{\sum_{(x,y)\in\Omega_{3}(f_{0})} \mathrm{PMSal}(x,y)}, \quad (7)
h_{0} = 2\alpha \frac{\sum_{(x,y)\in\Omega_{3}(f_{0})} \mathrm{PMSal}(x,y) \cdot |y-y_{0}|}{\sum_{(x,y)\in\Omega_{3}(f_{0})} \mathrm{PMSal}(x,y)}, \quad (8)
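The attended-view sizing above (a saliency-weighted mean absolute offset around the focus, scaled by 2α) might look as follows. The function name, the boolean-mask interface for the region, and the default value of `alpha` are our assumptions, not taken from the paper.

```python
import numpy as np

def attended_view_size(pmsal, region_mask, x0, y0, alpha=1.5):
    """Width and height of the attended motion view as 2*alpha times the
    saliency-weighted mean absolute offset from the focus (x0, y0)."""
    ys, xs = np.nonzero(region_mask)   # pixels of the focus region
    weights = pmsal[ys, xs]
    total = weights.sum()
    w0 = 2.0 * alpha * np.sum(weights * np.abs(xs - x0)) / total
    h0 = 2.0 * alpha * np.sum(weights * np.abs(ys - y0)) / total
    return w0, h0
```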
GM(x,y) = \sqrt{\left(\frac{\partial I}{\partial x}\right)^{2} + \left(\frac{\partial I}{\partial y}\right)^{2}}. \quad (9)
DM(x,y) = \sigma\, GM(x,y) * \left( G(x,y,k\sigma) - G(x,y,\sigma) \right), \quad (10)
SM(x,y) = \mathrm{Diff}\left( \lambda \cdot GM(x,y) + (1-\lambda) \cdot DM(x,y) \right), \quad (11)
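The gradient-magnitude map and the λ-weighted fusion above can be sketched as follows. We take DM as a precomputed input and omit the Diff(·) post-processing and the difference-of-Gaussians construction, which the excerpt does not define.

```python
import numpy as np

def gradient_magnitude(img):
    """Per-pixel gradient magnitude from finite differences."""
    gy, gx = np.gradient(np.asarray(img, dtype=np.float64))
    return np.sqrt(gx ** 2 + gy ** 2)

def fused_map(gm, dm, lam=0.5):
    """Convex combination of the gradient map GM and a DoG-based map DM;
    lam=0.5 is a placeholder weight, not taken from the paper."""
    return lam * gm + (1.0 - lam) * dm
```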
HR = \frac{H}{H+M}, \qquad FAR = \frac{F}{H+F}. \quad (12)
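The hit-rate and false-alarm-rate definitions above reduce to two one-liners; note that the false-alarm denominator H + F follows the formula as printed (false alarms over hits plus false alarms), which differs from some other FAR conventions.

```python
def hit_rate(hits, misses):
    """HR = H / (H + M): fraction of ground-truth targets detected."""
    return hits / (hits + misses)

def false_alarm_rate(false_alarms, hits):
    """FAR = F / (H + F), following the printed formula."""
    return false_alarms / (hits + false_alarms)
```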
