Abstract

Raw iris images collected outdoors at standoff distances exceeding 25 m are degraded by noise and atmospheric blur, and even under otherwise ideal imaging conditions are too distorted for high-accuracy recognition. Traditionally, atmospherically distorted images have been corrected with specialized hardware such as adaptive optics. Here we apply a purely digital image restoration approach to correct for optical aberrations, applied to both single images and image sequences: a single-frame denoising and deblurring method, and a multiframe fusion and deblurring method. To compare the performance of the proposed methods, iris recognition was carried out using the approach of Daugman, and Hamming distances (HDs) between the computed binary iris codes were measured before and after restoration. We found the HD decreased from >0.46 before restoration to a mean value of <0.39 for single images. The multiframe fusion approach produced the most robust restoration, achieving a mean HD of 0.33 across all subjects in our data set while known false matches remained at 0.44. These results show that, when used properly, image restoration significantly increases recognition performance for known true positives with little increase in false positive detections, and that irises can be recognized in turbulent atmospheric conditions.
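The fractional HD used throughout the abstract can be illustrated with a short sketch (an assumption-laden illustration of Daugman-style matching, not the authors' code; the code length of 2048 bits and the mask handling are hypothetical choices):

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance between two binary iris codes.

    Only bits valid in both masks (i.e., not occluded by eyelids or
    specular reflections) contribute to the comparison.
    """
    valid = mask_a & mask_b
    n_valid = np.count_nonzero(valid)
    if n_valid == 0:
        raise ValueError("no overlapping valid bits")
    disagreements = np.count_nonzero((code_a ^ code_b) & valid)
    return disagreements / n_valid

# Two independent random codes should score near 0.5 (a non-match),
# while identical codes score 0.0; degraded imagery pushes true
# matches upward toward the non-match regime.
rng = np.random.default_rng(0)
a = rng.integers(0, 2, 2048, dtype=np.uint8).astype(bool)
b = rng.integers(0, 2, 2048, dtype=np.uint8).astype(bool)
m = np.ones(2048, dtype=bool)
print(hamming_distance(a, a, m, m))
print(round(hamming_distance(a, b, m, m), 2))
```

This makes concrete why an HD of >0.46 is effectively a failed match and 0.33 a comfortable one: random, unrelated codes cluster around 0.5.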

© 2013 Optical Society of America


References


  1. R. P. Wildes, “Automated iris recognition: an emerging biometric technology,” Proc. IEEE 85, 1348–1363 (1997).
  2. J. Daugman, “How iris recognition works,” IEEE Trans. Circuits Syst. Video Technol. 14, 21–30 (2004).
  3. A. Ghanizadeh, A. A. Abarghouei, S. Sinaie, P. Saad, and S. M. Shamsuddin, “Iris segmentation using an edge detector based on fuzzy sets theory and cellular learning automata,” Appl. Opt. 50, 3191–3200 (2011).
  4. J. Tesch and S. Gibson, “Optimal and adaptive control of aero-optical wavefronts for adaptive optics,” J. Opt. Soc. Am. A 29, 1625–1638 (2012).
  5. R. Narayanswamy, G. Johnson, P. X. Silveira, and H. Wach, “Extending the imaging volume for biometric iris recognition,” Appl. Opt. 44, 701–712 (2005).
  6. D. S. Barwick, “Increasing the information acquisition volume in iris recognition systems,” Appl. Opt. 47, 4684–4691 (2008).
  7. M. Loktev, O. Soloviev, S. Savenko, and G. Vdovin, “Speckle imaging through turbulent atmosphere based on adaptable pupil segmentation,” Opt. Lett. 36, 2656–2658 (2011).
  8. Q. Shan, J. Jia, and A. Agarwala, “High-quality motion deblurring from a single image,” ACM Trans. Graph. 27(3), 73 (2008).
  9. K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3D transform-domain collaborative filtering,” IEEE Trans. Image Process. 16, 2080–2095 (2007).
  10. X. Zhu and P. Milanfar, “Stabilizing and deblurring atmospheric turbulence,” in 2011 IEEE International Conference on Computational Photography (ICCP 2011) (IEEE, 2011), pp. 1–8.
  11. M. Horvath, P. V. Pauca, and J. van der Gracht, “Optimized restoration strategies for recognition of blurred iris images,” in Frontiers in Optics (Optical Society of America, 2005), paper FTuT3.
  12. L. Masek and P. Kovesi, “MATLAB source code for a biometric identification system based on iris patterns,” School of Computer Science and Software Engineering, The University of Western Australia (2003), http://www.csse.uwa.edu.au/~pk/studentprojects/libor/sourcecode.html.
  13. J. G. Daugman and C. J. Downing, “Demodulation, predictive coding, and spatial vision,” J. Opt. Soc. Am. A 12, 641–660 (1995).
  14. G. Vdovin, M. Loktev, and O. Soloviev, “Adaptive optics: turbulent surveillance, or how to see a Kalashnikov from a safe distance,” Laser Focus World (1 March 2012).
  15. M. Aubailly, M. A. Vorontsov, G. Carhart, and M. T. Valley, “Automated video enhancement from a stream of atmospherically-distorted images: the lucky-region fusion approach,” Proc. SPIE 7463, 74630C (2009).
  16. B. D. Lucas and T. Kanade, “An iterative image registration technique with an application to stereo vision,” presented at the 7th International Joint Conference on Artificial Intelligence (IJCAI ’81), Vancouver, B.C., Canada, 24–28 August 1981.



Figures (12)

Fig. 1.

The iris recognition sensor is built around a 10 in. Ritchey–Chrétien telescope. The sensor includes several wide-field cameras and an illumination source that are attached to a motorized gimbal pointing system.

Fig. 2.

Beam profile and power density at 25 m.

Fig. 3.

Forward gaze detection. (a) Forward gaze was detected when the corneal glint was centered in the retinal retro-reflection, (b) 10° left gaze, (c) 10° right gaze, (d) pupil detections at 260 frames per second, and (e) detection of forward gaze indicated by the blue box around both ocular regions.

Fig. 4.

Indoor and outdoor (a) setup, (b) point spread function (measured from corneal glint), and (c) ocular images. PSF, point spread function.

Fig. 5.

Iris similarity was measured by calculating the bit-wise similarity between two computed iris codes. Manually segmented iris images contain locations of the pupil, iris, and eyelids: (a) input image; (b) segmented iris, pupil, and lids; (c) radial coordinate system; and (d) resulting binary pattern used to compare the similarity of two irises.

Fig. 6.

Face and iris images collected with increasing exposure times. The noisy and denoised iris images are shown as a function of increasing integration time.

Fig. 7.

HD between an ideally captured reference image was calculated for several integration times ranging from 250 µs to 3 ms. In each case the reference enrollment image was restored in the same way as the test image.

Fig. 8.

We studied the recognition performance of the Shan deblurring algorithm as a function of the deblur strength and noise parameters. Deblur strength was used to control the completeness of the deblurring, while noise strength was used to enforce sparsity in the solution.

Fig. 9.

Shot-to-shot variation of recognition performance (indoors-outdoors). Shown here is the recognition performance of all of subject B’s outdoor images for two different choices of indoor image.

Fig. 10.

Turbulence removal using multiframe image fusion. Here we registered and fused 16 noisy frames and then applied the blind deblurring approach of Shan to achieve 40% contrast at 2 lp/mm. Additional line pairs at higher frequency are visible in the center of the test resolution chart.

Fig. 11.

The proposed multiframe restoration algorithm uses stable iris features to extract optical flow. Two features were examined: the iris perimeter and the corneal glint. Registered irises are fused using weights computed from iris occlusions. The fused images were then deblurred prior to computing the binary iris code.
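The registration step in this pipeline can be sketched with FFT-based phase correlation (a hedged stand-in, not the authors' implementation: the paper extracts optical flow from stable iris features such as the perimeter and corneal glint, in the spirit of Lucas–Kanade; here a pure-translation NumPy model is assumed):

```python
import numpy as np

def estimate_shift(ref, img):
    """Estimate the integer (dy, dx) translation that realigns `img`
    to `ref` via phase correlation: the whitened cross-power spectrum
    of two translated frames has a sharp peak at the shift."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    cross /= np.abs(cross) + 1e-12        # keep phase only (whitening)
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint wrap around to negative shifts.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

# A frame rolled by (5, -3) needs a (-5, 3) correction to realign.
rng = np.random.default_rng(1)
ref = rng.random((64, 64))
img = np.roll(ref, (5, -3), axis=(0, 1))
print(estimate_shift(ref, img))           # prints (-5, 3)
```

Each frame, once shifted back by its estimated (dy, dx), can then enter the weighted fusion step.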

Fig. 12.

Fused iris images from subject A. Images were registered using the location of the iris perimeter.

Tables (3)

Table 1. Recognition Performance of Unrestored Iris Images

Table 2. Recognition Performance of Restored, Single Iris Images

Table 3. Recognition Performance of Fused Iris Image Sequences

Equations (6)


$$\mathrm{HD}_c = (\mathrm{HD}_{\mathrm{false}} - \mathrm{HD}_{\mathrm{true}})/(\mathrm{HD}_{\mathrm{false}} + \mathrm{HD}_{\mathrm{true}}),$$
$$G = F \otimes h + n,$$
$$C(F, h) = \|G - h \otimes F\|^2 + \lambda_f R_f(F) + \lambda_h R_h(h).$$
$$g_k = f \otimes h \otimes h_k + n_k,$$
$$r_k = g \otimes \delta_{\Delta \vec{r}_k},$$
$$F = \sum_k W_k \cdot I_k \Big/ \sum_k W_k.$$
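The final equation, a per-pixel weighted average F = Σ_k W_k·I_k / Σ_k W_k, can be sketched as follows (an illustrative NumPy version, not the authors' code; here the weight maps W_k are assumed inputs, e.g. zero wherever a frame is occluded by eyelids or glints):

```python
import numpy as np

def fuse_frames(frames, weights, eps=1e-8):
    """Per-pixel weighted fusion: F = sum_k W_k * I_k / sum_k W_k.

    `frames` and `weights` are stacks of shape (K, H, W). A zero
    weight excludes that frame's pixel from the average; `eps`
    guards against division by zero where no frame contributes.
    """
    frames = np.asarray(frames, dtype=float)
    weights = np.asarray(weights, dtype=float)
    num = np.sum(weights * frames, axis=0)
    den = np.sum(weights, axis=0)
    return num / np.maximum(den, eps)

# Example: the second pixel is occluded (weight 0) in frame 0, so
# the fused value there comes entirely from frame 1.
frames = np.array([[[1.0, 5.0]], [[3.0, 9.0]]])
weights = np.array([[[1.0, 0.0]], [[1.0, 1.0]]])
print(fuse_frames(frames, weights))   # prints [[2. 9.]]
```

Broadcasting over the (K, H, W) stack keeps the fusion a few vectorized operations regardless of the number of frames.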
