Abstract

I propose a new iris image acquisition method based on wide-view and narrow-view iris cameras. The narrow-view camera provides automatic zooming, focusing, panning, and tilting based on the two-dimensional and three-dimensional eye positions detected by the wide- and narrow-view stereo cameras. Using the two cameras together, I compute the user’s gaze position, which is used to align the XY position of the user’s eye, and I use a visible-light illuminator for fake-eye detection.

© 2005 Optical Society of America


References


  1. J. G. Daugman, “High confidence visual recognition of persons by a test of statistical independence,” IEEE Trans. Pattern Anal. Mach. Intell. 15, 1148–1160 (1993).
  2. J. Ernst, “The iris recognition homepage—iris recognition and identification,” 2 December 2002, retrieved 6 October 2004, http://www.iris-recognition.org.
  3. J. Katz, C. Medich, G. Lisimaque, G. Pauzie, C. Kulhanek, J. Pilozzi, M. McGovern, M. Squibbs, J. McKoen, “Smart cards and biometrics in privacy-sensitive secure personal identification systems,” (white paper, Smart Card Alliance, Princeton Junction, N.J., 2002).
  4. A. K. Jain, Biometrics: Personal Identification in Networked Society (Kluwer Academic, Dordrecht, The Netherlands, 1998).
  5. T. Mansfield, G. Kelly, D. Chandler, J. Kane, “Biometric product testing final report,” Draft 0.6 (National Physical Laboratory, Teddington, Middlesex, UK, 2001).
  6. “LG Electronics/Iris Technology Division,” (LG Electronics, Jamesburg, N.J.); retrieved 6 October 2004, http://www.lgiris.com.
  7. “Iridian Technologies,” (Moorestown, N.J.); retrieved 6 October 2004, http://www.iridiantech.com/products.php?page=4.
  8. “Panasonic ideas for life,” (Matsushita Electric Corporation of America, Secaucus, N.J.); retrieved 6 October 2004, http://www.panasonic.com/cctv/products/biometrics.asp.
  9. J.-H. Lee, K.-S. Kim, B.-D. Nam, J.-C. Lee, Y.-M. Kwon, H.-G. Kim, “Implementation of a passive automatic focusing algorithm for digital still camera,” IEEE Trans. Consum. Electron. 41, 449–454 (1995).
  10. H. Toyoda, S. Nishikawa, Y. Kitamura, M. Onishi, H. Harada, “New automatic focusing system for video camera,” IEEE Trans. Consum. Electron. CE-32, 312–319 (1986).
  11. T. Haruki, K. Kikuchi, “Video camera system using fuzzy logic,” IEEE Trans. Consum. Electron. 38, 624–634 (1992).
  12. K. Ooi, K. Izumi, M. Nozaki, I. Takeda, “An advanced auto-focusing system for video camera using quasi condition reasoning,” IEEE Trans. Consum. Electron. 36, 526–529 (1990).
  13. K. Hanma, M. Masuda, H. Nabeyama, Y. Saito, “Novel technologies for automatic focusing and white balancing of solid state color video camera,” IEEE Trans. Consum. Electron. CE-29, 376–381 (1983).
  14. R. A. Jarvis, “Focus optimization criteria for computer image processing,” Microscope 24, 163–180 (1976).
  15. S. K. Nayar, Y. Nakagawa, “Shape from focus,” IEEE Trans. Pattern Anal. Mach. Intell. 16, 824–831 (1994).
  16. K.-S. Choi, J.-S. Lee, S.-J. Ko, “New auto-focusing technique using the frequency selective weight median filter for video cameras,” IEEE Trans. Consum. Electron. 45, 820–827 (1999).
  17. K. Jack, Video Demystified—A Handbook for the Digital Engineer, 2nd ed. (LLH Technology, Eagle Rock, Va., 1996).
  18. Y. Park, H. Yun, M. Song, J. Kim, “A fast circular edge detector for the iris region segmentation,” in Lecture Notes in Computer Science, S.-W. Lee, H. H. Bülthoff, T. Poggio, eds. (Springer-Verlag, Heidelberg, 2000), Vol. 1811, pp. 417–423.
  19. K. R. Park, “Facial and eye gaze detection,” in Lecture Notes in Computer Science (Springer-Verlag, Heidelberg, 2002), Vol. 2525, pp. 368–376.
  20. R. C. Gonzalez, R. E. Woods, Digital Image Processing (Addison-Wesley, Boston, Mass., 1992).
  21. K. R. Park, J. Kim, “Gaze point detection by computing the 3D positions and 3D motions of face,” IEICE Trans. Inf. Syst. E83-D, 884–894 (2000).
  22. K. R. Park, “Gaze detection by estimating the depth and 3D motions of facial features in monocular images,” IEICE Trans. Fundam. E82-A, 2274–2284 (1999).
  23. S. C. Chapra, Raymond P. Canale, Numerical Methods for Engineers (McGraw-Hill, Columbus, Ohio, 1989).
  24. Polhemus, 40 Hercules Drive, P.O. Box 560, Colchester, Vt. 05446; retrieved 6 October 2004, http://www.polhemus.com.
  25. R. Jain, Machine Vision (McGraw-Hill, Columbus, Ohio, 1995).
  26. A. N. Rajagopalan, S. Chaudhuri, “MRF model-based identification of shift-variant point spread function for a class of imaging systems,” Signal Process. 76, 285–299 (1999).
  27. H. J. Trussell, A. E. Savakis, “Blur identification by statistical analysis,” IEEE Trans. Acoust. Speech Signal Process. 4, 2493–2496 (1991).
  28. M. K. Ozkan, A. M. Tekalp, M. I. Sezan, “POCS-based restoration of space-varying blurred images,” IEEE Trans. Image Process. 3, 450–454 (1994).
  29. D. Kundur, D. Hatzinakos, “Blind image deconvolution,” IEEE Signal Process. Mag. 13, 43–64 (1996).
  30. A. E. Savakis, “Blur identification by residual spectral matching,” IEEE Trans. Image Process. 2, 141–151 (1993).
  31. M. Suzaki, S.-I. Horikawa, Y. Kuno, H. Aida, N. Sasaki, R. Kusunose, “Racehorse identification system using iris recognition,” IEICE Trans. Inf. Syst. J84-D2, 1061–1072 (2001).
  32. M. Suzaki, Y. Kuno, “Eye image recognition method, eye image selection method and system therefor,” U.S. patent 6,215,891 (10 April 2001).
  33. M. Suzaki, “Eye image recognition method, eye image selection method and system therefor,” U.S. patent 6,307,954 (23 October 2001).
  34. D. Ioannou, W. Huda, A. F. Laine, “Circle recognition through a 2D Hough transform and radius histogramming,” Image Vision Comput. 17, 15–26 (1999).
  35. R. W. Smith, “Body check—biometric access protection devices and their programs put to the test,” (Heise Online, November 2002), retrieved 6 October 2004, http://www.heise.de/ct/english/02/11/114/.

Figures (28)

Fig. 1 Human iris.

Fig. 2 Overall structure of the conventional iris recognition system.

Fig. 3 Automated iris recognition camera.

Fig. 4 On–off controlling of the IR-LED illuminator for detecting eye features. VD, vertical drive.

Fig. 5 (a) Even field image and the SR made by the IR-LED illuminator of Fig. 3(1), (b) odd field image of frame 1 in Fig. 4, and (c) the difference image between the even and the odd images.

Fig. 6 NTSC signal range versus the A/D conversion range.

Fig. 7 Eye-corner templates for (a) the left and (b) the right eyes.

Fig. 8 Initial viewing angle of the narrow-view camera.

Fig. 9 Coordinate conversion between the narrow-view camera and the wide-view camera.

Fig. 10 Camera calibration with a calibration panel.

Fig. 11 Gray-level difference curve.

Fig. 12 DOF region and the defocused region.

Fig. 13 SR blur when the transparent cup is attached to the surface of the printed iris image. (a) SR (indicated by the arrow) in the defocused region I, in which the dark-gray pixels are on the edge of the SR. (b) SR in the DOF region III. (c) SR (indicated by the arrow) in the defocused region II, in which the dark-gray pixels are in the center of the SR.

Fig. 14 SR blurring function. The arrows indicate (a) SR [g1(i, j)] in defocused region I in the case of the first SR blur model; (b) SR [f(m, n)] in DOF region III; (c) SR [g2(i, j)] in defocused region II in the case of the second SR blur model.

Fig. 15 Zoom- and focus-lens tracing curves.

Fig. 16 Zoom- and focus-lens positions versus SR diameter.

Fig. 17 SRs on the cornea, the facial skin, and the eyeglass surface. (a) SRs in the image with a normal brightness value. (b) SRs in the image with a lower brightness value.

Fig. 18 Successive on–off scheme for the IR-LED illuminators. VD, vertical drive.

Fig. 19 Field images versus the frame image.

Fig. 20 Difference image from two fields.

Fig. 21 Eye image with SRs on eyeglasses. (a) Eye image with the left illuminator. (b) Eye image with the right illuminator.

Fig. 22 Counterfeit iris samples: (a) printed iris, (b) photographed iris, (c) 2D printed iris with a convex cup [(1)], (d) artificial PMMA iris, (e) artificial glass iris, (f) semitransparent iris, (g) opaque iris.

Fig. 23 Dilation (1) and contraction (2) of iris patterns by hippus movement (real iris): (a) visible light on and (b) visible light off.

Fig. 24 Dilation (1) and contraction (2) of iris patterns by hippus movement (patterned lens): (a) visible light on and (b) visible light off.

Fig. 25 Example of (a) dilation and (b) contraction of iris patterns by hippus movement (real iris).

Fig. 26 Angle between the line of sight and the irradiating line.

Fig. 27 Focus value versus focus-lens positions for users without glasses. Reference numbers are given in square brackets.

Fig. 28 Focus value versus focus-lens positions for users with glasses. Reference numbers are given in square brackets.

Tables (3)

Table 1 Comparison of the Average Focusing Time

Table 2 Distance versus Iris Image Acquisition Time, Recognition Time, and Recognition Rate

Table 3 Lighting Conditions versus Iris Image Acquisition Time, Recognition Time, and Recognition Rate

Equations (13)


\[ X_1 = \frac{x_1 (f_1 - Z_1)}{f_1}, \quad Y_1 = \frac{y_1 (f_1 - Z_1)}{f_1}, \quad X_2 = \frac{x_2 (f_2 - Z_2)}{f_2}, \quad Y_2 = \frac{y_2 (f_2 - Z_2)}{f_2}, \]
\[
\begin{pmatrix} X_1 \\ Y_1 \\ Z_1 \\ 1 \end{pmatrix} =
\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\alpha & \sin\alpha & 0 \\ 0 & -\sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}
\times
\begin{pmatrix} \cos\beta & 0 & -\sin\beta & 0 \\ 0 & 1 & 0 & 0 \\ \sin\beta & 0 & \cos\beta & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}
\times
\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & -T_{0y} \\ 0 & 0 & 1 & -T_{0z} \\ 0 & 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} X_2 \\ Y_2 \\ Z_2 \\ 1 \end{pmatrix}.
\]
\[ Z_1 = \frac{A}{B} + \cos\alpha \sin\beta\, x_2 - \sin\alpha\, y_2 + T_{0y} \sin\alpha - T_{0z} \cos\alpha \cos\beta, \]
\[
\begin{aligned}
A ={}& \left( \cos\alpha \cos\beta - \frac{\cos\alpha \sin\beta\, x_2}{f_2} + \frac{\sin\alpha\, y_2}{f_2} \right) \\
& \times \left( y_2 f_2 - y_1 f_1 f_2^2 + y_1 f_2^2 \cos\alpha \sin\beta\, x_2 + y_1 f_2^2 T_{0y} \sin\alpha - y_1 f_2^2 T_{0z} \cos\alpha \cos\beta \right. \\
& \quad \left. {} + f_1 f_2^2 \sin\alpha \sin\beta\, x_2 - f_1 f_2^2 T_{0y} \cos\alpha - f_1 f_2^2 T_{0z} \sin\alpha \cos\beta \right), \\
B ={}& y_2 + y_1 f_2 \cos\alpha \sin\beta\, x_2 - y_1 f_2^2 \cos\alpha \cos\beta + f_1 f_2 \sin\alpha \sin\beta\, x_2 - f_1 f_2^2 \sin\alpha \cos\beta.
\end{aligned}
\]
\[ C(T_{0y}, T_{0z}, f_1, f_2, \alpha, \beta) = \left\{ Z_1 - \left[ \frac{A}{B} + \cos\alpha \sin\beta\, x_2 - \sin\alpha\, y_2 + T_{0y} \sin\alpha - T_{0z} \cos\alpha \cos\beta \right] \right\}^2. \]
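The coordinate conversion between the narrow-view and wide-view camera frames above (a translation by (0, −T0y, −T0z), a rotation by β about the y axis, then a rotation by α about the x axis) can be sketched directly; the function name and argument order are illustrative, and angles are assumed to be in radians:

```python
import math

def narrow_to_wide(x2, y2, z2, alpha, beta, t0y, t0z):
    """Map a 3D point from the narrow-view camera frame to the
    wide-view camera frame: translate, rotate about y by beta,
    then rotate about x by alpha."""
    # Translation by (0, -T0y, -T0z)
    x, y, z = x2, y2 - t0y, z2 - t0z
    # Rotation about the y axis by beta
    cb, sb = math.cos(beta), math.sin(beta)
    x, z = cb * x - sb * z, sb * x + cb * z
    # Rotation about the x axis by alpha
    ca, sa = math.cos(alpha), math.sin(alpha)
    y, z = ca * y + sa * z, -sa * y + ca * z
    return x, y, z
```

With α = β = 0 the mapping reduces to the pure translation, which gives a quick sanity check of the matrix ordering.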
\[ g(i, j) = \sum_{m} \sum_{n} f(m, n)\, h(i, j;\ m, n) + w(i, j), \]
\[ h(i, j;\ m, n) = h(x, y, \theta), \]
\[ \sum_{(x, y) \in R_h} h(x, y, \theta) = 1.0, \]
\[ h(x, y, \theta) = \begin{cases} K & \text{if } (x, y) \in R_h(\theta) \\ 0 & \text{elsewhere}, \end{cases} \]
\[ h_1(i, j;\ m, n) = h(i, j, \theta) = h(i, j, \sigma_g^2) = \begin{cases} K \exp\!\left( -\dfrac{i^2 + j^2}{2\sigma_g^2} \right) & \text{if } (i, j) \in R_h \\ 0 & \text{elsewhere}, \end{cases} \]
\[ h_2(i, j;\ m, n) = h_2(i, j, \theta) = \begin{cases} K_1 (i^2 + j^2) + K_2 & \text{if } (i, j) \in R_h \\ 0 & \text{elsewhere}, \end{cases} \]
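As a concrete illustration, the blur kernels above can be built as discrete arrays on a circular support R_h, with the constant K chosen so that each kernel sums to 1 as the normalization constraint requires. This sketch covers the uniform defocus kernel and the Gaussian-weighted variant h1 (the function names and the integer-radius support are illustrative assumptions, not from the paper):

```python
import math

def disk_psf(radius):
    """Uniform defocus PSF: a constant K inside the circular support
    R_h and zero elsewhere, with K = 1/|R_h| so the kernel sums to 1."""
    size = 2 * radius + 1
    support = [(i, j) for i in range(-radius, radius + 1)
               for j in range(-radius, radius + 1)
               if i * i + j * j <= radius * radius]
    k = 1.0 / len(support)  # normalization constant K
    kernel = [[0.0] * size for _ in range(size)]
    for i, j in support:
        kernel[i + radius][j + radius] = k
    return kernel

def gaussian_disk_psf(radius, sigma):
    """Edge-blur PSF h1: K * exp(-(i^2 + j^2) / (2 sigma^2)) on the
    support R_h, with K chosen so the kernel sums to 1."""
    size = 2 * radius + 1
    weights = {}
    for i in range(-radius, radius + 1):
        for j in range(-radius, radius + 1):
            if i * i + j * j <= radius * radius:
                weights[(i, j)] = math.exp(-(i * i + j * j) / (2.0 * sigma ** 2))
    k = 1.0 / sum(weights.values())  # enforce unit sum
    kernel = [[0.0] * size for _ in range(size)]
    for (i, j), w in weights.items():
        kernel[i + radius][j + radius] = k * w
    return kernel
```

Convolving a sharp SR template with either kernel reproduces the blurred SR appearance in the two defocused regions.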
\[ \text{average gray-level difference} = \sum_{i=0}^{\text{image height}} \sum_{j=0}^{\text{image width}} \left[ G_1(i, j) - g_1(i, j) \right]. \]
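The gray-level difference measure above is a direct pixelwise accumulation over two images. A minimal sketch with images as nested lists; note that, as printed, the double sum accumulates signed differences, so dividing by the pixel count (shown in a comment) would yield a true average:

```python
def average_graylevel_difference(g_prev, g_cur):
    """Sum of pixelwise gray-level differences G1(i, j) - g1(i, j)
    between two equally sized images given as nested lists.
    Divide the result by (height * width) for a per-pixel average."""
    return sum(a - b
               for row_a, row_b in zip(g_prev, g_cur)
               for a, b in zip(row_a, row_b))
```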
\[ \text{DifferImage}(x, y) = 127 + \left[ \text{Oddfield}(x, y) - \text{Evenfield}(x, y) \right] / 2. \]
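The field-difference image above recenters the signed odd/even field difference around the gray midlevel 127 so it fits an 8-bit range. A sketch assuming 8-bit integer pixels; the integer division is an assumption (the equation writes a plain division by 2):

```python
def differ_image(odd_field, even_field):
    """Difference image between the odd and even fields, offset by
    127 so that zero difference maps to midgray. Extreme differences
    can still leave [0, 255]; clamp afterward if needed."""
    return [[127 + (o - e) // 2 for o, e in zip(row_o, row_e)]
            for row_o, row_e in zip(odd_field, even_field)]
```

Because only the IR-LED state changes between the two fields, the bright specular reflection stands out against the midgray background of this image.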
