Abstract

A recently proposed visual aid for patients with a restricted visual field (tunnel vision) combines a see-through head-mounted display with a simultaneous minified contour view of the wide-field image of the environment. Such a widening of the effective visual field is helpful for tasks such as visual search, mobility, and orientation. The sufficiency of image contours for performing everyday visual tasks is of major importance for this application, as well as for other applications and for a basic understanding of human vision. This research aims to examine and compare the use of different types of automatically created contours, and contour representations, for practical everyday visual operations using commonly observed images. The visual operations include visual search for items such as cutlery and housewares. Considering different recognition levels, identification of an object is distinguished from mere detection (when the object is not necessarily identified). Some nonconventional vision-based contour representations were developed for this purpose. Experiments were performed with normal-vision subjects by superposing contours of the wide field of the scene over a narrow-field (see-through) background. The results show that about 85% success is obtained in searched-object identification when the best contour versions are employed. Pilot experiments with video simulations are reported at the end of the paper.
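The augmented view described above can be illustrated with a short sketch: a binary edge map of the wide-field image is computed (here with a simple Prewitt operator, one of the detectors compared in the paper) and a minified copy of it is superposed on the central narrow field. This is only a minimal illustration under assumed parameters (edge threshold, minification factor), not the authors' implementation.

```python
import numpy as np

def prewitt_edges(img, threshold=0.25):
    """Binary edge map from the normalized Prewitt gradient magnitude."""
    kx = np.array([[-1.0, 0.0, 1.0]] * 3)   # horizontal derivative kernel
    ky = kx.T                               # vertical derivative kernel
    h, w = img.shape
    pad = np.pad(img.astype(float), 1, mode="edge")
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(3):                      # 3x3 correlation, 'same' output size
        for j in range(3):
            patch = pad[i:i + h, j:j + w]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    mag = np.hypot(gx, gy)
    if mag.max() > 0:
        mag /= mag.max()
    return mag > threshold

def contour_augmented_view(wide_img, minify=4):
    """Keep only the central (seen) field; superpose minified wide-field contours."""
    h, w = wide_img.shape
    ch, cw = h // minify, w // minify       # size of the narrow central field
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    view = np.zeros_like(wide_img)
    view[y0:y0 + ch, x0:x0 + cw] = wide_img[y0:y0 + ch, x0:x0 + cw]
    # subsample the full-field edge map so the whole scene fits the central field
    small = prewitt_edges(wide_img)[::minify, ::minify][:ch, :cw]
    region = view[y0:y0 + ch, x0:x0 + cw]
    region[small] = 255                     # draw the contours in white
    return view
```

In a real aid the contour overlay would be rendered on the see-through display rather than composited into one image; the subsampling step stands in for optical minification.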

© 2013 Optical Society of America


References


  1. E. Peli, “Vision multiplexing: an engineering approach to vision rehabilitation device development,” Optom. Vis. Sci. 78, 304–315 (2001).
  2. F. Vargas-Martin and E. Peli, “Augmented-view for restricted visual field: multiple device implementations,” Optom. Vis. Sci. 79, 715–723 (2002).
  3. E. Peli, G. Luo, A. Bowers, and N. Rensing, “Development and evaluation of vision multiplexing devices for vision impairment,” Int. J. Artif. Intell. Tools 18, 365–378 (2009).
  4. I. Biederman and G. Ju, “Surface versus edge-based determinants of visual recognition,” Cogn. Psychol. 20, 38–64 (1988).
  5. T. Sanocki, K. W. Bowyer, M. D. Heath, and S. Sarkar, “Are edges sufficient for object recognition?” J. Exp. Psychol. Hum. Percept. Perform. 24, 340–349 (1998).
  6. M. Heath, S. Sarkar, T. Sanocki, and K. W. Bowyer, “A robust visual method for assessing the relative performance of edge detection algorithms,” IEEE Trans. Pattern Anal. Mach. Intell. 19, 1338–1359 (1997).
  7. M. Heath, S. Sarkar, T. Sanocki, and K. W. Bowyer, “Comparison of edge detectors: a methodology and initial study,” Comput. Vis. Image Underst. 69, 38–54 (1998).
  8. N. S. Kopeika, A System Engineering Approach to Imaging (SPIE, 1998), Chap. 10.
  9. Y. Yitzhaky and E. Peli, “A method for objective edge detection evaluation and detector parameter selection,” IEEE Trans. Pattern Anal. Mach. Intell. 25, 1027–1033 (2003).
  10. R. Koren and Y. Yitzhaky, “Automatic selection of edge detector parameters based on spatial and statistical measures,” Comput. Vis. Image Underst. 102, 204–213 (2006).
  11. T. Peli and D. Malah, “A study of edge detection algorithms,” Comput. Graph. Image Process. 20, 1–21 (1982).
  12. D. Ziou and S. Tabbone, “Edge detection techniques—an overview,” Tech. Rep. 195 (Department of Mathematics and Computer Science, Université de Sherbrooke, 1997).
  13. M. Basu, “Gaussian-based edge-detection methods—a survey,” IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 32, 252–260 (2002).
  14. M. C. Shin, D. Goldgof, and K. W. Bowyer, “Comparison of edge detector performance through use in an object recognition task,” Comput. Vis. Image Underst. 84, 160–178 (2001).
  15. J. Canny, “A computational approach to edge detection,” IEEE Trans. Pattern Anal. Mach. Intell. 8, 679–698 (1986).
  16. F. Bergholm, “Edge focusing,” IEEE Trans. Pattern Anal. Mach. Intell. 9, 726–741 (1987).
  17. E. Peli, “Feature detection algorithm based on a visual system model,” Proc. IEEE 90, 78–93 (2002).
  18. R. C. Gonzalez and R. E. Woods, Digital Image Processing, 3rd ed. (Prentice-Hall, 2008), Chap. 10.2.5.
  19. S. Di Zenzo, “A note on the gradient of multi-image,” Comput. Vis. Graph. Image Process. 33, 116–125 (1986).
  20. O. Haik, D. Nahmani, Y. Lior, and Y. Yitzhaky, “Effect of restoration on acquisition of moving objects from thermal video sequences degraded by the atmosphere,” Opt. Eng. 45, 117006 (2006).
  21. P. E. Shrout and J. L. Fleiss, “Intraclass correlation: uses in assessing rater reliability,” Psychol. Bull. 86, 420–428 (1979).
  22. http://www.youtube.com/watch?v=9KSeCqrD5es&feature=youtu.be
  23. C. Dickinson, Low Vision: Principles and Practice (Butterworth-Heinemann, 1998).
  24. F. Vargas-Martin and E. Peli, “Eye movements of patients with tunnel vision while walking,” Investig. Ophthalmol. Vis. Sci. 47, 5295–5302 (2006).
  25. P. Arbelaez, M. Maire, C. Fowlkes, and J. Malik, “Contour detection and hierarchical image segmentation,” IEEE Trans. Pattern Anal. Mach. Intell. 33, 898–916 (2011).
  26. P. D. Kovesi, “Image features from phase congruency,” J. Comput. Vis. Res. 1, 1–26 (1999).


Figures (8)

Fig. 1. Example of an original image (a) and its corresponding contour-based augmented-view version (enlarged) (b), in which contours of the wide FOV are superposed on the small central visual field of the image, simulating the tunnel-vision FOV. A red circle marks the searched object (a small cellular phone) in each image. In this image, 44 out of 48 viewers detected the phone (from the contours) in an average time of 14.6 s.

Fig. 2. Example of two images with different contour appearances for the same edge detection methods. (a) Original “carrot” image. (b) Original “plate” image. (c) Prewitt detector, color appearance. (d) Prewitt detector, single-line black or white appearance. (e) Canny detector, adapted-to-background appearance. (f) Canny detector, double-line black-and-white appearance. A red circle marks the searched object in each image.

Fig. 3. Statistics from viewers for each recognition type. (a) Probability of each recognition type per person, with error bars showing the STDs between persons. (b) Object search durations until decision for each recognition type (averaged over all persons). The error bars represent the STDs of the search duration between persons.

Fig. 4. Statistics from images for each recognition type. (a) Probability of each recognition type per image, with error bars showing the STDs between images. (b) Object search durations until decision for each recognition type (averaged over all images). The error bars represent the STDs of the search duration between images.

Fig. 5. Probabilities of recognition for each recognition type for the different contour versions (a, single-line black or white; b, double-line black and white; c, color; d, adaptive to background). (a) Probabilities of true certain identification. (b) Probabilities of false certain identification. (c) Probabilities of true uncertain identification. (d) Probabilities of false uncertain identification.

Fig. 6. Time durations until decision for each recognition type for the different contour versions. (a) Certain identification. (b) False certain identification. (c) Uncertain identification. (d) False uncertain identification.

Fig. 7. (a) Probability of true detection (either certain or uncertain identification). (b) Time durations until decision for true detection.

Fig. 8. Image from the outdoor video after approaching the person holding a magazine, with Einstein’s face recognizable from the contours (Prewitt edge detector with color appearance).

Tables (3)

Table 1. Recognition Probabilities and Search Time Durations of the Overall Recognition Results for 48 Subjects Searching and Recognizing Objects in 128 Images (a Total of 6144 Object Recognition Operations)

Table 2. ANOVA Result of Each Recognition Type for the Ratings of the Different Contour Versions and the Different Images

Table 3. ANOVA P-Values of the Different Contour Versions with Respect to the Specific Version that Obtained the Best Recognition Probability (“Max” for the “True” Recognition Probabilities, and “Min” for the “False” Recognition Probabilities)
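Comparisons like those in Tables 2 and 3 rest on one-way ANOVA over the ratings obtained for the different contour versions. A hedged illustration with synthetic data, using SciPy's `f_oneway` (the group means and sizes here are invented, not the paper's values):

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(seed=0)
# synthetic per-image recognition rates for four contour versions
# (means and spread are purely illustrative)
versions = [rng.normal(loc=m, scale=0.05, size=32)
            for m in (0.85, 0.84, 0.82, 0.70)]
# null hypothesis: all four versions share the same mean rating
f_stat, p_value = f_oneway(*versions)
```

A small p-value would indicate that at least one contour version differs from the others, which is the kind of conclusion Tables 2 and 3 summarize per recognition type.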

Equations (6)


p_cert_id = (Num. of true certain identifications) / (Num. of all answers).
p_false_cert_id = (Num. of false certain identifications) / (Num. of all answers).
p_uncert_id = (Num. of true uncertain identifications) / (Num. of all answers).
p_false_uncert_id = (Num. of false uncertain identifications) / (Num. of all answers).
p_det = (Num. of all true identifications) / (Num. of all answers) = p_cert_id + p_uncert_id.
p_false_det = (Num. of all false identifications) / (Num. of all answers) = p_false_cert_id + p_false_uncert_id.
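The six quantities defined above can be computed directly from tallies of the subjects' answers. A minimal sketch (the string labels for the four answer categories are hypothetical, not taken from the paper):

```python
from collections import Counter

def recognition_probabilities(answers):
    """Compute the six recognition probabilities from a list of per-trial
    outcomes labeled 'true_certain', 'false_certain', 'true_uncertain',
    or 'false_uncertain' (labels are illustrative placeholders)."""
    n = len(answers)
    counts = Counter(answers)
    p = {label: counts[label] / n
         for label in ("true_certain", "false_certain",
                       "true_uncertain", "false_uncertain")}
    # detection aggregates certain and uncertain identifications
    p["det"] = p["true_certain"] + p["true_uncertain"]
    p["false_det"] = p["false_certain"] + p["false_uncertain"]
    return p
```

For example, 6 true certain, 2 true uncertain, 1 false certain, and 1 false uncertain answer out of 10 trials give p_det = 0.8 and p_false_det = 0.2.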
