Abstract

Image recovery under noise is widely studied. However, little attention has been paid to performance as a function of object size. In this work we analyze the probability of recovery as a function of object spatial frequency. The analysis uses a physical model for the acquired signal and noise, and also accounts for potential post-acquisition noise filtering. Linear-systems analysis yields an effective cutoff frequency induced by noise alone, even though the imaging model includes no optical blur. This means that a low signal-to-noise ratio (SNR) in images causes resolution loss, similar to image blur. We further consider the effect on SNR of pointwise image formation models, such as added specular or indirect reflections, additive scattering, radiance attenuation in haze, and flash photography. The result is a tool that assesses the ability to recover (within a desired success rate) an object or feature having a certain size, distance from the camera, and radiance difference from its nearby background, for a given attenuation coefficient of the medium. The bounds depend on the camera specifications.
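The core intuition can be illustrated with a minimal numerical sketch (this is not the paper's derivation): a direct signal attenuated by the medium, compared against a fixed sensor noise floor via an SNR threshold. The attenuation model (Beer-Lambert), the Rose-style threshold of 5, and all numbers and function names below are illustrative assumptions.

```python
import numpy as np

def direct_snr(delta_L, beta, z, sigma_noise):
    """Per-pixel SNR of the object/background radiance difference.

    The direct signal delta_L is attenuated as exp(-beta * z)
    (Beer-Lambert), while sensor noise sigma_noise is unaffected
    by the medium.  All quantities are hypothetical examples.
    """
    return delta_L * np.exp(-beta * z) / sigma_noise

def max_recoverable_distance(delta_L, beta, sigma_noise, snr_threshold=5.0):
    """Largest distance at which the per-pixel SNR still meets a
    Rose-style detection threshold T:
        delta_L * exp(-beta * z) / sigma >= T
        =>  z <= ln(delta_L / (T * sigma)) / beta
    """
    return np.log(delta_L / (snr_threshold * sigma_noise)) / beta

# Example: radiance difference of 100 gray levels, attenuation
# coefficient 0.1 per meter, noise standard deviation of 2 gray levels.
z_max = max_recoverable_distance(100.0, 0.1, 2.0)
```

Past `z_max`, the attenuated contrast drops below the noise-imposed threshold, so the feature is no longer reliably recoverable, which is the sense in which noise alone induces an effective resolution limit.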

© 2012 Optical Society of America

References


  1. Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13, 600–612 (2004).
  2. R. Kaftory, Y. Y. Schechner, and Y. Y. Zeevi, “Variational distance-dependent image restoration,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2007), pp. 1–8.
  3. G. Petschnigg, R. Szeliski, M. Agrawala, M. Cohen, H. Hoppe, and K. Toyama, “Digital photography with flash and no-flash image pairs,” ACM Trans. Graph. 23, 664–672 (2004).
  4. T. Treibitz and Y. Y. Schechner, “Active polarization descattering,” IEEE Trans. Pattern Anal. Machine Intell. 31, 385–399 (2009).
  5. P. Dayan and L. F. Abbott, Theoretical Neuroscience (MIT, 2001), Chap. 4, pp. 139–141.
  6. N. S. Kopeika, A System Engineering Approach to Imaging (SPIE, 1998), Chaps. 9, 10, 19.
  7. S. W. Smith, The Scientist & Engineer’s Guide to Digital Signal Processing (California Technical Publishing, 1997), Chap. 11.
  8. J. Lloyd, Thermal Imaging Systems (Springer, 1975), Chaps. 5, 10.
  9. O. Schade, “An evaluation of photographic image quality and resolving power,” J. Soc. Motion Pict. Telev. Eng. 73, 81–119 (1964).
  10. H. Barrett, “Objective assessment of image quality: effects of quantum noise and object variability,” J. Opt. Soc. Am. A 7, 1266–1278 (1990).
  11. H. Barrett, “NEQ: its progenitors and progeny,” Proc. SPIE 7263, 72630F (2009).
  12. W. Geisler, Ideal Observer Analysis (MIT Press, 2003), pp. 825–837.
  13. I. Cunningham and R. Shaw, “Signal-to-noise optimization of medical imaging systems,” J. Opt. Soc. Am. A 16, 621–632 (1999).
  14. M. Unser, B. L. Trus, and A. C. Steven, “A new resolution criterion based on spectral signal-to-noise ratios,” Ultramicroscopy 23, 39–51 (1987).
  15. M. Shahram and P. Milanfar, “Imaging below the diffraction limit: a statistical analysis,” IEEE Trans. Image Process. 13, 677–689 (2004).
  16. M. Shahram and P. Milanfar, “Statistical and information-theoretic analysis of resolution in imaging,” IEEE Trans. Inf. Theory 52, 3411–3437 (2006).
  17. H. Farid and E. H. Adelson, “Separating reflections from images by use of independent component analysis,” J. Opt. Soc. Am. A 16, 2136–2145 (1999).
  18. S. K. Nayar, X. S. Fang, and T. Boult, “Separation of reflection components using color and polarization,” Int. J. Comput. Vis. 21, 163–186 (1997).
  19. R. Fattal, “Single image dehazing,” ACM Trans. Graph. 27, 72 (2008).
  20. J. Kopf, B. Neubert, B. Chen, M. Cohen, D. Cohen-Or, O. Deussen, M. Uyttendaele, and D. Lischinski, “Deep photo: model-based photograph enhancement and viewing,” ACM Trans. Graph. 27, 116 (2008).
  21. Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, “Polarization-based vision through haze,” Appl. Opt. 42, 511–525 (2003).
  22. R. T. Tan, “Visibility in bad weather from a single image,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2008), p. 108.
  23. J. Gu, R. Ramamoorthi, P. Belhumeur, and S. K. Nayar, “Dirty glass: rendering contamination on transparent surfaces,” in Eurographics Symposium on Rendering (Springer, 2007), pp. 159–170.
  24. T. Treibitz and Y. Y. Schechner, “Recovery limits in pointwise degradation,” in Proceedings of IEEE International Conference on Computational Photography (IEEE, 2009), pp. 1–8.
  25. G. E. Healey and R. Kondepudy, “Radiometric CCD camera calibration and noise estimation,” IEEE Trans. Pattern Anal. Machine Intell. 16, 267–276 (1994).
  26. A. Wenger, A. Gardner, C. Tchou, J. Unger, T. Hawkins, and P. Debevec, “Performance relighting and reflectance transformation with time-multiplexed illumination,” ACM Trans. Graph. 24, 756–764 (2005).
  27. J. Takamatsu, Y. Matsushita, and K. Ikeuchi, “Estimating radiometric response functions from image noise variance,” in Proceedings of European Conference on Computer Vision (Springer, 2008), pp. 623–637.
  28. T. J. Fellers and M. W. Davidson, “CCD noise sources and signal-to-noise ratio,” Optical Microscopy Primer (Molecular Expressions™) (2004).
  29. S. Inoué and K. R. Spring, Video Microscopy: The Fundamentals, 2nd ed. (Springer, 1997), Chap. 7, p. 316.
  30. Y. Y. Schechner, S. K. Nayar, and P. N. Belhumeur, “Multiplexing for optimal lighting,” IEEE Trans. Pattern Anal. Machine Intell. 29, 1339–1354 (2007).
  31. C. Liu, R. Szeliski, S. B. Kang, C. L. Zitnick, and W. T. Freeman, “Automatic estimation and removal of noise from a single image,” IEEE Trans. Pattern Anal. Machine Intell. 30, 299–314 (2008).
  32. Y. Matsushita and S. Lin, “Radiometric calibration from noise distributions,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2007), pp. 1–8.
  33. S. K. Nayar, G. Krishnan, M. D. Grossberg, and R. Raskar, “Fast separation of direct and global components of a scene using high frequency illumination,” ACM Trans. Graph. 25, 935–944 (2006).
  34. S. M. Seitz, Y. Matsushita, and K. N. Kutulakos, “A theory of inverse light transport,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2005), pp. 1440–1447.
  35. F. Koreban and Y. Y. Schechner, “Geometry by deflaring,” in Proceedings of IEEE International Conference on Computational Photography (IEEE, 2009), pp. 1–8.
  36. S. Bobrov and Y. Y. Schechner, “Image-based prediction of imaging and vision performance,” J. Opt. Soc. Am. A 24, 1920–1929 (2007).
  37. R. C. Henry, S. Mahadev, S. Urquijo, and D. Chitwood, “Color perception through atmospheric haze,” J. Opt. Soc. Am. A 17, 831–835 (2000).
  38. K. Tan and J. P. Oakley, “Physics-based approach to color image enhancement in poor visibility conditions,” J. Opt. Soc. Am. A 18, 2460–2467 (2001).
  39. Y. Schechner, D. Diner, and J. Martonchik, “Spaceborne underwater imaging,” in Proceedings of IEEE International Conference on Computational Photography (IEEE, 2011), pp. 1–8.
  40. M. Levoy, B. Chen, V. Vaish, M. Horowitz, I. McDowall, and M. Bolas, “Synthetic aperture confocal imaging,” ACM Trans. Graph. 23, 825–834 (2004).
  41. A. Agrawal, R. Raskar, S. K. Nayar, and Y. Li, “Removing photography artifacts using gradient projection and flash-exposure sampling,” ACM Trans. Graph. 24, 828–835 (2005).
  42. B. Wells, “MTF provides an image-quality metric,” Laser Focus World 41 (2005).
  43. S. G. Narasimhan, C. Wang, and S. K. Nayar, “All the images of an outdoor scene,” in Proceedings of European Conference on Computer Vision (Springer, 2002), pp. 148–162.
  44. J. C. Leachtenauer, W. Malila, J. Irvine, L. Colburn, and N. Salvaggio, “General image-quality equation: GIQE,” Appl. Opt. 36, 8322–8328 (1997).
  45. P. Roetling, E. Trabka, and R. Kinzly, “Theoretical prediction of image quality,” J. Opt. Soc. Am. 58, 342–344 (1968).
  46. R. D. Fiete and T. A. Tantalo, “Comparison of SNR image quality metrics for remote sensing systems,” Opt. Eng. 40, 574–585 (2001).
  47. A. Burgess, “The Rose model, revisited,” J. Opt. Soc. Am. A 16, 633–646 (1999).
  48. B. W. Keelan, Handbook of Image Quality (Dekker, 2002), Chaps. 2, 3.
  49. D. H. Kelly, “Adaptation effects on spatio-temporal sine-wave thresholds,” Vis. Res. 12, 89–101 (1972).
  50. R. Mantiuk, K. Kim, A. Rempel, and W. Heidrich, “HDR-VDP-2: a calibrated visual metric for visibility and quality predictions in all luminance conditions,” ACM Trans. Graph. 30, 40 (2011).
  51. A. Buades, B. Coll, and J. M. Morel, “A review of image denoising algorithms, with a new one,” Multiscale Model. Simul. 4, 490–530 (2005).
  52. K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-D transform-domain collaborative filtering,” IEEE Trans. Image Process. 16, 2080–2095 (2007).
  53. M. Elad and M. Aharon, “Image denoising via sparse and redundant representations over learned dictionaries,” IEEE Trans. Image Process. 15, 3736–3745 (2006).
  54. P. Chatterjee and P. Milanfar, “Clustering-based denoising with locally learned dictionaries,” IEEE Trans. Image Process. 18, 1438–1451 (2009).
  55. J. Portilla, V. Strela, M. J. Wainwright, and E. P. Simoncelli, “Image denoising using scale mixtures of Gaussians in the wavelet domain,” IEEE Trans. Image Process. 12, 1338–1351 (2003).
  56. P. Chatterjee and P. Milanfar, “Is denoising dead?,” IEEE Trans. Image Process. 19, 895–911 (2010).
  57. A. Levin and B. Nadler, “Natural image denoising: optimality and inherent bounds,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2010), pp. 2833–2840.
  58. J. Portilla, V. Strela, M. J. Wainwright, and E. P. Simoncelli, “Denoising examples,” decsai.ugr.es/~javier/denoise/examples.
  59. S. Hasinoff, F. Durand, and W. Freeman, “Noise-optimal capture for high dynamic range photography,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2010), pp. 553–560.

2011

R. Mantiuk, K. Kim, A. Rempel, and W. Heidrich, “Hdr-vdp-2: A calibrated visual metric for visibility and quality predictions in all luminance conditions,” ACM Trans. Graph. 30, 40 (2011).
[CrossRef]

2010

P. Chatterjee and P. Milanfar, “Is denoising dead?,” IEEE Trans. Image Process. 19, 895–911 (2010).
[CrossRef]

2009

P. Chatterjee and P. Milanfar, “Clustering-based denoising with locally learned dictionaries,” IEEE Trans. Image Process. 18, 1438–1451 (2009).
[CrossRef]

T. Treibitz and Y. Y. Schechner, “Active polarization descattering,” IEEE Trans. Pattern Anal. Machine Intell. 31, 385–399 (2009).
[CrossRef]

H. Barrett, “NEQ: its progenitors and progeny,” Proc. SPIE 7263, 72630F (2009).

2008

R. Fattal, “Single image dehazing,” ACM Trans. Graph. 27, 72, (2008).
[CrossRef]

J. Kopf, B. Neubert, B. Chen, M. Cohen, D. Cohen-Or, O. Deussen, M. Uyttendaele, and D. Lischinski, “Deep photo: model-based photograph enhancement and viewing,” ACM Trans. Graph. 27, 116 (2008).
[CrossRef]

C. Liu, R. Szeliski, S. B. Kang, C. L. Zitnick, and W. T. Freeman, “Automatic estimation and removal of noise from a single image,” IEEE Trans. Pattern Anal. Machine Intell. 30, 299–314 (2008).
[CrossRef]

2007

S. Bobrov, and Y. Y. Schechner, “Image-based prediction of imaging and vision performance,” J. Opt. Soc. Am. A 24, 1920–1929 (2007).
[CrossRef]

K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-d transform-domain collaborative filtering,” IEEE Trans. Image Process. 16, 2080–2095 (2007).
[CrossRef]

Y. Y. Schechner, S. K. Nayar, and P. N. Belhumeur, “Multiplexing for optimal lighting,” IEEE Trans. Pattern Anal. Machine Intell. 29, 1339–1354 (2007).
[CrossRef]

2006

M. Shahram and P. Milanfar, “Statistical and information-theoretic analysis of resolution in imaging,” IEEE Trans. Inf. Theory 52, 3411–3437 (2006).
[CrossRef]

M. Elad and M. Aharon, “Image denoising via sparse and redundant representations over learned dictionaries,” IEEE Trans. Image Process. 15, 3736–3745 (2006).
[CrossRef]

S. K. Nayar, G. Krishnan, M. D. Grossberg, and R. Raskar, “Fast separation of direct and global components of a scene using high frequency illumination,” ACM Trans. Graph. 25, 935–944 (2006).
[CrossRef]

2005

A. Agrawal, R. Raskar, S. K. Nayar, and Y. Li, “Removing photography artifacts using gradient projection and flash-exposure sampling,” ACM Trans. Graph. 24, 828–835 (2005).
[CrossRef]

B. Wells, “MTF provides an image-quality metric,” Laser Focus World 41 (2005).

A. Buades, B. Coll, and J. M. Morel, “A review of image denoising algorithms, with a new one,” Multiscale Model. Simul. 4, 490–530 (2005).
[CrossRef]

A. Wenger, A. Gardner, C. Tchou, J. Unger, T. Hawkins, and P. Debevec, “Performance relighting and reflectance transformation with time-multiplexed illumination,” ACM Trans. Graph. 24, 756–764 (2005).
[CrossRef]

2004

M. Shahram and P. Milanfar, “Imaging below the diffraction limit: a statistical analysis,” IEEE Trans. Image Process. 13, 677–689 (2004).
[CrossRef]

Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13, 600–612 (2004).
[CrossRef]

G. Petschnigg, R. Szeliski, M. Agrawala, M. Cohen, H. Hoppe, and K. Toyama, “Digital photography with flash and no-flash image pairs,” ACM Trans. Graph. 23, 664–672 (2004).
[CrossRef]

M. Levoy, B. Chen, V. Vaish, M. Horowitz, I. McDowall, and M. Bolas, “Synthetic aperture confocal imaging,” ACM Trans. Graph. 23, 825–834 (2004).
[CrossRef]

2003

Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, “Polarization-based vision through haze,” Appl. Opt. 42, 511–525 (2003).
[CrossRef]

J. Portilla, V. Strela, M. J. Wainwright, and E. P. Simoncelli, “Image denoising using scale mixtures of gaussians in the wavelet domain,” IEEE Trans. Image Process. 12, 1338–1351 (2003).
[CrossRef]

2001

K. Tan, and J. P. Oakley, “Physics-based approach to color image enhancement in poor visibility conditions,” J. Opt. Soc. Am. A 18, 2460–2467 (2001).
[CrossRef]

R. D. Fiete, and T. A. Tantalo, “Comparison of SNR image quality metrics for remote sensing systems,” Opt. Eng. 40, 574–585 (2001).
[CrossRef]

2000

1999

1997

S. K. Nayar, X. S. Fang, and T. Boult, “Separation of reflection components using color and polarization,” Int. J. Comput. Vis. 21, 163–186 (1997).
[CrossRef]

J. C. Leachtenauer, W. Malila, J. Irvine, L. Colburn, and N. Salvaggio, “General image-quality equation: GIQE,” Appl. Opt. 36, 8322–8328 (1997).
[CrossRef]

1994

G. E. Healey and R. Kondepudy, “Radiometric CCD camera calibration and noise estimation,” IEEE Trans. Pattern Anal. Machine Intell. 16, 267–276 (1994).
[CrossRef]

1990

1987

M. Unser, B. L. Trus, and A. C. Steven, “A new resolution criterion based on spectral signal-to-noise ratios,” Ultramicroscopy 23, 39–51 (1987).
[CrossRef]

1972

D. H. Kelly, “Adaptation effects on spatio-temporal sine-wave thresholds,” Vis. Res. 12, 89–101 (1972).
[CrossRef]

1968

1964

O. Schade, “An evaluation of photographic image quality and resolving power,” J. Soc. Motion Pict. Telev. Eng. 73, 81–119 (1964).

Abbott, L. F.

P. Dayan and L. F. Abbott, Theoretical Neuroscience (MIT, 2001), Chap. 4, pp. 139–141.

Adelson, E. H.

Agrawal, A.

A. Agrawal, R. Raskar, S. K. Nayar, and Y. Li, “Removing photography artifacts using gradient projection and flash-exposure sampling,” ACM Trans. Graph. 24, 828–835 (2005).
[CrossRef]

Agrawala, M.

G. Petschnigg, R. Szeliski, M. Agrawala, M. Cohen, H. Hoppe, and K. Toyama, “Digital photography with flash and no-flash image pairs,” ACM Trans. Graph. 23, 664–672 (2004).
[CrossRef]

Aharon, M.

M. Elad and M. Aharon, “Image denoising via sparse and redundant representations over learned dictionaries,” IEEE Trans. Image Process. 15, 3736–3745 (2006).
[CrossRef]

Barrett, H.

Belhumeur, P.

J. Gu, R. Ramamoorthi, P. Belhumeur, and S. K. Nayar, “Dirty glass: rendering contamination on transparent surfaces,” in Eurographics Symposium on Rendering (Springer, 2007), p. 159–170.

Belhumeur, P. N.

Y. Y. Schechner, S. K. Nayar, and P. N. Belhumeur, “Multiplexing for optimal lighting,” IEEE Trans. Pattern Anal. Machine Intell. 29, 1339–1354 (2007).
[CrossRef]

Bobrov, S.

Bolas, M.

M. Levoy, B. Chen, V. Vaish, M. Horowitz, I. McDowall, and M. Bolas, “Synthetic aperture confocal imaging,” ACM Trans. Graph. 23, 825–834 (2004).
[CrossRef]

Boult, T.

S. K. Nayar, X. S. Fang, and T. Boult, “Separation of reflection components using color and polarization,” Int. J. Comput. Vis. 21, 163–186 (1997).
[CrossRef]

Bovik, A.

Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13, 600–612 (2004).
[CrossRef]

Buades, A.

A. Buades, B. Coll, and J. M. Morel, “A review of image denoising algorithms, with a new one,” Multiscale Model. Simul. 4, 490–530 (2005).
[CrossRef]

Burgess, A.

Chatterjee, P.

P. Chatterjee and P. Milanfar, “Is denoising dead?,” IEEE Trans. Image Process. 19, 895–911 (2010).
[CrossRef]

P. Chatterjee and P. Milanfar, “Clustering-based denoising with locally learned dictionaries,” IEEE Trans. Image Process. 18, 1438–1451 (2009).
[CrossRef]

Chen, B.

J. Kopf, B. Neubert, B. Chen, M. Cohen, D. Cohen-Or, O. Deussen, M. Uyttendaele, and D. Lischinski, “Deep photo: model-based photograph enhancement and viewing,” ACM Trans. Graph. 27, 116 (2008).
[CrossRef]

M. Levoy, B. Chen, V. Vaish, M. Horowitz, I. McDowall, and M. Bolas, “Synthetic aperture confocal imaging,” ACM Trans. Graph. 23, 825–834 (2004).
[CrossRef]

Chitwood, D.

Cohen, M.

J. Kopf, B. Neubert, B. Chen, M. Cohen, D. Cohen-Or, O. Deussen, M. Uyttendaele, and D. Lischinski, “Deep photo: model-based photograph enhancement and viewing,” ACM Trans. Graph. 27, 116 (2008).
[CrossRef]

G. Petschnigg, R. Szeliski, M. Agrawala, M. Cohen, H. Hoppe, and K. Toyama, “Digital photography with flash and no-flash image pairs,” ACM Trans. Graph. 23, 664–672 (2004).
[CrossRef]

Cohen-Or, D.

J. Kopf, B. Neubert, B. Chen, M. Cohen, D. Cohen-Or, O. Deussen, M. Uyttendaele, and D. Lischinski, “Deep photo: model-based photograph enhancement and viewing,” ACM Trans. Graph. 27, 116 (2008).
[CrossRef]

Colburn, L.

Coll, B.

A. Buades, B. Coll, and J. M. Morel, “A review of image denoising algorithms, with a new one,” Multiscale Model. Simul. 4, 490–530 (2005).
[CrossRef]

Cunningham, I.

Dabov, K.

K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-d transform-domain collaborative filtering,” IEEE Trans. Image Process. 16, 2080–2095 (2007).
[CrossRef]

Davidson, M. W.

T. J. Fellers, and M. W. Davidson, “CCD noise sources and signal-to-noise ratio,” Optical Microscopy Primer (Molecular Expressions™) (2004).

Dayan, P.

P. Dayan and L. F. Abbott, Theoretical Neuroscience (MIT, 2001), Chap. 4, pp. 139–141.

Debevec, P.

A. Wenger, A. Gardner, C. Tchou, J. Unger, T. Hawkins, and P. Debevec, “Performance relighting and reflectance transformation with time-multiplexed illumination,” ACM Trans. Graph. 24, 756–764 (2005).
[CrossRef]

Deussen, O.

J. Kopf, B. Neubert, B. Chen, M. Cohen, D. Cohen-Or, O. Deussen, M. Uyttendaele, and D. Lischinski, “Deep photo: model-based photograph enhancement and viewing,” ACM Trans. Graph. 27, 116 (2008).
[CrossRef]

Diner, D.

Y. Schechner, D. Diner, and J. Martonchik, “Spaceborne underwater imaging,” in Proceedings of IEEE International Conference on Computational Photography (IEEE, 2011), p. 1–8.

Durand, F.

S. Hasinoff, F. Durand, and W. Freeman, “Noise-optimal capture for high dynamic range photography,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2010), p. 553–560.

Egiazarian, K.

K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-d transform-domain collaborative filtering,” IEEE Trans. Image Process. 16, 2080–2095 (2007).
[CrossRef]

Elad, M.

M. Elad and M. Aharon, “Image denoising via sparse and redundant representations over learned dictionaries,” IEEE Trans. Image Process. 15, 3736–3745 (2006).
[CrossRef]

Fang, X. S.

S. K. Nayar, X. S. Fang, and T. Boult, “Separation of reflection components using color and polarization,” Int. J. Comput. Vis. 21, 163–186 (1997).
[CrossRef]

Farid, H.

Fattal, R.

R. Fattal, “Single image dehazing,” ACM Trans. Graph. 27, 72, (2008).
[CrossRef]

Fellers, T. J.

T. J. Fellers, and M. W. Davidson, “CCD noise sources and signal-to-noise ratio,” Optical Microscopy Primer (Molecular Expressions™) (2004).

Fiete, R. D.

R. D. Fiete, and T. A. Tantalo, “Comparison of SNR image quality metrics for remote sensing systems,” Opt. Eng. 40, 574–585 (2001).
[CrossRef]

Foi, A.

K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-d transform-domain collaborative filtering,” IEEE Trans. Image Process. 16, 2080–2095 (2007).
[CrossRef]

Freeman, W.

S. Hasinoff, F. Durand, and W. Freeman, “Noise-optimal capture for high dynamic range photography,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2010), p. 553–560.

Freeman, W. T.

C. Liu, R. Szeliski, S. B. Kang, C. L. Zitnick, and W. T. Freeman, “Automatic estimation and removal of noise from a single image,” IEEE Trans. Pattern Anal. Machine Intell. 30, 299–314 (2008).
[CrossRef]

Gardner, A.

A. Wenger, A. Gardner, C. Tchou, J. Unger, T. Hawkins, and P. Debevec, “Performance relighting and reflectance transformation with time-multiplexed illumination,” ACM Trans. Graph. 24, 756–764 (2005).
[CrossRef]

Geisler, W.

W. Geisler, Ideal Observer Analysis (MIT Press, 2003), pp. 825–837.

Grossberg, M. D.

S. K. Nayar, G. Krishnan, M. D. Grossberg, and R. Raskar, “Fast separation of direct and global components of a scene using high frequency illumination,” ACM Trans. Graph. 25, 935–944 (2006).
[CrossRef]

Gu, J.

J. Gu, R. Ramamoorthi, P. Belhumeur, and S. K. Nayar, “Dirty glass: rendering contamination on transparent surfaces,” in Eurographics Symposium on Rendering (Springer, 2007), p. 159–170.

Hasinoff, S.

S. Hasinoff, F. Durand, and W. Freeman, “Noise-optimal capture for high dynamic range photography,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2010), p. 553–560.

Hawkins, T.

A. Wenger, A. Gardner, C. Tchou, J. Unger, T. Hawkins, and P. Debevec, “Performance relighting and reflectance transformation with time-multiplexed illumination,” ACM Trans. Graph. 24, 756–764 (2005).
[CrossRef]

Healey, G. E.

G. E. Healey and R. Kondepudy, “Radiometric CCD camera calibration and noise estimation,” IEEE Trans. Pattern Anal. Machine Intell. 16, 267–276 (1994).
[CrossRef]

Heidrich, W.

R. Mantiuk, K. Kim, A. Rempel, and W. Heidrich, “Hdr-vdp-2: A calibrated visual metric for visibility and quality predictions in all luminance conditions,” ACM Trans. Graph. 30, 40 (2011).
[CrossRef]

Henry, R. C.

Hoppe, H.

G. Petschnigg, R. Szeliski, M. Agrawala, M. Cohen, H. Hoppe, and K. Toyama, “Digital photography with flash and no-flash image pairs,” ACM Trans. Graph. 23, 664–672 (2004).
[CrossRef]

Horowitz, M.

M. Levoy, B. Chen, V. Vaish, M. Horowitz, I. McDowall, and M. Bolas, “Synthetic aperture confocal imaging,” ACM Trans. Graph. 23, 825–834 (2004).
[CrossRef]

Ikeuchi, K.

J. Takamatsu, Y. Matsushita, and K. Ikeuchi, “Estimating radiometric response functions from image noise variance,” in Proceedings of European Conference on Computer Vision (Springer, 2008), pp. 623–637.

Inoué, S.

S. Inoué, and K. R. Spring, Video Microscopy: The Fundamentals, 2nd ed (Springer, 1997), Chap. 7, p. 316.

Irvine, J.

Kaftory, R.

R. Kaftory, Y. Y. Schechner, and Y. Y. Zeevi, “Variational distance-dependent image restoration,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2007), pp. 1–8.

Kang, S. B.

C. Liu, R. Szeliski, S. B. Kang, C. L. Zitnick, and W. T. Freeman, “Automatic estimation and removal of noise from a single image,” IEEE Trans. Pattern Anal. Machine Intell. 30, 299–314 (2008).
[CrossRef]

Katkovnik, V.

K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-d transform-domain collaborative filtering,” IEEE Trans. Image Process. 16, 2080–2095 (2007).
[CrossRef]

Keelan, B. W.

B. W. Keelan, Handbook of Image Quality (Dekker, 2002),Chaps. 2, 3.

Kelly, D. H.

D. H. Kelly, “Adaptation effects on spatio-temporal sine-wave thresholds,” Vis. Res. 12, 89–101 (1972).
[CrossRef]

Kim, K.

R. Mantiuk, K. Kim, A. Rempel, and W. Heidrich, “Hdr-vdp-2: A calibrated visual metric for visibility and quality predictions in all luminance conditions,” ACM Trans. Graph. 30, 40 (2011).
[CrossRef]

Kinzly, R.

Kondepudy, R.

G. E. Healey and R. Kondepudy, “Radiometric CCD camera calibration and noise estimation,” IEEE Trans. Pattern Anal. Machine Intell. 16, 267–276 (1994).
[CrossRef]

Kopeika, N. S.

N. S. Kopeika, A System Engineering Approach to Imaging (SPIE, 1998), Chaps. 9, 10, 19.

Kopf, J.

J. Kopf, B. Neubert, B. Chen, M. Cohen, D. Cohen-Or, O. Deussen, M. Uyttendaele, and D. Lischinski, “Deep photo: model-based photograph enhancement and viewing,” ACM Trans. Graph. 27, 116 (2008).
[CrossRef]

Koreban, F.

F. Koreban, and Y. Y. Schechner, “Geometry by deflaring,” in Proceedings of IEEE International Conference on Computational Photography (IEEE, 2009), p. 1–8.

Krishnan, G.

S. K. Nayar, G. Krishnan, M. D. Grossberg, and R. Raskar, “Fast separation of direct and global components of a scene using high frequency illumination,” ACM Trans. Graph. 25, 935–944 (2006).
[CrossRef]

Kutulakos, K. N.

S. M. Seitz, Y. Matsushita, and K. N. Kutulakos, “A theory of inverse light transport,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2005), pp. 1440–1447.

Leachtenauer, J. C.

Levin, A.

A. Levin and B. Nadler, “Natural image denoising: optimality and inherent bounds,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2010), p. 2833–2840.

Levoy, M.

M. Levoy, B. Chen, V. Vaish, M. Horowitz, I. McDowall, and M. Bolas, “Synthetic aperture confocal imaging,” ACM Trans. Graph. 23, 825–834 (2004).
[CrossRef]

Li, Y.

A. Agrawal, R. Raskar, S. K. Nayar, and Y. Li, “Removing photography artifacts using gradient projection and flash-exposure sampling,” ACM Trans. Graph. 24, 828–835 (2005).
[CrossRef]

Lin, S.

Y. Matsushita, and S. Lin, “Radiometric calibration from noise distributions,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2007), p. 1–8.

Lischinski, D.

J. Kopf, B. Neubert, B. Chen, M. Cohen, D. Cohen-Or, O. Deussen, M. Uyttendaele, and D. Lischinski, “Deep photo: model-based photograph enhancement and viewing,” ACM Trans. Graph. 27, 116 (2008).
[CrossRef]

Liu, C.

C. Liu, R. Szeliski, S. B. Kang, C. L. Zitnick, and W. T. Freeman, “Automatic estimation and removal of noise from a single image,” IEEE Trans. Pattern Anal. Machine Intell. 30, 299–314 (2008).
[CrossRef]

Lloyd, J.

J. Lloyd, Thermal Imaging Systems (Springer, 1975). Chaps. 5, 10.

Mahadev, S.

Malila, W.

Mantiuk, R.

R. Mantiuk, K. Kim, A. Rempel, and W. Heidrich, “Hdr-vdp-2: A calibrated visual metric for visibility and quality predictions in all luminance conditions,” ACM Trans. Graph. 30, 40 (2011).
[CrossRef]

Martonchik, J.

Y. Schechner, D. Diner, and J. Martonchik, “Spaceborne underwater imaging,” in Proceedings of IEEE International Conference on Computational Photography (IEEE, 2011), p. 1–8.

Matsushita, Y.

Y. Matsushita, and S. Lin, “Radiometric calibration from noise distributions,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2007), p. 1–8.

S. M. Seitz, Y. Matsushita, and K. N. Kutulakos, “A theory of inverse light transport,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2005), pp. 1440–1447.

J. Takamatsu, Y. Matsushita, and K. Ikeuchi, “Estimating radiometric response functions from image noise variance,” in Proceedings of European Conference on Computer Vision (Springer, 2008), pp. 623–637.

McDowall, I.

M. Levoy, B. Chen, V. Vaish, M. Horowitz, I. McDowall, and M. Bolas, “Synthetic aperture confocal imaging,” ACM Trans. Graph. 23, 825–834 (2004).
[CrossRef]

Milanfar, P.

P. Chatterjee and P. Milanfar, “Is denoising dead?,” IEEE Trans. Image Process. 19, 895–911 (2010).
[CrossRef]

P. Chatterjee and P. Milanfar, “Clustering-based denoising with locally learned dictionaries,” IEEE Trans. Image Process. 18, 1438–1451 (2009).
[CrossRef]

M. Shahram and P. Milanfar, “Statistical and information-theoretic analysis of resolution in imaging,” IEEE Trans. Inf. Theory 52, 3411–3437 (2006).
[CrossRef]

M. Shahram and P. Milanfar, “Imaging below the diffraction limit: a statistical analysis,” IEEE Trans. Image Process. 13, 677–689 (2004).
[CrossRef]

Morel, J. M.

A. Buades, B. Coll, and J. M. Morel, “A review of image denoising algorithms, with a new one,” Multiscale Model. Simul. 4, 490–530 (2005).
[CrossRef]

Nadler, B.

A. Levin and B. Nadler, “Natural image denoising: optimality and inherent bounds,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2010), p. 2833–2840.

Narasimhan, S. G.

Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, “Polarization-based vision through haze,” Appl. Opt. 42, 511–525 (2003).
[CrossRef]

S. G. Narasimhan, C. Wang, and S. K. Nayar, “All the images of an outdoor scene,” in Proceedings of European Conference on Computer Vision (IEEE, 2002), pp. 148–162.

Nayar, S. K.

Y. Y. Schechner, S. K. Nayar, and P. N. Belhumeur, “Multiplexing for optimal lighting,” IEEE Trans. Pattern Anal. Machine Intell. 29, 1339–1354 (2007).
[CrossRef]

S. K. Nayar, G. Krishnan, M. D. Grossberg, and R. Raskar, “Fast separation of direct and global components of a scene using high frequency illumination,” ACM Trans. Graph. 25, 935–944 (2006).
[CrossRef]

A. Agrawal, R. Raskar, S. K. Nayar, and Y. Li, “Removing photography artifacts using gradient projection and flash-exposure sampling,” ACM Trans. Graph. 24, 828–835 (2005).
[CrossRef]

Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, “Polarization-based vision through haze,” Appl. Opt. 42, 511–525 (2003).
[CrossRef]

S. K. Nayar, X. S. Fang, and T. Boult, “Separation of reflection components using color and polarization,” Int. J. Comput. Vis. 21, 163–186 (1997).
[CrossRef]

J. Gu, R. Ramamoorthi, P. Belhumeur, and S. K. Nayar, “Dirty glass: rendering contamination on transparent surfaces,” in Eurographics Symposium on Rendering (Springer, 2007), p. 159–170.

S. G. Narasimhan, C. Wang, and S. K. Nayar, “All the images of an outdoor scene,” in Proceedings of European Conference on Computer Vision (IEEE, 2002), pp. 148–162.

Neubert, B.

J. Kopf, B. Neubert, B. Chen, M. Cohen, D. Cohen-Or, O. Deussen, M. Uyttendaele, and D. Lischinski, “Deep photo: model-based photograph enhancement and viewing,” ACM Trans. Graph. 27, 116 (2008).
[CrossRef]

Oakley, J. P.

Petschnigg, G.

G. Petschnigg, R. Szeliski, M. Agrawala, M. Cohen, H. Hoppe, and K. Toyama, “Digital photography with flash and no-flash image pairs,” ACM Trans. Graph. 23, 664–672 (2004).
[CrossRef]

Portilla, J.

J. Portilla, V. Strela, M. J. Wainwright, and E. P. Simoncelli, “Image denoising using scale mixtures of gaussians in the wavelet domain,” IEEE Trans. Image Process. 12, 1338–1351 (2003).
[CrossRef]

R. Mantiuk, K. Kim, A. Rempel, and W. Heidrich, “HDR-VDP-2: a calibrated visual metric for visibility and quality predictions in all luminance conditions,” ACM Trans. Graph. 30, 40 (2011).
[CrossRef]

O. Schade, “An evaluation of photographic image quality and resolving power,” J. Soc. Motion Pict. Telev. Eng. 73, 81–119 (1964).

Y. Schechner, D. Diner, and J. Martonchik, “Spaceborne underwater imaging,” in Proceedings of IEEE International Conference on Computational Photography (IEEE, 2011), pp. 1–8.

T. Treibitz and Y. Y. Schechner, “Active polarization descattering,” IEEE Trans. Pattern Anal. Machine Intell. 31, 385–399 (2009).
[CrossRef]

S. Bobrov and Y. Y. Schechner, “Image-based prediction of imaging and vision performance,” J. Opt. Soc. Am. A 24, 1920–1929 (2007).
[CrossRef]

T. Treibitz and Y. Y. Schechner, “Recovery limits in pointwise degradation,” in Proceedings of IEEE International Conference on Computational Photography (IEEE, 2009), pp. 1–8.

R. Kaftory, Y. Y. Schechner, and Y. Y. Zeevi, “Variational distance-dependent image restoration,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2007), pp. 1–8.

F. Koreban and Y. Y. Schechner, “Geometry by deflaring,” in Proceedings of IEEE International Conference on Computational Photography (IEEE, 2009), pp. 1–8.

S. M. Seitz, Y. Matsushita, and K. N. Kutulakos, “A theory of inverse light transport,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2005), pp. 1440–1447.

M. Shahram and P. Milanfar, “Statistical and information-theoretic analysis of resolution in imaging,” IEEE Trans. Inf. Theory 52, 3411–3437 (2006).
[CrossRef]

M. Shahram and P. Milanfar, “Imaging below the diffraction limit: a statistical analysis,” IEEE Trans. Image Process. 13, 677–689 (2004).
[CrossRef]

Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13, 600–612 (2004).
[CrossRef]

S. W. Smith, The Scientist & Engineer’s Guide to Digital Signal Processing (California Tech. Publishing, 1997), Chap. 11.

S. Inoué and K. R. Spring, Video Microscopy: The Fundamentals, 2nd ed. (Springer, 1997), Chap. 7, p. 316.

M. Unser, B. L. Trus, and A. C. Steven, “A new resolution criterion based on spectral signal-to-noise ratios,” Ultramicroscopy 23, 39–51 (1987).
[CrossRef]

C. Liu, R. Szeliski, S. B. Kang, C. L. Zitnick, and W. T. Freeman, “Automatic estimation and removal of noise from a single image,” IEEE Trans. Pattern Anal. Machine Intell. 30, 299–314 (2008).
[CrossRef]

J. Takamatsu, Y. Matsushita, and K. Ikeuchi, “Estimating radiometric response functions from image noise variance,” in Proceedings of European Conference on Computer Vision (Springer, 2008), pp. 623–637.

R. T. Tan, “Visibility in bad weather from a single image,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2008), p. 108.

R. D. Fiete and T. A. Tantalo, “Comparison of SNR image quality metrics for remote sensing systems,” Opt. Eng. 40, 574–585 (2001).
[CrossRef]

A. Wenger, A. Gardner, C. Tchou, J. Unger, T. Hawkins, and P. Debevec, “Performance relighting and reflectance transformation with time-multiplexed illumination,” ACM Trans. Graph. 24, 756–764 (2005).
[CrossRef]

M. Levoy, B. Chen, V. Vaish, M. Horowitz, I. McDowall, and M. Bolas, “Synthetic aperture confocal imaging,” ACM Trans. Graph. 23, 825–834 (2004).
[CrossRef]

B. Wells, “MTF provides an image-quality metric,” Laser Focus World 41 (2005).

R. Fattal, “Single image dehazing,” ACM Trans. Graph. 27, 72 (2008).
[CrossRef]

K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-D transform-domain collaborative filtering,” IEEE Trans. Image Process. 16, 2080–2095 (2007).
[CrossRef]

M. Elad and M. Aharon, “Image denoising via sparse and redundant representations over learned dictionaries,” IEEE Trans. Image Process. 15, 3736–3745 (2006).
[CrossRef]

P. Chatterjee and P. Milanfar, “Clustering-based denoising with locally learned dictionaries,” IEEE Trans. Image Process. 18, 1438–1451 (2009).
[CrossRef]

P. Chatterjee and P. Milanfar, “Is denoising dead?,” IEEE Trans. Image Process. 19, 895–911 (2010).
[CrossRef]

G. E. Healey and R. Kondepudy, “Radiometric CCD camera calibration and noise estimation,” IEEE Trans. Pattern Anal. Machine Intell. 16, 267–276 (1994).
[CrossRef]

A. Buades, B. Coll, and J. M. Morel, “A review of image denoising algorithms, with a new one,” Multiscale Model. Simul. 4, 490–530 (2005).
[CrossRef]

H. Barrett, “NEQ: its progenitors and progeny,” Proc. SPIE 7263, 72630F (2009).

D. H. Kelly, “Adaptation effects on spatio-temporal sine-wave thresholds,” Vis. Res. 12, 89–101 (1972).
[CrossRef]

B. W. Keelan, Handbook of Image Quality (Dekker, 2002), Chaps. 2, 3.

A. Levin and B. Nadler, “Natural image denoising: optimality and inherent bounds,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2010), pp. 2833–2840.

J. Portilla, V. Strela, M. J. Wainwright, and E. P. Simoncelli, “Denoising examples,” decsai.ugr.es/~javier/denoise/examples.

S. Hasinoff, F. Durand, and W. Freeman, “Noise-optimal capture for high dynamic range photography,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2010), pp. 553–560.

W. Geisler, Ideal Observer Analysis (MIT Press, 2003), pp. 825–837.

P. Dayan and L. F. Abbott, Theoretical Neuroscience (MIT, 2001), Chap. 4, pp. 139–141.

N. S. Kopeika, A System Engineering Approach to Imaging (SPIE, 1998), Chaps. 9, 10, 19.

J. Lloyd, Thermal Imaging Systems (Springer, 1975), Chaps. 5, 10.

Y. Matsushita and S. Lin, “Radiometric calibration from noise distributions,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2007), pp. 1–8.

T. J. Fellers and M. W. Davidson, “CCD noise sources and signal-to-noise ratio,” Optical Microscopy Primer (Molecular Expressions™) (2004).

Figures (17)

Fig. 1.

(a) An image of a tree, with a negligible amount of noise. (b), (c) Noise added to the image with standard deviations of σ=0.5 and σ=1 (image maximum is 1), respectively. At the higher noise level (c), the fine details (small branch) are lost, whereas the coarse details (trunk) are still visible. The fine details correspond to high spatial frequencies, and the coarse details correspond to lower spatial frequencies. Thus, the noise induces an effect resembling that of a low-pass filter, even when there is no blur in the image formation.

Fig. 2.

A raw noisy image. The horizontal amplitude change is given by a + s·cos(2πu_x x); the spatial frequency is u_x. Here a is a bias and s is the amplitude. White additive noise increases with y. The result is then contrast stretched. At low frequencies (small x) the pattern is visible even at a very low input SNR. Beyond the marked line, in the upper-right corner, it becomes very difficult (if at all possible) to reliably distinguish the signal details under the noise.

Fig. 3.

Horizontal discrete-time Fourier transform of a + s·cos(2πu_x x), which underlies Fig. 2 without noise. Except for the DC component of the image, the energy is rather uniformly distributed across all frequencies. There is no significant or consistent falloff at high frequencies, specifically no 1/u_x falloff. Thus, the visibility loss in Fig. 2 is due to noise, rather than the raw signal.

Fig. 4.

Image formation and processing flow. The object radiance lobject is degraded by pointwise effects. The measured image I is noisy, characterized by SNRinput. After postprocessing, the recovery output is characterized by SNRoutput, leading to a success rate of ρsuccess.

Fig. 5.

(a) A noise-free hazy image, simulated with β=0.2 km⁻¹ and z varying linearly over [1,30] km. Airlight acts as a local bias. (b) A slightly noisy version. (c) Regional contrast stretching of (a) reveals the objects and details. (d) Regional contrast stretching of (b) does not recover small details at large z over the noise.
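The simulation described in this caption follows the pointwise haze model stated in the text, I(x) = lobject(x)·t(x) + a(x) + n(x), with transmission t = e^(−βz) and airlight a = a∞(1 − t). A minimal 1D Python sketch under assumed illustrative values (the radiance profile, airlight level, and noise STD are our own choices, not the paper's):

```python
import numpy as np

# Sketch of the pointwise haze model: I = l_object * t + a_inf * (1 - t) + n,
# with transmission t = exp(-beta * z). Scene values are illustrative only.
rng = np.random.default_rng(0)
beta = 0.2                       # attenuation coefficient [1/km]
z = np.linspace(1, 30, 512)      # distance grows linearly across the image
l_object = rng.random(512)       # arbitrary object radiance profile
a_inf = 1.0                      # airlight radiance at an infinite distance
t = np.exp(-beta * z)            # transmission decays with distance
airlight = a_inf * (1 - t)       # acts as a local bias that grows with z
I = l_object * t + airlight + rng.normal(0.0, 0.01, 512)  # noisy measurement
```

Subtracting the airlight and dividing by t recovers lobject where noise permits; at large z the 1/t amplification of noise dominates, as panel (d) illustrates.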

Fig. 6.

[Top] Clear-day scene. [Middle] Small details seen on a clear day at z≈2 km but lost in mist. [Bottom] At z≈3.5 km, visibility in mist quickly worsens: even large buildings are lost. Images taken from the WILD database [43].

Fig. 7.

SNR improvement C, as a function of u and W. The curve of Wmax(u) is plotted on top. As u increases, windows are limited to smaller sizes. This limits the ability to suppress noise while maintaining the signal.

Fig. 8.

Maximal possible SNR improvement Cmax, as a function of u. As u increases, Cmax decreases. The derivation for the Gaussian window appears in Appendix B.
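The curves behind Figs. 7 and 8 follow the window-gain relations given in the equations list: C(u,W) = sin(πWu)/sin(πu) for a W×W averaging window, with envelope Cmax(u) = 1/sin(πu) reached at W = (2κ+0.5)/u. A small sketch (the sample frequency u is an arbitrary choice of ours):

```python
import numpy as np

def C_box(u, W):
    # SNR gain of a W x W averaging window at frequency u (cycles/pixel)
    return np.sin(np.pi * W * u) / np.sin(np.pi * u)

def C_max_box(u):
    # upper envelope over window sizes, reached at W = (2*kappa + 0.5) / u
    return 1.0 / np.sin(np.pi * u)

u = 0.05                     # example frequency; larger u forces smaller windows
W_opt = (2 * 0 + 0.5) / u    # optimal window size for kappa = 0
gain = C_box(u, W_opt)       # equals the envelope C_max_box(u)
```

As u increases, Cmax(u) = 1/sin(πu) shrinks, which is exactly the falling curve of Fig. 8.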

Fig. 9.

Filtering the image in Fig. 2 with window size Wmax(u) improves visibility. Cutoff lines are derived using Eqs. (17) and (18). Below the line of ρsuccess=70%, the pattern is clearly seen. Above the upper line, ρsuccess<50%, and noise dominates. The definition of ρsuccess appears in Subsection 4.B.

Fig. 10.

Cutoff frequency as a function of SNRinput, for different values of SNRoutputmin. A better input SNR yields a better output resolution. When SNRinput<SNRoutputmin, the image starts to lose reliability.
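The plotted family of curves can be reproduced from the cutoff relation ucutoff = (1/π)·arcsin(SNRinput/SNRoutputmin). A minimal sketch; returning the Nyquist frequency (0.5 cycles/pixel) when SNRinput ≥ SNRoutputmin is our interpretation of "no cutoff below Nyquist":

```python
import numpy as np

def u_cutoff(snr_input, snr_output_min):
    # Noise-induced cutoff frequency [cycles/pixel]:
    #   u_cutoff = (1/pi) * arcsin(SNR_input / SNR_output_min).
    # When SNR_input >= SNR_output_min the arcsin argument saturates,
    # so no cutoff occurs below Nyquist; 0.5 is returned as a sentinel.
    ratio = snr_input / snr_output_min
    if ratio >= 1.0:
        return 0.5
    return np.arcsin(ratio) / np.pi

# a better input SNR yields a higher cutoff, i.e., better output resolution
```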

Fig. 11.

Randomness of noise imposes randomness in the success of recovering a single object. [Left] A clear square. [Right] Under the same noise level, the bottom and left edges are visible, while the right one is lost.

Fig. 12.

An edge under noise. [Left] Original. [Center] Noise is added to both pixels; however, the edge still keeps its sign. [Right] The edge reverses its sign under noise.

Fig. 13.

Probability of success as a function of the output SNR [Eq. (27)].
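The plotted relation, ρsuccess = erf(SNRoutput/2), and its inverse, SNRoutputmin = 2·erf⁻¹(ρsuccess), can be sketched as follows; the bisection inverse is our workaround for Python's standard math module providing erf but not its inverse:

```python
import math

def rho_success(snr_output):
    # probability of keeping an edge's sign under noise: erf(SNR_output / 2)
    return math.erf(snr_output / 2.0)

def snr_output_min(rho, tol=1e-10):
    # minimum output SNR for a target success rate: 2 * erfinv(rho),
    # with erfinv computed by bisection on the monotone function erf
    lo, hi = 0.0, 10.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if math.erf(mid) < rho:
            lo = mid
        else:
            hi = mid
    return 2.0 * lo

# e.g., a 70% success rate requires an output SNR of roughly 1.47
```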

Fig. 14.

[Left] Examples of four image pairs from our simulation. The noise STD added at each frequency u is calculated to achieve a fixed ρsuccess [Eq. (31)]. Their SSIM values are plotted on the graph [right] as black dots. [Right] SSIM as a function of u for three values of ρsuccess.

Fig. 15.

Values of bcritical [in Eq. (36)] as a function of SNRoutputmin, for different cameras. Here, Ẽ=25%.

Fig. 16.

Values of ucutoff(βz) from Eq. (37) as a function of βz, for different cameras and values of SNRoutputmin. Here Ẽ=25%. When βz reaches bcritical, SNRinput drops below SNRoutputmin; beyond this point there is a frequency cutoff.

Fig. 17.

Values of Mhaze from Eq. (38) as a function of z, for different cameras, with SNRoutputmin=1 and Ẽ=25%. [Left] Nikon D100, ISO 200. [Right] Nikon D100, ISO 800. Increasing z decreases SNRinput. When SNRinput drops below SNRoutputmin, there is a frequency cutoff, and Mhaze increases above Mgeometry. For example, at ISO 200, when β=0.2 km⁻¹, at z=30 km the visibility is so bad that only large objects, tens of meters in size, can be reliably seen. Changing the ISO setting dramatically changes Mhaze: at β=0.2 km⁻¹, Mhaze at ISO 800 is five times larger than at ISO 200.

Equations (54)

(1) $\sigma^{2} \approx A\,I(x) + B$
(2) $m = \dfrac{Mf}{zp}$
(3) $m \geq \dfrac{1}{2u_{\mathrm{cutoff}}}$
(4) $M(z) = \dfrac{zp}{2f\,u_{\mathrm{cutoff}}}$
(5) $M_{\mathrm{geometry}} = m\,zp/f$
(6) $I(x) = l_{\mathrm{object}}(x)\,t(x) + a(x) + n(x)$
(7) $t(x) = e^{-\beta z(x)}$
(8) $a(x) = a_{\infty}\,[1 - t(x)]$
(9) $\nu = m \cdot u_{\mathrm{cutoff}}$
(10) $s(x) = \tfrac{1}{2}\,S(u)\cos(2\pi u x)$
(11) $\mathrm{SNR}_{\mathrm{input}}(u) = \dfrac{|S(u)|}{\sigma}$
(12) $C(u) \equiv \dfrac{\mathrm{SNR}_{\mathrm{output}}(u)}{\mathrm{SNR}_{\mathrm{input}}(u)} = \dfrac{\sin(\pi W u)}{\sin(\pi u)}$
(13) $W_{\max} = \dfrac{1}{u}(2\kappa + 0.5), \quad \kappa = 0, 1, 2, \ldots$
(14) $C_{\max}(u) \equiv C_{W_{\max}}(u) = 1/\sin(\pi u)$
(15) $\mathrm{SNR}_{\mathrm{input}}(u) \geq \mathrm{SNR}_{\mathrm{input}}^{\min}$
(16) $C_{\max}(u)\,\mathrm{SNR}_{\mathrm{input}}(u) \geq \mathrm{SNR}_{\mathrm{output}}^{\min}$
(17) $\mathrm{SNR}_{\mathrm{input}} \geq \sin(\pi u)\,\mathrm{SNR}_{\mathrm{output}}^{\min}$
(18) $u_{\mathrm{cutoff}} = \dfrac{1}{\pi}\arcsin\!\left(\dfrac{\mathrm{SNR}_{\mathrm{input}}}{\mathrm{SNR}_{\mathrm{output}}^{\min}}\right)$
(19) $\forall\, u > u_{\mathrm{cutoff}}: \ \mathrm{SNR}_{\mathrm{input}} < \sin(\pi u)\,\mathrm{SNR}_{\mathrm{output}}^{\min}$
(20) $S = [\,l_{\mathrm{object}}(u) - l_{\mathrm{back}}(u)\,]\,t$
(21) $N_{\mathrm{output}}^{\mathrm{back}} - N_{\mathrm{output}}^{\mathrm{object}} < S_{\mathrm{output}}$
(22) $S_{\mathrm{output}} = \mathrm{SNR}_{\mathrm{output}}\,\sigma_{\mathrm{output}}$
(23) $\rho_{\mathrm{sign\,keep}} = P\!\left(N_{\mathrm{output}}^{\mathrm{back}} - N_{\mathrm{output}}^{\mathrm{object}} < \mathrm{SNR}_{\mathrm{output}}\,\sigma_{\mathrm{output}}\right)$
(24) $P(\eta < \chi) = \tfrac{1}{2}\left[1 + \mathrm{erf}\!\left(\dfrac{\chi}{2\sigma_{\mathrm{output}}}\right)\right]$
(25) $\rho_{\mathrm{sign\,keep}} = \tfrac{1}{2}\left[1 + \mathrm{erf}(\mathrm{SNR}_{\mathrm{output}}/2)\right]$
(26) $\rho_{\mathrm{false}} = 1 - \rho_{\mathrm{sign\,keep}}$
(27) $\rho_{\mathrm{success}} = \rho_{\mathrm{sign\,keep}} - \rho_{\mathrm{false}} = 2\rho_{\mathrm{sign\,keep}} - 1 = \mathrm{erf}(\mathrm{SNR}_{\mathrm{output}}/2)$
(28) $\mathrm{SNR}_{\mathrm{output}}^{\min} = 2\,\mathrm{erf}^{-1}(\rho_{\mathrm{success}})$
(29) $u_{\mathrm{cutoff}} = \dfrac{1}{\pi}\arcsin\!\left[\dfrac{\mathrm{SNR}_{\mathrm{input}}}{2\,\mathrm{erf}^{-1}(\rho_{\mathrm{success}})}\right]$
(30) $\rho_{\mathrm{success}} = \mathrm{erf}\!\left[\dfrac{\mathrm{SNR}_{\mathrm{input}}}{2\sin(\pi u)}\right]$
(31) $\sigma = |S| \big/ \left[2\sin(\pi u)\,\mathrm{erf}^{-1}(\rho_{\mathrm{success}})\right]$
(32) $\sigma^{2}(b) = B + A\left\{\tfrac{1}{2}(l_{\mathrm{object}} + l_{\mathrm{back}})\,t(b) + a_{\infty}[1 - t(b)]\right\}$
(33) $V = 2^{B} - 1$
(34) $\tilde{E} = |l_{\mathrm{object}} - l_{\mathrm{back}}|/V$
(35) $\mathrm{SNR}_{\mathrm{input}}(b) = \dfrac{e^{-b}\,\tilde{E}\,V}{\sqrt{B + AV\left[\tilde{l}\,e^{-b} + \tilde{a}\,(1 - e^{-b})\right]}}$
(36) $\mathrm{SNR}_{\mathrm{input}}(b_{\mathrm{critical}}) = \mathrm{SNR}_{\mathrm{output}}^{\min}$
(37) $u_{\mathrm{cutoff}}(\beta z) = \dfrac{1}{\pi}\arcsin\!\left[\dfrac{\mathrm{SNR}_{\mathrm{input}}(\beta z)}{2\,\mathrm{erf}^{-1}(\rho_{\mathrm{success}})}\right]$
(38) $M_{\mathrm{haze}}(\beta, z) = \dfrac{\pi z p}{2f\,\arcsin\!\left[\mathrm{SNR}_{\mathrm{input}}(\beta z)\big/2\,\mathrm{erf}^{-1}(\rho_{\mathrm{success}})\right]}$
(39) $\hat{I} = I * h_{W}$
(40) $n_{\mathrm{output}}(x) = \dfrac{1}{W^{2}}\sum_{x_{i} \in \Omega(x)} n(x_{i})$
(41) $\sigma_{\mathrm{output}} = \sigma/W$
(42) $H_{W}(u, v) = \mathrm{DTFT}\{h_{W}(x)\} = \dfrac{\sin(\pi W u)\,\sin(\pi W v)}{W^{2}\sin(\pi u)\,\sin(\pi v)}$
(43) $H_{W}(u, 0) = \dfrac{1}{W}\,\dfrac{\sin(\pi W u)}{\sin(\pi u)}$
(44) $|S_{\mathrm{output}}(u)| = H_{W}(u, 0)\,|S(u)|$
(45) $\mathrm{SNR}_{\mathrm{output}}(u) = \dfrac{|S_{\mathrm{output}}(u)|}{\sigma_{\mathrm{output}}} = \dfrac{H_{W}(u, 0)\,|S(u)|}{\sigma/W}$
(46) $h_{g}[x, y] = \dfrac{1}{2\pi W_{g}^{2}}\exp\!\left(-\dfrac{x^{2} + y^{2}}{2W_{g}^{2}}\right)$
(47) $H_{g}[u, v] \approx \exp\!\left[-2\pi^{2}(u^{2} + v^{2})\,W_{g}^{2}\right]$
(48) $\sigma_{\mathrm{output}}^{2} = \sum_{x, y} h_{g}^{2}(x, y)\,\sigma^{2}$
(49) $C(u, W_{g}) \equiv \dfrac{\mathrm{SNR}_{\mathrm{output}}(u)}{\mathrm{SNR}_{\mathrm{input}}(u)}$
(50) $C_{\max}^{\mathrm{gaus}}(u) = \max_{W_{g}} C(u, W_{g})$
(51) $(\sigma_{\mathrm{output}}/\sigma)^{2} = \sum_{x, y} h_{g}^{2}(x, y) = \int_{-0.5}^{0.5}\!\int_{-0.5}^{0.5} H_{g}^{2}(u, v)\,du\,dv = \dfrac{1}{(2\sqrt{\pi}\,W_{g})^{2}}$
(52) $C(u, W_{g}) = \dfrac{H_{g}}{\sigma_{\mathrm{output}}/\sigma} = \exp\!\left[-2\pi^{2}u^{2}W_{g}^{2}\right](2\sqrt{\pi}\,W_{g})$
(53) $\dfrac{\partial C(u, W_{g})}{\partial W_{g}} = 0 \ \Rightarrow\ W_{g}^{\max} = \dfrac{1}{2\pi u}$
(54) $C_{\max}^{\mathrm{gaus}}(u) = \dfrac{e^{-0.5}}{\sqrt{\pi}\,u} \approx \dfrac{1}{3u} \approx \dfrac{1}{\pi u}$
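The Gaussian-window result above can be checked numerically: maximizing C(u, Wg) = exp(−2π²u²Wg²)·2√π·Wg over Wg should land on Wg = 1/(2πu), with maximal gain e^(−0.5)/(√π·u). A brute-force sketch (the grid range and the example frequency are arbitrary choices of ours):

```python
import numpy as np

def C_gauss(u, Wg):
    # SNR gain of a Gaussian window of STD Wg at frequency u (Appendix B)
    return np.exp(-2.0 * np.pi**2 * u**2 * Wg**2) * 2.0 * np.sqrt(np.pi) * Wg

u = 0.1                                # example spatial frequency
Wg = np.linspace(1e-3, 10.0, 200_000)  # brute-force grid of window widths
gains = C_gauss(u, Wg)
Wg_best = Wg[np.argmax(gains)]         # numerically found optimal width
# Wg_best matches the analytic 1/(2*pi*u), about 1.59 pixels here
```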
