Abstract

The resolution of a camera system determines the fidelity of visual features in captured images. Higher resolution implies greater fidelity and, thus, greater accuracy when performing automated vision tasks, such as object detection, recognition, and tracking. However, the resolution of any camera is fundamentally limited by geometric aberrations. In the past, it has generally been accepted that the resolution of lenses with geometric aberrations cannot be increased beyond a certain threshold. We derive an analytic scaling law showing that, for lenses with spherical aberrations, resolution can be increased beyond the aberration limit by applying a postcapture deblurring step. We then show that resolution can be further increased when image priors are introduced. Based on our analysis, we advocate for computational camera designs consisting of a spherical lens shared by several small planar sensors. We show example images captured with a proof-of-concept gigapixel camera, demonstrating that high resolution can be achieved with a compact form factor and low complexity. We conclude with an analysis on the trade-off between performance and complexity for computational imaging systems with spherical lenses.
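The postcapture deblurring step the abstract refers to can be illustrated with a classical Wiener deconvolution, where a noise-to-signal ratio term plays the role of a simple image prior. This is a minimal sketch of the general technique, not the paper's implementation; the function name, the toy box-blur PSF, and the `nsr` parameter value are illustrative assumptions.

```python
import numpy as np

def wiener_deblur(blurred, psf, nsr=1e-2):
    """Frequency-domain Wiener deconvolution (illustrative sketch).

    blurred: 2D observed image.
    psf:     2D blur kernel, same shape as the image, anchored at [0, 0].
    nsr:     assumed noise-to-signal power ratio; acts as a regularizer
             (a crude stand-in for an image prior).
    """
    H = np.fft.fft2(psf)
    G = np.fft.fft2(blurred)
    # conj(H) / (|H|^2 + NSR): inverts the blur at frequencies it passed,
    # and attenuates frequencies it suppressed below the noise floor
    # instead of amplifying noise there.
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(W * G))

# Toy example: blur a point source with a 3x3 box PSF, then deblur.
img = np.zeros((64, 64))
img[32, 32] = 1.0
psf = np.zeros((64, 64))
psf[:3, :3] = 1.0 / 9.0  # box blur, anchored at the origin
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))
restored = wiener_deblur(blurred, psf, nsr=1e-3)
```

After deblurring, the point source is substantially sharper than in the blurred observation (its peak value rises back toward 1), which mirrors the abstract's claim that a postcapture deconvolution recovers resolution lost to aberration blur.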

© 2011 Optical Society of America



2009

M. Robinson, G. Feng, and D. Stork, “Spherical coded imagers: improving lens speed, depth-of-field, and manufacturing yield through enhanced spherical aberration and compensating image processing,” Proc. SPIE 7429, 74290M (2009).
[CrossRef]

F. Guichard, H.-P. Nguyen, R. Tessières, M. Pyanet, I. Tarchouna, and F. Cao, “Extended depth-of-field using sharpness transport across color channels,” Proc. SPIE 7250, 72500N(2009).
[CrossRef]

G. Krishnan and S. Nayar, “Towards a true spherical camera,” Proc. SPIE 7240, 724002 (2009).
[CrossRef]

D. J. Brady and N. Hagen, “Multiscale lens design,” Opt. Express 17, 10659–10674 (2009).
[CrossRef] [PubMed]

2008

R. Dinyari, S. Rim, K. Huang, P. Catrysse, and P. Peumans, “Curving monolithic silicon for nonplanar focal plane array applications,” Appl. Phys. Lett. 92, 091114 (2008).
[CrossRef]

H. Ko, M. Stoykovich, J. Song, V. Malyarchuk, W. Choi, C. Yu, J. Geddes III, J. Xiao, S. Wang, Y. Huang, and J. A. Rogers, “A hemispherical electronic eye camera based on compressible silicon optoelectronics,” Nature 454, 748–753 (2008).
[CrossRef] [PubMed]

2006

K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising with block-matching and 3D filtering,” in Proc. SPIE 6064, 354–365 (2006).
[CrossRef]

2005

L. Lee and R. Szema, “Inspirations from biological optics for advanced photonic systems,” Science 310, 1148–1150 (2005).
[CrossRef] [PubMed]

B. Wilburn, N. Joshi, V. Vaish, E. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24, 765–776(2005).
[CrossRef]

S. Rim, P. Catrysse, R. Dinyari, K. Huang, and P. Peumans, “The optical advantages of curved focal plane arrays,” in Proc. SPIE 5678, 48–58 (2005).
[CrossRef]

2004

S. Wang and W. Heidrich, “The design of an inexpensive very high resolution scan camera system,” Comput. Graph. Forum 23, 441–450 (2004).
[CrossRef]

2000

E. Dowski, Jr., R. Cormack, and S. Sarama, “Wavefront coding: jointly optimized optical and digital imaging systems,” Proc. SPIE 4041, 114–120 (2000) .
[CrossRef]

1995

1989

Adams, A.

B. Wilburn, N. Joshi, V. Vaish, E. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24, 765–776(2005).
[CrossRef]

Antunez, E.

B. Wilburn, N. Joshi, V. Vaish, E. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24, 765–776(2005).
[CrossRef]

Barth, A.

B. Wilburn, N. Joshi, V. Vaish, E. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24, 765–776(2005).
[CrossRef]

Ben-Ezra, M.

M. Ben-Ezra, “High resolution large format tile-scan—camera design, calibration, and extended depth of field,” in 2010 IEEE International Conference on Computational Photography (ICCP) (IEEE, 2010), pp. 1–8.
[CrossRef]

Bertero, M.

M. Bertero and P. Boccacci, Introduction to Inverse Problems in Imaging (Taylor & Francis, 1998).
[CrossRef]

Bhakta, V.

M. D. Robinson and V. Bhakta, “Experimental validation of extended depth-of-field imaging via spherical coding,” in Computational Optical Sensing and Imaging, OSA Technical Digest (CD) (Optical Society of America, 2009), paper CThB4.

Boccacci, P.

M. Bertero and P. Boccacci, Introduction to Inverse Problems in Imaging (Taylor & Francis, 1998).
[CrossRef]

Brady, D.

D. Marks and D. Brady, “Gigagon: a monocentric lens design imaging 40 gigapixels,” in Imaging Systems, OSA Technical Digest (CD) (Optical Society of America, 2010), paper ITuC2.

Brady, D. J.

Cao, F.

F. Guichard, H.-P. Nguyen, R. Tessières, M. Pyanet, I. Tarchouna, and F. Cao, “Extended depth-of-field using sharpness transport across color channels,” Proc. SPIE 7250, 72500N(2009).
[CrossRef]

Cathey, J.

Catrysse, P.

R. Dinyari, S. Rim, K. Huang, P. Catrysse, and P. Peumans, “Curving monolithic silicon for nonplanar focal plane array applications,” Appl. Phys. Lett. 92, 091114 (2008).
[CrossRef]

S. Rim, P. Catrysse, R. Dinyari, K. Huang, and P. Peumans, “The optical advantages of curved focal plane arrays,” in Proc. SPIE 5678, 48–58 (2005).
[CrossRef]

Chakrabarti, A.

A. Chakrabarti, K. Hirakawa, and T. Zickler, “Computational color constancy with spatial correlations,” Harvard Technical Report TR-09-10 (2010).

Choi, W.

H. Ko, M. Stoykovich, J. Song, V. Malyarchuk, W. Choi, C. Yu, J. Geddes III, J. Xiao, S. Wang, Y. Huang, and J. A. Rogers, “A hemispherical electronic eye camera based on compressible silicon optoelectronics,” Nature 454, 748–753 (2008).
[CrossRef] [PubMed]

Cormack, R.

E. Dowski, Jr., R. Cormack, and S. Sarama, “Wavefront coding: jointly optimized optical and digital imaging systems,” Proc. SPIE 4041, 114–120 (2000) .
[CrossRef]

Cossairt, O.

O. Cossairt and S. Nayar, “Spectral focal sweep: extended depth of field from chromatic aberrations,” in 2010 IEEE International Conference on Computational Photography (ICCP) (IEEE, 2010), pp. 1–8.
[CrossRef]

O. Cossairt, C. Zhou, and S. Nayar, “Diffusion coded photography for extended depth of field,” in ACM SIGGRAPH 2010 papers (SIGGRAPH ’10) (Association for Computing Machinery, 2010), pp. 1–10.

Dabov, K.

K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising with block-matching and 3D filtering,” in Proc. SPIE 6064, 354–365 (2006).
[CrossRef]

Dinyari, R.

R. Dinyari, S. Rim, K. Huang, P. Catrysse, and P. Peumans, “Curving monolithic silicon for nonplanar focal plane array applications,” Appl. Phys. Lett. 92, 091114 (2008).
[CrossRef]

S. Rim, P. Catrysse, R. Dinyari, K. Huang, and P. Peumans, “The optical advantages of curved focal plane arrays,” in Proc. SPIE 5678, 48–58 (2005).
[CrossRef]

Dowski, E.

E. Dowski, Jr., R. Cormack, and S. Sarama, “Wavefront coding: jointly optimized optical and digital imaging systems,” Proc. SPIE 4041, 114–120 (2000) .
[CrossRef]

E. Dowski and J. Cathey, “Extended depth of field through wave-front coding,” Appl. Opt. 34, 1859–1866 (1995).
[CrossRef] [PubMed]

Durand, F.

A. Levin, S. Hasinoff, P. Green, F. Durand, and W. Freeman, “4d frequency analysis of computational cameras for depth of field extension,” in ACM SIGGRAPH 2009 papers (SIGGRAPH ’09) (Association for Computing Machinery, 2009), pp. 1–14.

A. Levin, W. Freeman, and F. Durand, “Understanding camera trade-offs through a Bayesian analysis of light field projections,” in European Conference on Computer Vision (2008), pp. 88–101.

Egiazarian, K.

K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising with block-matching and 3D filtering,” in Proc. SPIE 6064, 354–365 (2006).
[CrossRef]

El Gamal, A.

K. Fife, A. El Gamal, and H. Wong, “A 3 Mpixel multi-aperture image sensor with 0.7 um pixels in 0.11 um CMOS,” in IEEE International Solid State Circuits Conference (IEEE, 2008), p. 48.

Feng, G.

M. Robinson, G. Feng, and D. Stork, “Spherical coded imagers: improving lens speed, depth-of-field, and manufacturing yield through enhanced spherical aberration and compensating image processing,” Proc. SPIE 7429, 74290M (2009).
[CrossRef]

Fife, K.

K. Fife, A. El Gamal, and H. Wong, “A 3 Mpixel multi-aperture image sensor with 0.7 um pixels in 0.11 um CMOS,” in IEEE International Solid State Circuits Conference (IEEE, 2008), p. 48.

Foi, A.

K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising with block-matching and 3D filtering,” in Proc. SPIE 6064, 354–365 (2006).
[CrossRef]

Freeman, W.

A. Levin, S. Hasinoff, P. Green, F. Durand, and W. Freeman, “4d frequency analysis of computational cameras for depth of field extension,” in ACM SIGGRAPH 2009 papers (SIGGRAPH ’09) (Association for Computing Machinery, 2009), pp. 1–14.

A. Levin, W. Freeman, and F. Durand, “Understanding camera trade-offs through a Bayesian analysis of light field projections,” in European Conference on Computer Vision (2008), pp. 88–101.

Geary, J.

J. Geary, Introduction to Lens Design: With Practical ZEMAX Examples (Willmann-Bell, 2002).

Geddes, J.

H. Ko, M. Stoykovich, J. Song, V. Malyarchuk, W. Choi, C. Yu, J. Geddes III, J. Xiao, S. Wang, Y. Huang, and J. A. Rogers, “A hemispherical electronic eye camera based on compressible silicon optoelectronics,” Nature 454, 748–753 (2008).
[CrossRef] [PubMed]

Goodman, J.

J. Goodman, Introduction to Fourier Optics (Roberts, 2005).

Green, P.

A. Levin, S. Hasinoff, P. Green, F. Durand, and W. Freeman, “4d frequency analysis of computational cameras for depth of field extension,” in ACM SIGGRAPH 2009 papers (SIGGRAPH ’09) (Association for Computing Machinery, 2009), pp. 1–14.

Guichard, F.

F. Guichard, H.-P. Nguyen, R. Tessières, M. Pyanet, I. Tarchouna, and F. Cao, “Extended depth-of-field using sharpness transport across color channels,” Proc. SPIE 7250, 72500N(2009).
[CrossRef]

Hagen, N.

Hasinoff, S.

A. Levin, S. Hasinoff, P. Green, F. Durand, and W. Freeman, “4d frequency analysis of computational cameras for depth of field extension,” in ACM SIGGRAPH 2009 papers (SIGGRAPH ’09) (Association for Computing Machinery, 2009), pp. 1–14.

Heidrich, W.

S. Wang and W. Heidrich, “The design of an inexpensive very high resolution scan camera system,” Comput. Graph. Forum 23, 441–450 (2004).
[CrossRef]

Hirakawa, K.

A. Chakrabarti, K. Hirakawa, and T. Zickler, “Computational color constancy with spatial correlations,” Harvard Technical Report TR-09-10 (2010).

Horowitz, M.

B. Wilburn, N. Joshi, V. Vaish, E. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24, 765–776(2005).
[CrossRef]

Huang, K.

R. Dinyari, S. Rim, K. Huang, P. Catrysse, and P. Peumans, “Curving monolithic silicon for nonplanar focal plane array applications,” Appl. Phys. Lett. 92, 091114 (2008).
[CrossRef]

S. Rim, P. Catrysse, R. Dinyari, K. Huang, and P. Peumans, “The optical advantages of curved focal plane arrays,” in Proc. SPIE 5678, 48–58 (2005).
[CrossRef]

Huang, Y.

H. Ko, M. Stoykovich, J. Song, V. Malyarchuk, W. Choi, C. Yu, J. Geddes III, J. Xiao, S. Wang, Y. Huang, and J. A. Rogers, “A hemispherical electronic eye camera based on compressible silicon optoelectronics,” Nature 454, 748–753 (2008).
[CrossRef] [PubMed]

Joshi, N.

B. Wilburn, N. Joshi, V. Vaish, E. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24, 765–776(2005).
[CrossRef]

Katkovnik, V.

K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising with block-matching and 3D filtering,” in Proc. SPIE 6064, 354–365 (2006).
[CrossRef]

Kingslake, R.

R. Kingslake, A History of the Photographic Lens (Academic, 1989).

Ko, H.

H. Ko, M. Stoykovich, J. Song, V. Malyarchuk, W. Choi, C. Yu, J. Geddes III, J. Xiao, S. Wang, Y. Huang, and J. A. Rogers, “A hemispherical electronic eye camera based on compressible silicon optoelectronics,” Nature 454, 748–753 (2008).
[CrossRef] [PubMed]

Krishnan, G.

G. Krishnan and S. Nayar, “Towards a true spherical camera,” Proc. SPIE 7240, 724002 (2009).
[CrossRef]

Lee, L.

L. Lee and R. Szema, “Inspirations from biological optics for advanced photonic systems,” Science 310, 1148–1150 (2005).
[CrossRef] [PubMed]

Levin, A.

A. Levin, W. Freeman, and F. Durand, “Understanding camera trade-offs through a Bayesian analysis of light field projections,” in European Conference on Computer Vision (2008), pp. 88–101.

A. Levin, S. Hasinoff, P. Green, F. Durand, and W. Freeman, “4d frequency analysis of computational cameras for depth of field extension,” in ACM SIGGRAPH 2009 papers (SIGGRAPH ’09) (Association for Computing Machinery, 2009), pp. 1–14.

Levoy, M.

B. Wilburn, N. Joshi, V. Vaish, E. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24, 765–776(2005).
[CrossRef]

Lohmann, A. W.

Luneburg, R.

R. Luneburg, Mathematical Theory of Optics (University of California, 1964).

Malyarchuk, V.

H. Ko, M. Stoykovich, J. Song, V. Malyarchuk, W. Choi, C. Yu, J. Geddes III, J. Xiao, S. Wang, Y. Huang, and J. A. Rogers, “A hemispherical electronic eye camera based on compressible silicon optoelectronics,” Nature 454, 748–753 (2008).
[CrossRef] [PubMed]

Marks, D.

D. Marks and D. Brady, “Gigagon: a monocentric lens design imaging 40 gigapixels,” in Imaging Systems, OSA Technical Digest (CD) (Optical Society of America, 2010), paper ITuC2.

Nayar, S.

G. Krishnan and S. Nayar, “Towards a true spherical camera,” Proc. SPIE 7240, 724002 (2009).
[CrossRef]

C. Zhou and S. Nayar, “What are good apertures for defocus deblurring?” in 2010 IEEE International Conference on Computational Photography (ICCP) (IEEE, 2010), pp. 1–8.

O. Cossairt and S. Nayar, “Spectral focal sweep: extended depth of field from chromatic aberrations,” in 2010 IEEE International Conference on Computational Photography (ICCP) (IEEE, 2010), pp. 1–8.
[CrossRef]

Y. Nomura, L. Zhang, and S. Nayar, “Scene collages and flexible camera arrays,” in Proceedings of the Eurographics Symposium on Rendering Techniques, Grenoble, France, 2007 (Eurographics Association, 2007), pp. 127–138.

O. Cossairt, C. Zhou, and S. Nayar, “Diffusion coded photography for extended depth of field,” in ACM SIGGRAPH 2010 papers (SIGGRAPH ’10) (Association for Computing Machinery, 2010), pp. 1–10.

Nguyen, H.-P.

F. Guichard, H.-P. Nguyen, R. Tessières, M. Pyanet, I. Tarchouna, and F. Cao, “Extended depth-of-field using sharpness transport across color channels,” Proc. SPIE 7250, 72500N(2009).
[CrossRef]

Nomura, Y.

Y. Nomura, L. Zhang, and S. Nayar, “Scene collages and flexible camera arrays,” in Proceedings of the Eurographics Symposium on Rendering Techniques, Grenoble, France, 2007 (Eurographics Association, 2007), pp. 127–138.

Peumans, P.

R. Dinyari, S. Rim, K. Huang, P. Catrysse, and P. Peumans, “Curving monolithic silicon for nonplanar focal plane array applications,” Appl. Phys. Lett. 92, 091114 (2008).
[CrossRef]

S. Rim, P. Catrysse, R. Dinyari, K. Huang, and P. Peumans, “The optical advantages of curved focal plane arrays,” in Proc. SPIE 5678, 48–58 (2005).
[CrossRef]

Pyanet, M.

F. Guichard, H.-P. Nguyen, R. Tessières, M. Pyanet, I. Tarchouna, and F. Cao, “Extended depth-of-field using sharpness transport across color channels,” Proc. SPIE 7250, 72500N(2009).
[CrossRef]

Rim, S.

R. Dinyari, S. Rim, K. Huang, P. Catrysse, and P. Peumans, “Curving monolithic silicon for nonplanar focal plane array applications,” Appl. Phys. Lett. 92, 091114 (2008).
[CrossRef]

S. Rim, P. Catrysse, R. Dinyari, K. Huang, and P. Peumans, “The optical advantages of curved focal plane arrays,” in Proc. SPIE 5678, 48–58 (2005).
[CrossRef]

Robinson, D.

D. Robinson and D. G. Stork, “Extending depth-of-field: spherical coding versus asymmetric wavefront coding,” in Computational Optical Sensing and Imaging, OSA Technical Digest (CD) (Optical Society of America, 2009), paper CThB3.

Robinson, M.

M. Robinson, G. Feng, and D. Stork, “Spherical coded imagers: improving lens speed, depth-of-field, and manufacturing yield through enhanced spherical aberration and compensating image processing,” Proc. SPIE 7429, 74290M (2009).
[CrossRef]

Robinson, M. D.

M. D. Robinson and V. Bhakta, “Experimental validation of extended depth-of-field imaging via spherical coding,” in Computational Optical Sensing and Imaging, OSA Technical Digest (CD) (Optical Society of America, 2009), paper CThB4.

Rogers, J. A.

H. Ko, M. Stoykovich, J. Song, V. Malyarchuk, W. Choi, C. Yu, J. Geddes III, J. Xiao, S. Wang, Y. Huang, and J. A. Rogers, “A hemispherical electronic eye camera based on compressible silicon optoelectronics,” Nature 454, 748–753 (2008).
[CrossRef] [PubMed]

Sarama, S.

E. Dowski, Jr., R. Cormack, and S. Sarama, “Wavefront coding: jointly optimized optical and digital imaging systems,” Proc. SPIE 4041, 114–120 (2000) .
[CrossRef]

Slater, L. J.

L. J. Slater, Generalized Hypergeometric Functions (Cambridge University, 1966).

Song, J.

H. Ko, M. Stoykovich, J. Song, V. Malyarchuk, W. Choi, C. Yu, J. Geddes III, J. Xiao, S. Wang, Y. Huang, and J. A. Rogers, “A hemispherical electronic eye camera based on compressible silicon optoelectronics,” Nature 454, 748–753 (2008).
[CrossRef] [PubMed]

Stork, D.

M. Robinson, G. Feng, and D. Stork, “Spherical coded imagers: improving lens speed, depth-of-field, and manufacturing yield through enhanced spherical aberration and compensating image processing,” Proc. SPIE 7429, 74290M (2009).
[CrossRef]

Stork, D. G.

D. Robinson and D. G. Stork, “Extending depth-of-field: spherical coding versus asymmetric wavefront coding,” in Computational Optical Sensing and Imaging, OSA Technical Digest (CD) (Optical Society of America, 2009), paper CThB3.

Stoykovich, M.

H. Ko, M. Stoykovich, J. Song, V. Malyarchuk, W. Choi, C. Yu, J. Geddes III, J. Xiao, S. Wang, Y. Huang, and J. A. Rogers, “A hemispherical electronic eye camera based on compressible silicon optoelectronics,” Nature 454, 748–753 (2008).
[CrossRef] [PubMed]

Szema, R.

L. Lee and R. Szema, “Inspirations from biological optics for advanced photonic systems,” Science 310, 1148–1150 (2005).
[CrossRef] [PubMed]

Talvala, E.

B. Wilburn, N. Joshi, V. Vaish, E. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24, 765–776(2005).
[CrossRef]

Tarchouna, I.

F. Guichard, H.-P. Nguyen, R. Tessières, M. Pyanet, I. Tarchouna, and F. Cao, “Extended depth-of-field using sharpness transport across color channels,” Proc. SPIE 7250, 72500N(2009).
[CrossRef]

Tessières, R.

F. Guichard, H.-P. Nguyen, R. Tessières, M. Pyanet, I. Tarchouna, and F. Cao, “Extended depth-of-field using sharpness transport across color channels,” Proc. SPIE 7250, 72500N(2009).
[CrossRef]

Vaish, V.

B. Wilburn, N. Joshi, V. Vaish, E. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24, 765–776(2005).
[CrossRef]

Wang, S.

H. Ko, M. Stoykovich, J. Song, V. Malyarchuk, W. Choi, C. Yu, J. Geddes III, J. Xiao, S. Wang, Y. Huang, and J. A. Rogers, “A hemispherical electronic eye camera based on compressible silicon optoelectronics,” Nature 454, 748–753 (2008).
[CrossRef] [PubMed]

S. Wang and W. Heidrich, “The design of an inexpensive very high resolution scan camera system,” Comput. Graph. Forum 23, 441–450 (2004).
[CrossRef]

Wilburn, B.

B. Wilburn, N. Joshi, V. Vaish, E. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24, 765–776(2005).
[CrossRef]

Wong, H.

K. Fife, A. El Gamal, and H. Wong, “A 3 Mpixel multi-aperture image sensor with 0.7 um pixels in 0.11 um CMOS,” in IEEE International Solid State Circuits Conference (IEEE, 2008), p. 48.

Xiao, J.

H. Ko, M. Stoykovich, J. Song, V. Malyarchuk, W. Choi, C. Yu, J. Geddes III, J. Xiao, S. Wang, Y. Huang, and J. A. Rogers, “A hemispherical electronic eye camera based on compressible silicon optoelectronics,” Nature 454, 748–753 (2008).
[CrossRef] [PubMed]

Yu, C.

H. Ko, M. Stoykovich, J. Song, V. Malyarchuk, W. Choi, C. Yu, J. Geddes III, J. Xiao, S. Wang, Y. Huang, and J. A. Rogers, “A hemispherical electronic eye camera based on compressible silicon optoelectronics,” Nature 454, 748–753 (2008).
[CrossRef] [PubMed]

Zhang, L.

Y. Nomura, L. Zhang, and S. Nayar, “Scene collages and flexible camera arrays,” in Proceedings of the Eurographics Symposium on Rendering Techniques, Grenoble, France, 2007 (Eurographics Association, 2007), pp. 127–138.

Zhou, C.

O. Cossairt, C. Zhou, and S. Nayar, “Diffusion coded photography for extended depth of field,” in ACM SIGGRAPH 2010 papers (SIGGRAPH ’10) (Association for Computing Machinery, 2010), pp. 1–10.

C. Zhou and S. Nayar, “What are good apertures for defocus deblurring?” in 2010 IEEE International Conference on Computational Photography (ICCP) (IEEE, 2010), pp. 1–8.

Zickler, T.

A. Chakrabarti, K. Hirakawa, and T. Zickler, “Computational color constancy with spatial correlations,” Harvard Technical Report TR-09-10 (2010).

ACM Trans. Graph.

B. Wilburn, N. Joshi, V. Vaish, E. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24, 765–776(2005).
[CrossRef]

Appl. Opt.

Appl. Phys. Lett.

R. Dinyari, S. Rim, K. Huang, P. Catrysse, and P. Peumans, “Curving monolithic silicon for nonplanar focal plane array applications,” Appl. Phys. Lett. 92, 091114 (2008).
[CrossRef]

Comput. Graph. Forum

S. Wang and W. Heidrich, “The design of an inexpensive very high resolution scan camera system,” Comput. Graph. Forum 23, 441–450 (2004).
[CrossRef]

Nature

H. Ko, M. Stoykovich, J. Song, V. Malyarchuk, W. Choi, C. Yu, J. Geddes III, J. Xiao, S. Wang, Y. Huang, and J. A. Rogers, “A hemispherical electronic eye camera based on compressible silicon optoelectronics,” Nature 454, 748–753 (2008).
[CrossRef] [PubMed]

Opt. Express

Proc. SPIE

E. Dowski, Jr., R. Cormack, and S. Sarama, “Wavefront coding: jointly optimized optical and digital imaging systems,” Proc. SPIE 4041, 114–120 (2000) .
[CrossRef]

S. Rim, P. Catrysse, R. Dinyari, K. Huang, and P. Peumans, “The optical advantages of curved focal plane arrays,” in Proc. SPIE 5678, 48–58 (2005).
[CrossRef]

G. Krishnan and S. Nayar, “Towards a true spherical camera,” Proc. SPIE 7240, 724002 (2009).
[CrossRef]

K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising with block-matching and 3D filtering,” in Proc. SPIE 6064, 354–365 (2006).
[CrossRef]

F. Guichard, H.-P. Nguyen, R. Tessières, M. Pyanet, I. Tarchouna, and F. Cao, “Extended depth-of-field using sharpness transport across color channels,” Proc. SPIE 7250, 72500N(2009).
[CrossRef]

M. Robinson, G. Feng, and D. Stork, “Spherical coded imagers: improving lens speed, depth-of-field, and manufacturing yield through enhanced spherical aberration and compensating image processing,” Proc. SPIE 7429, 74290M (2009).
[CrossRef]

Science

L. Lee and R. Szema, “Inspirations from biological optics for advanced photonic systems,” Science 310, 1148–1150 (2005).
[CrossRef] [PubMed]

Other

D. Marks and D. Brady, “Gigagon: a monocentric lens design imaging 40 gigapixels,” in Imaging Systems, OSA Technical Digest (CD) (Optical Society of America, 2010), paper ITuC2.

Similar camera designs are also being pursued by the DARPA MOSAIC project, led by D. J. Brady. See “Terrapixel imaging,” presented at ICCP ’10, Invited Talk, MIT Media Lab, Cambridge, Mass., 29 March 2010.

J. Geary, Introduction to Lens Design: With Practical ZEMAX Examples (Willmann-Bell, 2002).

A. Levin, W. Freeman, and F. Durand, “Understanding camera trade-offs through a Bayesian analysis of light field projections,” in European Conference on Computer Vision (2008), pp. 88–101.

A. Levin, S. Hasinoff, P. Green, F. Durand, and W. Freeman, “4d frequency analysis of computational cameras for depth of field extension,” in ACM SIGGRAPH 2009 papers (SIGGRAPH ’09) (Association for Computing Machinery, 2009), pp. 1–14.

O. Cossairt, C. Zhou, and S. Nayar, “Diffusion coded photography for extended depth of field,” in ACM SIGGRAPH 2010 papers (SIGGRAPH ’10) (Association for Computing Machinery, 2010), pp. 1–10.

L. J. Slater, Generalized Hypergeometric Functions (Cambridge University, 1966).

ZEMAX Optical Design Software (2010), http://www.zemax.com/.

A. Chakrabarti, K. Hirakawa, and T. Zickler, “Computational color constancy with spatial correlations,” Harvard Technical Report TR-09-10 (2010).

M. Bertero and P. Boccacci, Introduction to Inverse Problems in Imaging (Taylor & Francis, 1998).
[CrossRef]

C. Zhou and S. Nayar, “What are good apertures for defocus deblurring?” in 2010 IEEE International Conference on Computational Photography (ICCP) (IEEE, 2010), pp. 1–8.

Lumenera Corporation (2010), http://www.lumenera.com/.


O. Cossairt and S. Nayar, “Spectral focal sweep: extended depth of field from chromatic aberrations,” in 2010 IEEE International Conference on Computational Photography (ICCP) (IEEE, 2010), pp. 1–8.
[CrossRef]

M. Ben-Ezra, “High resolution large format tile-scan—camera design, calibration, and extended depth of field,” in 2010 IEEE International Conference on Computational Photography (ICCP) (IEEE, 2010), pp. 1–8.
[CrossRef]

“The Gigapixl Project” (2007), http://www.gigapixl.org/.

Microsoft Image Composite Editor (ICE) website (2010), http://research.microsoft.com/en-us/um/redmond/groups/ivm/ICE/.

Y. Nomura, L. Zhang, and S. Nayar, “Scene collages and flexible camera arrays,” in Proceedings of the Eurographics Symposium on Rendering Techniques, Grenoble, France, 2007 (Eurographics Association, 2007), pp. 127–138.

R. Kingslake, A History of the Photographic Lens (Academic, 1989).

R. Luneburg, Mathematical Theory of Optics (University of California, 1964).

D. Robinson and D. G. Stork, “Extending depth-of-field: spherical coding versus asymmetric wavefront coding,” in Computational Optical Sensing and Imaging, OSA Technical Digest (CD) (Optical Society of America, 2009), paper CThB3.



Figures (18)

Fig. 1

1.7 Gpixel image captured using the implementation shown in Fig. 13. The image dimensions are 82,000 × 22,000 pixels, and the scene occupies a 126° × 32° FOV. From left to right, insets reveal the label of a resistor on a PCB, the stippling print pattern on a dollar bill, a miniature 2D barcode pattern, and the fine ridges of a fingerprint on a remote control. The insets are generated by applying a 60×–200× digital zoom to the above gigapixel image. Please visit http://gigapan.org/gigapans/0dca576c3a040561b4371cf1d92c93fe/ to view this example in more detail.

Fig. 2

(a) F/4, 75 mm lens design capable of imaging 1 Gpixel onto a 75 mm × 75 mm sensor. This lens requires 11 elements to maintain diffraction-limited performance over a 60° FOV. (b) MTF at different field positions on the sensor.

Fig. 3

Plot showing how SBP increases as a function of lens size for a perfectly diffraction-limited lens (R_diff), a lens with geometric aberrations (R_geom), and a conventional lens design whose F/# increases with lens size (R_conv).

Fig. 4

The OPD W(ρ) of a lens is the path difference between an ideal spherical wavefront and the aberrated wavefront propagating from the exit pupil of the lens.

Fig. 5

(a) Singlet lens with strong spherical aberrations. (b) The ray fan shows ray position on the sensor plane as a function of position in the lens aperture. The PSF has a strong peak because rays are concentrated around the center of the image plane. The PSF support is enclosed in an area of radius α.

Fig. 6

For conventional lens designs, the F/# typically scales with the cube root of the focal length in millimeters.

Fig. 7

Comparison of the OTF for a lens with spherical aberration calculated using ZEMAX (blue curves) and using our analytic formula (red curves). The OTF is calculated at various lens scales corresponding to spherical aberration coefficients of α = {5 μm, 13 μm, 100 μm}.

Fig. 8

Comparison of the OTF for a lens with spherical aberration calculated using our analytic formula (red curves) and using the approximation for the OTF given by Eq. (27). The OTF is calculated at various lens scales corresponding to spherical aberration coefficients of α = {20 μm, 50 μm, 200 μm}. As the amount of spherical aberration increases, the approximation becomes more accurate.

Fig. 9

Comparison of the RMS deblurring error σ_d as a function of the spherical aberration coefficient α, with sensor noise σ_n = 0.01 and Nyquist frequency Ω = 100 mm⁻¹. The red curve shows the error computed numerically using Eqs. (24, 26). The green curve is calculated using the closed-form expression for deblurring error given in Eq. (29). The green curve closely approximates the red curve, with accuracy increasing as α increases.

Fig. 10

Scaling laws for computational imaging systems with spherical aberrations. The analytically derived curve R_ana improves upon the aberration-limited curve R_geom without requiring F/# to increase with M. Performance improves further when natural image priors are taken into account, as the R_prior curve shows. The R_prior curve also improves upon the conventional lens design curve R_conv, again without requiring F/# to increase with M.

Fig. 11

RMS deblurring error as a function of the spherical aberration coefficient α. As α increases, both the PSF size and the deblurring error increase. While the size of the PSF increases linearly with α, the deblurring error increases as α^(1/3.8). In this experiment, the Nyquist frequency Ω = 250 mm⁻¹.

Fig. 12

(a) Our single-element gigapixel camera, which consists solely of a ball lens with an aperture stop surrounded by an array of planar sensors. (b) Because each sensor occupies a small FOV, the PSF is nearly invariant to field position on the sensor. (c) The PSF is easily invertible because the MTF avoids zero crossings and preserves high frequencies.

Fig. 13

System used to verify the performance of the design shown in Fig. 12a. An aperture is placed on the surface of the ball lens. A gigapixel image is captured by sequentially translating a single 1/2.5″, 5 Mpixel sensor with a pan/tilt motor. A final implementation would require a large array of sensors with no dead space between them.

Fig. 14

1.6 Gpixel image captured using the implementation shown in Fig. 13. The image dimensions are 65,000 × 25,000 pixels, and the scene occupies a 104° × 40° FOV. From left to right, the insets reveal fine details in a watch, an eye, a resolution chart, and individual strands of hair. Please visit http://gigapan.org/gigapans/09d557515ee4cc1c8c2e33bf4f27485a/ to view this example in more detail.

Fig. 15

1.4 Gpixel image captured using the implementation shown in Fig. 13. The image dimensions are 110,000 × 22,000 pixels, and the scene occupies a 170° × 20° FOV. From left to right, insets reveal a sailboat, a sign advertising apartments for sale, the Empire State Building, and cars and trucks driving on a bridge. Please visit http://gigapan.org/gigapans/7173ad0acace87100a3ca728d40a3772/ to view this example in more detail.

Fig. 16

(a) Single-element design for a gigapixel camera. Each sensor is coupled with a lens that decreases focal distance, allowing the FOVs of adjacent sensors to overlap. (b) Design for a gigapixel camera with a 2π rad FOV. The design is similar to the implementation in Fig. 16a, but with a large gap between adjacent lens/sensor pairs. Light passes through the gaps on one hemisphere, forming an image on a sensor located on the opposite hemisphere.

Fig. 17

MTF for spherical optical systems with varying amounts of complexity. Complexity is measured as the number of optical surfaces, which increases from left to right from one to six surfaces. The six-surface design is the Gigagon lens designed by Marks and Brady [21]. Each design is an F/2.8, 280 mm focal length lens optimized using ZEMAX. As the number of surfaces increases, the MTF improves, improving the SNR as well.

Fig. 18

SNR versus complexity for the lens designs shown in Fig. 17, assuming a computational approach is taken. SNR increases by a factor of 19 when complexity increases from one shell to two shells, while it increases by only a factor of 4 when complexity increases from two shells to six shells.

Equations (56)


$R_{\mathrm{diff}}(M) = \frac{M^2\,\Delta x\,\Delta y}{(\lambda F/\#)^2}.$
$W(\rho,\phi,r) = \sum_{i,j,k} W_{ijk}\, r^i \rho^j \cos^k\phi.$
$W(\rho) = \sum_{i,j,k} W_{ijk}\, \rho^j,$
$W_{040} = \frac{\sigma_I}{512}\,\frac{D}{(F/\#)^3},$
$T(\rho) = 2\,F/\#\,\frac{dW}{d\rho}.$
$T(\rho) = \frac{\sigma_I}{64}\,\frac{D}{(F/\#)^2}\,\rho^3$
$= \alpha\rho^3,$
$L(r,\rho) = \frac{1}{\pi}\,\Pi(\rho)\,\frac{\delta(r - T(\rho))}{\pi|r|},$
$\Pi(\rho) = \begin{cases} 1 & \text{if } |\rho| < 1 \\ 0 & \text{otherwise} \end{cases}$
$P(r) = \pi \int L(r,\rho)\,|\rho|\,d\rho$
$= \frac{1}{\pi n\,\alpha^{2/n}}\,\Pi\!\left(\frac{r}{\alpha}\right)|r|^{2/n-2}.$
$P(r) = \frac{1}{3\pi\,\alpha^{2/3}}\,\Pi\!\left(\frac{r}{\alpha}\right)|r|^{-4/3}.$
$R_{\mathrm{geom}}(M) = \frac{M^2\,\Delta x\,\Delta y}{(\lambda F/\#)^2 + M^2\delta_g^2}.$
$\alpha = \frac{\sigma_I}{64}\,\frac{D}{(F/\#)^2} = \frac{\sigma_I}{64}\,\frac{f}{(F/\#)^3}.$
$R_{\mathrm{conv}}(M) = \frac{M^2\,\Delta x\,\Delta y}{\lambda^2 M^{2/3} + \delta_g^2}.$
$R_{\mathrm{conv}}(M) \propto M^{4/3},$
$y = Ax + \nu,$
$\hat{y} = \Lambda\hat{x} + \hat{\nu},$
$\hat{x}^{*} = \Lambda^{-1}\hat{y}.$
$\hat{x}_i^{*} = \hat{y}_i/\hat{p}_i.$
$\sigma_d^2 = \frac{1}{N}\,E\!\left[\|x^{*} - x\|^2\right]$
$= \frac{\sigma_n^2}{N}\sum_{i=1}^{N}\frac{1}{\hat{p}_i^2},$
$\hat{p}(q) = 2\pi\int_0^{\infty} J_0(qr)\,p(r)\,r\,dr,$
$\hat{p}(q) = \frac{2}{3\alpha^{2/3}}\int_0^{\alpha} J_0(qr)\,r^{-1/3}\,dr$
$= {}_{1}F_{2}\!\left(\left\{\tfrac{1}{3}\right\};\left\{1,\tfrac{4}{3}\right\}; -\frac{\alpha^2 q^2}{4}\right),$
$\sigma_d^2 = \frac{2\sigma_n^2}{\Omega^2}\int_0^{\Omega}\frac{q}{\hat{p}^2(q)}\,dq,$
$\hat{p}(q) \approx \frac{2}{3\alpha^{2/3}}\int_0^{\infty} J_0(qr)\,r^{-1/3}\,dr$
$= \frac{2\,\Gamma(7/6)}{\sqrt{\pi}\,(\alpha q)^{2/3}},$
$\sigma_d = \sigma_n\sqrt{\frac{3\pi}{5}}\,\frac{(\Omega\alpha)^{2/3}}{2\,\Gamma(7/6)}.$
$\sigma_d = k\,\sigma_n M^{2/3},$
$\sigma_d = \sigma_n f(M),$
$\xi(M) = \frac{1}{\sigma_n}\,\sigma_d(M).$
$R_{\mathrm{comp}}(M) = \frac{M^2\,\Delta x\,\Delta y}{(\lambda F/\#)^2 + \xi(M)^2}.$
$R_{\mathrm{ana}}(M) = \frac{M^2\,\Delta x\,\Delta y}{(\lambda F/\#)^2 + k_2^2 M^{4/3}},$
$R_{\mathrm{ana}}(M) \propto M^{2/3}.$
$P(\hat{y}\,|\,\hat{x}) = \exp\!\left(-\|\hat{y} - \Lambda\hat{x}\|^2\right).$
$\hat{x}^{*} = \arg\max_{\hat{x}}\, P(\hat{y}\,|\,\hat{x})\,P(\hat{x})$
$= \arg\min_{\hat{x}}\left(\|\hat{y} - \Lambda\hat{x}\|^2 + \|B\hat{x}\|^2\right)$
$= (\Lambda^2 + B^2)^{-1}\Lambda^{t}\,\hat{y},$
$\hat{x}_i^{*} = \frac{\bar{\hat{p}}_i}{|\hat{p}_i|^2 + \hat{b}_i^2}\,\hat{y}_i.$
$\sigma_d^2 = \frac{\sigma_n^2}{N}\sum_{i=1}^{N}\frac{1}{\hat{p}_i^2 + \sigma_n^2/\hat{a}_i}.$
$\sigma_d \propto \sigma_n\,\alpha^{1/3.8}$
$\propto \sigma_n\,M^{1/3.8}.$
$R_{\mathrm{prior}}(M) = \frac{M^2\,\Delta x\,\Delta y}{(\lambda F/\#)^2 + k_3^2 M^{2/3.8}},$
$P(r) = \int \Pi(\rho)\,\frac{\delta(r - \alpha\rho^n)}{\pi|r|}\,|\rho|\,d\rho.$
$\rho = \left(\frac{|z|}{\alpha}\right)^{1/n},$
$d\rho = \frac{1}{n\alpha}\left(\frac{|z|}{\alpha}\right)^{1/n-1} dz.$
$P(r) = \frac{1}{\pi|r|}\int \Pi\!\left(\frac{z}{\alpha}\right)\delta(r - z)\,\frac{1}{n\alpha}\left(\frac{|z|}{\alpha}\right)^{1/n}\left(\frac{|z|}{\alpha}\right)^{1/n-1} dz$
$= \frac{1}{\pi|r|\,n\alpha}\int \Pi\!\left(\frac{z}{\alpha}\right)\delta(r - z)\left(\frac{|z|}{\alpha}\right)^{2/n-1} dz$
$= \frac{1}{\pi|r|\,n\alpha}\,\Pi\!\left(\frac{r}{\alpha}\right)\left(\frac{|r|}{\alpha}\right)^{2/n-1}$
$= \frac{1}{\pi n\,\alpha^{2/n}}\,\Pi\!\left(\frac{r}{\alpha}\right)|r|^{2/n-2}.$
$e = \pi\int P(r)\,|r|\,dr$
$= \pi\int \frac{1}{\pi n\,\alpha^{2/n}}\,\Pi\!\left(\frac{r}{\alpha}\right)|r|^{2/n-2}\,|r|\,dr$
$= \frac{1}{n\,\alpha^{2/n}}\int_{-\alpha}^{\alpha}|r|^{2/n-1}\,dr$
$= \frac{1}{n\,\alpha^{2/n}}\left[\frac{n}{2}\,\frac{r}{|r|}\,|r|^{2/n}\right]_{-\alpha}^{\alpha}$
$= \frac{1}{n\,\alpha^{2/n}}\left(n\,\alpha^{2/n}\right) = 1.$
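The unit-energy check on the analytic PSF and the closed-form deblurring error can be verified numerically. The sketch below is our own illustration, not code from the paper; all function names and sample parameters are ours, and the midpoint rule is used because it sidesteps the integrable singularity of the PSF at r = 0.

```python
import math

def psf(r, alpha, n=3):
    """Analytic PSF P(r) = Pi(r/alpha) |r|^(2/n - 2) / (pi n alpha^(2/n))
    for a lens whose transverse ray error is T(rho) = alpha rho^n."""
    if r == 0.0 or abs(r) >= alpha:
        return 0.0
    return abs(r) ** (2.0 / n - 2.0) / (math.pi * n * alpha ** (2.0 / n))

def psf_energy(alpha, n=3, samples=200_000):
    """Energy check from the appendix: e = pi * integral of P(r)|r| dr
    over [-alpha, alpha]; should return approximately 1 for any alpha, n."""
    h = 2.0 * alpha / samples
    total = 0.0
    for i in range(samples):
        r = -alpha + (i + 0.5) * h  # midpoint rule never evaluates r = 0
        total += psf(r, alpha, n) * abs(r) * h
    return math.pi * total

def sigma_d_closed(sigma_n, omega, alpha):
    """Closed-form RMS deblurring error for spherical aberration (n = 3)."""
    return (sigma_n * math.sqrt(3.0 * math.pi / 5.0)
            * (omega * alpha) ** (2.0 / 3.0) / (2.0 * math.gamma(7.0 / 6.0)))

def sigma_d_numeric(sigma_n, omega, alpha, samples=200_000):
    """sigma_d^2 = (2 sigma_n^2 / omega^2) * integral_0^omega q / p_hat(q)^2 dq,
    using the large-alpha MTF approximation
    p_hat(q) = (2 Gamma(7/6) / sqrt(pi)) * (alpha q)^(-2/3)."""
    c = 2.0 * math.gamma(7.0 / 6.0) / math.sqrt(math.pi)
    h = omega / samples
    total = 0.0
    for i in range(samples):
        q = (i + 0.5) * h
        total += q / (c * (alpha * q) ** (-2.0 / 3.0)) ** 2 * h
    return math.sqrt(2.0 * sigma_n ** 2 / omega ** 2 * total)
```

Doubling α multiplies σ_d by 2^(2/3) ≈ 1.59; since α grows linearly with magnification, this is the M^(2/3) scaling of the deblurring error.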
