Abstract

Diffractive achromats (DAs) promise ultra-thin and lightweight form factors for full-color computational imaging systems. However, designing DAs whose optical transfer function (OTF) distribution is well suited to image reconstruction algorithms has remained a difficult challenge. Emerging end-to-end optimization paradigms that jointly design diffractive optics and processing algorithms have achieved impressive results, but they require immense computational resources and solve non-convex inverse problems with millions of parameters. Here, we propose a learned rotationally symmetric DA design that uses a concentric ring decomposition to reduce the computational complexity and memory requirements by one order of magnitude compared with conventional end-to-end optimization procedures, significantly simplifying the optimization. With this approach, we jointly learn a DA with an aperture size of 8 mm and an image recovery neural network, a Res-Unet, in an end-to-end manner across the full visible spectrum (429–699 nm). The peak signal-to-noise ratio of images recovered with our learned DA is 1.3 dB higher than that of DAs designed by conventional sequential approaches, because the learned DA exhibits higher OTF amplitudes at high frequencies over the full spectrum. We fabricate the learned DA using imprint lithography. Experiments show that it resolves fine details and preserves the color fidelity of diverse real-world scenes under natural illumination. The proposed design paradigm paves the way for incorporating DAs into thinner, lighter, and more compact full-spectrum imaging systems.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement




Supplementary Material (1)

Supplement 1: Supplemental material



Figures (8)

Fig. 1. Overview of the proposed end-to-end learning. The parameters of the diffractive achromat (DA) and of the image recovery algorithm are learned jointly in an end-to-end optimization paradigm. In each forward pass, the spectrally varying scene is convolved with the spectrally varying PSFs of the rotationally symmetric, parameterized DA. Gaussian noise is then added to the simulated sensor image after integrating over the color response of the RGB sensor for each channel. A neural network, here a Res-Unet consisting of two base network units, recovers a high-fidelity color image. Finally, a differentiable loss, such as the mean squared error with respect to the ground-truth image, is defined on the recovered image. An additional energy regularization term encourages light to fall within the designated sensor area. In the backward pass, the error is backpropagated to the learned parameters of the image recovery network and to the height profile of the DA.
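To make the pipeline of Fig. 1 concrete, the sketch below expresses one forward pass and training step in TensorFlow. It is a minimal illustration under assumed tensor shapes, not the authors' code: `psf_from_heights` (the 1D-to-2D PSF evaluation of Eqs. (3) and (4) below) and `energy_reg` (Eq. (6)) are hypothetical helper names.

```python
import tensorflow as tf

def sensor_image(scene, psfs, response, noise_std=0.01):
    """Simulate the sensor measurement of Fig. 1 (shapes are assumptions).

    scene:    [H, W, L] hyperspectral scene at L sampled wavelengths
    psfs:     [H, W, L] per-wavelength PSFs of the DA on the same grid
    response: [L, 3]    RGB sensor response per wavelength
    """
    # Per-wavelength convolution via the FFT (circular; adequate for a sketch).
    f_scene = tf.signal.fft2d(tf.cast(tf.transpose(scene, [2, 0, 1]), tf.complex64))
    f_psfs = tf.signal.fft2d(tf.cast(tf.transpose(psfs, [2, 0, 1]), tf.complex64))
    blurred = tf.math.real(tf.signal.ifft2d(f_scene * f_psfs))      # [L, H, W]
    # Integrate over the color response of each RGB channel (Eq. (5)).
    rgb = tf.einsum('lhw,lc->hwc', blurred, response)
    # Additive Gaussian sensor noise.
    return rgb + tf.random.normal(tf.shape(rgb), stddev=noise_std)

@tf.function
def train_step(scene, gt_rgb, heights, net, opt, response, alpha=1e-2):
    """One joint optimization step over the DA heights and the recovery net."""
    with tf.GradientTape() as tape:
        psfs = psf_from_heights(heights)           # hypothetical: Eqs. (3)-(4)
        meas = sensor_image(scene, psfs, response)
        rec = net(meas[tf.newaxis])[0]             # Res-Unet image recovery
        loss = tf.reduce_mean((rec - gt_rgb) ** 2) + alpha * energy_reg(psfs)
    variables = [heights] + net.trainable_variables
    opt.apply_gradients(zip(tape.gradient(loss, variables), variables))
    return loss
```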
Fig. 2. Principle of the rotationally symmetric imaging model. (a) Conventional 2D DOE parameterization, shown for reference. (b) With a rotationally symmetric parameterization, the optimization parameters shrink to 1D. (c) The complex transmittance function of the rotationally symmetric DOE is approximated by a sequence of discrete concentric rings, which can be further decomposed into a 1D sum of circ functions. The PSF contribution of each circ function can be expressed through the first-order Bessel function of the first kind. With this rotationally symmetric imaging model, both the PSF computation and the optimization parameters are reduced to 1D.
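The decomposition in Fig. 2(c) makes the PSF of an M-ring DOE a 1D sum of Bessel terms. Below is a minimal NumPy sketch of this radial PSF evaluation (cf. Eqs. (3) and (4) below); the equal ring spacing, refractive index, and phase model are illustrative assumptions.

```python
import numpy as np
from scipy.special import j1  # first-order Bessel function of the first kind

def radial_psf(heights, wavelength, f, ring_width, n=1.46, rho=None):
    """1D PSF of a rotationally symmetric DOE via the concentric-ring model.

    heights:    (M,) ring height profile, i.e., the learned parameters
    wavelength: design wavelength [m]
    f:          focal length [m]
    ring_width: radial width of each ring [m]
    n:          substrate refractive index (illustrative constant)
    rho:        (K,) radial spatial frequencies; small default grid if None
    """
    M = heights.size
    r = ring_width * np.arange(1, M + 1)       # outer radius of each ring
    k = 2.0 * np.pi / wavelength
    phase = k * (n - 1.0) * heights            # phase delay of each ring
    if rho is None:
        rho = np.linspace(1e-6, 1.0 / (2.0 * ring_width), 512)
    # Hankel-type kernel H(r_m, rho) of the circ decomposition (Eq. (4)).
    rJ = r[None, :] * j1(2.0 * np.pi * rho[:, None] * r[None, :])   # [K, M]
    H = np.empty_like(rJ)
    H[:, 0] = rJ[:, 0] / (2.0 * np.pi * rho)
    H[:, 1:] = (rJ[:, 1:] - rJ[:, :-1]) / (2.0 * np.pi * rho[:, None])
    # Coherent sum over rings (Eq. (3), up to unit-modulus phase factors).
    field = np.sum(np.exp(1j * (phase + k * r**2 / (2.0 * f))) * H, axis=1)
    return np.abs(2.0 * np.pi / (wavelength * f) * field) ** 2
```

Evaluating K radial samples this way costs O(KM) per wavelength, instead of an FFT over the full 2D aperture, which reflects the savings of optimizing a 1D profile rather than a 2D height map.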
Fig. 3. PSFs of the DA designed with and without energy regularization. We show cross sections of the 2D PSFs, normalized to the input energy.
Fig. 4. Selected examples of the assessment of the DA designs in simulation. We assess the performance of a conventional Fresnel lens, a reference DA, and a DA optimized using the proposed framework. We show both the original sensor measurements and the recovery results of the Res-Unet. The inset values indicate the PSNR (dB) and SSIM.
Fig. 5. Simulated performance of a Fresnel lens and two DAs. For each design, we show a stack of the focused light intensity profiles (a), (d), (g) along the optical axis at multiple wavelengths, where the white dashed line indicates the focal plane and the scale bar corresponds to 200 µm. We also show the simulated sensor measurements and the normalized PSFs (b), (e), (h) of selected scene points. The normalized PSFs are shown on a log scale for visualization purposes. In addition, we show the OTFs of the three lenses (c), (f), (i) at 31 design wavelengths, respectively. The black dashed line shows the OTF averaged over the 31 design wavelengths.
Fig. 6. Measurement of the proposed DA: (a) microscopy image of the fabricated DA (scale bar indicates 0.5 mm); (b) designed and measured PSFs, captured with a Canon T5i DSLR camera body (scale bar indicates 60 µm); and (c) cross sections of the PSFs in (b). The PSFs shown here are gamma-corrected for visualization purposes.
Fig. 7. Evaluation of the full field-of-view (FOV) imaging behavior: (a) degraded and recovered checkerboard image pair and (b) MTFs estimated from the grayscale slanted edges inside (a); on-axis and off-axis correspond to the 0° and 17.5° FOVs, respectively.
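The MTFs in Fig. 7(b) can be estimated with the standard slanted-edge procedure: average the edge patch into an edge-spread function (ESF), differentiate to obtain the line-spread function (LSF), and take the magnitude of its Fourier transform. A simplified NumPy sketch, assuming a nearly vertical edge and omitting the sub-pixel projection and binning of the full presampled-MTF method:

```python
import numpy as np

def slant_edge_mtf(edge_patch):
    """Rough presampled-MTF estimate from a grayscale slanted-edge patch.

    Simplification: rows are averaged into one ESF instead of being
    projected along the fitted edge angle with sub-pixel binning.
    """
    esf = edge_patch.mean(axis=0)             # edge-spread function
    lsf = np.gradient(esf)                    # line-spread function
    lsf = lsf * np.hanning(lsf.size)          # window to suppress noise tails
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                             # normalize to DC
    freqs = np.fft.rfftfreq(lsf.size)         # cycles per pixel
    return freqs, mtf
```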
Fig. 8. Experimental results of the fabricated DA. For each pair, we show the degraded sensor measurement and the recovery result. The exposure times for these images are 2.5, 125, 76, 600, 25, and 600 ms (respectively, from left to right, top to bottom) at ISO 100. The images are center-cropped regions (3000 × 2000 pixels) of the original camera measurements. The processing time at this image size on an NVIDIA 1080Ti GPU is around 4 s.

Tables (1)


Table 1. Quantitative Evaluation of Averaged PSNR (dB), SSIM, and SAM over 15 Test Images Resolved Using Different Lens Designs and Recovery Algorithms
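PSNR and SSIM are computed on the RGB recoveries, while SAM (the spectral angle mapper) measures the per-pixel angle between recovered and reference spectra. A brief sketch of the three metrics, assuming scikit-image (>= 0.19) for PSNR/SSIM; the paper does not state which implementations it used:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def spectral_angle(x, y, eps=1e-12):
    """Mean spectral angle (radians) between [..., C] arrays x and y."""
    dot = np.sum(x * y, axis=-1)
    denom = np.linalg.norm(x, axis=-1) * np.linalg.norm(y, axis=-1) + eps
    return float(np.mean(np.arccos(np.clip(dot / denom, -1.0, 1.0))))

# Random stand-in images in [0, 1]; replace with real recoveries.
gt, rec = np.random.rand(2, 256, 256, 3)
psnr = peak_signal_noise_ratio(gt, rec, data_range=1.0)
ssim = structural_similarity(gt, rec, channel_axis=-1, data_range=1.0)
sam = spectral_angle(gt, rec)
```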

Equations (8)


$$\mathrm{PSF}(x,y,\lambda)=\left|\frac{1}{\lambda f}\,e^{\frac{ik}{2f}\left(x^{2}+y^{2}\right)}\iint P(s,t,\lambda)\,e^{\frac{ik}{2f}\left(s^{2}+t^{2}\right)}\,e^{-\frac{ik}{f}(xs+yt)}\,ds\,dt\right|^{2},\tag{1}$$

$$P(r,\lambda)\,e^{\frac{ik}{2f}r^{2}}\approx P(r_{1},\lambda)\,e^{\frac{ik}{2f}r_{1}^{2}}\,\mathrm{circ}\!\left(\frac{r}{r_{1}}\right)+\sum_{m=2}^{M}P(r_{m},\lambda)\,e^{\frac{ik}{2f}r_{m}^{2}}\left[\mathrm{circ}\!\left(\frac{r}{r_{m}}\right)-\mathrm{circ}\!\left(\frac{r}{r_{m-1}}\right)\right],\tag{2}$$

$$\mathrm{PSF}(\rho,\lambda)=\left|\frac{2\pi}{\lambda f}\,e^{\frac{ik}{2f}(\lambda f\rho)^{2}}\sum_{m=1}^{M}P(r_{m},\lambda)\,e^{\frac{ik}{2f}r_{m}^{2}}\,H(r_{m},\rho)\right|^{2},\tag{3}$$

$$H(r_{m},\rho)=\begin{cases}\dfrac{1}{2\pi\rho}\left[r_{m}J_{1}(2\pi\rho r_{m})-r_{m-1}J_{1}(2\pi\rho r_{m-1})\right],&m>1\\[1ex]\dfrac{1}{2\pi\rho}\,r_{1}J_{1}(2\pi\rho r_{1}),&m=1,\end{cases}\tag{4}$$

$$Y_{c}(x,y)=\int_{\lambda_{\min}}^{\lambda_{\max}}\left[\mathrm{PSF}(x,y,\lambda)\ast I(x,y,\lambda)\right]R_{c}(\lambda)\,d\lambda+\eta,\tag{5}$$

$$R(\mathrm{PSF})=\int_{\lambda_{\min}}^{\lambda_{\max}}\iint W(x,y)\,\mathrm{PSF}(x,y,\lambda)\,dx\,dy\,d\lambda,\tag{6}$$

$$L=\sum_{c}\left\|\tilde{G}_{c}(x,y)-G_{c}(x,y)\right\|_{2}^{2}+\alpha\,R(\mathrm{PSF}),\tag{7}$$

$$G_{c}(x,y)=\int_{\lambda_{\min}}^{\lambda_{\max}}I(x,y,\lambda)\,R_{c}(\lambda)\,d\lambda.\tag{8}$$
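Eqs. (5)–(8) map directly onto the training objective used in Fig. 1. A minimal TensorFlow sketch, assuming a penalty mask W that is approximately one outside the designated sensor area and zero inside, so that the discretized Eq. (6) penalizes energy falling outside it:

```python
import tensorflow as tf

def total_loss(rec, gt_rgb, psfs, W, alpha=1.0):
    """Eq. (7): data term plus the energy regularizer of Eq. (6).

    rec, gt_rgb: [H, W, 3] recovered / ground-truth RGB (Eq. (8) for gt)
    psfs:        [H, W, L] per-wavelength PSFs
    W:           [H, W]    penalty mask, ~1 outside the designated sensor area
    """
    data = tf.reduce_sum((rec - gt_rgb) ** 2)      # sum_c ||G~_c - G_c||_2^2
    reg = tf.reduce_sum(W[..., None] * psfs)       # Eq. (6), discretized
    return data + alpha * reg
```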