Abstract

The modern consumer electronics market dictates small-scale yet high-performance cameras. Such designs involve trade-offs among various system parameters, and in these trade-offs depth of field (DOF) is often a significant issue. We propose a computational imaging technique to overcome DOF limitations. Our approach is based on the synergy between a simple phase aperture coding element and a convolutional neural network (CNN). The phase element, designed for DOF extension using color diversity in the imaging system response, causes chromatic variations by creating a different defocus blur for each color channel of the image. The phase-mask is designed such that the CNN can easily restore an all-in-focus image from the coded image. This is achieved by jointly training the phase element parameters and the CNN weights end-to-end using backpropagation. The proposed approach outperforms competing methods in simulations as well as in real-world scenes.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
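To make the joint end-to-end optimization described in the abstract concrete, the following is a minimal sketch of the idea: a differentiable "optical layer" (a phase coded aperture) followed by a restoration CNN, trained together by backpropagation. It is written in PyTorch purely for illustration (the paper's implementation uses MatConvNet [39]); the grid size, ring initialization, smoothed ring mask, wavelength ratios, and toy CNN are assumptions of this sketch, not the paper's values.

```python
# Sketch of joint end-to-end training of a phase-mask and a CNN.
import torch
import torch.nn as nn

class PhaseCodedAperture(nn.Module):
    """Circular pupil with one learnable phase ring; blurs an RGB image
    with the resulting defocus-dependent PSF (one PSF per channel)."""
    def __init__(self, n=64):
        super().__init__()
        y, x = torch.meshgrid(torch.linspace(-1, 1, n),
                              torch.linspace(-1, 1, n), indexing='ij')
        self.register_buffer('rho', torch.sqrt(x ** 2 + y ** 2))
        # Optical degrees of freedom: ring radii and ring phase
        # (at a reference wavelength).
        self.r1 = nn.Parameter(torch.tensor(0.5))
        self.r2 = nn.Parameter(torch.tensor(0.7))
        self.phi = nn.Parameter(torch.tensor(3.14))

    def psf(self, psi, wl_ratio):
        """PSF = |FFT{pupil}|^2 for defocus psi and relative wavelength."""
        aperture = (self.rho <= 1.0).float()
        # Smooth ring indicator keeps r1, r2 differentiable (an
        # assumption here; the paper differentiates the ring analytically).
        ring = torch.sigmoid(50 * (self.rho - self.r1)) * \
               torch.sigmoid(50 * (self.r2 - self.rho))
        phase = ring * self.phi / wl_ratio + psi * self.rho ** 2
        h = torch.fft.fft2(aperture * torch.exp(1j * phase))
        psf = h.abs() ** 2
        return psf / psf.sum()

    def forward(self, img, psi):
        # Convolve each color channel with its own PSF
        # (circular convolution via FFT, sufficient for a sketch).
        out = []
        for c, wl in enumerate([1.15, 1.0, 0.85]):   # ~R, G, B ratios
            K = torch.fft.fft2(self.psf(psi, wl), s=img.shape[-2:])
            I = torch.fft.fft2(img[:, c])
            out.append(torch.fft.ifft2(I * K).real)
        return torch.stack(out, dim=1)

# Toy restoration CNN standing in for the full architecture of Fig. 3.
cnn = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 32, 3, padding=2, dilation=2), nn.ReLU(),
                    nn.Conv2d(32, 3, 3, padding=1))
optic = PhaseCodedAperture()
opt = torch.optim.Adam(list(optic.parameters()) + list(cnn.parameters()),
                       lr=1e-3)

for step in range(100):                          # training-loop sketch
    sharp = torch.rand(4, 3, 64, 64)             # stand-in for image patches
    psi = float(torch.empty(1).uniform_(-4, 4))  # random defocus condition
    coded = optic(sharp, psi)                    # differentiable imaging layer
    loss = nn.functional.mse_loss(cnn(coded), sharp)
    opt.zero_grad(); loss.backward(); opt.step()
```

Because the imaging layer is differentiable, the same loss gradient that updates the CNN weights also updates the ring radii and phase, so the optics and the restoration network co-adapt.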


References


  1. E. R. Dowski and W. T. Cathey, "Extended depth of field through wave-front coding," Appl. Opt. 34, 1859–1866 (1995).
  2. O. Cossairt and S. Nayar, "Spectral focal sweep: Extended depth of field from chromatic aberrations," in Proceedings of IEEE International Conference on Computational Photography (IEEE, 2010), pp. 1–8.
  3. O. Cossairt, C. Zhou, and S. Nayar, "Diffusion coded photography for extended depth of field," in Proceedings of ACM SIGGRAPH (ACM, 2010), pp. 31:1–31:10.
  4. A. Levin, R. Fergus, F. Durand, and W. T. Freeman, "Image and depth from a conventional camera with a coded aperture," in Proceedings of ACM SIGGRAPH (ACM, 2007), pp. 70:1–70:10.
  5. F. Zhou, R. Ye, G. Li, H. Zhang, and D. Wang, "Optimized circularly symmetric phase mask to extend the depth of focus," J. Opt. Soc. Am. A 26, 1889–1895 (2009).
  6. C. J. R. Sheppard, "Binary phase filters with a maximally-flat response," Opt. Lett. 36, 1386–1388 (2011).
  7. C. J. Sheppard and S. Mehta, "Three-level filter for increased depth of focus and Bessel beam generation," Opt. Express 20, 27212–27221 (2012).
  8. C. Zhou, S. Lin, and S. K. Nayar, "Coded aperture pairs for depth from defocus and defocus deblurring," Int. J. Comput. Vis. 93, 53–72 (2011).
  9. R. Raskar, A. Agrawal, and J. Tumblin, "Coded exposure photography: Motion deblurring using fluttered shutter," in Proceedings of ACM SIGGRAPH (ACM, 2006), pp. 795–804.
  10. G. Petschnigg, R. Szeliski, M. Agrawala, M. Cohen, H. Hoppe, and K. Toyama, "Digital photography with flash and no-flash image pairs," in Proceedings of ACM SIGGRAPH (ACM, 2004), pp. 664–672.
  11. H. Haim, S. Elmalem, R. Giryes, A. M. Bronstein, and E. Marom, "Depth estimation from a single image using deep learned phase coded mask," IEEE Trans. Comput. Imaging (to be published).
  12. H. Haim, A. Bronstein, and E. Marom, "Computational multi-focus imaging combining sparse model with color dependent phase mask," Opt. Express 23, 24547–24556 (2015).
  13. P. A. Shedligeri, S. Mohan, and K. Mitra, "Data driven coded aperture design for depth recovery," in Proceedings of IEEE International Conference on Image Processing (IEEE, 2017), pp. 56–60.
  14. R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, "Light field photography with a hand-held plenoptic camera," Computer Science Technical Report CSTR 2 (11), pp. 1–11 (2005).
  15. H. C. Burger, C. J. Schuler, and S. Harmeling, "Image denoising: Can plain neural networks compete with BM3D?" in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2012), pp. 2392–2399.
  16. S. Lefkimmiatis, "Non-local color image denoising with convolutional neural networks," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2017), pp. 5882–5891.
  17. T. Remez, O. Litany, R. Giryes, and A. M. Bronstein, "Deep class-aware image denoising," in Proceedings of IEEE International Conference on Image Processing (IEEE, 2017), pp. 138–142.
  18. M. Gharbi, G. Chaurasia, S. Paris, and F. Durand, "Deep joint demosaicking and denoising," ACM Trans. Graph. 35, 191 (2016).
  19. K. Zhang, W. Zuo, S. Gu, and L. Zhang, "Learning deep CNN denoiser prior for image restoration," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2017), pp. 3929–3938.
  20. C. Ledig, L. Theis, F. Huszar, J. Caballero, A. P. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi, "Photo-realistic single image super-resolution using a generative adversarial network," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2017), pp. 105–114.
  21. N. K. Kalantari and R. Ramamoorthi, "Deep high dynamic range imaging of dynamic scenes," ACM Trans. Graph. 36, 144 (2017).
  22. J. Ojeda-Castaneda and C. M. Gómez-Sarabia, "Tuning field depth at high resolution by pupil engineering," Adv. Opt. Photon. 7, 814–880 (2015).
  23. E. E. García-Guerrero, E. R. Méndez, H. M. Escamilla, T. A. Leskova, and A. A. Maradudin, "Design and fabrication of random phase diffusers for extending the depth of focus," Opt. Express 15, 910–923 (2007).
  24. F. Guichard, H. P. Nguyen, R. Tessières, M. Pyanet, I. Tarchouna, and F. Cao, "Extended depth-of-field using sharpness transport across color channels," Proc. SPIE 7250, 725012 (2009).
  25. A. Chakrabarti, "Learning sensor multiplexing design through back-propagation," in Proceedings of Advances in Neural Information Processing Systems 29 (Curran Associates, Inc., 2016), pp. 3081–3089.
  26. H. G. Chen, S. Jayasuriya, J. Yang, J. Stephen, S. Sivaramakrishnan, A. Veeraraghavan, and A. C. Molnar, "ASP vision: Optically computing the first layer of convolutional neural networks using angle sensitive pixels," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2016), pp. 903–912.
  27. G. Satat, M. Tancik, O. Gupta, B. Heshmat, and R. Raskar, "Object classification through scattering media with deep learning on time resolved measurement," Opt. Express 25, 17466–17479 (2017).
  28. M. Iliadis, L. Spinoulas, and A. K. Katsaggelos, "DeepBinaryMask: Learning a binary mask for video compressive sensing," arXiv:1607.03343 (2016).
  29. B. Milgrom, N. Konforti, M. A. Golub, and E. Marom, "Novel approach for extending the depth of field of barcode decoders by using RGB channels of information," Opt. Express 18, 17027–17039 (2010).
  30. E. Ben-Eliezer, N. Konforti, B. Milgrom, and E. Marom, "An optimal binary amplitude-phase mask for hybrid imaging systems that exhibit high resolution and extended depth of field," Opt. Express 16, 20540–20561 (2008).
  31. S. Ryu and C. Joo, "Design of binary phase filters for depth-of-focus extension via binarization of axisymmetric aberrations," Opt. Express 25, 30312–30326 (2017).
  32. J. Goodman, Introduction to Fourier Optics, 2nd ed. (McGraw-Hill, 1996).
  33. D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning representations by back-propagating errors," Nature 323, 533–536 (1986).
  34. M. Cimpoi, S. Maji, I. Kokkinos, S. Mohamed, and A. Vedaldi, "Describing textures in the wild," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2014), pp. 3606–3613.
  35. H. Zhao, O. Gallo, I. Frosio, and J. Kautz, "Loss functions for image restoration with neural networks," IEEE Trans. Comput. Imaging 3, 47–57 (2017).
  36. Y. Ma and V. N. Borovytsky, "Design of a 16.5 megapixel camera lens for a mobile phone," OALib 2, 1–9 (2015).
  37. D. Krishnan, T. Tay, and R. Fergus, "Blind deconvolution using a normalized sparsity measure," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2011), pp. 233–240.
  38. J. Mairal, F. Bach, J. Ponce, and G. Sapiro, "Online learning for matrix factorization and sparse coding," J. Mach. Learn. Res. 11, 19–60 (2010).
  39. A. Vedaldi and K. Lenc, "MatConvNet – Convolutional neural networks for MATLAB," in Proceedings of the ACM International Conference on Multimedia (ACM, 2015), pp. 689–692.



Figures (9)

Fig. 1 All-in-focus imaging system: Our system consists of a phase coded aperture lens followed by a convolutional neural network (CNN) that produces an all-in-focus image. The phase-mask parameters and the CNN weights are trained jointly in an end-to-end fashion, which leads to improved performance compared to optimizing each part alone.
Fig. 2 Phase-mask pattern and MTF curves: The add-on phase-mask pattern contains a phase ring (red). The phase ring parameters (r, ϕ) are optimized along with the CNN training. When incorporated in the aperture stop of a lens, the phase-mask modulates the PSF/MTF of the imaging system differently for each color channel under the various defocus conditions (a numerical sketch of this per-channel MTF modulation follows the figure list).
Fig. 3 All-in-focus CNN architecture: The full architecture being trained, including the optical imaging layer, whose inputs are both the image and the current defocus condition. After the training phase, the learned phase-mask is fabricated and incorporated in the lens, and only the 'conventional' CNN block (yellow) is used at inference. 'd' denotes the dilation parameter of the CONV layers.
Fig. 4 Simulation results: All-in-focus example of a simulated scene with intermediate images. Accuracy is reported as PSNR [dB] / SSIM. (a) Original all-in-focus scene; its reconstruction (using imaging simulation with the proper mask followed by a post-processing stage) by: (b) the Dowski and Cathey method [1], imaging with their phase-mask; (c) the original processing of [1], Wiener filtering; (d) the Dowski and Cathey mask image with the deblurring algorithm of [19]; (e) our initial mask (without training) imaging result; (f) deblurring of (e) using the method of Haim et al. [12]; (g) deblurring of (e) using our CNN, trained for the initial mask; (h) our trained (along with the CNN) mask imaging result; (i) deblurring of (h) using the method of Haim et al. [12]; (j) our final result: trained mask imaging and the corresponding CNN.
Fig. 5 Experimental scenes: Several scenes captured with and without our mask, for performance evaluation (the presented images are the clear-aperture versions). Magnifications of interesting areas appear in Figs. 6–9.
Fig. 6 Experimental results: Examples of different depths from the first scene in Fig. 5. From left to right: (a) a clear aperture image; (b) blind deblurring of (a) using Krishnan's algorithm [37]; (c) our mask image with the processing of Haim et al. [12]; and (d) our method.
Fig. 7 Experimental results: Examples of different depths from the second scene in Fig. 5. From left to right: (a) a clear aperture image; (b) blind deblurring of (a) using Krishnan's algorithm [37]; (c) our mask image with the processing of Haim et al. [12]; and (d) our method.
Fig. 8 Experimental results: Examples of different depths from the third scene in Fig. 5. From left to right: (a) a clear aperture image; (b) blind deblurring of (a) using Krishnan's algorithm [37]; (c) our mask image with the processing of Haim et al. [12]; and (d) our method.
Fig. 9 Experimental results: Examples of different depths from the fourth scene in Fig. 5. From left to right: (a) a clear aperture image; (b) blind deblurring of (a) using Krishnan's algorithm [37]; (c) our mask image with the processing of Haim et al. [12]; and (d) our method.
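The color diversity illustrated in Fig. 2 can be reproduced with a few lines of NumPy: the sketch below evaluates a cut of the defocused MTF of a circular pupil carrying a single phase ring. The ring radii, ring phase, defocus value, and wavelength scaling are illustrative assumptions, not the optimized values from the paper.

```python
# Per-channel MTF of a phase-ring pupil under defocus (cf. Fig. 2).
import numpy as np

def mtf_cut(psi, phi_ring, r1=0.5, r2=0.7, n=256):
    """Horizontal cut of the MTF for defocus parameter psi."""
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    rho = np.hypot(x, y)
    pupil = (rho <= 1.0).astype(complex)
    pupil[(rho > r1) & (rho < r2)] *= np.exp(1j * phi_ring)  # phase ring
    pupil *= np.exp(1j * psi * rho ** 2)                     # defocus term
    psf = np.abs(np.fft.fft2(pupil)) ** 2    # PSF = |F{pupil}|^2
    mtf = np.abs(np.fft.fft2(psf))           # MTF = |F{PSF}|, DC at [0, 0]
    mtf /= mtf[0, 0]
    return np.fft.fftshift(mtf)[n // 2, n // 2:]

# Both the ring phase and the defocus scale roughly as 1/wavelength,
# which spreads the three channels' in-focus ranges apart.
for name, wl in [('R', 0.61), ('G', 0.53), ('B', 0.47)]:
    s = 0.53 / wl
    print(name, np.round(mtf_cut(psi=4.0 * s, phi_ring=np.pi * s)[:6], 3))
```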

Tables (1)


Table 1 Runtime comparison for a 1024 × 512 image

Equations (14)


$$\psi = \frac{\pi R^2}{\lambda}\left(\frac{1}{z_o} + \frac{1}{z_{\mathrm{img}}} - \frac{1}{f}\right) = \frac{\pi R^2}{\lambda}\left(\frac{1}{z_{\mathrm{img}}} - \frac{1}{z_i}\right) = \frac{\pi R^2}{\lambda}\left(\frac{1}{z_o} - \frac{1}{z_n}\right),$$
$$\phi = \frac{2\pi}{\lambda}\,[n-1]\,h$$
$$\mathrm{PSF} = |h_c|^2 = \left|\mathcal{F}\{P(\rho,\theta)\}\right|^2,$$
$$\psi = \frac{\pi R^2}{\lambda}\left(\frac{1}{z_o} + \frac{1}{z_{\mathrm{img}}} - \frac{1}{f}\right) = \frac{\pi R^2}{\lambda}\left(\frac{1}{z_{\mathrm{img}}} - \frac{1}{z_i}\right) = \frac{\pi R^2}{\lambda}\left(\frac{1}{z_o} - \frac{1}{z_n}\right),$$
$$P_{\mathrm{OOF}} = P(\rho,\theta)\exp\{j\psi\rho^2\},$$
$$P_{\mathrm{CA}} = P(\rho,\theta)\,\mathrm{CA}(\rho,\theta),$$
$$\mathrm{CA}(r,\phi) = \begin{cases}\exp\{j\phi\} & r_1 < \rho < r_2\\ 1 & \text{otherwise}\end{cases}$$
$$P(\psi) = P(\rho,\theta)\,\mathrm{CA}(r,\phi)\exp\{j\psi\rho^2\}.$$
$$I_{\mathrm{out}} = I_{\mathrm{in}} * \mathrm{PSF}(\psi).$$
$$I_{\mathrm{out}} = I_{\mathrm{in}} * \mathrm{PSF}(\psi).$$
$$\frac{\partial I_{\mathrm{out}}}{\partial r_i/\phi} = \frac{\partial}{\partial r_i/\phi}\left[I_{\mathrm{in}} * \mathrm{PSF}(\psi,r,\phi)\right] = I_{\mathrm{in}} * \frac{\partial}{\partial r_i/\phi}\,\mathrm{PSF}(\psi,r,\phi).$$
$$\frac{\partial}{\partial\phi}\mathrm{PSF}(\psi,r,\phi) = \frac{\partial}{\partial\phi}\left[\mathcal{F}\{P(\psi,r,\phi)\}\,\overline{\mathcal{F}\{P(\psi,r,\phi)\}}\right] = \left[\frac{\partial}{\partial\phi}\mathcal{F}\{P(\psi,r,\phi)\}\right]\overline{\mathcal{F}\{P(\psi,r,\phi)\}} + \mathcal{F}\{P(\psi,r,\phi)\}\left[\frac{\partial}{\partial\phi}\overline{\mathcal{F}\{P(\psi,r,\phi)\}}\right].$$
$$\frac{\partial}{\partial\phi}P(\psi,r,\phi) = \frac{\partial}{\partial\phi}\left[P(\rho,\theta)\,\mathrm{CA}(r,\phi)\exp\{j\psi\rho^2\}\right] = P(\rho,\theta)\exp\{j\psi\rho^2\}\frac{\partial}{\partial\phi}\left[\mathrm{CA}(r,\phi)\right] = \begin{cases} jP(\psi,r,\phi) & r_1 < \rho < r_2\\ 0 & \text{otherwise.}\end{cases}$$
$$\frac{\partial}{\partial r_i}P(\psi,r,\phi) = \frac{\partial}{\partial r_i}\left[P(\rho,\theta)\,\mathrm{CA}(r,\phi)\exp\{j\psi\rho^2\}\right] = P(\rho,\theta)\exp\{j\psi\rho^2\}\frac{\partial}{\partial r_i}\left[\mathrm{CA}(r,\phi)\right].$$
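The PSF-gradient expressions above admit a simple numerical sanity check. The sketch below compares the analytic gradient of the PSF with respect to the ring phase (combining the last two derivations, which give $\partial\mathrm{PSF}/\partial\phi = 2\,\mathrm{Re}[\mathcal{F}\{\partial P/\partial\phi\}\,\overline{\mathcal{F}\{P\}}]$) against a central finite difference; the grid size, defocus, and ring parameters are arbitrary illustrative choices, not the paper's code.

```python
# Numerical check of the analytic d(PSF)/d(phi) for the phase ring.
import numpy as np

n, psi, phi = 128, 3.0, 1.2
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
rho = np.hypot(x, y)
ring = (rho > 0.5) & (rho < 0.7)

def pupil(phi):
    # P(psi, r, phi): clear aperture x defocus phase x ring phase.
    p = (rho <= 1.0).astype(complex) * np.exp(1j * psi * rho ** 2)
    p[ring] *= np.exp(1j * phi)
    return p

P = np.fft.fft2(pupil(phi))
# dP/dphi = j * P inside the ring, 0 elsewhere, so by the product rule
# d(PSF)/dphi = 2 Re[ F{dP/dphi} * conj(F{P}) ].
dP = np.fft.fft2(1j * pupil(phi) * ring)
grad_analytic = 2 * np.real(dP * np.conj(P))

# Central finite difference on PSF = |F{P}|^2 for comparison.
psf = lambda phi: np.abs(np.fft.fft2(pupil(phi))) ** 2
eps = 1e-6
grad_numeric = (psf(phi + eps) - psf(phi - eps)) / (2 * eps)
print(np.max(np.abs(grad_analytic - grad_numeric))
      / np.max(np.abs(grad_analytic)))   # tiny: the two gradients agree
```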
