Abstract

Light field cameras have recently attracted much attention for their innovative capabilities in photographic and scientific applications. However, the narrow baselines and limited spatial resolution of current light field cameras restrict their usability. We therefore design a hybrid imaging system that combines a light field camera with a high-resolution digital single-lens reflex (DSLR) camera; the two cameras share the same optical path through a beam splitter, enabling the reconstruction of high-resolution light fields. The high-resolution 4D light fields are reconstructed with a phase-based perspective-variation strategy. First, we apply a complex steerable pyramid decomposition to the high-resolution DSLR image. Then we perform phase-based perspective-shift processing, using disparity values extracted from the upsampled light field depth map, to create high-resolution synthetic light field images. In this way, high-resolution digitally refocused images and high-resolution depth maps can be generated. Furthermore, controlling the magnitude of the perspective shift allows us to change the depth-of-field rendering in the refocused images. We present several experimental results that demonstrate the effectiveness of our approach.
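The core idea of phase-based perspective shifting is that translating an image is equivalent to modifying the phase of its frequency-domain coefficients. The paper manipulates the phase of complex steerable pyramid sub-bands, which localizes the shift per scale and orientation; the sketch below illustrates the same principle with a simpler global Fourier shift (shift theorem). The function `phase_shift_image` is a hypothetical helper, not the authors' implementation, and a uniform shift stands in for the per-pixel disparity that would come from the upsampled depth map.

```python
import numpy as np

def phase_shift_image(img, dx, dy):
    """Translate `img` by (dx, dy) pixels by advancing the phase of its
    Fourier coefficients (Fourier shift theorem). This is a simplified,
    global analog of the per-sub-band phase shifts applied to a complex
    steerable pyramid in the paper; it supports sub-pixel shifts."""
    h, w = img.shape
    fy = np.fft.fftfreq(h).reshape(-1, 1)   # cycles/pixel, vertical axis
    fx = np.fft.fftfreq(w).reshape(1, -1)   # cycles/pixel, horizontal axis
    F = np.fft.fft2(img)
    # Multiplying by exp(-2*pi*i*(fx*dx + fy*dy)) shifts the image by
    # dx columns and dy rows without resampling in the spatial domain.
    F *= np.exp(-2j * np.pi * (fx * dx + fy * dy))
    return np.real(np.fft.ifft2(F))

# In the full method, the per-pixel disparity from the upsampled light
# field depth map would set dx/dy locally for each synthetic view.
```

Because the shift is expressed as a phase change rather than an interpolation, sub-pixel perspective shifts introduce no additional blur, which is why phase-based processing suits small-baseline view synthesis.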

© 2016 Optical Society of America


References

  • View by:
  • |
  • |
  • |

  1. F. Ives, “Parallax stereogram and process of making same,” U.S. patent725,567 (14April1903).
  2. G. Lippmann, “Epreuves reversibles donnant la sensation du relief,” J. Phys. Theor. Appl. 7, 821–825 (1908).
    [Crossref]
  3. A. Gershun, “The light field,” J. Math. Phys. 18, 51–151 (1939).
    [Crossref]
  4. E. H. Adelson and Y. A. Wang, “Single lens stereo with a plenoptic camera,” IEEE Trans. Pattern Anal. Mach. Intell. 14, 99–106 (1992).
    [Crossref]
  5. M. Levoy and P. Hanrahan, Light Field Rendering (ACM, 1996), pp. 31–42.
  6. R. Ng, M. Levoy, M. Bredif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” (Stanford University, 2005).
  7. Lytro, http:www.lytro.com .
  8. S. Wanner and B. Goldluecke, “Globally consistent depth labeling of 4D light field,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2012), pp. 41–48.
  9. D. G. Dansereau, O. Pizarro, and S. B. Williams, “Decoding, calibration and rectification for lenselet-based plenoptic cameras,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2013), pp. 1027–1034.
  10. Z. Zhang, Y. Liu, and Q. Dai, “Light field from micro-baseline image pair,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2015), pp. 3800–3809.
  11. E. P. Simoncelli and W. T. Freeman, “The steerable pyramid: a flexible architecture for multi-scale derivative computation,” in Proceedings of IEEE International Conference on Image Processing (Institute of Electrical and Electronics Engineers, 1995), pp. 444–447.
  12. C. Rhemann, A. Hosni, M. Bleyer, C. Rother, and M. Gelautz, “Fast cost-volume filtering for visual correspondence and beyond,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2011), pp. 3017–3024.
  13. V. Boominathan, K. Mitra, and A. Veeraraghavan, “Improving resolution and depth-of-field of light field cameras using a hybrid imaging system,” in Proceedings of the IEEE International Conference on Computational Photography (Institute of Electrical and Electronics Engineers, 2014), pp. 1–10.
  14. J. C. Yang, M. Everett, C. Bueliler, and L. Mcmillan, “A real-time distributed light field camera,” in Proceedings of the 13th Eurographics Workshop on Rendering (Eurographics Association, 2002), pp. 77–86.
  15. C. Zhang and T. Chen, A Self-Reconfigurable Camera Array (ACM, 2004), pp. 151–162.
  16. B. Wilburn, N. Joshi, V. Vaisli, E. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24, 765–776 (2005).
    [Crossref]
  17. V. Vaish, B. Wilbum, N. Joshi, and M. Levoy, “Using plane + parallax for calibrating dense camera arrays,” in Proceedings of IEEE Conference of Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2004), pp. 2–9.
  18. X. Cao, Z. Geng, and T. Li, “Dictionary-based light field acquisition using sparse camera array,” Opt. Express 22, 24081–24095 (2014).
    [Crossref]
  19. A. Lumsdaine and T. Georgiev, “The focused plenoptic camera,” in Proceedings of the IEEE International Conference on Computational Photography (Institute of Electrical and Electronics Engineers, 2009). pp. 1–8.
  20. A. Veeraraghavan, R. Raskar, A. Agrawal, A. Mohan, and J. Tumblin, “Dappled photography: mask enhanced cameras for heterodyned light fields and coded aperture refocusing,” ACM Trans. Graph. 26, 69 (2007).
    [Crossref]
  21. C. K. Liang, T. H. Lin, B. Y. Wong, C. Liu, and H. H. Chen, “Programmable aperture photography: multiplexed light field acquisition,” ACM Trans. Graph. 27, 1 (2008).
    [Crossref]
  22. S. Gortler, R. Grzeszczuk, R. Szelinski, and M. Cohen, The Lumigraph (ACM, 1996), pp. 43–54.
  23. K. Marwah, G. Wetzstein, Y. Bando, and R. Raskar, “Compressive light field photography using overcomplete dictionaries and optimized projections,” ACM Trans. Graph. 32, 1 (2013).
    [Crossref]
  24. X. Lin, J. Suo, G. Wetzstein, Q. Dai, and R. Raskar, “Coded focal stack photography,” in Proceedings of IEEE International Conference on Computational Photography (Institute of Electrical and Electronics Engineers, 2013), pp. 1–9.
  25. X. Lin, J. Wu, G. Zheng, and Q. Dai, “Camera array based light field microscopy,” Biomed. Opt. Express 6, 3179–3189 (2015).
    [Crossref]
  26. T. E. Bisho, S. Zanetti, and P. Favaro, “Light field superresolution,” in Proceedings of IEEE International Conference on Computational Photography (Institute of Electrical and Electronics Engineers, 2009), pp. 1–9.
  27. S. A. Shroff and K. Berkner, “Image formation analysis and high resolution image reconstruction for plenoptic imaging systems,” Appl. Opt. 52, D22–D31 (2013).
    [Crossref]
  28. J. M. Trujillo-Sevilla, L. F. Rodriguez-Ramos, I. Montilla, and J. M. Rodriguez-Ramos, “High resolution imaging and wavefront aberration correction in plenoptic systems,” Opt. Lett. 39, 5030–5033 (2014).
    [Crossref]
  29. M. Broxton, L. Grosenick, S. Yang, N. Cohen, A. Andalman, K. Deisseroth, and M. Levoy, “Wave optics theory and 3d deconvolution for the light field microscope,” Opt. Express 21, 25418–25439 (2013).
    [Crossref]
  30. A. Junker, T. Stenau, and K. H. Brenner, “Scalar wave-optical reconstruction of plenoptic camera images,” Appl. Opt. 53, 5784–5790 (2014).
    [Crossref]
  31. Z. Yu, J. Yu, A. Lumsdaine, and T. Georgiev, “An analysis on color demosaicing in plenoptic cameras,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2012), pp. 901–908.
  32. S. Wanner and B. Goldluecke, “Spatial and angular variational super-resolution of 4D light fields,” in Proceedings of European Conference on Computer Vision (Springer, 2012). pp. 608–621.
  33. K. Mitra and A. Veeraraghavan, “Light field denoising, light field super-resolution and stereo camera based refocusing using a GMM light field patch prior,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition Workshop (Institute of Electrical and Electronics Engineers, 2012), pp. 22–28.
  34. F. Pérez, A. Pérez, M. Rodríguez, and E. Magdaleno, “Fourier slice super-resolution in plenoptic cameras,” in Proceedings of the IEEE International Conference on Computational Photography (Institute of Electrical and Electronics Engineers, 2012), pp. 1–11.
  35. D. Cho, M. Lee, S. Kim, and Y. W. Tai, “Modeling the calibration pipeline of the Lytro camera for high quality light-field image reconstruction,” in Proceedings of IEEE International Conference on Computer Vision (Institute of Electrical and Electronics Engineers, 2013), pp. 3280–3287.
  36. C. K. Liang and R. Ramamoorthi, “A light transport framework for lenslet light field cameras,” ACM Trans. Graph. 34, 16–21 (2015).
    [Crossref]
  37. H. S. Sawhney, Y. Guo, K. Hanna, and R. Kumar, “Hybrid stereo camera: an IBR approach for synthesis of very high resolution stereoscopic image sequences,” in Proceedings of ACM SIGGRAPH (ACM, 2001), pp. 451–460.
  38. M. Ben-Ezra and S. K. Nayar, “Motion deblurring using hybrid imaging,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2003), pp. 657–664.
  39. M. Ben-Ezra and S. K. Nayar, “Motion-based motion deblurring,” IEEE Trans. Pattern Anal. Mach. Intell. 26, 689–698 (2004).
    [Crossref]
  40. F. Li, J. Yu, and J. Chai, “A hybrid camera for motion deblurring and depth map super-resolution,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2008), pp. 1–8.
  41. Y. W. Tai, H. Du, M. S. Brown, and S. Lin, “Correction of spatially varying image and video motion blur using a hybrid camera,” IEEE Trans. Pattern Anal. Mach. Intell. 32, 1012–1028 (2010).
    [Crossref]
  42. R. Kawakami, J. Wright, Y. W. Tai, Y. Matsushita, M. Ben-Ezra, and K. Ikeuchi, “High-resolution hyperspectral imaging via matrix factorization,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2011), pp. 2329–2336.
  43. X. Cao, X. Tong, Q. Dai, and S. Lin, “High resolution multi-spectral video capture with a hybrid camera system,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2011), pp. 297–304.
  44. E. Tola, C. Zhang, Q. Cai, and Z. Zhang, “Virtual view generation with a hybrid camera array,” (Ecole Polytechnique F´ed´erale de Lausanne, 2009).
  45. C. H. Lu, S. Muenzel, and J. Fleischer, High-Resolution Light-field Microscopy (Optical Society of America, 2013), paper CTh3B.2.
  46. B. C. Platt and R. Shack, “History and principles of Shack–Hartmann wavefront sensing,” J. Refract. Surg. 17, S573–S577 (2001).
  47. P. Didyk, P. Sitthi-Amorn, W. T. Freeman, F. Durand, and W. Matusik, “Joint view expansion and filtering for automultiscopic 3D displays,” ACM Trans. Graph. 32, 1–8 (2013).
    [Crossref]
  48. N. Wadhwa, M. Rubinstein, F. Durand, and W. T. Freeman, “Phase-based video motion processing,” ACM Trans. Graph. 32, 1 (2013).
    [Crossref]
  49. J. Kopf, M. F. Cohen, D. Lischinski, and M. Uyttendaele, “Joint bilateral upsampling,” ACM Trans. Graph. 26, 96 (2007).
    [Crossref]
  50. R. Bolles, H. Baker, and D. Marimont, “Epipolar-plane image analysis: an approach to determining structure from motion,” Int. J. Comput. Vis. 1(1), 7–55 (1987).
    [Crossref]

2015 (2)

X. Lin, J. Wu, G. Zheng, and Q. Dai, “Camera array based light field microscopy,” Biomed. Opt. Express 6, 3179–3189 (2015).
[Crossref]

C. K. Liang and R. Ramamoorthi, “A light transport framework for lenslet light field cameras,” ACM Trans. Graph. 34, 16–21 (2015).
[Crossref]

2014 (3)

2013 (5)

P. Didyk, P. Sitthi-Amorn, W. T. Freeman, F. Durand, and W. Matusik, “Joint view expansion and filtering for automultiscopic 3D displays,” ACM Trans. Graph. 32, 1–8 (2013).
[Crossref]

N. Wadhwa, M. Rubinstein, F. Durand, and W. T. Freeman, “Phase-based video motion processing,” ACM Trans. Graph. 32, 1 (2013).
[Crossref]

K. Marwah, G. Wetzstein, Y. Bando, and R. Raskar, “Compressive light field photography using overcomplete dictionaries and optimized projections,” ACM Trans. Graph. 32, 1 (2013).
[Crossref]

M. Broxton, L. Grosenick, S. Yang, N. Cohen, A. Andalman, K. Deisseroth, and M. Levoy, “Wave optics theory and 3d deconvolution for the light field microscope,” Opt. Express 21, 25418–25439 (2013).
[Crossref]

S. A. Shroff and K. Berkner, “Image formation analysis and high resolution image reconstruction for plenoptic imaging systems,” Appl. Opt. 52, D22–D31 (2013).
[Crossref]

2010 (1)

Y. W. Tai, H. Du, M. S. Brown, and S. Lin, “Correction of spatially varying image and video motion blur using a hybrid camera,” IEEE Trans. Pattern Anal. Mach. Intell. 32, 1012–1028 (2010).
[Crossref]

2008 (1)

C. K. Liang, T. H. Lin, B. Y. Wong, C. Liu, and H. H. Chen, “Programmable aperture photography: multiplexed light field acquisition,” ACM Trans. Graph. 27, 1 (2008).
[Crossref]

2007 (2)

A. Veeraraghavan, R. Raskar, A. Agrawal, A. Mohan, and J. Tumblin, “Dappled photography: mask enhanced cameras for heterodyned light fields and coded aperture refocusing,” ACM Trans. Graph. 26, 69 (2007).
[Crossref]

J. Kopf, M. F. Cohen, D. Lischinski, and M. Uyttendaele, “Joint bilateral upsampling,” ACM Trans. Graph. 26, 96 (2007).
[Crossref]

2005 (1)

B. Wilburn, N. Joshi, V. Vaisli, E. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24, 765–776 (2005).
[Crossref]

2004 (1)

M. Ben-Ezra and S. K. Nayar, “Motion-based motion deblurring,” IEEE Trans. Pattern Anal. Mach. Intell. 26, 689–698 (2004).
[Crossref]

2001 (1)

B. C. Platt and R. Shack, “History and principles of Shack–Hartmann wavefront sensing,” J. Refract. Surg. 17, S573–S577 (2001).

1992 (1)

E. H. Adelson and Y. A. Wang, “Single lens stereo with a plenoptic camera,” IEEE Trans. Pattern Anal. Mach. Intell. 14, 99–106 (1992).
[Crossref]

1987 (1)

R. Bolles, H. Baker, and D. Marimont, “Epipolar-plane image analysis: an approach to determining structure from motion,” Int. J. Comput. Vis. 1(1), 7–55 (1987).
[Crossref]

1939 (1)

A. Gershun, “The light field,” J. Math. Phys. 18, 51–151 (1939).
[Crossref]

1908 (1)

G. Lippmann, “Epreuves reversibles donnant la sensation du relief,” J. Phys. Theor. Appl. 7, 821–825 (1908).
[Crossref]

Adams, A.

B. Wilburn, N. Joshi, V. Vaisli, E. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24, 765–776 (2005).
[Crossref]

Adelson, E. H.

E. H. Adelson and Y. A. Wang, “Single lens stereo with a plenoptic camera,” IEEE Trans. Pattern Anal. Mach. Intell. 14, 99–106 (1992).
[Crossref]

Agrawal, A.

A. Veeraraghavan, R. Raskar, A. Agrawal, A. Mohan, and J. Tumblin, “Dappled photography: mask enhanced cameras for heterodyned light fields and coded aperture refocusing,” ACM Trans. Graph. 26, 69 (2007).
[Crossref]

Andalman, A.

Antunez, E.

B. Wilburn, N. Joshi, V. Vaisli, E. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24, 765–776 (2005).
[Crossref]

Baker, H.

R. Bolles, H. Baker, and D. Marimont, “Epipolar-plane image analysis: an approach to determining structure from motion,” Int. J. Comput. Vis. 1(1), 7–55 (1987).
[Crossref]

Bando, Y.

K. Marwah, G. Wetzstein, Y. Bando, and R. Raskar, “Compressive light field photography using overcomplete dictionaries and optimized projections,” ACM Trans. Graph. 32, 1 (2013).
[Crossref]

Barth, A.

B. Wilburn, N. Joshi, V. Vaisli, E. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24, 765–776 (2005).
[Crossref]

Ben-Ezra, M.

M. Ben-Ezra and S. K. Nayar, “Motion-based motion deblurring,” IEEE Trans. Pattern Anal. Mach. Intell. 26, 689–698 (2004).
[Crossref]

M. Ben-Ezra and S. K. Nayar, “Motion deblurring using hybrid imaging,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2003), pp. 657–664.

R. Kawakami, J. Wright, Y. W. Tai, Y. Matsushita, M. Ben-Ezra, and K. Ikeuchi, “High-resolution hyperspectral imaging via matrix factorization,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2011), pp. 2329–2336.

Berkner, K.

Bisho, T. E.

T. E. Bisho, S. Zanetti, and P. Favaro, “Light field superresolution,” in Proceedings of IEEE International Conference on Computational Photography (Institute of Electrical and Electronics Engineers, 2009), pp. 1–9.

Bleyer, M.

C. Rhemann, A. Hosni, M. Bleyer, C. Rother, and M. Gelautz, “Fast cost-volume filtering for visual correspondence and beyond,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2011), pp. 3017–3024.

Bolles, R.

R. Bolles, H. Baker, and D. Marimont, “Epipolar-plane image analysis: an approach to determining structure from motion,” Int. J. Comput. Vis. 1(1), 7–55 (1987).
[Crossref]

Boominathan, V.

V. Boominathan, K. Mitra, and A. Veeraraghavan, “Improving resolution and depth-of-field of light field cameras using a hybrid imaging system,” in Proceedings of the IEEE International Conference on Computational Photography (Institute of Electrical and Electronics Engineers, 2014), pp. 1–10.

Bredif, M.

R. Ng, M. Levoy, M. Bredif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” (Stanford University, 2005).

Brenner, K. H.

Brown, M. S.

Y. W. Tai, H. Du, M. S. Brown, and S. Lin, “Correction of spatially varying image and video motion blur using a hybrid camera,” IEEE Trans. Pattern Anal. Mach. Intell. 32, 1012–1028 (2010).
[Crossref]

Broxton, M.

Bueliler, C.

J. C. Yang, M. Everett, C. Bueliler, and L. Mcmillan, “A real-time distributed light field camera,” in Proceedings of the 13th Eurographics Workshop on Rendering (Eurographics Association, 2002), pp. 77–86.

Cai, Q.

E. Tola, C. Zhang, Q. Cai, and Z. Zhang, “Virtual view generation with a hybrid camera array,” (Ecole Polytechnique F´ed´erale de Lausanne, 2009).

Cao, X.

X. Cao, Z. Geng, and T. Li, “Dictionary-based light field acquisition using sparse camera array,” Opt. Express 22, 24081–24095 (2014).
[Crossref]

X. Cao, X. Tong, Q. Dai, and S. Lin, “High resolution multi-spectral video capture with a hybrid camera system,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2011), pp. 297–304.

Chai, J.

F. Li, J. Yu, and J. Chai, “A hybrid camera for motion deblurring and depth map super-resolution,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2008), pp. 1–8.

Chen, H. H.

C. K. Liang, T. H. Lin, B. Y. Wong, C. Liu, and H. H. Chen, “Programmable aperture photography: multiplexed light field acquisition,” ACM Trans. Graph. 27, 1 (2008).
[Crossref]

Chen, T.

C. Zhang and T. Chen, A Self-Reconfigurable Camera Array (ACM, 2004), pp. 151–162.

Cho, D.

D. Cho, M. Lee, S. Kim, and Y. W. Tai, “Modeling the calibration pipeline of the Lytro camera for high quality light-field image reconstruction,” in Proceedings of IEEE International Conference on Computer Vision (Institute of Electrical and Electronics Engineers, 2013), pp. 3280–3287.

Cohen, M.

S. Gortler, R. Grzeszczuk, R. Szelinski, and M. Cohen, The Lumigraph (ACM, 1996), pp. 43–54.

Cohen, M. F.

J. Kopf, M. F. Cohen, D. Lischinski, and M. Uyttendaele, “Joint bilateral upsampling,” ACM Trans. Graph. 26, 96 (2007).
[Crossref]

Cohen, N.

Dai, Q.

X. Lin, J. Wu, G. Zheng, and Q. Dai, “Camera array based light field microscopy,” Biomed. Opt. Express 6, 3179–3189 (2015).
[Crossref]

X. Cao, X. Tong, Q. Dai, and S. Lin, “High resolution multi-spectral video capture with a hybrid camera system,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2011), pp. 297–304.

X. Lin, J. Suo, G. Wetzstein, Q. Dai, and R. Raskar, “Coded focal stack photography,” in Proceedings of IEEE International Conference on Computational Photography (Institute of Electrical and Electronics Engineers, 2013), pp. 1–9.

Z. Zhang, Y. Liu, and Q. Dai, “Light field from micro-baseline image pair,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2015), pp. 3800–3809.

Dansereau, D. G.

D. G. Dansereau, O. Pizarro, and S. B. Williams, “Decoding, calibration and rectification for lenselet-based plenoptic cameras,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2013), pp. 1027–1034.

Deisseroth, K.

Didyk, P.

P. Didyk, P. Sitthi-Amorn, W. T. Freeman, F. Durand, and W. Matusik, “Joint view expansion and filtering for automultiscopic 3D displays,” ACM Trans. Graph. 32, 1–8 (2013).
[Crossref]

Du, H.

Y. W. Tai, H. Du, M. S. Brown, and S. Lin, “Correction of spatially varying image and video motion blur using a hybrid camera,” IEEE Trans. Pattern Anal. Mach. Intell. 32, 1012–1028 (2010).
[Crossref]

Durand, F.

N. Wadhwa, M. Rubinstein, F. Durand, and W. T. Freeman, “Phase-based video motion processing,” ACM Trans. Graph. 32, 1 (2013).
[Crossref]

P. Didyk, P. Sitthi-Amorn, W. T. Freeman, F. Durand, and W. Matusik, “Joint view expansion and filtering for automultiscopic 3D displays,” ACM Trans. Graph. 32, 1–8 (2013).
[Crossref]

Duval, G.

R. Ng, M. Levoy, M. Bredif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” (Stanford University, 2005).

Everett, M.

J. C. Yang, M. Everett, C. Bueliler, and L. Mcmillan, “A real-time distributed light field camera,” in Proceedings of the 13th Eurographics Workshop on Rendering (Eurographics Association, 2002), pp. 77–86.

Favaro, P.

T. E. Bisho, S. Zanetti, and P. Favaro, “Light field superresolution,” in Proceedings of IEEE International Conference on Computational Photography (Institute of Electrical and Electronics Engineers, 2009), pp. 1–9.

Fleischer, J.

C. H. Lu, S. Muenzel, and J. Fleischer, High-Resolution Light-field Microscopy (Optical Society of America, 2013), paper CTh3B.2.

Freeman, W. T.

P. Didyk, P. Sitthi-Amorn, W. T. Freeman, F. Durand, and W. Matusik, “Joint view expansion and filtering for automultiscopic 3D displays,” ACM Trans. Graph. 32, 1–8 (2013).
[Crossref]

N. Wadhwa, M. Rubinstein, F. Durand, and W. T. Freeman, “Phase-based video motion processing,” ACM Trans. Graph. 32, 1 (2013).
[Crossref]

E. P. Simoncelli and W. T. Freeman, “The steerable pyramid: a flexible architecture for multi-scale derivative computation,” in Proceedings of IEEE International Conference on Image Processing (Institute of Electrical and Electronics Engineers, 1995), pp. 444–447.

Gelautz, M.

C. Rhemann, A. Hosni, M. Bleyer, C. Rother, and M. Gelautz, “Fast cost-volume filtering for visual correspondence and beyond,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2011), pp. 3017–3024.

Geng, Z.

Georgiev, T.

A. Lumsdaine and T. Georgiev, “The focused plenoptic camera,” in Proceedings of the IEEE International Conference on Computational Photography (Institute of Electrical and Electronics Engineers, 2009). pp. 1–8.

Z. Yu, J. Yu, A. Lumsdaine, and T. Georgiev, “An analysis on color demosaicing in plenoptic cameras,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2012), pp. 901–908.

Gershun, A.

A. Gershun, “The light field,” J. Math. Phys. 18, 51–151 (1939).
[Crossref]

Goldluecke, B.

S. Wanner and B. Goldluecke, “Globally consistent depth labeling of 4D light field,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2012), pp. 41–48.

S. Wanner and B. Goldluecke, “Spatial and angular variational super-resolution of 4D light fields,” in Proceedings of European Conference on Computer Vision (Springer, 2012). pp. 608–621.

Gortler, S.

S. Gortler, R. Grzeszczuk, R. Szelinski, and M. Cohen, The Lumigraph (ACM, 1996), pp. 43–54.

Grosenick, L.

Grzeszczuk, R.

S. Gortler, R. Grzeszczuk, R. Szelinski, and M. Cohen, The Lumigraph (ACM, 1996), pp. 43–54.

Guo, Y.

H. S. Sawhney, Y. Guo, K. Hanna, and R. Kumar, “Hybrid stereo camera: an IBR approach for synthesis of very high resolution stereoscopic image sequences,” in Proceedings of ACM SIGGRAPH (ACM, 2001), pp. 451–460.

Hanna, K.

H. S. Sawhney, Y. Guo, K. Hanna, and R. Kumar, “Hybrid stereo camera: an IBR approach for synthesis of very high resolution stereoscopic image sequences,” in Proceedings of ACM SIGGRAPH (ACM, 2001), pp. 451–460.

Hanrahan, P.

M. Levoy and P. Hanrahan, Light Field Rendering (ACM, 1996), pp. 31–42.

R. Ng, M. Levoy, M. Bredif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” (Stanford University, 2005).

Horowitz, M.

B. Wilburn, N. Joshi, V. Vaisli, E. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24, 765–776 (2005).
[Crossref]

R. Ng, M. Levoy, M. Bredif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” (Stanford University, 2005).

Hosni, A.

C. Rhemann, A. Hosni, M. Bleyer, C. Rother, and M. Gelautz, “Fast cost-volume filtering for visual correspondence and beyond,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2011), pp. 3017–3024.

Ikeuchi, K.

R. Kawakami, J. Wright, Y. W. Tai, Y. Matsushita, M. Ben-Ezra, and K. Ikeuchi, “High-resolution hyperspectral imaging via matrix factorization,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2011), pp. 2329–2336.

Ives, F.

F. Ives, “Parallax stereogram and process of making same,” U.S. patent725,567 (14April1903).

Joshi, N.

B. Wilburn, N. Joshi, V. Vaisli, E. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24, 765–776 (2005).
[Crossref]

V. Vaish, B. Wilbum, N. Joshi, and M. Levoy, “Using plane + parallax for calibrating dense camera arrays,” in Proceedings of IEEE Conference of Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2004), pp. 2–9.

Junker, A.

Kawakami, R.

R. Kawakami, J. Wright, Y. W. Tai, Y. Matsushita, M. Ben-Ezra, and K. Ikeuchi, “High-resolution hyperspectral imaging via matrix factorization,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2011), pp. 2329–2336.

Kim, S.

D. Cho, M. Lee, S. Kim, and Y. W. Tai, “Modeling the calibration pipeline of the Lytro camera for high quality light-field image reconstruction,” in Proceedings of IEEE International Conference on Computer Vision (Institute of Electrical and Electronics Engineers, 2013), pp. 3280–3287.

Kopf, J.

J. Kopf, M. F. Cohen, D. Lischinski, and M. Uyttendaele, “Joint bilateral upsampling,” ACM Trans. Graph. 26, 96 (2007).
[Crossref]

Kumar, R.

H. S. Sawhney, Y. Guo, K. Hanna, and R. Kumar, “Hybrid stereo camera: an IBR approach for synthesis of very high resolution stereoscopic image sequences,” in Proceedings of ACM SIGGRAPH (ACM, 2001), pp. 451–460.

Lee, M.

D. Cho, M. Lee, S. Kim, and Y. W. Tai, “Modeling the calibration pipeline of the Lytro camera for high quality light-field image reconstruction,” in Proceedings of IEEE International Conference on Computer Vision (Institute of Electrical and Electronics Engineers, 2013), pp. 3280–3287.

Levoy, M.

M. Broxton, L. Grosenick, S. Yang, N. Cohen, A. Andalman, K. Deisseroth, and M. Levoy, “Wave optics theory and 3d deconvolution for the light field microscope,” Opt. Express 21, 25418–25439 (2013).
[Crossref]

B. Wilburn, N. Joshi, V. Vaisli, E. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24, 765–776 (2005).
[Crossref]

V. Vaish, B. Wilbum, N. Joshi, and M. Levoy, “Using plane + parallax for calibrating dense camera arrays,” in Proceedings of IEEE Conference of Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2004), pp. 2–9.

M. Levoy and P. Hanrahan, Light Field Rendering (ACM, 1996), pp. 31–42.

R. Ng, M. Levoy, M. Bredif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” (Stanford University, 2005).

Li, F.

F. Li, J. Yu, and J. Chai, “A hybrid camera for motion deblurring and depth map super-resolution,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2008), pp. 1–8.

Li, T.

Liang, C. K.

C. K. Liang and R. Ramamoorthi, “A light transport framework for lenslet light field cameras,” ACM Trans. Graph. 34, 16–21 (2015).
[Crossref]

C. K. Liang, T. H. Lin, B. Y. Wong, C. Liu, and H. H. Chen, “Programmable aperture photography: multiplexed light field acquisition,” ACM Trans. Graph. 27, 1 (2008).
[Crossref]

Lin, S.

Y. W. Tai, H. Du, M. S. Brown, and S. Lin, “Correction of spatially varying image and video motion blur using a hybrid camera,” IEEE Trans. Pattern Anal. Mach. Intell. 32, 1012–1028 (2010).
[Crossref]

X. Cao, X. Tong, Q. Dai, and S. Lin, “High resolution multi-spectral video capture with a hybrid camera system,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2011), pp. 297–304.

Lin, T. H.

C. K. Liang, T. H. Lin, B. Y. Wong, C. Liu, and H. H. Chen, “Programmable aperture photography: multiplexed light field acquisition,” ACM Trans. Graph. 27, 1 (2008).
[Crossref]

Lin, X.

X. Lin, J. Wu, G. Zheng, and Q. Dai, “Camera array based light field microscopy,” Biomed. Opt. Express 6, 3179–3189 (2015).
[Crossref]

X. Lin, J. Suo, G. Wetzstein, Q. Dai, and R. Raskar, “Coded focal stack photography,” in Proceedings of IEEE International Conference on Computational Photography (Institute of Electrical and Electronics Engineers, 2013), pp. 1–9.

Lippmann, G.

G. Lippmann, “Epreuves reversibles donnant la sensation du relief,” J. Phys. Theor. Appl. 7, 821–825 (1908).
[Crossref]

Lischinski, D.

J. Kopf, M. F. Cohen, D. Lischinski, and M. Uyttendaele, “Joint bilateral upsampling,” ACM Trans. Graph. 26, 96 (2007).
[Crossref]

Liu, C.

C. K. Liang, T. H. Lin, B. Y. Wong, C. Liu, and H. H. Chen, “Programmable aperture photography: multiplexed light field acquisition,” ACM Trans. Graph. 27, 1 (2008).
[Crossref]

Liu, Y.

Z. Zhang, Y. Liu, and Q. Dai, “Light field from micro-baseline image pair,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2015), pp. 3800–3809.

Lu, C. H.

C. H. Lu, S. Muenzel, and J. Fleischer, High-Resolution Light-field Microscopy (Optical Society of America, 2013), paper CTh3B.2.

Lumsdaine, A.

Z. Yu, J. Yu, A. Lumsdaine, and T. Georgiev, “An analysis on color demosaicing in plenoptic cameras,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2012), pp. 901–908.

A. Lumsdaine and T. Georgiev, “The focused plenoptic camera,” in Proceedings of the IEEE International Conference on Computational Photography (Institute of Electrical and Electronics Engineers, 2009), pp. 1–8.

Magdaleno, E.

F. Pérez, A. Pérez, M. Rodríguez, and E. Magdaleno, “Fourier slice super-resolution in plenoptic cameras,” in Proceedings of the IEEE International Conference on Computational Photography (Institute of Electrical and Electronics Engineers, 2012), pp. 1–11.

Marimont, D.

R. Bolles, H. Baker, and D. Marimont, “Epipolar-plane image analysis: an approach to determining structure from motion,” Int. J. Comput. Vis. 1(1), 7–55 (1987).
[Crossref]

Marwah, K.

K. Marwah, G. Wetzstein, Y. Bando, and R. Raskar, “Compressive light field photography using overcomplete dictionaries and optimized projections,” ACM Trans. Graph. 32, 1 (2013).
[Crossref]

Matsushita, Y.

R. Kawakami, J. Wright, Y. W. Tai, Y. Matsushita, M. Ben-Ezra, and K. Ikeuchi, “High-resolution hyperspectral imaging via matrix factorization,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2011), pp. 2329–2336.

Matusik, W.

P. Didyk, P. Sitthi-Amorn, W. T. Freeman, F. Durand, and W. Matusik, “Joint view expansion and filtering for automultiscopic 3D displays,” ACM Trans. Graph. 32, 1–8 (2013).
[Crossref]

McMillan, L.

J. C. Yang, M. Everett, C. Buehler, and L. McMillan, “A real-time distributed light field camera,” in Proceedings of the 13th Eurographics Workshop on Rendering (Eurographics Association, 2002), pp. 77–86.

Mitra, K.

V. Boominathan, K. Mitra, and A. Veeraraghavan, “Improving resolution and depth-of-field of light field cameras using a hybrid imaging system,” in Proceedings of the IEEE International Conference on Computational Photography (Institute of Electrical and Electronics Engineers, 2014), pp. 1–10.

K. Mitra and A. Veeraraghavan, “Light field denoising, light field super-resolution and stereo camera based refocusing using a GMM light field patch prior,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition Workshop (Institute of Electrical and Electronics Engineers, 2012), pp. 22–28.

Mohan, A.

A. Veeraraghavan, R. Raskar, A. Agrawal, A. Mohan, and J. Tumblin, “Dappled photography: mask enhanced cameras for heterodyned light fields and coded aperture refocusing,” ACM Trans. Graph. 26, 69 (2007).
[Crossref]

Montilla, I.

Muenzel, S.

C. H. Lu, S. Muenzel, and J. Fleischer, High-Resolution Light-field Microscopy (Optical Society of America, 2013), paper CTh3B.2.

Nayar, S. K.

M. Ben-Ezra and S. K. Nayar, “Motion-based motion deblurring,” IEEE Trans. Pattern Anal. Mach. Intell. 26, 689–698 (2004).
[Crossref]

M. Ben-Ezra and S. K. Nayar, “Motion deblurring using hybrid imaging,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2003), pp. 657–664.

Ng, R.

R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” (Stanford University, 2005).

Pérez, A.

F. Pérez, A. Pérez, M. Rodríguez, and E. Magdaleno, “Fourier slice super-resolution in plenoptic cameras,” in Proceedings of the IEEE International Conference on Computational Photography (Institute of Electrical and Electronics Engineers, 2012), pp. 1–11.

Pérez, F.

F. Pérez, A. Pérez, M. Rodríguez, and E. Magdaleno, “Fourier slice super-resolution in plenoptic cameras,” in Proceedings of the IEEE International Conference on Computational Photography (Institute of Electrical and Electronics Engineers, 2012), pp. 1–11.

Pizarro, O.

D. G. Dansereau, O. Pizarro, and S. B. Williams, “Decoding, calibration and rectification for lenselet-based plenoptic cameras,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2013), pp. 1027–1034.

Platt, B. C.

B. C. Platt and R. Shack, “History and principles of Shack–Hartmann wavefront sensing,” J. Refract. Surg. 17, S573–S577 (2001).

Ramamoorthi, R.

C. K. Liang and R. Ramamoorthi, “A light transport framework for lenslet light field cameras,” ACM Trans. Graph. 34, 16–21 (2015).
[Crossref]

Raskar, R.

K. Marwah, G. Wetzstein, Y. Bando, and R. Raskar, “Compressive light field photography using overcomplete dictionaries and optimized projections,” ACM Trans. Graph. 32, 1 (2013).
[Crossref]

A. Veeraraghavan, R. Raskar, A. Agrawal, A. Mohan, and J. Tumblin, “Dappled photography: mask enhanced cameras for heterodyned light fields and coded aperture refocusing,” ACM Trans. Graph. 26, 69 (2007).
[Crossref]

X. Lin, J. Suo, G. Wetzstein, Q. Dai, and R. Raskar, “Coded focal stack photography,” in Proceedings of IEEE International Conference on Computational Photography (Institute of Electrical and Electronics Engineers, 2013), pp. 1–9.

Rhemann, C.

C. Rhemann, A. Hosni, M. Bleyer, C. Rother, and M. Gelautz, “Fast cost-volume filtering for visual correspondence and beyond,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2011), pp. 3017–3024.

Rodríguez, M.

F. Pérez, A. Pérez, M. Rodríguez, and E. Magdaleno, “Fourier slice super-resolution in plenoptic cameras,” in Proceedings of the IEEE International Conference on Computational Photography (Institute of Electrical and Electronics Engineers, 2012), pp. 1–11.

Rodriguez-Ramos, J. M.

Rodriguez-Ramos, L. F.

Rother, C.

C. Rhemann, A. Hosni, M. Bleyer, C. Rother, and M. Gelautz, “Fast cost-volume filtering for visual correspondence and beyond,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2011), pp. 3017–3024.

Rubinstein, M.

N. Wadhwa, M. Rubinstein, F. Durand, and W. T. Freeman, “Phase-based video motion processing,” ACM Trans. Graph. 32, 1 (2013).
[Crossref]

Sawhney, H. S.

H. S. Sawhney, Y. Guo, K. Hanna, and R. Kumar, “Hybrid stereo camera: an IBR approach for synthesis of very high resolution stereoscopic image sequences,” in Proceedings of ACM SIGGRAPH (ACM, 2001), pp. 451–460.

Shack, R.

B. C. Platt and R. Shack, “History and principles of Shack–Hartmann wavefront sensing,” J. Refract. Surg. 17, S573–S577 (2001).

Shroff, S. A.

Simoncelli, E. P.

E. P. Simoncelli and W. T. Freeman, “The steerable pyramid: a flexible architecture for multi-scale derivative computation,” in Proceedings of IEEE International Conference on Image Processing (Institute of Electrical and Electronics Engineers, 1995), pp. 444–447.

Sitthi-Amorn, P.

P. Didyk, P. Sitthi-Amorn, W. T. Freeman, F. Durand, and W. Matusik, “Joint view expansion and filtering for automultiscopic 3D displays,” ACM Trans. Graph. 32, 1–8 (2013).
[Crossref]

Stenau, T.

Suo, J.

X. Lin, J. Suo, G. Wetzstein, Q. Dai, and R. Raskar, “Coded focal stack photography,” in Proceedings of IEEE International Conference on Computational Photography (Institute of Electrical and Electronics Engineers, 2013), pp. 1–9.

Szeliski, R.

S. Gortler, R. Grzeszczuk, R. Szeliski, and M. Cohen, The Lumigraph (ACM, 1996), pp. 43–54.

Tai, Y. W.

Y. W. Tai, H. Du, M. S. Brown, and S. Lin, “Correction of spatially varying image and video motion blur using a hybrid camera,” IEEE Trans. Pattern Anal. Mach. Intell. 32, 1012–1028 (2010).
[Crossref]

D. Cho, M. Lee, S. Kim, and Y. W. Tai, “Modeling the calibration pipeline of the Lytro camera for high quality light-field image reconstruction,” in Proceedings of IEEE International Conference on Computer Vision (Institute of Electrical and Electronics Engineers, 2013), pp. 3280–3287.

R. Kawakami, J. Wright, Y. W. Tai, Y. Matsushita, M. Ben-Ezra, and K. Ikeuchi, “High-resolution hyperspectral imaging via matrix factorization,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2011), pp. 2329–2336.

Talvala, E.

B. Wilburn, N. Joshi, V. Vaish, E. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24, 765–776 (2005).
[Crossref]

Tola, E.

E. Tola, C. Zhang, Q. Cai, and Z. Zhang, “Virtual view generation with a hybrid camera array,” (École Polytechnique Fédérale de Lausanne, 2009).

Tong, X.

X. Cao, X. Tong, Q. Dai, and S. Lin, “High resolution multi-spectral video capture with a hybrid camera system,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2011), pp. 297–304.

Trujillo-Sevilla, J. M.

Tumblin, J.

A. Veeraraghavan, R. Raskar, A. Agrawal, A. Mohan, and J. Tumblin, “Dappled photography: mask enhanced cameras for heterodyned light fields and coded aperture refocusing,” ACM Trans. Graph. 26, 69 (2007).
[Crossref]

Uyttendaele, M.

J. Kopf, M. F. Cohen, D. Lischinski, and M. Uyttendaele, “Joint bilateral upsampling,” ACM Trans. Graph. 26, 96 (2007).
[Crossref]

Vaish, V.

V. Vaish, B. Wilburn, N. Joshi, and M. Levoy, “Using plane + parallax for calibrating dense camera arrays,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2004), pp. 2–9.

B. Wilburn, N. Joshi, V. Vaish, E. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24, 765–776 (2005).
[Crossref]

Veeraraghavan, A.

A. Veeraraghavan, R. Raskar, A. Agrawal, A. Mohan, and J. Tumblin, “Dappled photography: mask enhanced cameras for heterodyned light fields and coded aperture refocusing,” ACM Trans. Graph. 26, 69 (2007).
[Crossref]

V. Boominathan, K. Mitra, and A. Veeraraghavan, “Improving resolution and depth-of-field of light field cameras using a hybrid imaging system,” in Proceedings of the IEEE International Conference on Computational Photography (Institute of Electrical and Electronics Engineers, 2014), pp. 1–10.

K. Mitra and A. Veeraraghavan, “Light field denoising, light field super-resolution and stereo camera based refocusing using a GMM light field patch prior,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition Workshop (Institute of Electrical and Electronics Engineers, 2012), pp. 22–28.

Wadhwa, N.

N. Wadhwa, M. Rubinstein, F. Durand, and W. T. Freeman, “Phase-based video motion processing,” ACM Trans. Graph. 32, 1 (2013).
[Crossref]

Wang, Y. A.

E. H. Adelson and Y. A. Wang, “Single lens stereo with a plenoptic camera,” IEEE Trans. Pattern Anal. Mach. Intell. 14, 99–106 (1992).
[Crossref]

Wanner, S.

S. Wanner and B. Goldluecke, “Globally consistent depth labeling of 4D light field,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2012), pp. 41–48.

S. Wanner and B. Goldluecke, “Spatial and angular variational super-resolution of 4D light fields,” in Proceedings of European Conference on Computer Vision (Springer, 2012), pp. 608–621.

Wetzstein, G.

K. Marwah, G. Wetzstein, Y. Bando, and R. Raskar, “Compressive light field photography using overcomplete dictionaries and optimized projections,” ACM Trans. Graph. 32, 1 (2013).
[Crossref]

X. Lin, J. Suo, G. Wetzstein, Q. Dai, and R. Raskar, “Coded focal stack photography,” in Proceedings of IEEE International Conference on Computational Photography (Institute of Electrical and Electronics Engineers, 2013), pp. 1–9.

Wilburn, B.

V. Vaish, B. Wilburn, N. Joshi, and M. Levoy, “Using plane + parallax for calibrating dense camera arrays,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2004), pp. 2–9.

B. Wilburn, N. Joshi, V. Vaish, E. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24, 765–776 (2005).
[Crossref]

Williams, S. B.

D. G. Dansereau, O. Pizarro, and S. B. Williams, “Decoding, calibration and rectification for lenselet-based plenoptic cameras,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2013), pp. 1027–1034.

Wong, B. Y.

C. K. Liang, T. H. Lin, B. Y. Wong, C. Liu, and H. H. Chen, “Programmable aperture photography: multiplexed light field acquisition,” ACM Trans. Graph. 27, 1 (2008).
[Crossref]

Wright, J.

R. Kawakami, J. Wright, Y. W. Tai, Y. Matsushita, M. Ben-Ezra, and K. Ikeuchi, “High-resolution hyperspectral imaging via matrix factorization,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2011), pp. 2329–2336.

Wu, J.

X. Lin, J. Wu, G. Zheng, and Q. Dai, “Camera array based light field microscopy,” Biomed. Opt. Express 6, 3179–3189 (2015).
[Crossref]

Yang, J. C.

J. C. Yang, M. Everett, C. Bueliler, and L. Mcmillan, “A real-time distributed light field camera,” in Proceedings of the 13th Eurographics Workshop on Rendering (Eurographics Association, 2002), pp. 77–86.

Yang, S.

Yu, J.

Z. Yu, J. Yu, A. Lumsdaine, and T. Georgiev, “An analysis on color demosaicing in plenoptic cameras,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2012), pp. 901–908.

F. Li, J. Yu, and J. Chai, “A hybrid camera for motion deblurring and depth map super-resolution,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2008), pp. 1–8.

Yu, Z.

Z. Yu, J. Yu, A. Lumsdaine, and T. Georgiev, “An analysis on color demosaicing in plenoptic cameras,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2012), pp. 901–908.

Zanetti, S.

T. E. Bishop, S. Zanetti, and P. Favaro, “Light field superresolution,” in Proceedings of IEEE International Conference on Computational Photography (Institute of Electrical and Electronics Engineers, 2009), pp. 1–9.

Zhang, C.

E. Tola, C. Zhang, Q. Cai, and Z. Zhang, “Virtual view generation with a hybrid camera array,” (École Polytechnique Fédérale de Lausanne, 2009).

C. Zhang and T. Chen, A Self-Reconfigurable Camera Array (ACM, 2004), pp. 151–162.

Zhang, Z.

Z. Zhang, Y. Liu, and Q. Dai, “Light field from micro-baseline image pair,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2015), pp. 3800–3809.

E. Tola, C. Zhang, Q. Cai, and Z. Zhang, “Virtual view generation with a hybrid camera array,” (École Polytechnique Fédérale de Lausanne, 2009).

Zheng, G.

X. Lin, J. Wu, G. Zheng, and Q. Dai, “Camera array based light field microscopy,” Biomed. Opt. Express 6, 3179–3189 (2015).
[Crossref]

ACM Trans. Graph. (8)

B. Wilburn, N. Joshi, V. Vaish, E. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24, 765–776 (2005).
[Crossref]

A. Veeraraghavan, R. Raskar, A. Agrawal, A. Mohan, and J. Tumblin, “Dappled photography: mask enhanced cameras for heterodyned light fields and coded aperture refocusing,” ACM Trans. Graph. 26, 69 (2007).
[Crossref]

C. K. Liang, T. H. Lin, B. Y. Wong, C. Liu, and H. H. Chen, “Programmable aperture photography: multiplexed light field acquisition,” ACM Trans. Graph. 27, 1 (2008).
[Crossref]

K. Marwah, G. Wetzstein, Y. Bando, and R. Raskar, “Compressive light field photography using overcomplete dictionaries and optimized projections,” ACM Trans. Graph. 32, 1 (2013).
[Crossref]

C. K. Liang and R. Ramamoorthi, “A light transport framework for lenslet light field cameras,” ACM Trans. Graph. 34, 16–21 (2015).
[Crossref]

P. Didyk, P. Sitthi-Amorn, W. T. Freeman, F. Durand, and W. Matusik, “Joint view expansion and filtering for automultiscopic 3D displays,” ACM Trans. Graph. 32, 1–8 (2013).
[Crossref]

N. Wadhwa, M. Rubinstein, F. Durand, and W. T. Freeman, “Phase-based video motion processing,” ACM Trans. Graph. 32, 1 (2013).
[Crossref]

J. Kopf, M. F. Cohen, D. Lischinski, and M. Uyttendaele, “Joint bilateral upsampling,” ACM Trans. Graph. 26, 96 (2007).
[Crossref]

Appl. Opt. (2)

Biomed. Opt. Express (1)

X. Lin, J. Wu, G. Zheng, and Q. Dai, “Camera array based light field microscopy,” Biomed. Opt. Express 6, 3179–3189 (2015).
[Crossref]

IEEE Trans. Pattern Anal. Mach. Intell. (3)

M. Ben-Ezra and S. K. Nayar, “Motion-based motion deblurring,” IEEE Trans. Pattern Anal. Mach. Intell. 26, 689–698 (2004).
[Crossref]

E. H. Adelson and Y. A. Wang, “Single lens stereo with a plenoptic camera,” IEEE Trans. Pattern Anal. Mach. Intell. 14, 99–106 (1992).
[Crossref]

Y. W. Tai, H. Du, M. S. Brown, and S. Lin, “Correction of spatially varying image and video motion blur using a hybrid camera,” IEEE Trans. Pattern Anal. Mach. Intell. 32, 1012–1028 (2010).
[Crossref]

Int. J. Comput. Vis. (1)

R. Bolles, H. Baker, and D. Marimont, “Epipolar-plane image analysis: an approach to determining structure from motion,” Int. J. Comput. Vis. 1(1), 7–55 (1987).
[Crossref]

J. Math. Phys. (1)

A. Gershun, “The light field,” J. Math. Phys. 18, 51–151 (1939).
[Crossref]

J. Phys. Theor. Appl. (1)

G. Lippmann, “Epreuves reversibles donnant la sensation du relief,” J. Phys. Theor. Appl. 7, 821–825 (1908).
[Crossref]

J. Refract. Surg. (1)

B. C. Platt and R. Shack, “History and principles of Shack–Hartmann wavefront sensing,” J. Refract. Surg. 17, S573–S577 (2001).

Opt. Express (2)

Opt. Lett. (1)

Other (29)

T. E. Bishop, S. Zanetti, and P. Favaro, “Light field superresolution,” in Proceedings of IEEE International Conference on Computational Photography (Institute of Electrical and Electronics Engineers, 2009), pp. 1–9.

X. Lin, J. Suo, G. Wetzstein, Q. Dai, and R. Raskar, “Coded focal stack photography,” in Proceedings of IEEE International Conference on Computational Photography (Institute of Electrical and Electronics Engineers, 2013), pp. 1–9.

F. Li, J. Yu, and J. Chai, “A hybrid camera for motion deblurring and depth map super-resolution,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2008), pp. 1–8.

H. S. Sawhney, Y. Guo, K. Hanna, and R. Kumar, “Hybrid stereo camera: an IBR approach for synthesis of very high resolution stereoscopic image sequences,” in Proceedings of ACM SIGGRAPH (ACM, 2001), pp. 451–460.

M. Ben-Ezra and S. K. Nayar, “Motion deblurring using hybrid imaging,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2003), pp. 657–664.

Z. Yu, J. Yu, A. Lumsdaine, and T. Georgiev, “An analysis on color demosaicing in plenoptic cameras,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2012), pp. 901–908.

S. Wanner and B. Goldluecke, “Spatial and angular variational super-resolution of 4D light fields,” in Proceedings of European Conference on Computer Vision (Springer, 2012), pp. 608–621.

K. Mitra and A. Veeraraghavan, “Light field denoising, light field super-resolution and stereo camera based refocusing using a GMM light field patch prior,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition Workshop (Institute of Electrical and Electronics Engineers, 2012), pp. 22–28.

F. Pérez, A. Pérez, M. Rodríguez, and E. Magdaleno, “Fourier slice super-resolution in plenoptic cameras,” in Proceedings of the IEEE International Conference on Computational Photography (Institute of Electrical and Electronics Engineers, 2012), pp. 1–11.

D. Cho, M. Lee, S. Kim, and Y. W. Tai, “Modeling the calibration pipeline of the Lytro camera for high quality light-field image reconstruction,” in Proceedings of IEEE International Conference on Computer Vision (Institute of Electrical and Electronics Engineers, 2013), pp. 3280–3287.

A. Lumsdaine and T. Georgiev, “The focused plenoptic camera,” in Proceedings of the IEEE International Conference on Computational Photography (Institute of Electrical and Electronics Engineers, 2009), pp. 1–8.

S. Gortler, R. Grzeszczuk, R. Szeliski, and M. Cohen, The Lumigraph (ACM, 1996), pp. 43–54.

F. Ives, “Parallax stereogram and process of making same,” U.S. patent 725,567 (14 April 1903).

V. Vaish, B. Wilburn, N. Joshi, and M. Levoy, “Using plane + parallax for calibrating dense camera arrays,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2004), pp. 2–9.

M. Levoy and P. Hanrahan, Light Field Rendering (ACM, 1996), pp. 31–42.

R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” (Stanford University, 2005).

Lytro, http://www.lytro.com.

S. Wanner and B. Goldluecke, “Globally consistent depth labeling of 4D light field,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2012), pp. 41–48.

D. G. Dansereau, O. Pizarro, and S. B. Williams, “Decoding, calibration and rectification for lenselet-based plenoptic cameras,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2013), pp. 1027–1034.

Z. Zhang, Y. Liu, and Q. Dai, “Light field from micro-baseline image pair,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2015), pp. 3800–3809.

E. P. Simoncelli and W. T. Freeman, “The steerable pyramid: a flexible architecture for multi-scale derivative computation,” in Proceedings of IEEE International Conference on Image Processing (Institute of Electrical and Electronics Engineers, 1995), pp. 444–447.

C. Rhemann, A. Hosni, M. Bleyer, C. Rother, and M. Gelautz, “Fast cost-volume filtering for visual correspondence and beyond,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2011), pp. 3017–3024.

V. Boominathan, K. Mitra, and A. Veeraraghavan, “Improving resolution and depth-of-field of light field cameras using a hybrid imaging system,” in Proceedings of the IEEE International Conference on Computational Photography (Institute of Electrical and Electronics Engineers, 2014), pp. 1–10.

J. C. Yang, M. Everett, C. Buehler, and L. McMillan, “A real-time distributed light field camera,” in Proceedings of the 13th Eurographics Workshop on Rendering (Eurographics Association, 2002), pp. 77–86.

C. Zhang and T. Chen, A Self-Reconfigurable Camera Array (ACM, 2004), pp. 151–162.

R. Kawakami, J. Wright, Y. W. Tai, Y. Matsushita, M. Ben-Ezra, and K. Ikeuchi, “High-resolution hyperspectral imaging via matrix factorization,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2011), pp. 2329–2336.

X. Cao, X. Tong, Q. Dai, and S. Lin, “High resolution multi-spectral video capture with a hybrid camera system,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, 2011), pp. 297–304.

E. Tola, C. Zhang, Q. Cai, and Z. Zhang, “Virtual view generation with a hybrid camera array,” (École Polytechnique Fédérale de Lausanne, 2009).

C. H. Lu, S. Muenzel, and J. Fleischer, High-Resolution Light-field Microscopy (Optical Society of America, 2013), paper CTh3B.2.

Supplementary Material (2)

Visualization 1: AVI (892 KB). Video corresponding to the high-resolution reconstructed light fields in Fig. 9(a).
Visualization 2: AVI (751 KB). Video corresponding to the high-resolution reconstructed light fields in Fig. 9(b).



Figures (15)

Fig. 1.
Fig. 1.

Structure of the plenoptic camera.

Fig. 2.
Fig. 2.

(a) The shifted signal (blue) is obtained by shifting the original non-bandlimited signal (red) with the complex steerable pyramid. (b) The shifted signal (blue) is obtained by shifting the original signal (red) with the Fourier shift theorem.

Fig. 3.
Fig. 3.

Pipeline of the phase-based perspective variation strategy. (a) The high-resolution digital SLR image is decomposed by complex steerable pyramid filters. (b) The amplitude and phase of the sub-bands. (c) The depth estimation method [8] is applied to the low-resolution light fields to obtain the low-resolution depth map, which is then upsampled with a joint bilateral filter. The disparity values among the sub-aperture images are extracted from the upsampled depth map to formulate the translation function, and the perspective-shift phase term containing the translation function modifies the phase of the sub-bands. (d) The new-perspective high-resolution sub-aperture images are obtained by collapsing the complex steerable pyramid.
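The joint bilateral upsampling step in (c) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, window radius, Gaussian choices for the spatial kernel F and range kernel E, and the nearest-neighbor guide sampling are all assumptions.

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, guide_hr, scale, radius=2,
                             sigma_s=1.0, sigma_r=0.1):
    """Upsample a low-resolution depth map D_L to the grid of a
    high-resolution guide image H: each output pixel is a weighted
    average of nearby low-res depth samples, weighted by a spatial
    Gaussian F and a range Gaussian E on guide-intensity differences."""
    H_hr, W_hr = guide_hr.shape
    h_lr, w_lr = depth_lr.shape
    out = np.empty((H_hr, W_hr))
    for py in range(H_hr):
        for px in range(W_hr):
            cy, cx = py / scale, px / scale  # p mapped onto the low-res grid
            acc = norm = 0.0
            for qy in range(max(0, int(cy) - radius), min(h_lr, int(cy) + radius + 1)):
                for qx in range(max(0, int(cx) - radius), min(w_lr, int(cx) + radius + 1)):
                    # spatial kernel F(||p - q||) in low-res coordinates
                    F = np.exp(-((cy - qy) ** 2 + (cx - qx) ** 2) / (2 * sigma_s ** 2))
                    # range kernel E(||H(p) - H(q)||) on guide intensities
                    gq = guide_hr[min(H_hr - 1, int(qy * scale)),
                                  min(W_hr - 1, int(qx * scale))]
                    E = np.exp(-(guide_hr[py, px] - gq) ** 2 / (2 * sigma_r ** 2))
                    acc += F * E * depth_lr[qy, qx]
                    norm += F * E
            out[py, px] = acc / norm  # normalization by K_p
    return out
```

The guide image keeps depth edges aligned with intensity edges, which is why the upsampled depth map in Fig. 14 shows smoother, less serrated boundaries than the low-resolution one.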

Fig. 4.
Fig. 4.

Proposed hybrid imaging system, containing a Lytro Illum light field camera and a high-resolution digital SLR camera (Nikon D800E); a beam splitter splits the light rays emitted from the object between the two cameras.

Fig. 5.
Fig. 5.

Schematic diagram of the proposed hybrid imaging system.

Fig. 6.
Fig. 6.

(From left to right in the first row) Reconstructed high-resolution sub-aperture images of view (1, 0) with a magnification factor of 2 in the simulation experiment, produced by the proposed method, bicubic interpolation, and Liang and Ramamoorthi [36]. The patches in the color boxes are enlarged in the bottom row.

Fig. 7.
Fig. 7.

(From left to right in the first row) Reconstructed high-resolution sub-aperture images of view (1, 0) generated by our method, bicubic interpolation, and Liang and Ramamoorthi [36] with a magnification factor of 3 in the simulation experiment. The patches in the color boxes are enlarged in the bottom row.

Fig. 8.
Fig. 8.

(From left to right in the first row) Reconstructed high-resolution sub-aperture images of view (1, 0) produced by our method, bicubic interpolation, and Liang and Ramamoorthi [36] from the simulated sub-aperture images with a magnification factor of 4. The patches in the color boxes are enlarged in the bottom row.

Fig. 9.
Fig. 9.

The central view of the reconstructed high-resolution light field images (see Visualization 1 and Visualization 2). The red box shows the epipolar plane image (EPI) of the red line in the reconstruction, and the blue box shows the EPI of the blue line.

Fig. 10.
Fig. 10.

Comparison of the results of 10× super-resolution of the central sub-aperture image corresponding to Fig. 9(a), provided by (a) our proposed method, (b) bicubic interpolation, (c) Cho et al. [35], and (d) Liang and Ramamoorthi [36].

Fig. 11.
Fig. 11.

Results of 10× super-resolution of the central sub-aperture image corresponding to Fig. 9(b), derived from (a) our proposed method and compared with those provided by (b) bicubic interpolation, (c) Cho et al. [35], and (d) Liang and Ramamoorthi [36].

Fig. 12.
Fig. 12.

(a),(c) Near-refocused and far-refocused images corresponding to the reconstructed light field images in Fig. 9(a), produced with our proposed method. (b),(d) Near-refocused and far-refocused images extracted from the Lytro built-in software. The enlarged insets show that our reconstructed refocused images retain more details with less noise.

Fig. 13.
Fig. 13.

(a),(c) Near-refocused and far-refocused images corresponding to the reconstructed high-resolution light field images in Fig. 9(b), provided by our proposed approach. (b),(d) Near-refocused and far-refocused images extracted from the Lytro built-in software. As shown in the enlarged insets, more details are retained in our reconstructed refocused images, with reduced noise.

Fig. 14.
Fig. 14.

(a),(c) High-resolution depth maps corresponding to the reconstructed high-resolution light field images in Figs. 9(a) and 9(b), respectively. (b),(d) Low-resolution depth maps obtained by depth estimation on the low-resolution light field images. The insets show that the high-resolution depth maps have smoother edges and fewer serrated structures than the low-resolution depth maps.

Fig. 15.
Fig. 15.

(a) Digital refocused image produced from the output high-resolution synthetic sub-aperture images with the perspective-shift factor N_x = N_y = 0.5. (b) Digital refocused image produced with the perspective-shift factor N_x = N_y = 3.5. (c) Digital refocused image produced with the perspective-shift factor N_x = N_y = 7.
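The depth-of-field rendering above relies on digitally refocusing the stack of synthetic sub-aperture images. A minimal shift-and-add refocusing sketch follows; it is an illustration, not the authors' implementation: the function name, the (U, V, H, W) stacking convention, and the integer-pixel circular shifts via np.roll are all simplifying assumptions.

```python
import numpy as np

def shift_and_add_refocus(views, alpha):
    """Refocus a (U, V, H, W) stack of sub-aperture images: each view
    (u, v) is shifted by alpha * (u - u0, v - v0) pixels relative to
    the central view and all views are averaged, bringing the plane
    selected by alpha into focus."""
    U, V, H, W = views.shape
    u0, v0 = (U - 1) / 2.0, (V - 1) / 2.0  # central-view indices
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - u0)))
            dx = int(round(alpha * (v - v0)))
            out += np.roll(np.roll(views[u, v], dy, axis=0), dx, axis=1)
    return out / (U * V)
```

Scaling the per-view shifts (here through alpha, in the paper through the perspective-shift factors N_x, N_y) changes how strongly out-of-focus planes blur, i.e., the rendered depth of field.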

Tables (1)

Tables Icon

Table 1. RMS Errors of the Reconstructed Sub-Aperture Image of View (1,0) from Bicubic Interpolation, Liang and Ramamoorthi [36], and the Proposed Method with Different Magnification Factors

Equations (15)

Equations on this page are rendered with MathJax. Learn more.

S_m(f) = I(f) G_m(f),
O(f) = \sum_{m=1}^{N} S_m(f) G_m(f) = \sum_{m=1}^{N} I(f) G_m^2(f),
D_S(p) = \frac{1}{K_p} \sum_{q \in W} D_L(q)\, F(\|p - q\|)\, E(\|H(p) - H(q)\|),
i(x) = \sum_{f=-\infty}^{\infty} A_f \exp(j 2\pi f x),
s_f(x) = A_f \exp(j 2\pi f x),
s'_f(x) = s_f(x) \exp(j 2\pi f \cdot t(x)) = A_f \exp(j 2\pi f (x + t(x))).
i'(x) = \sum_{f=-\infty}^{\infty} s'_f(x) = \sum_{f=-\infty}^{\infty} A_f \exp(j 2\pi f (x + t(x)))
B_n(f_x, f_y) = H(f_x, f_y) G_n(f_x, f_y),
B'_n(f_x, f_y) = B_n(f_x, f_y) \cdot \exp(j 2\pi f_x \cdot m_x \cdot N_x \cdot t_x(x, y)) = H(f_x, f_y) G_n(f_x, f_y) \cdot \exp(j 2\pi f_x \cdot m_x \cdot N_x \cdot t_x(x, y)).
b'_n(x, y) = \sum_{f_x} \sum_{f_y} B'_n(f_x, f_y) \cdot \exp(j 2\pi (f_x x + f_y y)) = \sum_{f_x} \sum_{f_y} H(f_x, f_y) G_n(f_x, f_y) \cdot \exp(j 2\pi f_x \cdot m_x \cdot N_x \cdot t_x(x, y)) \cdot \exp(j 2\pi (f_x x + f_y y)) = \sum_{f_x} \sum_{f_y} H(f_x, f_y) G_n(f_x, f_y) \cdot \exp(j 2\pi f_x (m_x \cdot N_x \cdot t_x(x, y) + x)) \cdot \exp(j 2\pi f_y y).
o_u(x, y) = \sum_{f_x} \sum_{f_y} \left( \sum_{n=1}^{N} B''_n(f_x, f_y) G_n(f_x, f_y) \right) \cdot \exp(j 2\pi (f_x x + f_y y)),
B''_n(f_x, f_y) = \sum_{x} \sum_{y} b'_n(x, y) \exp(-j 2\pi (f_x x + f_y y)),
b'_n(x, y) = \sum_{f_x} \sum_{f_y} H(f_x, f_y) G_n(f_x, f_y) \cdot \exp(j 2\pi f_x (m_x \cdot N_x \cdot t_x(x, y) + x)) \cdot \exp(j 2\pi f_y (m_y \cdot N_y \cdot t_y(x, y) + y)),
o_{uv}(x, y) = \sum_{f_x} \sum_{f_y} \left( \sum_{n=1}^{N} B''_n(f_x, f_y) G_n(f_x, f_y) \right) \cdot \exp(j 2\pi (f_x x + f_y y)),
B''_n(f_x, f_y) = \sum_{x} \sum_{y} b'_n(x, y) \exp(-j 2\pi (f_x x + f_y y)).
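The core of the perspective-shift equations is the Fourier shift theorem: multiplying each frequency component by a phase term exp(j2πft) translates the signal by t. The numerical sketch below checks this for a single global shift t; the spatially varying t(x) of the actual method additionally requires the localized sub-bands of the complex steerable pyramid, and the function name here is illustrative.

```python
import numpy as np

def fourier_shift_1d(signal, t):
    """Shift a 1D signal by t samples using the Fourier shift theorem.

    Each component A_f exp(j 2 pi f x) is multiplied by the phase term
    exp(j 2 pi f t), yielding A_f exp(j 2 pi f (x + t)) -- the global-shift
    special case of the phase modification applied to the sub-bands."""
    n = signal.shape[0]
    f = np.fft.fftfreq(n)                 # normalized frequencies (cycles/sample)
    spectrum = np.fft.fft(signal)
    shifted = np.fft.ifft(spectrum * np.exp(2j * np.pi * f * t))
    return shifted.real                   # real input => real shifted output

x = np.arange(64)
sig = np.sin(2 * np.pi * x / 16)          # band-limited test signal
shifted = fourier_shift_1d(sig, 4.0)      # circularly shift by 4 samples
```

Because sin(2π(x + 4)/16) = cos(2πx/16), the shifted signal is exactly the cosine of the same period, which makes the result easy to verify numerically.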
