Abstract

Conventional cameras obscure the scene that is being recorded. Here, we place an image sensor (with no lens) on the edge of a transparent window and form images of objects seen through that window. This is enabled, first, by the collection of scattered light at the image sensor and, second, by the solution of an inverse problem that represents the light-scattering process. We thereby form simple images and demonstrate a spatial resolution of about 0.1 line-pairs/mm at an object distance of 150mm, with a depth-of-focus of at least 10mm. We further show imaging of two types of objects: an LED array and a conventional LCD screen. Finally, we also demonstrate color and video imaging.
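The inverse problem referred to above can be sketched as a regularized least-squares reconstruction. This is a minimal sketch of the standard Tikhonov/SVD approach described in the inverse-problems literature, not the authors' exact solver; the transfer matrix `A`, the test scene, and the regularization weight `lam` below are illustrative stand-ins (in the experiment, A would be measured by calibration with the LED array).

```python
import numpy as np

def reconstruct(A, b, lam=0.05):
    """Tikhonov-regularized least squares:
    x = argmin ||A x - b||^2 + lam^2 ||x||^2, solved via the SVD of A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    filt = s / (s**2 + lam**2)           # damped inverse singular values
    return Vt.T @ (filt * (U.T @ b))

# Toy 1D "scene" imaged through a random scattering matrix (a stand-in for
# a measured calibration matrix).
rng = np.random.default_rng(0)
n = 64
A = np.eye(n) + 0.05 * rng.standard_normal((n, n))
x_true = np.zeros(n)
x_true[20:25] = 1.0                      # a small bright feature
b = A @ x_true + 0.01 * rng.standard_normal(n)   # noisy sensor measurement
x_hat = reconstruct(A, b)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

The damping factor s/(s² + lam²) suppresses the small singular values that would otherwise amplify sensor noise, which is why the reconstruction quality depends on the conditioning of the calibration matrix.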

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement



Supplementary Material (12)

» Visualization 1       LED video of a green jumping stick-man.
» Visualization 2       LED video of one white arrow.
» Visualization 3       LED video of one red arrow.
» Visualization 4       LED video of one green arrow.
» Visualization 5       LED video of two red arrows.
» Visualization 6       LED video of one blue arrow.
» Visualization 7       LED video of two blue arrows.
» Visualization 8       LED video of a red jumping stick-man.
» Visualization 9       LED video of a blue jumping stick-man.
» Visualization 10       LED video of one white arrow.
» Visualization 11       LED video of two white arrows.
» Visualization 12       LED video of a white jumping stick-man.


Figures (11)

Fig. 1 (a) Schematic of “see-through” camera. (b) Schematic showing signal capture by image sensor. (c) Side-view of our experimental setup without the image sensor. (d) Front-view of the setup. An LED array is used as the calibration and test object. Parts 1(e)-1(g) show measured PSF images for LED locations (0,0), (16,16), and (32,32), respectively.
Fig. 2 Experimental results from an LED array. Left column: reference images displayed on the LED array. Center column: raw data captured by the image sensor (frame size = 640 × 480 pixels). Right column: reconstructed images. The distance between the LED array and the window was 150mm. Video imaging is also possible (see Visualization 1, Visualization 2, and Visualization 3).
Fig. 3 Experimental results from an LED array in red (see Visualization 4, Visualization 5, and Visualization 6), blue (see Visualization 7, Visualization 8, and Visualization 9), and white (see Visualization 10, Visualization 11, and Visualization 12). The distance between the LED array and the window was 150mm, and the calibration was performed in green.
Fig. 4 Optimal distance between object and transparent window. Reconstructed images at various distances are shown. Note that calibration was performed at each distance separately.
Fig. 5 Quantitative analysis of the optimal distance between object and transparent window. (a) The contrast of a line-space pattern and (b) the condition number of A are plotted as a function of D, the distance between the LED array and the transparent window. (c) Singular values of A as a function of D. (d) Modulation-transfer function using line-space patterns on an LCD at D = 150mm.
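The condition-number analysis of Fig. 5 can be reproduced in outline: for each distance D, the calibration matrix A is assembled from measured PSFs and its singular-value spectrum is examined. The random A below is a stand-in for real calibration data; in the experiment, each column would be the vectorized sensor image for one LED position.

```python
import numpy as np

# Stand-in calibration matrix: rows = sensor pixels, columns = LED positions.
rng = np.random.default_rng(1)
A = rng.standard_normal((480, 100))

s = np.linalg.svd(A, compute_uv=False)   # singular values, in descending order
cond = s[0] / s[-1]                      # 2-norm condition number of A
print(cond)
```

A large condition number (or rapidly decaying singular values) means small measurement errors are strongly amplified during inversion, which is why an optimal object distance exists.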
Fig. 6 Imaging an LCD. (a) Photograph of our setup. (b) Green images were displayed on the LCD (left column), recorded on the image sensor (middle column), and reconstructed (right column). (c) Images in red and blue were displayed on the LCD, recorded on the image sensor, and reconstructed. The sizes of the “U”, arrow, and heart are approximately (width × height) 72mm × 68mm, 84mm × 48mm, and 66mm × 54mm, respectively.
Fig. 7 Model used to compute the acceptance angle (Δθ) of an object point.
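As a rough illustration of the kind of geometry behind Fig. 7 (the paper's exact model may differ), the angular range over which rays from an object point enter an edge aperture of width w can be written as a difference of arctangents. The geometry and the parameter w below are assumptions for illustration only.

```python
import math

def acceptance_angle(x, z, w=2.0):
    """Angle (radians) subtended at an object point (x, z) by an edge
    aperture of width w centered on the X axis. Toy geometry, not the
    paper's calibrated model."""
    return math.atan2(x + w / 2, z) - math.atan2(x - w / 2, z)

# The acceptance angle shrinks as the object point moves away from the window.
print(acceptance_angle(0.0, 50.0), acceptance_angle(0.0, 150.0))
```

This captures the qualitative behavior shown in Fig. 8: points farther from the window (larger z) subtend a smaller acceptance angle and therefore contribute less signal.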
Fig. 8 Acceptance angle of one object point as a function of its location in the 2D (XZ) plane.
Fig. 9 Reconstructed images of horizontal (top) and vertical (bottom) line pairs. The area inside the red box of each reconstruction is cropped to evaluate the modulation of the line-pair object; the cropped image and its averaged cross-section plot are shown below each reconstruction.
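The modulation evaluated from the averaged cross-sections in Fig. 9 can be computed as Michelson contrast; the synthetic sinusoidal profile below stands in for a measured cross-section.

```python
import numpy as np

def modulation(profile):
    """Michelson contrast of an averaged line-pair cross-section:
    (I_max - I_min) / (I_max + I_min)."""
    i_max, i_min = profile.max(), profile.min()
    return (i_max - i_min) / (i_max + i_min)

# Synthetic cross-section of a resolved line pair, oscillating in [0.2, 0.8].
x = np.linspace(0, 4 * np.pi, 200)
profile = 0.5 + 0.3 * np.sin(x)
print(modulation(profile))
```

Plotting this contrast against line-pair frequency yields the modulation-transfer function reported in Fig. 5(d).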
Fig. 10 Reconstructed images with the window (left) and without the window (right), with the distance between the LED array and the window set to 25mm.
Fig. 11 Schematic illustrating the role of the rough surface in increasing the signal on the sensor. CRA stands for Chief Ray Angle (or acceptance angle) of the sensor pixel.
