Abstract

We wrap a thin-film luminescent concentrator (LC), a flexible and transparent plastic foil doped with fluorescent dye particles, around an object to obtain images of the object under varying synthetic lighting conditions and without lenses. These images can then be used for computational relighting and depth reconstruction. An LC is an efficient two-dimensional light guide that allows photons to be collected over a wide solid angle and through multiple overlapping integration areas simultaneously. We show that conventional photodetectors achieve a higher signal-to-noise ratio when equipped with an LC than in direct measurements. Efficient light guidance in combination with computational imaging approaches, such as those presented in this article, can lead to novel optical sensors that collect light in a structured way and within a wide solid angle rather than unstructured through narrow apertures. This enables flexible, scalable, transparent, and lens-less thin-film image and depth sensors.

© 2017 Optical Society of America

Introduction

In recent decades, imaging has shifted from photosensitive film to computational approaches that enable many novel imaging systems. For example, coded aperture arrays together with advanced reconstruction methods make flat camera designs possible [1]. New materials support flexible, scalable, and transparent thin-film image sensors [2,3]. Efficient single-pixel cameras [4–11] are enabled by compressive sensing [12, 13] and controlled spatio-temporal illumination modulation. Computational reconstruction methods can use raw data from a broader family of sensor devices, such as metamaterial apertures [14], and allow advanced image processing after data acquisition, such as refocusing from light fields.

In this work, we present a prototype and reconstruction methods for computational imaging, relighting, and depth reconstruction using a flexible thin-film optical sensor. The sensor consists of a luminescent concentrator (LC) film (i.e. a transparent and flexible polycarbonate foil doped with fluorescent dyes) that is wrapped around an object, as shown in Fig. 1(A). Light of a specific spectral sub-band scattered from the object onto the LC is absorbed and emitted at a longer wavelength by the fluorescent dye particles inside the LC foil and transported to the foil edges by total internal reflection. With a simple aperture structure cut into the LC material, we record the transported light within a limited integration angle at various directions and positions along the LC edges (cf. Fig. 1(B)). Through a hole in the foil, we illuminate the object with a sequence of random speckle patterns projected by a digital micromirror device (DMD) spatial light modulator, as shown in Fig. 1(C).


Fig. 1 Thin-film LC sensor prototype. (A) Luminescent concentrator foil wrapped around an object. (B) The aperture structure cut into the LC material at the edges of the foil (blue is removed from the LC material and blocks light transport) allows measurement over limited integration areas (gray) in various positions and directions. Optical fibers transport the integral signals to line scan cameras for measurement. (C) Random speckle illumination is projected onto the object through a hole in the film (the LC foil is covered by an opaque film to block stray light from the environment). See Visualization 1.


From the measurements of the light back-scattered onto the LC foil, we reconstruct images of the object as seen from the perspective of the spatial light modulator for varying synthetic lighting conditions. This supports computational relighting [15–17] and depth reconstruction from shading [7, 9]. In contrast to single-pixel detectors that collect light through a narrow aperture only, our sensor receives light scattered over a wide solid angle and a much larger area on the LC foil. This leads to a higher signal-to-noise ratio (S/N) without requiring highly sensitive photodetectors or photomultipliers.

Computing illumination bases

The measurements si over a single integration area i (gray in Fig. 1(B)) obtained from projected speckle patterns P can be described with:

\[
s_i = P b_i + e_i, \tag{1}
\]
where b_i is an image of the object and e_i an unknown error term. Each row of the matrix P contains the speckle pattern that leads to the corresponding value in the vector s_i. Solving for b_i yields an image at the resolution of the speckle patterns that shows the object under the illumination produced when the integration area on the sensor acts as a synthetic area light source. For the same set of projected speckle patterns, a different illumination image can be computed simultaneously for each sampled integration area (i.e. each aperture at the edges) in the same way. We refer to these images as basis illumination images.

Equation (1) is solved with the biconjugate gradient stabilized method (BiCGStab) [18]. Since BiCGStab requires a square matrix, both sides of the equation are multiplied by the transpose of P, which brings two further advantages. First, the size of the equation system is reduced (P^T P is of size p × p, where p is the number of pixels of a pattern in P, instead of n × p with n > p different speckle patterns in P), and reconstruction is faster. Second, newly projected patterns can be added iteratively to the product P^T P by accumulating each new pattern's outer product with itself. This allows the image to be reconstructed while patterns are still being projected onto the object.
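The normal-equation scheme above can be sketched as follows; a minimal Python example assuming random binary patterns and noise-free simulated measurements (all sizes and data here are illustrative stand-ins, not the prototype's actual patterns or measurements):

```python
import numpy as np
from scipy.sparse.linalg import bicgstab

rng = np.random.default_rng(0)
p, n = 64, 256                         # pixels per pattern, patterns (n > p)
P = rng.integers(0, 2, size=(n, p)).astype(float)  # binary speckle patterns
b_true = rng.random(p)                 # hypothetical object image (flattened)
s = P @ b_true                         # simulated noise-free measurements

# Accumulate P^T P and P^T s one pattern at a time, as would happen
# while patterns are still being projected onto the object.
PtP = np.zeros((p, p))
Pts = np.zeros(p)
for pattern, value in zip(P, s):
    PtP += np.outer(pattern, pattern)  # outer product of the new pattern
    Pts += pattern * value

b_est, info = bicgstab(PtP, Pts, atol=0.0)  # info == 0 on convergence
```

The accumulated PtP equals P^T P, so the square p × p system can be solved at any point during the projection sequence.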

Our prototype samples over 16 integration areas at each of the sensor’s four edges simultaneously, yielding the 64 basis illumination images shown in Fig. 2 (i.e. by applying Eq. (1)). For an image resolution of 128 × 96 pixels, 200k speckle patterns were projected. By using Hadamard codes instead of random patterns [9], the number of projections can be reduced to about 12k, which results in an overall scanning time of less than one second (assuming a ≈20 kHz modulation rate of the DMD, as in [9]). The minimum scanning time, however, is limited by the photon noise present.
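The Hadamard alternative can be sketched as follows; a hedged example assuming a hypothetical 64-pixel pattern size (the Hadamard order must be a power of two) and the standard ±1-to-binary mapping for display on a DMD:

```python
import numpy as np
from scipy.linalg import hadamard

H = hadamard(64)            # entries are ±1; rows are mutually orthogonal
patterns = (H + 1) // 2     # map ±1 to 0/1 for display on a DMD

# Row orthogonality is what reduces the number of required projections
# compared to random speckle patterns:
assert np.array_equal(H @ H.T, 64 * np.eye(64, dtype=H.dtype))
```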


Fig. 2 Reconstructed basis illumination images. Each image is reconstructed with Eq. (1) from the measurements of one integration area that causes the synthetic shading on the object. The center images (blue frame) are reconstructions of the integration area cut off by the hole in the LC foil through which the speckle patterns are projected. See Visualization 2.


Rectifying illumination bases

Each computed basis illumination image b_i shows the object under shading caused by a synthetic area light source that has the shape of the corresponding triangular integration area on the sensor. These illumination bases, however, are not suitable for common image-based relighting or depth-reconstruction techniques, as such methods require point lights or small area light sources. Storing the images b_i as columns in matrix B, we obtain rectified basis illumination images b′_j that result from synthetic square area light sources positioned on a uniform grid on the cylindrical sensor surface by solving

\[
B = B' T^{T}, \tag{2}
\]
for the matrix B′, whose columns are the rectified basis illumination images b′_j. Note that T is the transport matrix, which describes the light transport from each square area on the LC foil to each integral measurement [2]. The non-rectified basis illumination images B can thus be represented as a linear combination of the rectified basis illumination images B′ weighted by T^T (Eq. (2)).

With only a small number of basis illuminations in B, Eq. (2) would quickly become underdetermined when computing rectified basis illuminations for higher-resolution sampling grids. We approximate such solutions with multiple iterations of the following back-projection:

\[
b'_j = B\, T\, l_j, \qquad
\begin{bmatrix} b'_{j1} \\ b'_{j2} \\ b'_{j3} \\ \vdots \\ b'_{jn} \end{bmatrix}
=
\begin{bmatrix}
B_{11} & B_{21} & \cdots & B_{s1} \\
B_{12} & B_{22} & \cdots & B_{s2} \\
B_{13} & B_{23} & \cdots & B_{s3} \\
\vdots & \vdots & & \vdots \\
B_{1n} & B_{2n} & \cdots & B_{sn}
\end{bmatrix}
\begin{bmatrix}
T_{11} & T_{21} & \cdots & T_{m1} \\
T_{12} & T_{22} & \cdots & T_{m2} \\
\vdots & \vdots & & \vdots \\
T_{1s} & T_{2s} & \cdots & T_{ms}
\end{bmatrix}
\begin{bmatrix} l_{j1} \\ l_{j2} \\ \vdots \\ l_{jm} \end{bmatrix}, \tag{3}
\]
where l_j is a weight vector of illumination contributions within the light source grid, n the number of pixels in the rectified basis illumination image, s the number of integral measurements, and m the lighting grid size.

Equation (3) is solved employing the algebraic reconstruction technique (ART) [19], an iterative approach to tomographic reconstruction based on series expansion. It begins with an initial estimate of the solution vector, which is projected orthogonally onto the first hyperplane (the first equation) of the linear system. This process is repeated for the remaining equations of the system, which yields a solution vector that approximates the overall solution. One such iterative step of ART is repeated multiple times, each time using the solution vector of the previous iteration as the initial estimate. We apply a faster variant of ART called the simultaneous algebraic reconstruction technique (SART) [20]. Instead of calculating each value of the solution vector sequentially for each equation, SART updates all values using every equation of the linear system within one iteration. In practice, we found that two iterations are sufficient, and that more iterations do not lead to visually detectable improvements.
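A minimal dense-matrix sketch of the SART update described above, written as a generic solver for A x = b; the normalizations and relaxation factor follow the standard SART formulation, and all names and sizes are illustrative assumptions rather than the prototype's actual matrices:

```python
import numpy as np

def sart(A, b, iterations=2, relax=1.0, x0=None):
    """Simultaneous algebraic reconstruction technique (SART): per iteration,
    all components of x are updated using every equation of A x = b at once."""
    A = np.asarray(A, dtype=float)
    x = np.zeros(A.shape[1]) if x0 is None else np.asarray(x0, dtype=float)
    row_sums = np.abs(A).sum(axis=1)   # normalization over each equation
    col_sums = np.abs(A).sum(axis=0)   # normalization over each unknown
    row_sums[row_sums == 0] = 1.0      # guard against empty rows/columns
    col_sums[col_sums == 0] = 1.0
    for _ in range(iterations):
        residual = (b - A @ x) / row_sums
        x = x + relax * (A.T @ residual) / col_sums
    return x
```

For a consistent system, each iteration reduces the residual simultaneously over all equations, which is what makes SART faster in practice than the sequential per-equation updates of classical ART.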

Figure 3 shows the rectified basis illumination images for a uniform lighting grid of 16 × 16 (i.e. m = 16 × 16 = 256) synthetic light sources defined in l_j and computed with Eq. (3). On the physical sensor, each of these light sources would have a size of 7 mm × 7 mm and be evenly distributed on the cylindrical LC surface.


Fig. 3 Rectified basis illumination images. Images are computed using Eq. (3) from the images shown in Fig. 2 and for a synthetic 7 mm × 7 mm square lighting area moving from top left to bottom right on a uniform 16 × 16 grid across the cylindrical sensor surface. See Visualization 3.


Relighting and depth reconstruction

As explained in [15], the rectified basis illumination images can be linearly combined to compute more complex synthetic shading effects. In our case, this is achieved with B′l, where the matrix B′ contains the rectified basis illumination images b′j in its columns, and the vector l contains the image of the synthetic light source. Figure 4(A) shows results for synthetic area light sources (colored strips reshaped on the LC surface, shown on the right) in l.
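The relighting product B′l can be sketched as follows, with a random stand-in for the rectified basis images (the sizes match the 128 × 96 pixel images and the 16 × 16 lighting grid from the text; the data and the chosen light-source vector are illustrative assumptions):

```python
import numpy as np

pixels, m = 128 * 96, 16 * 16          # image pixels, grid light sources
rng = np.random.default_rng(1)
B_rect = rng.random((pixels, m))       # columns: rectified basis images b'_j

l = np.zeros(m)                        # synthetic light source image
l[:16] = 0.5                           # e.g. first grid row at half intensity

relit = B_rect @ l                     # relit image as a linear combination
relit_image = relit.reshape(96, 128)   # back to 2D for display
```

Any synthetic light source that can be expressed on the grid (the colored strips of Fig. 4(A), for instance) is just a different choice of the vector l.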


Fig. 4 Relighting and depth reconstruction results. (A) Computational relighting examples for synthetic area light sources (colored strips reshaped on the LC surface, shown on the right). See Visualization 4. (B) Depth map reconstructed from the rectified basis illumination images in Fig. 3. (C) Ground-truth depth map reconstructed from the ideal (simulated) rectified basis illumination images. The same shape-from-shading algorithm was applied in both reconstructions. See Visualization 5. (D) Depth reconstruction error visualized as 3D surface overlay (top: yellow is ground truth, blue is reconstructed) and color map (bottom).


Furthermore, classical shape-from-shading algorithms [21] can be applied to the rectified basis illumination images for depth reconstruction (Figs. 4(B,C)). As illustrated in Fig. 4(D), we achieve a ≈1.5 mm mean error (≈10%) for depth estimation in our experiment, when compared to the ground truth that was computed from ideal rectified basis images calculated with a physical lighting simulation of the object’s 3D-scanned proxy (using the Blender Cycles engine). The scannable dimensions of the object were ≈30×30×15 mm (height×width×depth).

S/N efficiency

Assuming that identical photodetectors (size, sensitivity, S/N) are either installed in front of the object, as in classical single-pixel detector approaches, or connected to the LC foil (e.g. via optical fibers) as in our prototype, the increase in S/N is proportional to the gain in light collected and transported over the larger integration areas:

\[
\alpha \int_{\phi=0}^{\beta} \int_{\delta=0}^{d} e^{-\mu\delta}\, d\delta\, d\phi
= \alpha\, \frac{2\pi\beta}{360^{\circ}}\, \frac{1 - e^{-\mu d}}{\mu}, \tag{4}
\]
where α and μ are the absorption and attenuation coefficients of the LC material, and β and d the integration angle (in degrees) and distance (foil diameter) that are determined by the sensor design. Equation (4) was derived as follows:

We know from previous work [22] that the intensity of light travelling through the LC foil material decreases in proportion to

\[
\frac{I_a}{I_e} = e^{-\mu\delta}, \tag{5}
\]
where I_e is the emitted light intensity at the entrance point of the film, I_a the attenuated light intensity at travel distance δ, and μ the attenuation coefficient of the LC material (experimentally determined to be μ = 0.008 [22]). Assuming the emission I_e to be identical at all entrance points within an arc area with radius d and angle β, the integrated amount of light transported to the arc origin is proportional to
\[
\frac{I_a}{I_e} = \alpha \int_{\phi=0}^{\beta} \int_{\delta=0}^{d} e^{-\mu\delta}\, d\delta\, d\phi, \tag{6}
\]
where α is the absorption coefficient of the LC material (i.e. the fraction of light that enters the foil and is absorbed and emitted by the fluorescent dye particles, assuming a 100% quantum yield). Solving for the inner integral yields
\[
\frac{I_a}{I_e} = \alpha \int_{\phi=0}^{\beta} \left[-\frac{e^{-\mu\delta}}{\mu}\right]_{0}^{d} d\phi
= \alpha \int_{\phi=0}^{\beta} \frac{1 - e^{-\mu d}}{\mu}\, d\phi, \tag{7}
\]
and solving for the outer integral yields
\[
\frac{I_a}{I_e} = \alpha\, \frac{2\pi\beta}{360^{\circ}}\, \frac{1 - e^{-\mu d}}{\mu}, \tag{8}
\]
if β is given in degrees. Consequently, the same photodetector (same size, sensitivity, S/N) that receives the amount Ie of light directly from an object would collect the amount Ia over the integration area of the LC foil, and the S/N increase is given by Eq. (8).

For the prototype shown in Fig. 1, α is 11.51% (the measured fraction of the blue sub-spectrum absorbed from white light and emitted as a green sub-spectrum, assuming a 100% quantum yield), μ is 0.008, and β and d were chosen to be 11° and 108 mm, respectively. This leads to a theoretical S/N increase of 59.8% (×1.598), compared to a classical single-pixel detector setup capturing the full white-light spectrum (detectors located at the same distance from the object as the LC foil).
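The theoretical gain quoted above follows directly from Eq. (4); a short check using the prototype parameters stated in the text:

```python
import math

alpha = 0.1151   # measured absorption fraction (11.51%)
mu = 0.008       # attenuation coefficient [1/mm]
beta = 11.0      # integration angle [degrees]
d = 108.0        # integration distance / foil diameter [mm]

# Eq. (4): S/N gain of the LC integration area over a direct measurement.
gain = alpha * (2 * math.pi * beta / 360.0) * (1 - math.exp(-mu * d)) / mu
full_spectrum = gain / alpha    # theoretical maximum at alpha = 100%

print(round(gain, 3), round(full_spectrum, 3))   # 1.598 13.884
```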

To confirm our theoretical results, we took direct measurements using the same photodetectors as those connected to the LC foil in our prototype (also employing the same optical fibers) on a uniform grid of 5 × 4 samples at positions where the cylindrically shaped LC foil was located (i.e. at the same distance to the object). We compared these (spectrally unfiltered) measurements with the (spectrally filtered) values we measured with our prototype over the 16 integration areas on one side of the LC foil (assuming symmetry of the other three sides). For both sets of measurements, the same illumination condition was applied. On average, the experimental S/N increase was 63.6% (×1.636) for read-noise limited conditions. The configuration of the experiment and the acquired measurement values are shown in Fig. 5.


Fig. 5 S/N gain experiment. The average of 16 integral measurements (blue) taken through the LC foil are compared to the average of 20 point measurements (green) taken on the LC surfaces under identical conditions (same photodetectors, optical fibers, and lighting condition). Their ratio (1.636) represents the S/N gain for read-noise limited conditions. All values are normalized to the measured maximum.


This slightly larger increase in S/N (compared to the theoretical result) can be explained by a marginal difference between the attenuation coefficient μ = 0.008 determined in [22] and the physical attenuation in our prototype.

By stacking multiple LC layers that absorb different parts of the white-light spectrum and collecting the sum of emitted light (e.g. with larger-diameter optical fibers connected across all layers), the S/N increases further in proportion with the increase in α. Assuming that we could absorb the full spectrum (i.e. α=100%) then the S/N would increase to a theoretical maximum of ×13.884.

Experimental setup

For our hardware prototype (cf. Fig. 1), the luminescent concentrator foil (Bayer Makrofol® LISA Green LC film, 108 mm × 108 mm × 300 μm) was glued onto a transparent support cylinder. The aperture structure at the edges of the LC and the center hole were cut into the foil with a Graphtec GraphRobo cutting plotter. Jiangxi Daishing POF Co., Ltd poly(methyl methacrylate) (PMMA) step-index multi-mode optical fibers with a 250 μm diameter and a numerical aperture of 0.5 were used to guide the integrated light signal of each aperture to the photodetectors of line scan cameras for measurement. As line scan cameras, we used four CMOS Sensor Inc. M106-A4-R1 CIS (contact image sensor) modules equipped with four programmable USB controllers (USB-Board-M106A4 from Spectronic Devices Ltd). The CIS modules record 10-bit gray scales with a signal-to-noise ratio of S/N_dB = 10 log_10(P_signal / P_noise) = 50.8 dB, where P_signal and P_noise are the mean-square signal and noise levels, respectively. The spatial light modulator used for projecting the speckle patterns was a Texas Instruments Pico Projector Development Kit v2.0.
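The quoted decibel figure converts to linear ratios as follows; a small illustrative computation, not part of the measurement pipeline:

```python
import math

snr_db = 50.8
power_ratio = 10 ** (snr_db / 10)          # mean-square signal / noise
amplitude_ratio = math.sqrt(power_ratio)   # RMS signal / RMS noise
print(f"{power_ratio:.0f} {amplitude_ratio:.0f}")
```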

Conclusion and limitations

Use of a luminescent concentrator as a light guide has two advantages over direct photodetector measurement. First, for the same photosensor, the S/N increases when light is collected through the LC surface (e.g. by a factor of ×1.636 for the single band-limited LC foil layer in our experiments, reaching a theoretical maximum of ×13.884 if the entire spectrum is covered). Second, light collection can be multiplexed over multiple overlapping integration areas within the LC foil to support the simultaneous reconstruction of many lighting conditions (256 in our example) while requiring only a small number of parallel measurements (64 for our prototype). In principle, a high S/N could also be achieved in single measurements using flexible large-area photodetectors [23] or by combining the measurements from multiple small photodetectors (e.g. from a flexible grid of organic photodiodes [24, 25]). While the former would not allow multiplexing over multiple integration areas (thereby preventing the reconstruction of multiple lighting bases), the latter would require a significantly larger number of photodetectors on the sensor surface to resolve the necessary individual and overlapping integration areas.

As part of future work, we plan to investigate how the S/N and the reconstruction quality can be further improved in our approach. Reducing the concentration of the fluorescent dye or using a fluorophore with smaller particles would decrease the attenuation coefficient μ, but would also decrease the emission. Applying a larger integration angle β would increase the amount of light collected, but would also lower the variance of the reconstructed lighting bases. An optimal balance must be found for both. Furthermore, scanning at real-time rates and in the invisible part of the spectrum (i.e. using infrared light) is an important technical extension.

One limitation of our approach is that, with multiple LC layers, it can either maximize S/N by integrating the full spectrum of the reflected light for gray scale image reconstruction (e.g. ×13.884 in our example), or support color image reconstruction for multiple spectral sub-bands at a lower S/N (e.g. ×1.636 per channel in our example).

Efficient light guidance in combination with computational imaging approaches, such as those presented in this article, can lead to novel optical sensors that collect light in a structured way and within a wide solid angle rather than unstructured through narrow apertures. This enables flexible, scalable, transparent, and lens-less thin-film image and depth sensors that hold potential for application domains such as human-computer interfaces.

Acknowledgments

We thank Robert Koeppe from isiQiri interface technologies GmbH for fruitful discussions and for providing LC samples. We also thank Wood K Plus Kompetenzzentrum Holz GmbH for a 3D scan of our test object.

References and links

1. M. S. Asif, A. Ayremlou, A. Veeraraghavan, R. Baraniuk, and A. Sankaranarayanan, “Flatcam: Replacing lenses with masks and computation,” in “2015 IEEE International Conference on Computer Vision Workshop (ICCVW),” (2015), pp. 663–666.

2. A. Koppelhuber and O. Bimber, “Towards a transparent, flexible, scalable and disposable image sensor using thin-film luminescent concentrators,” Optics Express 21, 4796–4810 (2013). [CrossRef] [PubMed]

3. A. Koppelhuber and O. Bimber, “Multi-exposure color imaging with stacked thin-film luminescent concentrators,” Optics Express 23, 33713–33720 (2015). [CrossRef]  

4. D. Takhar, J. N. Laska, M. B. Wakin, M. F. Duarte, D. Baron, S. Sarvotham, K. F. Kelly, and R. G. Baraniuk, “A new compressive imaging camera architecture using optical-domain compression,” in “Electronic Imaging 2006,” (International Society for Optics and Photonics, 2006), pp. 606509.

5. W. L. Chan, K. Charan, D. Takhar, K. F. Kelly, R. G. Baraniuk, and D. M. Mittleman, “A single-pixel terahertz imaging system based on compressed sensing,” Applied Physics Letters 93, 121105 (2008). [CrossRef]  

6. M. F. Duarte, M. A. Davenport, D. Takbar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Processing Magazine 25, 83–91 (2008). [CrossRef]

7. B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. Padgett, “3d computational imaging with single-pixel detectors,” Science 340, 844–847 (2013). [CrossRef]   [PubMed]  

8. S. S. Welsh, M. P. Edgar, R. Bowman, P. Jonathan, B. Sun, and M. J. Padgett, “Fast full-color computational imaging with single-pixel detectors,” Optics Express 21, 23068–23074 (2013). [CrossRef] [PubMed]

9. Y. Zhang, M. P. Edgar, B. Sun, N. Radwell, G. M. Gibson, and M. J. Padgett, “3d single-pixel video,” Journal of Optics 18, 035203 (2016). [CrossRef]  

10. G. Li, W. Wang, Y. Wang, W. Yang, and L. Liu, “Single-pixel camera with one graphene photodetector,” Optics Express 24, 400–408 (2016). [CrossRef] [PubMed]

11. Z. Zhang and J. Zhong, “Three-dimensional single-pixel imaging with far fewer measurements than effective image pixels,” Optics Letters 41, 2497–2500 (2016). [CrossRef] [PubMed]

12. D. L. Donoho, “Compressed sensing,” IEEE Transactions on Information Theory 52, 1289–1306 (2006). [CrossRef]

13. R. G. Baraniuk, “Compressive sensing,” IEEE Signal Processing Magazine 24, 118–121 (2007). [CrossRef]

14. J. Hunt, T. Driscoll, A. Mrozack, G. Lipworth, M. Reynolds, D. Brady, and D. R. Smith, “Metamaterial apertures for computational imaging,” Science 339, 310–313 (2013). [CrossRef]   [PubMed]  

15. P. Debevec, T. Hawkins, C. Tchou, H.-P. Duiker, W. Sarokin, and M. Sagar, “Acquiring the reflectance field of a human face,” in “Proceedings of the 27th annual conference on Computer graphics and interactive techniques,” (ACM Press/Addison-Wesley Publishing Co., 2000), pp. 145–156.

16. V. Masselus, P. Peers, P. Dutré, and Y. D. Willems, “Relighting with 4d incident light fields,” in “ACM Transactions on Graphics (TOG),”, vol. 22 (ACM, 2003), vol. 22, pp. 613–620.

17. P. Sen, B. Chen, G. Garg, S. R. Marschner, M. Horowitz, M. Levoy, and H. Lensch, “Dual photography,” in “ACM Transactions on Graphics (TOG),”, vol. 24 (ACM, 2005), vol. 24, pp. 745–755.

18. H. A. Van der Vorst, “Bi-CGSTAB: A fast and smoothly converging variant of Bi-CG for the solution of nonsymmetric linear systems,” SIAM Journal on Scientific and Statistical Computing 13, 631–644 (1992). [CrossRef]

19. M. Slaney and A. Kak, “Principles of computerized tomographic imaging,” SIAM, Philadelphia (1988).

20. A. Andersen and A. Kak, “Simultaneous algebraic reconstruction technique (SART): a superior implementation of the ART algorithm,” Ultrasonic Imaging 6, 81–94 (1984). [CrossRef] [PubMed]

21. R. Zhang, P.-S. Tsai, J. E. Cryer, and M. Shah, “Shape-from-shading: a survey,” IEEE Transactions on Pattern Analysis and Machine Intelligence 21, 690–706 (1999). [CrossRef]

22. R. Koeppe, A. Neulinger, P. Bartu, and S. Bauer, “Video-speed detection of the absolute position of a light point on a large-area photodetector based on luminescent waveguides,” Optics Express 18, 2209–2218 (2010). [CrossRef]   [PubMed]  

23. Z. Zheng, T. Zhang, J. Yao, Y. Zhang, J. Xu, and G. Yang, “Flexible, transparent and ultra-broadband photodetector based on large-area wse2 film for wearable devices,” Nanotechnology 27, 225501 (2016). [CrossRef]   [PubMed]  

24. T. Someya, Y. Kato, S. Iba, Y. Noguchi, T. Sekitani, H. Kawaguchi, and T. Sakurai, “Integration of organic FETs with organic photodiodes for a large area, flexible, and lightweight sheet image scanners,” IEEE Transactions on Electron Devices 52, 2502–2511 (2005). [CrossRef]

25. T. N. Ng, W. S. Wong, M. L. Chabinyc, S. Sambandan, and R. A. Street, “Flexible image sensor array with bulk heterojunction organic photodiode,” Applied Physics Letters 92, 213303 (2008). [CrossRef]  

References

  • View by:
  • |
  • |
  • |

  1. M. S. Asif, A. Ayremlou, A. Veeraraghavan, R. Baraniuk, and A. Sankaranarayanan, “Flatcam: Replacing lenses with masks and computation,” in “2015 IEEE International Conference on Computer Vision Workshop (ICCVW),” (2015), pp. 663–666.
  2. A. Koppelhuber and O. Bimber, “Towards a transparent, flexible, scalable and disposable image sensor using thin-film luminescent concentrators,” Optics express 21, 4796–4810 (2013).
    [Crossref] [PubMed]
  3. A. Koppelhuber and O. Bimber, “Multi-exposure color imaging with stacked thin-film luminescent concentrators,” Optics Express 23, 33713–33720 (2015).
    [Crossref]
  4. D. Takhar, J. N. Laska, M. B. Wakin, M. F. Duarte, D. Baron, S. Sarvotham, K. F. Kelly, and R. G. Baraniuk, “A new compressive imaging camera architecture using optical-domain compression,” in “Electronic Imaging 2006,” (International Society for Optics and Photonics, 2006), pp. 606509.
  5. W. L. Chan, K. Charan, D. Takhar, K. F. Kelly, R. G. Baraniuk, and D. M. Mittleman, “A single-pixel terahertz imaging system based on compressed sensing,” Applied Physics Letters 93, 121105 (2008).
    [Crossref]
  6. M. F. Duarte, M. A. Davenport, D. Takbar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE signal processing magazine 25, 83–91 (2008).
    [Crossref]
  7. B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. Padgett, “3d computational imaging with single-pixel detectors,” Science 340, 844–847 (2013).
    [Crossref] [PubMed]
  8. S. S. Welsh, M. P. Edgar, R. Bowman, P. Jonathan, B. Sun, and M. J. Padgett, “Fast full-color computational imaging with single-pixel detectors,” Optics express 21, 23068–23074 (2013).
    [Crossref] [PubMed]
  9. Y. Zhang, M. P. Edgar, B. Sun, N. Radwell, G. M. Gibson, and M. J. Padgett, “3d single-pixel video,” Journal of Optics 18, 035203 (2016).
    [Crossref]
  10. G. Li, W. Wang, Y. Wang, W. Yang, and L. Liu, “Single-pixel camera with one graphene photodetector,” Optics express 24, 400–408 (2016).
    [Crossref] [PubMed]
  11. Z. Zhang and J. Zhong, “Three-dimensional single-pixel imaging with far fewer measurements than effective image pixels,” Optics letters 41, 2497–2500 (2016).
    [Crossref] [PubMed]
  12. D. L. Donoho, “Compressed sensing,” Information Theory, IEEE Transactions on 52, 1289–1306 (2006).
    [Crossref]
  13. R. G. Baraniuk, “Compressive sensing,” IEEE signal processing magazine 24, 118–121 (2007).
    [Crossref]
  14. J. Hunt, T. Driscoll, A. Mrozack, G. Lipworth, M. Reynolds, D. Brady, and D. R. Smith, “Metamaterial apertures for computational imaging,” Science 339, 310–313 (2013).
    [Crossref] [PubMed]
  15. P. Debevec, T. Hawkins, C. Tchou, H.-P. Duiker, W. Sarokin, and M. Sagar, “Acquiring the reflectance field of a human face,” in “Proceedings of the 27th annual conference on Computer graphics and interactive techniques,” (ACM Press/Addison-Wesley Publishing Co., 2000), pp. 145–156.
  16. V. Masselus, P. Peers, P. Dutré, and Y. D. Willems, “Relighting with 4d incident light fields,” in “ACM Transactions on Graphics (TOG),”, vol. 22 (ACM, 2003), vol. 22, pp. 613–620.
  17. P. Sen, B. Chen, G. Garg, S. R. Marschner, M. Horowitz, M. Levoy, and H. Lensch, “Dual photography,” in “ACM Transactions on Graphics (TOG),”, vol. 24 (ACM, 2005), vol. 24, pp. 745–755.
  18. H. A. Van der Vorst, “Bi-cgstab: A fast and smoothly converging variant of bi-cg for the solution of nonsymmetric linear systems,” SIAM Journal on scientific and Statistical Computing 13, 631–644 (1992).
    [Crossref]
  19. M. Slaney and A. Kak, “Principles of computerized tomographic imaging,” SIAM, Philadelphia (1988).
  20. A. Andersen and A. Kak, “Simultaneous algebraic reconstruction technique (sart): a superior implementation of the art algorithm,” Ultrasonic imaging 6, 81–94 (1984).
    [Crossref] [PubMed]
  21. R. Zhang, P.-S. Tsai, J. E. Cryer, and M. Shah, “Shape-from-shading: a survey,” IEEE transactions on pattern analysis and machine intelligence 21, 690–706 (1999).
    [Crossref]
  22. R. Koeppe, A. Neulinger, P. Bartu, and S. Bauer, “Video-speed detection of the absolute position of a light point on a large-area photodetector based on luminescent waveguides,” Optics Express 18, 2209–2218 (2010).
    [Crossref] [PubMed]
  23. Z. Zheng, T. Zhang, J. Yao, Y. Zhang, J. Xu, and G. Yang, “Flexible, transparent and ultra-broadband photodetector based on large-area wse2 film for wearable devices,” Nanotechnology 27, 225501 (2016).
    [Crossref] [PubMed]
  24. T. Someya, Y. Kato, S. Iba, Y. Noguchi, T. Sekitani, H. Kawaguchi, and T. Sakurai, “Integration of organic fets with organic photodiodes for a large area, flexible, and lightweight sheet image scanners,” Electron Devices, IEEE Transactions on 52, 2502–2511 (2005).
    [Crossref]
  25. T. N. Ng, W. S. Wong, M. L. Chabinyc, S. Sambandan, and R. A. Street, “Flexible image sensor array with bulk heterojunction organic photodiode,” Applied Physics Letters 92, 213303 (2008).
    [Crossref]

2016 (4)

Y. Zhang, M. P. Edgar, B. Sun, N. Radwell, G. M. Gibson, and M. J. Padgett, “3D single-pixel video,” Journal of Optics 18, 035203 (2016).

G. Li, W. Wang, Y. Wang, W. Yang, and L. Liu, “Single-pixel camera with one graphene photodetector,” Optics Express 24, 400–408 (2016).

Z. Zhang and J. Zhong, “Three-dimensional single-pixel imaging with far fewer measurements than effective image pixels,” Optics Letters 41, 2497–2500 (2016).

Z. Zheng, T. Zhang, J. Yao, Y. Zhang, J. Xu, and G. Yang, “Flexible, transparent and ultra-broadband photodetector based on large-area WSe2 film for wearable devices,” Nanotechnology 27, 225501 (2016).

2015 (1)

A. Koppelhuber and O. Bimber, “Multi-exposure color imaging with stacked thin-film luminescent concentrators,” Optics Express 23, 33713–33720 (2015).

2013 (4)

B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. Padgett, “3D computational imaging with single-pixel detectors,” Science 340, 844–847 (2013).

S. S. Welsh, M. P. Edgar, R. Bowman, P. Jonathan, B. Sun, and M. J. Padgett, “Fast full-color computational imaging with single-pixel detectors,” Optics Express 21, 23068–23074 (2013).

A. Koppelhuber and O. Bimber, “Towards a transparent, flexible, scalable and disposable image sensor using thin-film luminescent concentrators,” Optics Express 21, 4796–4810 (2013).

J. Hunt, T. Driscoll, A. Mrozack, G. Lipworth, M. Reynolds, D. Brady, and D. R. Smith, “Metamaterial apertures for computational imaging,” Science 339, 310–313 (2013).

2010 (1)

R. Koeppe, A. Neulinger, P. Bartu, and S. Bauer, “Video-speed detection of the absolute position of a light point on a large-area photodetector based on luminescent waveguides,” Optics Express 18, 2209–2218 (2010).

2008 (3)

W. L. Chan, K. Charan, D. Takhar, K. F. Kelly, R. G. Baraniuk, and D. M. Mittleman, “A single-pixel terahertz imaging system based on compressed sensing,” Applied Physics Letters 93, 121105 (2008).

M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Processing Magazine 25, 83–91 (2008).

T. N. Ng, W. S. Wong, M. L. Chabinyc, S. Sambandan, and R. A. Street, “Flexible image sensor array with bulk heterojunction organic photodiode,” Applied Physics Letters 92, 213303 (2008).

2007 (1)

R. G. Baraniuk, “Compressive sensing,” IEEE Signal Processing Magazine 24, 118–121 (2007).

2006 (1)

D. L. Donoho, “Compressed sensing,” IEEE Transactions on Information Theory 52, 1289–1306 (2006).

2005 (1)

T. Someya, Y. Kato, S. Iba, Y. Noguchi, T. Sekitani, H. Kawaguchi, and T. Sakurai, “Integration of organic FETs with organic photodiodes for a large area, flexible, and lightweight sheet image scanners,” IEEE Transactions on Electron Devices 52, 2502–2511 (2005).

1999 (1)

R. Zhang, P.-S. Tsai, J. E. Cryer, and M. Shah, “Shape-from-shading: a survey,” IEEE Transactions on Pattern Analysis and Machine Intelligence 21, 690–706 (1999).

1992 (1)

H. A. Van der Vorst, “Bi-CGSTAB: A fast and smoothly converging variant of Bi-CG for the solution of nonsymmetric linear systems,” SIAM Journal on Scientific and Statistical Computing 13, 631–644 (1992).

1984 (1)

A. Andersen and A. Kak, “Simultaneous algebraic reconstruction technique (SART): a superior implementation of the ART algorithm,” Ultrasonic Imaging 6, 81–94 (1984).

Other (6)

M. Slaney and A. Kak, Principles of Computerized Tomographic Imaging (SIAM, Philadelphia, 1988).

P. Debevec, T. Hawkins, C. Tchou, H.-P. Duiker, W. Sarokin, and M. Sagar, “Acquiring the reflectance field of a human face,” in Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques (ACM Press/Addison-Wesley, 2000), pp. 145–156.

V. Masselus, P. Peers, P. Dutré, and Y. D. Willems, “Relighting with 4D incident light fields,” in ACM Transactions on Graphics (TOG), vol. 22 (ACM, 2003), pp. 613–620.

P. Sen, B. Chen, G. Garg, S. R. Marschner, M. Horowitz, M. Levoy, and H. Lensch, “Dual photography,” in ACM Transactions on Graphics (TOG), vol. 24 (ACM, 2005), pp. 745–755.

D. Takhar, J. N. Laska, M. B. Wakin, M. F. Duarte, D. Baron, S. Sarvotham, K. F. Kelly, and R. G. Baraniuk, “A new compressive imaging camera architecture using optical-domain compression,” in Electronic Imaging 2006 (International Society for Optics and Photonics, 2006), p. 606509.

M. S. Asif, A. Ayremlou, A. Veeraraghavan, R. Baraniuk, and A. Sankaranarayanan, “FlatCam: Replacing lenses with masks and computation,” in 2015 IEEE International Conference on Computer Vision Workshop (ICCVW) (2015), pp. 663–666.

Supplementary Material (5)

Name      Description
» Visualization 1: MP4 (183 KB)      Random speckle illumination projected onto the object through a hole in the LC film.
» Visualization 2: MP4 (290 KB)      Reconstructed basis illumination images (left) with corresponding integration area (right).
» Visualization 3: MP4 (503 KB)      Rectified basis illumination images (left) with corresponding square lighting area (right).
» Visualization 4: MP4 (117 KB)      Computational relighting results (left) with corresponding lighting images (right).
» Visualization 5: MP4 (327 KB)      Depth reconstruction results (right) with ground truth (left).



Figures (5)

Fig. 1 Thin-film LC sensor prototype. (A) Luminescent concentrator foil wrapped around an object. (B) The aperture structure cut into the LC material at the edges of the foil (blue marks regions removed from the LC material; these cuts block light transport) allows measurement over limited integration areas (gray) at various positions and directions. Optical fibers transport the integral signals to line scan cameras for measurement. (C) Random speckle illumination is projected onto the object through a hole in the film (the LC foil is covered by an opaque film to block stray light from the environment). See Visualization 1.
Fig. 2 Reconstructed basis illumination images. Each image is reconstructed with Eq. (1) from the measurements of one integration area, which causes the synthetic shading on the object. The center images (blue frame) are reconstructions of the integration area cut off by the hole in the LC foil through which the speckle patterns are projected. See Visualization 2.
Fig. 3 Rectified basis illumination images. Images are computed using Eq. (3) from the images shown in Fig. 2 and for a synthetic 7 mm × 7 mm square lighting area moving from top left to bottom right on a uniform 16 × 16 grid across the cylindrical sensor surface. See Visualization 3.
Fig. 4 Relighting and depth reconstruction results. (A) Computational relighting examples for synthetic area light sources (colored strips reshaped on the LC surface, shown on the right). See Visualization 4. (B) Depth map reconstructed from the rectified basis illumination images in Fig. 3. (C) Ground-truth depth map reconstructed from the ideal (simulated) rectified basis illumination images. The same shape-from-shading algorithm was applied in both reconstructions. See Visualization 5. (D) Depth reconstruction error visualized as 3D surface overlay (top: yellow is ground truth, blue is reconstructed) and color map (bottom).
Fig. 5 S/N gain experiment. The average of 16 integral measurements (blue) taken through the LC foil is compared to the average of 20 point measurements (green) taken on the LC surface under identical conditions (same photodetectors, optical fibers, and lighting condition). Their ratio (1.636) represents the S/N gain for read-noise limited conditions. All values are normalized to the measured maximum.

Equations (8)


\[ \mathbf{s}_i = \mathbf{P}\,\mathbf{b}_i + \mathbf{e}_i, \]
\[ \mathbf{B}' = \mathbf{B}\,\mathbf{T}, \]
\[ \mathbf{b}'_j = \mathbf{B}\,\mathbf{T}\,\mathbf{l}_j, \qquad
\begin{bmatrix} b'_{j1} \\ b'_{j2} \\ b'_{j3} \\ b'_{j4} \\ \vdots \\ b'_{jn} \end{bmatrix}
=
\begin{bmatrix}
B_{11} & B_{21} & \cdots & B_{s1} \\
B_{12} & B_{22} & \cdots & B_{s2} \\
B_{13} & B_{23} & \cdots & B_{s3} \\
B_{14} & B_{24} & \cdots & B_{s4} \\
\vdots & \vdots & & \vdots \\
B_{1n} & B_{2n} & \cdots & B_{sn}
\end{bmatrix}
\begin{bmatrix}
T_{11} & T_{21} & \cdots & T_{m1} \\
T_{12} & T_{22} & \cdots & T_{m2} \\
\vdots & \vdots & & \vdots \\
T_{1s} & T_{2s} & \cdots & T_{ms}
\end{bmatrix}
\begin{bmatrix} l_{j1} \\ l_{j2} \\ \vdots \\ l_{jm} \end{bmatrix},
\]
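The relighting equation above is a plain matrix-vector product. A minimal NumPy sketch (with illustrative, assumed dimensions and random placeholder data, not values from the article) shows how a rectified image for lighting vector l_j follows from the basis images B and transport matrix T:

```python
import numpy as np

# Hypothetical sizes: n reconstructed pixels, s integration areas,
# m synthetic square lighting positions (all values illustrative).
n, s, m = 64, 8, 16
rng = np.random.default_rng(0)

B = rng.random((n, s))    # columns are the basis illumination images b_i
T = rng.random((s, m))    # transport matrix: lighting positions -> basis weights
l_j = np.zeros(m)
l_j[3] = 1.0              # lighting vector selecting one square light area

# Rectified basis image for lighting l_j: b'_j = B T l_j
b_j = B @ T @ l_j

# Precomputing B' = B T lets many lighting vectors be evaluated cheaply.
B_rect = B @ T
assert np.allclose(b_j, B_rect @ l_j)
```

Precomputing B' = B T is the natural choice when many relighting results are needed, since each new lighting vector then costs only one matrix-vector product.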
\[ \alpha \int_{\phi=0}^{\beta} \int_{\delta=0}^{d} e^{-\mu\delta}\, d\delta\, d\phi = \alpha\, \frac{2\pi\beta}{360^{\circ}}\, \frac{1-e^{-\mu d}}{\mu}, \]
\[ \frac{I_a}{I_e} = \frac{e^{-\mu\delta}}{\delta}, \]
\[ \frac{I_a}{I_e} = \alpha \int_{\phi=0}^{\beta} \int_{\delta=0}^{d} \frac{\delta\, e^{-\mu\delta}}{\delta}\, d\delta\, d\phi = \alpha \int_{\phi=0}^{\beta} \int_{\delta=0}^{d} e^{-\mu\delta}\, d\delta\, d\phi, \]
\[ \frac{I_a}{I_e} = \alpha \int_{\phi=0}^{\beta} \left[ -\frac{e^{-\mu\delta}}{\mu} \right]_{0}^{d} d\phi = \alpha \int_{\phi=0}^{\beta} \frac{1-e^{-\mu d}}{\mu}\, d\phi, \]
\[ \frac{I_a}{I_e} = \alpha\, \frac{2\pi\beta}{360^{\circ}}\, \frac{1-e^{-\mu d}}{\mu}. \]
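The attenuation integral has the closed form (2πβ/360°)(1 − e^{−μd})/μ, obtained by evaluating the antiderivative of e^{−μδ}. A short numeric check (with example values for μ, d, and β that are assumptions, not parameters from the article) confirms this step:

```python
import math

mu = 1.6      # attenuation coefficient [1/mm] (example value)
d = 2.0       # integration radius [mm] (example value)
beta = 36.0   # integration angle [degrees] (example value)

angular = 2.0 * math.pi * beta / 360.0  # angle converted from degrees to radians

# Closed form: (2*pi*beta/360) * (1 - exp(-mu*d)) / mu
closed = angular * (1.0 - math.exp(-mu * d)) / mu

# Numeric double integral of e^{-mu*delta} over phi in [0, beta] and delta in [0, d];
# the integrand is independent of phi, so the angular part factors out.
n = 100_000
h = d / n
radial = sum(math.exp(-mu * (k + 0.5) * h) for k in range(n)) * h  # midpoint rule
numeric = angular * radial

assert abs(closed - numeric) < 1e-6
```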
