## Abstract

We wrap a thin-film luminescent concentrator (LC) - a flexible and transparent plastic foil doped with fluorescent dye particles - around an object to obtain images of the object under varying synthetic lighting conditions and without lenses. These images can then be used for computational relighting and depth reconstruction. An LC is an efficient two-dimensional light guide that allows photons to be collected over a wide solid angle, and through multiple overlapping integration areas simultaneously. We show that conventional photodetectors achieve a higher signal-to-noise ratio when equipped with an LC than in direct measurements. Efficient light guidance in combination with computational imaging approaches, such as presented in this article, can lead to novel optical sensors that collect light in a structured way and within a wide solid angle rather than unstructured through narrow apertures. This enables flexible, scalable, transparent, and lens-less thin-film image and depth sensors.

© 2017 Optical Society of America

## Introduction

In recent decades, imaging has shifted from using photosensitive film to computational approaches that enable many novel imaging systems. For example, coded aperture arrays together with advanced reconstruction methods make flat camera designs possible [1]. New materials support flexible, scalable, and transparent thin-film image sensors [2,3]. Efficient single-pixel cameras [4–11] are enabled by compressive sensing [12, 13] and controlled spatial-temporal illumination modulation. Computational reconstruction methods can use raw data from a greater family of sensor devices, such as metamaterial apertures [14], and allow advanced image processing after data acquisition, such as refocussing from light fields.

In this work, we present a prototype and reconstruction methods for computational imaging, relighting, and depth reconstruction using a flexible thin-film optical sensor. The sensor consists of a luminescent concentrator (LC) film (i.e. a transparent and flexible polycarbonate foil doped with fluorescent dyes) that is wrapped around an object, as shown in Fig. 1(A). Light of a specific spectral sub-band scattered from the object onto the LC is absorbed and emitted at a longer wavelength by the fluorescent dye particles inside the LC foil and transported to the foil edges by total internal reflection. With a simple aperture structure cut into the LC material, we record the transported light within a limited integration angle at various directions and positions along the LC edges (cf. Fig. 1(B)). Through a hole in the foil, we illuminate the object with a sequence of random speckle patterns projected by a digital micromirror device (DMD) spatial light modulator, as shown in Fig. 1(C).

From the measurements of the light back-scattered onto the LC foil, we reconstruct images of the object as seen from the perspective of the spatial light modulator for varying synthetic lighting conditions. This supports computational relighting [15–17] and depth reconstruction from shading [7, 9]. In contrast to single-pixel detectors that collect light through a narrow aperture only, our sensor receives light scattered over a wide solid angle and a much larger area on the LC foil. This leads to a higher signal-to-noise ratio (S/N) without requiring highly sensitive photodetectors or photomultipliers.

## Computing illumination bases

The measurements *s _{i}* over a single integration area *i* (gray in Fig. 1(B)) obtained from projected speckle patterns *P* can be described with

$s_i = P b_i + e_i, \qquad (1)$

where *b _{i}* is an image of the object and *e _{i}* an unknown error term. Each row of the matrix *P* contains the speckle pattern that leads to the corresponding value in the vector *s _{i}*. Solving for *b _{i}* yields an image in the resolution of the speckle pattern that shows the object under the illumination caused by the integration area on the sensor acting as a synthetic area light source. For the same set of projected speckle patterns, a different illumination image can be computed simultaneously for each sampled integration area (i.e. each aperture at the edges) in the same way. We refer to these images as *basis illumination images*.

Equation (1) is solved with the biconjugate gradient stabilized method (BiCGStab) [18]. Since BiCGStab requires a square matrix, both sides of the equation are multiplied with the transpose of *P*, which brings two further advantages. First, the size of the equation system is reduced ($P^{T}P$ is of size *p* × *p*, where *p* is the number of pixels of a pattern in *P*, instead of size *n* × *p* with *n* > *p* different speckle patterns in *P*), and reconstruction is therefore faster. Second, newly projected patterns can be added iteratively to the product $P^{T}P$ by multiplying each new pattern with its transpose and accumulating the result. This allows the image to be reconstructed while patterns are still being projected onto the object.
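The incremental normal-equation scheme described above can be sketched in a few lines of NumPy. This is a toy illustration with placeholder dimensions (the prototype uses 128 × 96 patterns and one measurement vector per integration area), and it uses a direct dense solve in place of BiCGStab for brevity:

```python
import numpy as np

# Toy dimensions (placeholders, much smaller than the prototype's).
p = 16    # pixels per speckle pattern
n = 100   # number of projected patterns (n > p)

rng = np.random.default_rng(0)
b_true = rng.random(p)              # unknown image of the object

# Accumulate P^T P and P^T s one pattern at a time, as patterns
# are projected -- no need to store the full n x p matrix P.
PtP = np.zeros((p, p))
Pts = np.zeros(p)
for _ in range(n):
    pattern = rng.integers(0, 2, p).astype(float)  # binary speckle pattern
    s = pattern @ b_true                           # single integral measurement
    PtP += np.outer(pattern, pattern)              # rank-1 update of P^T P
    Pts += pattern * s

# The paper solves this p x p system with BiCGStab; a direct
# solve illustrates the same normal-equation idea.
b_rec = np.linalg.solve(PtP, Pts)
print(np.allclose(b_rec, b_true))   # True: noise-free toy data is recovered exactly
```

Because each pattern contributes only a rank-1 update, the reconstruction can keep pace with the projection sequence.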

Our prototype samples over 16 integration areas at each of the sensor’s four edges simultaneously, which yields the 64 basis illumination images shown in Fig. 2 (i.e. by applying Eq. (1)). For an image resolution of 128 × 96 pixels, 200k speckle patterns were projected. By using Hadamard codes instead of random patterns [9], the number of projections can be reduced to about 12k, which results in an overall scanning time of less than one second (assuming a ≈20 kHz modulation rate of the DMD, as in [9]). The minimum scanning time, however, is ultimately limited by the photon noise present.

## Rectifying illumination bases

Each computed basis illumination image *b _{i}* shows the object under the shading caused by a synthetic area light source that has the shape of the corresponding triangular integration area on the sensor. These illumination bases, however, are not suitable for common image-based relighting or depth-reconstruction techniques, as such methods require point lights or small area light sources. Storing the images *b _{i}* as columns in matrix *B*, we obtain *rectified basis illumination images b′ _{j}* that result from synthetic square area light sources positioned on a uniform grid on the cylindrical sensor surface by solving

$B = B' T^{T} \qquad (2)$

for *B′*, whose columns are the rectified basis illumination images *b′ _{j}*. Note that *T* is the transport matrix, which describes the light transport from each square area on the LC foil to each integral measurement [2]. The non-rectified basis illumination images *B* can thus be represented as a linear combination of the rectified basis illumination images *B′* weighted by $T^{T}$ (Eq. (2)).
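Treating rectification as a linear solve, the relation between *B*, *B′*, and *T* can be illustrated with a small NumPy sketch. The dimensions below are hypothetical, and we assume the linear relation B = B′Tᵀ stated above; when there are at least as many integral measurements as grid cells, least squares recovers the rectified bases exactly:

```python
import numpy as np

rng = np.random.default_rng(1)

n_pix = 50   # pixels per basis illumination image (hypothetical)
s = 64       # number of integral measurements (apertures)
m = 16       # e.g. a 4 x 4 grid of square synthetic light sources

T = rng.random((s, m))                 # transport: grid squares -> measurements
B_rect_true = rng.random((n_pix, m))   # ground-truth rectified bases

# Forward model: measured bases are combinations of rectified
# bases weighted by the transport matrix, B = B' T^T.
B = B_rect_true @ T.T

# Solve T (B')^T = B^T column-wise by least squares; with s >= m
# and T of full column rank, the recovery is exact.
B_rect = np.linalg.lstsq(T, B.T, rcond=None)[0].T
print(np.allclose(B_rect, B_rect_true))   # True for this consistent toy system
```

For higher-resolution grids (m larger than the number of measurements), this solve becomes underdetermined, which is the situation addressed by the back-projection approach in the next paragraph.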

With only a small number of basis illuminations in *B*, Eq. (2) would quickly become underdetermined when computing rectified basis illuminations for higher-resolution sampling grids. We approximate such solutions with multiple iterations of the following back-projection:

$B' = B L, \qquad (3)$

where each column *l _{j}* of *L* is a weight vector of illumination contributions within the light source grid, *n* is the number of pixels in a rectified basis illumination image, *s* the number of integral measurements, and *m* the lighting grid size (so that *B′*, *B*, and *L* are of size *n* × *m*, *n* × *s*, and *s* × *m*, respectively).

Equation (3) is solved employing the algebraic reconstruction technique (ART) [19], which is an iterative approach to tomographic reconstruction that is based on series expansion. It begins with estimating the solution vector, which is projected orthogonally onto the first hyperplane (the first equation) of the linear system. This process is repeated for the remaining equations of the system, which yields a solution vector that approximates the overall solution. One iterative step of ART is repeated *n* times, each time using the solution vector of the previous iteration as the initial estimate. We apply a faster variant of ART called simultaneous algebraic reconstruction technique (SART) [20]. Instead of calculating each value of the solution vector sequentially for each equation, SART calculates them for all equations of the linear system within one iteration. In practice, we found that two iterations are sufficient, and that more iterations will not lead to visually detectable improvements.
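The SART update described above can be sketched for a generic nonnegative linear system. The system matrix, sizes, iteration count, and relaxation factor below are toy placeholders, not the prototype's actual transport data:

```python
import numpy as np

def sart(A, b, iterations=2000, relax=1.0):
    """Simultaneous algebraic reconstruction technique (SART).

    Unlike plain ART, which projects onto one hyperplane (equation)
    at a time, SART updates all components of x from all equations
    in each iteration: residuals are normalized by the row sums of A
    and the back-projected update by its column sums (A nonnegative).
    """
    row_sums = A.sum(axis=1)
    col_sums = A.sum(axis=0)
    x = np.zeros(A.shape[1])
    for _ in range(iterations):
        residual = (b - A @ x) / row_sums          # per-equation residual
        x += relax * (A.T @ residual) / col_sums   # simultaneous update
    return x

rng = np.random.default_rng(2)
A = rng.random((40, 10))   # nonnegative toy system matrix
x_true = rng.random(10)
b = A @ x_true             # consistent right-hand side

x = sart(A, b)
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))  # small relative residual
```

The relaxation factor must lie in (0, 2) for convergence; in the paper's setting only two iterations were needed in practice.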

Figure 3 shows the rectified basis illumination images for a uniform lighting grid of 16 × 16 (i.e. *m* = 16 × 16 = 256) synthetic light sources defined in *l _{j}* and computed with Eq. (3). In reality, each of these light sources has a size of 7 mm × 7 mm, and they are evenly distributed on the cylindrical shape of the LC surface.

## Relighting and depth reconstruction

As explained in [15], the rectified basis illumination images can be linearly combined to compute more complex synthetic shading effects. In our case, this is achieved with *B′l*, where the matrix *B′* contains the rectified basis illumination images *b′ _{j}* in its columns, and the vector *l* contains the image of the synthetic light source. Figure 4(A) shows results for synthetic area light sources (colored strips reshaped on the LC surface, shown on the right) in *l*.
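The relighting product *B′l* reduces to a single matrix–vector multiplication. A toy NumPy sketch with a hypothetical image resolution and a made-up strip light source in *l* (the superposition check at the end illustrates why arbitrary light-source images can be synthesized from the bases):

```python
import numpy as np

rng = np.random.default_rng(3)

h, w = 12, 16          # hypothetical image resolution
m = 16 * 16            # 16 x 16 lighting grid, as in Fig. 3

# Columns of B' are the rectified basis illumination images b'_j.
B_rect = rng.random((h * w, m))

# l encodes the synthetic light source: here, one strip of lit grid cells.
l = np.zeros(m)
l[32:48] = 1.0

relit = (B_rect @ l).reshape(h, w)   # relit image of the object

# Relighting is linear: the sum of two light sources produces
# the sum of the two individually relit images.
l2 = np.zeros(m)
l2[0:16] = 1.0
both = (B_rect @ (l + l2)).reshape(h, w)
print(np.allclose(both, relit + (B_rect @ l2).reshape(h, w)))   # True
```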

Furthermore, classical shape-from-shading algorithms [21] can be applied to the rectified basis illumination images for depth reconstruction (Figs. 4(B,C)). As illustrated in Fig. 4(D), we achieve a mean depth-estimation error of ≈1.5 mm (≈10%) in our experiment when compared to ground truth computed from ideal rectified basis images, which were calculated with a physical lighting simulation of the object’s 3D-scanned proxy (using the Blender Cycles engine). The scannable dimensions of the object were ≈30×30×15 mm (height×width×depth).

## S/N efficiency

Assuming that identical photodetectors (size, sensitivity, S/N) are either installed in front of the object, as in classical single-pixel detector approaches, or connected to the LC foil (e.g. via optical fibers) as in our prototype, the increase in S/N is proportional to the gain in light collected and transported over the larger integration areas:

$\text{S/N increase} = \alpha\,\frac{\beta\pi}{180}\,\frac{1-e^{-\mu d}}{\mu}, \qquad (4)$

where *α* and *μ* are the absorption and attenuation coefficients of the LC material, and *β* and *d* are the integration angle (in degrees) and distance (foil diameter) that are determined by the sensor design. Equation (4) was derived as follows:

We know from previous work [22] that the intensity of light travelling through the LC foil material decreases in proportion to

$I_a = I_e\,e^{-\mu\delta}, \qquad (5)$

where *I _{e}* is the emitted light intensity at the entrance point of the film, *I _{a}* the attenuated light intensity at travel distance *δ*, and *μ* the attenuation coefficient of the LC material (experimentally determined to be *μ* = 0.008 [22]). Assuming the emission *I _{e}* to be identical at all entrance points within an arc area with radius *d* and angle *β*, the integrated amount of light transported to the arc origin is proportional to

$I_a \propto \alpha \int_{0}^{\beta}\!\!\int_{0}^{d} I_e\,e^{-\mu\delta}\,d\delta\,d\theta, \qquad (6)$

where *α* is the absorption coefficient of the LC material (i.e. the fraction of light that enters the foil and is absorbed and emitted by the fluorescent dye particles, assuming a 100% quantum yield). Solving the inner integral yields

$I_a \propto \alpha\,I_e\,\frac{\beta\pi}{180}\,\frac{1-e^{-\mu d}}{\mu}, \qquad (7)$

where *β* is given in degrees (hence the conversion factor *π*/180). Consequently, the same photodetector (same size, sensitivity, S/N) that receives the amount *I _{e}* of light directly from an object would collect the amount *I _{a}* over the integration area of the LC foil, and the S/N increase is given by Eq. (8):

$\frac{I_a}{I_e} = \alpha\,\frac{\beta\pi}{180}\,\frac{1-e^{-\mu d}}{\mu}. \qquad (8)$

For the prototype shown in Fig. 1, *α* is 11.51% (the measured fraction of the blue sub-spectrum absorbed from white light and emitted as a green sub-spectrum, assuming a 100% quantum yield), *μ* is 0.008, and *β* and *d* were chosen to be 11° and 108 mm, respectively. This leads to a theoretical S/N increase of 59.8% (×1.598), compared to a classical single-pixel detector setup capturing the full white-light spectrum (detectors located at the same distance from the object as the LC foil).
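The reported figures can be checked numerically. The following short script assumes the gain formula α(βπ/180)(1 − e^(−μd))/μ of Eq. (4), which reproduces both the ×1.598 and the ×13.884 values quoted in the text:

```python
import math

def sn_increase(alpha, mu, beta_deg, d):
    """S/N gain of the LC-coupled detector over direct measurement,
    per Eq. (4): alpha * (beta * pi / 180) * (1 - exp(-mu * d)) / mu."""
    return alpha * (beta_deg * math.pi / 180.0) * (1.0 - math.exp(-mu * d)) / mu

# Prototype parameters from the text.
gain = sn_increase(alpha=0.1151, mu=0.008, beta_deg=11.0, d=108.0)
print(round(gain, 3))        # 1.598, the theoretical S/N increase

# Hypothetical full-spectrum absorption (alpha = 100%).
gain_max = sn_increase(alpha=1.0, mu=0.008, beta_deg=11.0, d=108.0)
print(round(gain_max, 3))    # 13.884, the theoretical maximum
```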

To confirm our theoretical results, we took direct measurements using the same photodetectors as those connected to the LC foil in our prototype (also employing the same optical fibers) on a uniform grid of 5 × 4 samples at positions where the cylindrically shaped LC foil was located (i.e. at the same distance to the object). We compared these (spectrally unfiltered) measurements with the (spectrally filtered) values we measured with our prototype over the 16 integration areas on one side of the LC foil (assuming symmetry of the other three sides). For both sets of measurements, the same illumination condition was applied. On average, the experimental S/N increase was 63.6% (×1.636) for read-noise limited conditions. The configuration of the experiment and the acquired measurement values are shown in Fig. 5.

This slightly larger increase in S/N (compared to the theoretical result) can be explained by a marginal difference between the attenuation coefficient *μ* = 0.008 determined in [22] and the physical attenuation in our prototype.

By stacking multiple LC layers that absorb different parts of the white-light spectrum and collecting the sum of the emitted light (e.g. with larger-diameter optical fibers connected across all layers), the S/N increases further in proportion to the increase in *α*. Assuming that we could absorb the full spectrum (i.e. *α* = 100%), the S/N would increase to a theoretical maximum of ×13.884.

## Experimental setup

For our hardware prototype (cf. Fig. 1), the luminescent concentrator foil (a Bayer Makrofol^{®} LISA Green LC film of 108 mm × 108 mm × 300 *μ*m) was glued onto a transparent support cylinder. The aperture structure at the edges of the LC and the center hole were cut into the foil by means of a GraphRobo Graphtec cutting plotter. Jiangxi Daishing POF Co., Ltd polymethylmethacrylate (PMMA) step-index multi-mode optical fibers with a 250 *μ*m diameter and a numerical aperture of 0.5 were used to guide the integrated light signal of each aperture to the photodetectors of line scan cameras for measurement. As line scan cameras we used four CMOS Sensor Inc. M106-A4-R1 CIS (contact image sensor) modules equipped with four programmable USB controllers (USB-Board-M106A4 from Spectronic Devices Ltd). The CIS modules record 10-bit gray scales with a signal-to-noise ratio of
$\text{S}/{\text{N}}_{\text{dB}}=10{\text{log}}_{10}\left(\frac{{P}_{\text{signal}}}{{P}_{\text{noise}}}\right)=50.8\,\text{dB}$, where *P*_{signal} and *P*_{noise} are the mean-square signal and noise levels, respectively. The spatial light modulator used for projecting the speckle patterns was a Texas Instruments Pico Projector Development Kit v2.0.
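For reference, a 50.8 dB figure corresponds to a mean-square signal-to-noise power ratio of roughly 1.2 × 10^5, obtained by inverting the quoted definition:

```python
sn_db = 50.8
power_ratio = 10 ** (sn_db / 10)   # invert S/N_dB = 10 * log10(P_signal / P_noise)
print(f"{power_ratio:.3e}")        # ~1.202e+05
```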

## Conclusion and limitations

Use of a luminescent concentrator as a light guide has two advantages over direct photodetector measurement. First, for the same photosensor, the S/N increases when light is collected through the LC surface (e.g. by a factor of ×1.636 for the single band-limited LC foil layer in our experiments, reaching a theoretical maximum of ×13.884 if the entire spectrum is covered). Second, light collection can be multiplexed over multiple overlapping integration areas within the LC foil to support the simultaneous reconstruction of many lighting conditions (256 in our example) while requiring only a low number of parallel measurements (64 for our prototype). In principle, a high S/N could also be achieved in single measurements using flexible large-area photodetectors [23] or by combining the measurements from multiple small photodetectors (e.g. from a flexible grid of organic photodiodes [24, 25]). While the former would not allow multiplexing over multiple integration areas (thereby preventing the reconstruction of multiple lighting bases), the latter would require a significantly larger number of photodetectors on the sensor surface to resolve the necessary individual, overlapping integration areas.

As part of future work, we plan to investigate how the S/N and the reconstruction quality can be further improved in our approach. Reducing the concentration of the fluorescent dye or using a fluorophore with smaller particles would decrease the attenuation coefficient *μ*, but would also decrease the emission. Applying a larger integration angle *β* would increase the amount of light collected, but would also lower the variance of the reconstructed lighting bases. An optimal balance must be found for both. Furthermore, scanning at real-time rates and in the invisible part of the spectrum (e.g. using infrared light) is an important technical extension.

One limitation of our approach is that, with multiple LC layers, it can either maximize S/N by integrating the full spectrum of the reflected light for gray scale image reconstruction (e.g. ×13.884 in our example), or support color image reconstruction for multiple spectral sub-bands at a lower S/N (e.g. ×1.636 per channel in our example).

Efficient light guidance in combination with computational imaging approaches, such as presented in this article, can lead to novel optical sensors that collect light in a structured way and within a wide solid angle rather than unstructured through narrow apertures. This enables flexible, scalable, transparent, and lens-less thin-film image and depth sensors that hold potential for application domains such as human-computer interfaces.

## Acknowledgments

We thank Robert Koeppe from *isiQiri interface technologies GmbH* for fruitful discussions and for providing LC samples. We also thank *Wood K Plus Kompetenzzentrum Holz GmbH* for a 3D scan of our test object.

## References and links

**1. **M. S. Asif, A. Ayremlou, A. Veeraraghavan, R. Baraniuk, and A. Sankaranarayanan, “Flatcam: Replacing lenses with masks and computation,” in “2015 IEEE International Conference on Computer Vision Workshop (ICCVW),” (2015), pp. 663–666.

**2. **A. Koppelhuber and O. Bimber, “Towards a transparent, flexible, scalable and disposable image sensor using thin-film luminescent concentrators,” Optics Express **21**, 4796–4810 (2013). [CrossRef] [PubMed]

**3. **A. Koppelhuber and O. Bimber, “Multi-exposure color imaging with stacked thin-film luminescent concentrators,” Optics Express **23**, 33713–33720 (2015). [CrossRef]

**4. **D. Takhar, J. N. Laska, M. B. Wakin, M. F. Duarte, D. Baron, S. Sarvotham, K. F. Kelly, and R. G. Baraniuk, “A new compressive imaging camera architecture using optical-domain compression,” in “Electronic Imaging 2006,” (International Society for Optics and Photonics, 2006), p. 606509.

**5. **W. L. Chan, K. Charan, D. Takhar, K. F. Kelly, R. G. Baraniuk, and D. M. Mittleman, “A single-pixel terahertz imaging system based on compressed sensing,” Applied Physics Letters **93**, 121105 (2008). [CrossRef]

**6. **M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Processing Magazine **25**, 83–91 (2008). [CrossRef]

**7. **B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. Padgett, “3D computational imaging with single-pixel detectors,” Science **340**, 844–847 (2013). [CrossRef] [PubMed]

**8. **S. S. Welsh, M. P. Edgar, R. Bowman, P. Jonathan, B. Sun, and M. J. Padgett, “Fast full-color computational imaging with single-pixel detectors,” Optics Express **21**, 23068–23074 (2013). [CrossRef] [PubMed]

**9. **Y. Zhang, M. P. Edgar, B. Sun, N. Radwell, G. M. Gibson, and M. J. Padgett, “3D single-pixel video,” Journal of Optics **18**, 035203 (2016). [CrossRef]

**10. **G. Li, W. Wang, Y. Wang, W. Yang, and L. Liu, “Single-pixel camera with one graphene photodetector,” Optics Express **24**, 400–408 (2016). [CrossRef] [PubMed]

**11. **Z. Zhang and J. Zhong, “Three-dimensional single-pixel imaging with far fewer measurements than effective image pixels,” Optics Letters **41**, 2497–2500 (2016). [CrossRef] [PubMed]

**12. **D. L. Donoho, “Compressed sensing,” IEEE Transactions on Information Theory **52**, 1289–1306 (2006). [CrossRef]

**13. **R. G. Baraniuk, “Compressive sensing,” IEEE Signal Processing Magazine **24**, 118–121 (2007). [CrossRef]

**14. **J. Hunt, T. Driscoll, A. Mrozack, G. Lipworth, M. Reynolds, D. Brady, and D. R. Smith, “Metamaterial apertures for computational imaging,” Science **339**, 310–313 (2013). [CrossRef] [PubMed]

**15. **P. Debevec, T. Hawkins, C. Tchou, H.-P. Duiker, W. Sarokin, and M. Sagar, “Acquiring the reflectance field of a human face,” in “Proceedings of the 27th annual conference on Computer graphics and interactive techniques,” (ACM Press/Addison-Wesley Publishing Co., 2000), pp. 145–156.

**16. **V. Masselus, P. Peers, P. Dutré, and Y. D. Willems, “Relighting with 4D incident light fields,” in “ACM Transactions on Graphics (TOG),” vol. 22 (ACM, 2003), pp. 613–620.

**17. **P. Sen, B. Chen, G. Garg, S. R. Marschner, M. Horowitz, M. Levoy, and H. Lensch, “Dual photography,” in “ACM Transactions on Graphics (TOG),” vol. 24 (ACM, 2005), pp. 745–755.

**18. **H. A. Van der Vorst, “Bi-cgstab: A fast and smoothly converging variant of bi-cg for the solution of nonsymmetric linear systems,” SIAM Journal on scientific and Statistical Computing **13**, 631–644 (1992). [CrossRef]

**19. **M. Slaney and A. Kak, *Principles of Computerized Tomographic Imaging* (SIAM, Philadelphia, 1988).

**20. **A. Andersen and A. Kak, “Simultaneous algebraic reconstruction technique (sart): a superior implementation of the art algorithm,” Ultrasonic imaging **6**, 81–94 (1984). [CrossRef] [PubMed]

**21. **R. Zhang, P.-S. Tsai, J. E. Cryer, and M. Shah, “Shape-from-shading: a survey,” IEEE Transactions on Pattern Analysis and Machine Intelligence **21**, 690–706 (1999). [CrossRef]

**22. **R. Koeppe, A. Neulinger, P. Bartu, and S. Bauer, “Video-speed detection of the absolute position of a light point on a large-area photodetector based on luminescent waveguides,” Optics Express **18**, 2209–2218 (2010). [CrossRef] [PubMed]

**23. **Z. Zheng, T. Zhang, J. Yao, Y. Zhang, J. Xu, and G. Yang, “Flexible, transparent and ultra-broadband photodetector based on large-area WSe2 film for wearable devices,” Nanotechnology **27**, 225501 (2016). [CrossRef] [PubMed]

**24. **T. Someya, Y. Kato, S. Iba, Y. Noguchi, T. Sekitani, H. Kawaguchi, and T. Sakurai, “Integration of organic FETs with organic photodiodes for a large area, flexible, and lightweight sheet image scanners,” IEEE Transactions on Electron Devices **52**, 2502–2511 (2005). [CrossRef]

**25. **T. N. Ng, W. S. Wong, M. L. Chabinyc, S. Sambandan, and R. A. Street, “Flexible image sensor array with bulk heterojunction organic photodiode,” Applied Physics Letters **92**, 213303 (2008). [CrossRef]