Abstract

We present a 300 μm thick optical Söller collimator realized by X-ray lithography on a PMMA wafer which, when paired with luminescent concentrator films, forms the first complete prototype of a short-distance, flexible, scalable imaging system that is less than 1 mm thick. We describe two ways of increasing the light-gathering ability of the collimator by using hexagonal aperture cells and embedded micro-lenses, evaluate a new micro-lens aperture array (MLAA) for proof of concept, and analyze the optical imaging properties of flexible MLAAs when realized as thin films.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

New computational photography approaches [1] that enable small (micro- to nano-size), scalable, flexible cameras offer tremendous opportunities for future imaging applications. Recent achievements in this area benefit from new findings in material science, mechanical and optical engineering, signal processing, and machine learning.

One way of reducing the form factor of cameras is to replace thick compound lenses with amplitude masks, such as coded apertures [2–5], or with phase masks, such as convex diffusers that concentrate light in the form of high-frequency pseudo-random caustic patterns onto the image sensor [6,7]. In both cases, the sensed light signal must be computationally de-multiplexed for image reconstruction. This often leads to artefacts and requires image sensors with a significantly higher spatial resolution than that of the images being reconstructed.

Alternatively to amplitude or phase masks, thin micro-lens arrays (MLAs) can be used instead of thick single lenses. As the individual micro-lenses can have a variety of optical properties and alignments, they enable unconventional optical imaging systems, such as foveated vision (i.e., unequal spatial resolution in the field of view) [8] and flexible lenses when MLAs are embedded in a soft silicone rubber carrier [9]. Compound micro-lens arrays can be printed at nano-scale accuracy by means of femtosecond direct laser writing onto CMOS image sensors [10].

Alongside the development of various kinds of thin imaging optics, possibilities for curved and flexible image sensors are being investigated. In some cases, such sensors are based on organic photo-diodes [11, 12]. In other cases, silicon photo-diodes are connected in a grid of flexible wires [13,14].

We have previously presented an entirely different approach to scalable, transparent, flexible image sensors [15–20]. It consists of multiple thin-film luminescent concentrator (LC) layers, each of which is sensitive to a different band of the light spectrum and allows sensing and reconstruction of a different color channel for images optically focused on the sensor. For thin-film optical imaging in combination with our sensor, we investigated applications of static and dynamic coded apertures (i.e., amplitude masks) [21], phase masks, and optical Söller collimators [22]. In contrast to amplitude and phase masks, the optical Söller collimator—which is a micro-aperture array (MAA)—does not reduce spatial resolution (i.e., images are reconstructed at the resolution supported by the image sensor), but instead reduces the light-gathering ability of the entire thin-film camera system.

In this article, we make two main contributions: First, we present a 300 μm thick optical Söller collimator realized by means of X-ray lithography on a PMMA wafer that, in combination with our LC image sensor, forms the first complete prototype of a short-distance, flexible, scalable imaging system that is less than 1 mm thick. Second, we describe two ways of increasing the light-gathering ability of the collimator by using hexagonal aperture cells and embedded micro-lenses. We evaluate a new micro-lens aperture array (MLAA) for proof of concept and analyze the optical imaging properties of flexible MLAAs when realized as thin films. Figure 1 illustrates our prototypes.


Fig. 1 Prototypes: I: MAA with 3D-printed square aperture cells [22] (a–c), II: 300 μm thick MAA with hexagonal cells embedded in a PMMA wafer by means of X-ray lithography (d–f), III: MLAA with 3D-printed hexagonal cells and round lens inlays (g–i). Thin-film camera concept with stacked LC and MAA / MLAA layers (a,d,g), implemented prototypes (b,e,h), and optics geometry (c,f,i).


2. Luminescent concentrator sensor

The optical sensor in our imaging system is a transparent and flexible material (currently polycarbonate) that is doped with fluorescent particles. These particles absorb light in a particular sub-band of the light spectrum and emit it at a longer wavelength.

Optically focusing an image on the LC surface leads to fluorescent emission and transport of the emitted light by total internal reflection (TIR), with reduced multi-scattering, towards the LC edges; energy is lost over the transport distance. Thus, the whole sensor acts as an efficient 2D waveguide. By cutting the LC border regions into adjacent triangular micro-apertures that act as one-dimensional pinholes, we multiplex the complex light signal that reaches the LC edges into spatial-directional integral measurements that approximate the Radon transform of the image. With the help of optical fibres, the Radon-transformed signal projected at the LC edges is forwarded for measurement to line-scan cameras, as shown in Figs. 1(a), 1(d) and 1(g).

Our sensor model [16,18] consists of a system of linear equations:

$$l = T\,\mathrm{PSF}\,p + \eta, \tag{1}$$
where l is the measured Radon transform of the projected image p (vectorized), and η is an unknown error term (e.g., ambient light). The matrix T encodes the light transport from each pixel on the imaging sensor to the corresponding measurement positions on the edges of the LC sensor. A projected image is transformed by the point spread function (PSF) of optical elements that are placed on the sensor (such as a Söller collimator [22] or coded apertures [21]); the corresponding convolution is encoded in the matrix PSF.

The inverse of Eq. (1) enables image reconstruction by simply multiplying matrices and vectors:

$$p = (T\,\mathrm{PSF})^{-1}(l - \eta). \tag{2}$$
The matrix (T PSF)⁻¹ contains the inverse Radon transform of the LC’s light transport and the deconvolution of the optics’ PSF, and is learned using machine learning: In the course of a one-time calibration step, we project 60,000 known images, with a resolution of 64 × 64 pixels, onto the LC and measure their Radon-transformed signals. This training set feeds a machine-learning algorithm (we apply linear regression with an ℓ2-norm) to find (T PSF)⁻¹ robustly.
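As a minimal numerical sketch (not the authors' implementation), the following learns such an inverse mapping by ℓ2-regularized (ridge) linear regression from simulated training pairs. The forward matrix, problem sizes, noise level, and regularization weight are all arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (the real system uses 64 x 64 pixel images and 60,000 samples).
n_pix, n_meas, n_train = 16, 32, 500

# Stand-in forward operator A = T @ PSF (random here; unknown in practice).
A = rng.normal(size=(n_meas, n_pix))

# Training set: known images P and their noisy Radon-domain measurements L.
P = rng.uniform(size=(n_train, n_pix))
L = P @ A.T + 0.01 * rng.normal(size=(n_train, n_meas))

# Ridge regression for W ~ (T PSF)^(-1): solve (L^T L + lam I) W' = L^T P.
lam = 1e-3  # assumed l2 regularization weight
W = np.linalg.solve(L.T @ L + lam * np.eye(n_meas), L.T @ P).T

# Reconstruct an unseen image from its measurement by a single matrix-vector product.
p_true = rng.uniform(size=n_pix)
p_rec = W @ (A @ p_true)
```

Because the learned matrix absorbs both the inverse Radon transform and the PSF deconvolution, reconstruction at run time reduces to the single multiplication of Eq. (2).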

Multiple LC layers that are sensitive to different wavelengths can be stacked for color image reconstruction, where p and l are extended to contain the information for each color channel [19].

Further details on the LC sensor optics and image reconstruction are beyond the scope of this article; the interested reader is referred to [15–20,22] for more information. Instead, this article presents enhanced approaches to thin-film optical imaging that can be used together with LC sensor layers to increase depth of field, as shown in Figs. 1(a), 1(d) and 1(g). Previously, we introduced the optical Söller collimator to achieve this [22]. This first solution, however, suffers from a relatively low light-gathering ability, as the collimator walls block 75% of the light, leading to poor signal-to-noise ratios in our measurements.

3. Micro-lens collimator array

The optical Söller collimator introduced in [22], and shown in Figs. 1(a) and 1(b), is an MAA that increases depth of field by transmitting rays within a narrow collimation angle while absorbing all others. As illustrated in Fig. 1(c), the collimation angle α of a single micro-aperture of a planar Söller collimator is

$$\alpha = 2\tan^{-1}\!\left(\frac{d}{H}\right), \tag{3}$$
where H and d are its height and aperture diameter (fixed distance between walls at the upper aperture rim), respectively. For curved collimator shapes, the effective aperture diameter (d′) increases or decreases as follows:
$$d' = \begin{cases} \left(d - \dfrac{Hd}{2(r+H)}\right)\sqrt{1-\left(\dfrac{d}{2(r+H)}\right)^{2}} & \text{if } r \neq 0; \\[1ex] 0 & \text{if } r = 0, \end{cases} \tag{4}$$
where r is the radius of curvature (which is infinite for planar collimators where d′ = d). Assuming an infinite planar light source, the fraction f of light passing through the Söller collimator (in the one-dimensional case) is
$$f = \frac{d}{d+w}\,\frac{1}{\alpha/2}\int_{0}^{\alpha/2}\frac{d - f(\theta)}{d}\,d\theta, \tag{5}$$
where w is the wall width and f(θ) is the fraction of the aperture occluded by walls when parallel rays fall into the aperture at angle θ. For convex shapes
$$f(\theta) = \begin{cases} \dfrac{2H\sin\theta\cos\gamma}{\cos\theta} & \text{if } \theta < \gamma, \\[1ex] \dfrac{H\sin(\theta+\gamma)}{\cos\theta} & \text{if } \gamma \le \theta \le \dfrac{\alpha}{2}; \end{cases} \tag{6}$$
and for concave shapes
$$f(\theta) = \begin{cases} 0 & \text{if } \theta < \gamma, \\[1ex] \dfrac{H\sin(\theta-\gamma)}{\cos\theta} & \text{if } \gamma \le \theta \le \dfrac{\alpha}{2}, \end{cases} \tag{7}$$
where γ is the angle between the walls and the optical axis of the enclosed aperture:
$$\gamma = \pm\sin^{-1}\!\left(\frac{d}{2(r+H)}\right). \tag{8}$$
Note that + and − in Eq. (8) indicate negative and positive curvature radii, respectively.
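For the planar special case (γ = 0, so the occluded width at ray angle θ reduces to H tan θ), the collimation angle and the transmitted light fraction can be evaluated numerically. The snippet below is an illustrative sketch with assumed dimensions, not values taken from our prototypes:

```python
import math

def collimation_angle(d, H):
    # Eq. (3): collimation angle of a planar Soller collimator aperture.
    return 2.0 * math.atan(d / H)

def light_fraction_planar(d, w, H, n=10_000):
    # Eq. (5) with planar walls (gamma = 0): the occluded aperture width at
    # ray angle theta is H * tan(theta); midpoint-rule average over [0, alpha/2].
    half = collimation_angle(d, H) / 2.0
    acc = 0.0
    for i in range(n):
        theta = (i + 0.5) * half / n
        acc += (d - H * math.tan(theta)) / d
    return d / (d + w) * acc / n

# Assumed example dimensions in micrometers: 100 um apertures, 10 um walls,
# 300 um layer height.
alpha_deg = math.degrees(collimation_angle(100.0, 300.0))
f = light_fraction_planar(100.0, 10.0, 300.0)
```

For these assumed dimensions the aperture transmits roughly half the light admitted by its fill factor d/(d+w), since rays near the edge of the collimation cone are almost completely blocked by the walls.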

The light-gathering capability L of the collimator can then be expressed for the two-dimensional case as

$$L \approx \mathrm{NA}\,f^{2}. \tag{9}$$
It can be increased by integrating micro-lenses with a focal length that matches H into the micro-apertures, as shown in Figs. 1(g) and 1(h). The micro-lenses bend all rays within the collimation angle α—many of which would otherwise be blocked by the collimator walls—towards the LC sensor, as illustrated in Figs. 1(c) and 1(i). In Fig. 1(i), the collimation angle of the MLAA is
$$\alpha = 2\tan^{-1}\!\left(\frac{d}{2H}\right). \tag{10}$$
Comparison of Eq. (10) with Eq. (3) shows that for planar shapes, for instance, the height H of an MLAA decreases by a factor of 2 compared to an MAA with the same α.
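This halving can be checked directly from Eqs. (3) and (10); the dimensions below are arbitrary assumptions:

```python
import math

def alpha_maa(d, H):
    return 2.0 * math.atan(d / H)          # Eq. (3): plain micro-aperture array

def alpha_mlaa(d, H):
    return 2.0 * math.atan(d / (2.0 * H))  # Eq. (10): with embedded micro-lens

# For the same collimation angle, the MLAA needs only half the MAA's height.
d, H = 100.0, 300.0
same_alpha = math.isclose(alpha_mlaa(d, H / 2.0), alpha_maa(d, H))
```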

The effective aperture diameter of a curved MLAA (assuming inflexible micro-lenses but a flexible luminescent concentrator layer; Figs. 2(c) and 2(d)) is

$$d' = \begin{cases} 2\left(d - \dfrac{Hd}{2(r+H)}\right)\sqrt{1-\left(\dfrac{d}{2(r+H)}\right)^{2}} - d & \text{if } r \neq 0; \\[1ex] 0 & \text{if } r = 0. \end{cases} \tag{11}$$
The fraction of light passing through the MLAA is also given by Eq. (5), but
$$f(\theta) = \begin{cases} 0 & \text{if } \theta \le \dfrac{\beta}{2}, \\[1ex] H\left(\tan\theta - \tan\dfrac{\beta}{2}\right)\sec\gamma & \text{if } \dfrac{\beta}{2} < \theta \le \dfrac{\alpha}{2}, \end{cases} \tag{12}$$
where β/2 is the angle at which the focused light beam starts to become blocked by the walls:
$$\frac{\beta}{2} = \tan^{-1}\!\left(\frac{\frac{d}{2} \pm H\tan\gamma}{H}\right). \tag{13}$$
Note that + and − in Eq. (13), again, indicate negative and positive curvature radii, respectively.


Fig. 2 Curved shapes: Geometric properties of positively/negatively curved MAAs (a,b) and MLAAs (c,d).


The light-gathering capability of the MLAA can be obtained by substituting Eq. (12) into Eqs. (5) and (9). For planar shapes and the same α, for instance, L increases by a factor of 4 for an MLAA compared to an MAA.
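This factor can be checked numerically for the planar case: a planar MLAA passes every ray within α (Eq. (13) with γ = 0 makes β/2 coincide with α/2, leaving only the wall fill factor d/(d+w)), while a planar MAA of the same α additionally loses light to wall occlusion. The sketch below uses assumed dimensions and yields a gain slightly below 4, approaching 4 in the small-angle limit:

```python
import math

def maa_fraction(d, w, H, n=10_000):
    # Planar MAA: occluded width H*tan(theta), averaged as in Eq. (5).
    half = math.atan(d / H)  # alpha/2 from Eq. (3)
    acc = sum((d - H * math.tan((i + 0.5) * half / n)) / d for i in range(n))
    return d / (d + w) * acc / n

def mlaa_fraction(d, w):
    # Planar MLAA: no ray within the collimation angle is blocked by walls,
    # so only the wall fill factor remains.
    return d / (d + w)

# Same alpha: MAA of height H vs. MLAA of height H/2 (Eq. (10)); L ~ f^2 (Eq. (9)).
d, w, H = 100.0, 5.0, 300.0
gain = (mlaa_fraction(d, w) / maa_fraction(d, w, H)) ** 2
```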

A suitable choice of grid topology can further increase a collimator’s light-gathering ability. The goal is to select a tileable grid cell that, for the same collimation angle, minimizes the light-blocking wall areas while maximizing the light-transmitting aperture areas. Hexagonal tiling corresponds to the densest way of arranging circles in two dimensions and is optimal for dividing a surface into regions of equal area with the smallest total perimeter. Figure 3 shows optical simulation results for various tileable grid cells (equilateral triangles, squares, and regular hexagons). Hexagonal tiling is always optimal. Figure 3 also shows that, for the same depth of field (i.e., same α and L), downscaling the layer thickness H requires downscaling the wall thickness w by the same factor.
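The perimeter-per-area argument can be illustrated directly: among the three regular tilings, the hexagon minimizes the isoperimetric quotient P²/A, and hence (for thin walls of fixed width w) the wall area spent per unit of aperture area. A short sketch:

```python
import math

def isoperimetric_quotient(n_sides):
    # P^2 / A for a regular n-gon with unit side length (scale-invariant):
    # area = n / (4 * tan(pi / n)), perimeter = n.
    area = n_sides / (4.0 * math.tan(math.pi / n_sides))
    return n_sides ** 2 / area

# Triangle ~20.78, square 16.0, hexagon ~13.86: the hexagon needs the least
# wall perimeter per unit cell area.
ratios = {n: isoperimetric_quotient(n) for n in (3, 4, 6)}
```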


Fig. 3 Grid topologies and scaling: Optical simulation of light-gathering ability L over depth of field (collimation angle α / f-number). Since the structural dimensions of the grid (H and w) remain constant for each plot, only the aperture diameter d varies to achieve a desired α.


4. Prototype and results

Figure 1 shows three prototypes that we built and evaluated. All three feature the same single LC layer for sensing (300 μm thick Bayer Makrofol LISA Green).

  • Prototype I, shown in Figs. 1(a)–1(c), is the 5 × 5 cm MAA that was presented in [22]. It consists of 32 × 32 = 1024 3D-printed (Stratasys J750, acrylonitrile butadiene styrene) square aperture cells.
  • Prototype II, in Figs. 1(d)–1(f), is a new, round MAA (2 cm diameter) with hexagonal cells embedded in an H = 300 μm thick blackened PMMA wafer by means of X-ray lithography. This prototype was fabricated at the synchrotron radiation source KARA (Karlsruhe Research Accelerator, formerly ANKA). A gold mask of around 20 μm thickness carrying the primary structure was transferred via shadow projection into a blackened 4-inch PMMA wafer, 300 μm in thickness. After a wet-chemical development process, the PMMA mesh was released [23].
  • Prototype III, illustrated in Figs. 1(g)–1(i), is a new 3D-printed (Stratasys J750, VeroBlack) 5 × 5 cm MLAA. It consists of hexagonal cells with round lens inlays. The 18 × 18 = 288 biconvex lenses (N-BK7 borosilicate crown glass) have a diameter of 3000 μm, a focal length of 9830 μm, a radius of curvature of 9898 μm (both sides), and a thickness of 1500 μm.

Table 1 compares the properties of all three prototypes.


Table 1. Properties of the prototypes used in our experiments.

The optical simulations in Fig. 4 show the improvement of MLAAs over MAAs in light-gathering L for various depths of field (collimation angles α / f-numbers). Our MLAA prototype III (L = 0.095, α = 17.34°) outperforms our MAA prototype I (L = 0.0094, α = 17.34°) by a factor of 10. This improvement can be attributed to the integration of micro-lenses and the enhanced collimator geometry (i.e., hexagonal vs. square grid and the smaller wall thickness of w = 600 μm instead of w = 800 μm). Due to its relatively large wall width of w = 60 μm (compared to its height H = 300 μm), our MAA prototype II (L = 0.049, α = 49.01°) performed least well. However, it is the first thin-film MAA implementation. If structural dimensions can be reduced further (e.g., down to w = 5 μm) and suitably small micro-lenses can be integrated, then effective thin-film implementations could become feasible (as illustrated by plot IV).


Fig. 4 MLAA vs. MAA: Optical simulations of light-gathering ability L over depth of field (collimation angle α / f-number). Grid-structure dimensions (H and w) of the prototypes (I,II,III) remain constant for each plot, while the aperture diameter d varies to achieve a desired α. Note that plot IV simulates a thin MLAA with the smallest structure (H = 300 μm, w = 5 μm) that we currently consider practical. The black dots indicate the optical performance of the implemented prototypes.


Figure 5 presents image reconstruction results achieved with the three prototypes in Fig. 1. The ground-truth images are focused onto a diffuser at a distance of 3 cm from the LC sensor, so that they are entirely optically blurred on the sensor plane. The exposure times were chosen such that the measured peak signals matched and reached the maximum range of the line-scan cameras. Reconstructed images are compared to the ground truth using the structural similarity index metric (SSIM) [24].


Fig. 5 Image reconstruction results: Ground-truth images and reconstruction results achieved with the three prototypes (I,II,III, Fig. 1). SSIM is shown below each image. Numbers in the top row indicate the chosen exposure times.


For the MLAA prototype III, we measured a light gain of 8× compared to the MAA prototype I. The simulated light gain of 10× shown in Fig. 4 was not fully achieved due to manufacturing limitations (e.g., imprecise placement of micro-lenses within the micro-apertures). Although both prototypes support the same depth of field (same α), the MLAA yields less noisy images at shorter exposure times than the MAA, because its better light-gathering ability reduces dark-current noise and improves the signal-to-noise ratio.

The MAA of prototype II has a wide collimation angle and a shallower depth of field, which results in more blur in the reconstructed images than with the other prototypes. We measured a light gain of 20× compared to MAA prototype I. In this case, the simulated light-gain factor was significantly lower (5×), as shown in Fig. 4. This is because the PMMA wafer is not fully opaque: although the aperture walls transmit only about 5% of light penetrating the MAA perpendicularly at 300 μm thickness, the effective transmission is higher at steeper angles because the penetrated wall thickness is lower.
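This angular dependence can be sketched with the Beer–Lambert law, calibrating the attenuation coefficient from the stated 5% transmission over the full 300 μm wall height; the shorter oblique path length used below is a hypothetical example, not a measured value:

```python
import math

# Beer-Lambert sketch: T(t) = exp(-mu * t), with mu calibrated from the
# stated 5% transmission through 300 um of blackened PMMA.
mu = -math.log(0.05) / 300.0  # attenuation per micrometer (assumed homogeneous)

def wall_transmission(path_um):
    return math.exp(-mu * path_um)

t_perpendicular = wall_transmission(300.0)  # 0.05 by construction
t_oblique = wall_transmission(100.0)        # hypothetical shorter path -> higher T
```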

5. Discussion and conclusion

We have shown that a 300 μm thick MAA can be realized on a PMMA wafer by means of X-ray lithography, and that combining this with our LC image sensor yields the first complete prototype of a short-distance, flexible, scalable imaging system that is less than 1 mm thick. Higher-opacity PMMA or blackened metal materials will improve its efficiency.

Furthermore, we have described two ways of increasing the light-gathering ability of the MAA: hexagonal aperture cells and embedded micro-lenses. We have evaluated a new 3D-printed MLAA for proof of concept, and analyzed the optical imaging properties of future flexible MLAAs when realized as thin films. Manufacturing techniques must be found for producing MLAAs at micrometer scale that reach the performance levels indicated by plot IV of Fig. 4. Two-photon direct laser lithography seems to be a suitable technology for achieving this [10].

We conclude that a planar MLAA generally gathers four times the amount of light while being two times thinner than a planar MAA for the same depth of field (i.e., same α). Moreover, hexagonal tiling is always the optimal grid topology for both MAAs and MLAAs.

Comparing non-planar MLAAs and MAAs shows that, for the same α, layer thickness decreases by more than a factor of two, while light gathering increases by more than a factor of four for convex shapes and by less than a factor of four for concave shapes—depending on the curvature radius (approaching a factor of one for strongly curved concave shapes). For curved MLAAs and MAAs with a constant aperture diameter d (assuming inflexible lenses / upper aperture rims), more convex bending increases depth of field and decreases light gathering; the opposite holds for more concave bending. However, non-planar MLAAs also always outperform MAAs of the same curvature in light gathering (depending on the degree of curvature: by a factor of >4 for convex shapes, a factor of 4 in the planar case, and a factor of <4 for concave shapes, approaching ×1 at extreme curvatures). The depth of field of a curved MLAA changes by a factor of >1 (convex) or <1 (concave) compared to an MAA of the same curvature.

As part of future work, we will investigate other, more efficient alternatives for thin-film imaging. One option is to apply recent findings in meta-lenses [25]. However, current implementations of broad-band meta-lenses continue to suffer from very limited light-gathering [26].

Funding

Linz Institute of Technology (LIT) (LIT-2016-1-SEE-008 – LumiConCam).

Acknowledgments

We thank the Institute of Science and Technology Austria (IST Austria) for 3D-printing the aperture arrays used in prototypes I and III, and the Karlsruhe Nano Micro Facility (KNMF) of the Karlsruhe Institute of Technology (KIT) for manufacturing the thin aperture array wafer (prototype II).

References

1. S. J. Koppal, “A survey of computational photography in the small: Creating intelligent cameras for the next wave of miniature devices,” IEEE Signal Process. Mag. 33, 16–22 (2016).

2. E. E. Fenimore and T. Cannon, “Coded aperture imaging with uniformly redundant arrays,” Appl. Opt. 17, 337–347 (1978).

3. R. Dicke, “Scatter-hole cameras for x-rays and gamma rays,” Astrophys. J. 153, L101 (1968).

4. M. S. Asif, A. Ayremlou, A. Veeraraghavan, R. Baraniuk, and A. Sankaranarayanan, “Flatcam: Replacing lenses with masks and computation,” in Proceedings of IEEE International Conference on Computer Vision Workshop (IEEE, 2015), pp. 663–666.

5. M. J. DeWeert and B. P. Farm, “Lensless coded-aperture imaging with separable doubly-Toeplitz masks,” Opt. Eng. 54, 023102 (2015).

6. G. Kuo, N. Antipa, R. Ng, and L. Waller, “DiffuserCam: diffuser-based lensless cameras,” in Computational Optical Sensing and Imaging (Optical Society of America, 2017).

7. N. Antipa, S. Necula, R. Ng, and L. Waller, “Single-shot diffuser-encoded light field imaging,” in Proceedings of IEEE International Conference on Computational Photography (IEEE, 2016), pp. 1–11.

8. S. Thiele, K. Arzenbacher, T. Gissibl, H. Giessen, and A. M. Herkommer, “3D-printed eagle eye: Compound microlens system for foveated imaging,” Sci. Adv. 3 (2017).

9. D. C. Sims, Y. Yue, and S. K. Nayar, “Towards flexible sheet cameras: Deformable lens arrays with intrinsic optical adaptation,” in Proceedings of IEEE International Conference on Computational Photography (IEEE, 2016), pp. 1–11.

10. T. Gissibl, S. Thiele, A. Herkommer, and H. Giessen, “Two-photon direct laser writing of ultracompact multi-lens objectives,” Nat. Photonics 10, 554–560 (2016).

11. G. Yu, J. Wang, J. McElvain, and A. J. Heeger, “Large-area, full-color image sensors made with semiconducting polymers,” Adv. Mater. 10, 1431–1434 (1998).

12. T. Someya, Y. Kato, S. Iba, Y. Noguchi, T. Sekitani, H. Kawaguchi, and T. Sakurai, “Integration of organic FETs with organic photodiodes for a large area, flexible, and lightweight sheet image scanners,” IEEE Trans. Electron Devices 52, 2502–2511 (2005).

13. H. C. Ko, M. P. Stoykovich, J. Song, V. Malyarchuk, W. M. Choi, C.-J. Yu, J. B. Geddes III, J. Xiao, S. Wang, Y. Huang, and J. A. Rogers, “A hemispherical electronic eye camera based on compressible silicon optoelectronics,” Nature 454, 748 (2008).

14. Y. M. Song, Y. Xie, V. Malyarchuk, J. Xiao, I. Jung, K.-J. Choi, Z. Liu, H. Park, C. Lu, R. H. Kim, R. Li, K. B. Crozier, Y. Huang, and J. A. Rogers, “Digital cameras with designs inspired by the arthropod eye,” Nature 497, 95 (2013).

15. A. Koppelhuber and O. Bimber, “Towards a transparent, flexible, scalable and disposable image sensor using thin-film luminescent concentrators,” Opt. Express 21, 4796–4810 (2013).

16. A. Koppelhuber, C. Birklbauer, S. Izadi, and O. Bimber, “A transparent thin-film sensor for multi-focal image reconstruction and depth estimation,” Opt. Express 22, 8928–8942 (2014).

17. A. Koppelhuber, S. Fanello, C. Birklbauer, D. Schedl, S. Izadi, and O. Bimber, “Enhanced learning-based imaging with thin-film luminescent concentrators,” Opt. Express 22, 29531–29543 (2014).

18. A. Koppelhuber and O. Bimber, “A classification sensor based on compressed optical radon transform,” Opt. Express 23, 9397–9406 (2015).

19. A. Koppelhuber and O. Bimber, “Multi-exposure color imaging with stacked thin-film luminescent concentrators,” Opt. Express 23, 33713–33720 (2015).

20. A. Koppelhuber and O. Bimber, “Computational imaging, relighting and depth sensing using flexible thin-film sensors,” Opt. Express 25, 2694–2702 (2017).

21. O. Bimber and A. Koppelhuber, “Toward a flexible, scalable, and transparent thin-film camera,” Proc. IEEE 105, 960–969 (2017).

22. A. Koppelhuber and O. Bimber, “Thin-film camera using luminescent concentrators and an optical söller collimator,” Opt. Express 25, 18526–18536 (2017).

23. V. Saile, U. Wallrabe, O. Tabata, and J. G. Korvink, LIGA and its Applications, vol. 7 (John Wiley & Sons, 2009).

24. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13, 600–612 (2004).

25. M. Khorasaninejad, W. T. Chen, R. C. Devlin, J. Oh, A. Y. Zhu, and F. Capasso, “Metalenses at visible wavelengths: Diffraction-limited focusing and subwavelength resolution imaging,” Science 352, 1190–1194 (2016).

26. W. T. Chen, A. Y. Zhu, V. Sanjeev, M. Khorasaninejad, Z. Shi, E. Lee, and F. Capasso, “A broadband achromatic metalens for focusing and imaging in the visible,” Nat. Nanotechnol. 13, 220 (2018).

[Crossref] [PubMed]

Chen, W. T.

W. T. Chen, A. Y. Zhu, V. Sanjeev, M. Khorasaninejad, Z. Shi, E. Lee, and F. Capasso, “A broadband achromatic metalens for focusing and imaging in the visible,” Nat. Nanotechnol. 13, 220 (2018).
[Crossref] [PubMed]

M. Khorasaninejad, W. T. Chen, R. C. Devlin, J. Oh, A. Y. Zhu, and F. Capasso, “Metalenses at visible wavelengths: Diffraction-limited focusing and subwavelength resolution imaging,” Science 352, 1190–1194 (2016).
[Crossref] [PubMed]

Choi, K.-J.

Y. M. Song, Y. Xie, V. Malyarchuk, J. Xiao, I. Jung, K.-J. Choi, Z. Liu, H. Park, C. Lu, R. H. Kim, R. Li, K. B. Crozier, Y. Huang, and J. A. Rogers, “Digital cameras with designs inspired by the arthropod eye,” Nature 497, 95 (2013).
[Crossref] [PubMed]

Choi, W. M.

H. C. Ko, M. P. Stoykovich, J. Song, V. Malyarchuk, W. M. Choi, C.-J. Yu, J. B. Geddes Iii, J. Xiao, S. Wang, Y. Huang, and J. A. Rogers, “A hemispherical electronic eye camera based on compressible silicon optoelectronics,” Nature 454, 748 (2008).
[Crossref] [PubMed]

Crozier, K. B.

Y. M. Song, Y. Xie, V. Malyarchuk, J. Xiao, I. Jung, K.-J. Choi, Z. Liu, H. Park, C. Lu, R. H. Kim, R. Li, K. B. Crozier, Y. Huang, and J. A. Rogers, “Digital cameras with designs inspired by the arthropod eye,” Nature 497, 95 (2013).
[Crossref] [PubMed]

Devlin, R. C.

M. Khorasaninejad, W. T. Chen, R. C. Devlin, J. Oh, A. Y. Zhu, and F. Capasso, “Metalenses at visible wavelengths: Diffraction-limited focusing and subwavelength resolution imaging,” Science 352, 1190–1194 (2016).
[Crossref] [PubMed]

DeWeert, M. J.

M. J. DeWeert and B. P. Farm, “Lensless coded-aperture imaging with separable doubly-toeplitz masks,” Opt. Eng. 54, 023102 (2015).
[Crossref]

Dicke, R.

R. Dicke, “Scatter-hole cameras for x-rays and gamma rays,” Astrophys. J. 153, L101 (1968).
[Crossref]

Fanello, S.

Farm, B. P.

M. J. DeWeert and B. P. Farm, “Lensless coded-aperture imaging with separable doubly-toeplitz masks,” Opt. Eng. 54, 023102 (2015).
[Crossref]

Fenimore, E. E.

Geddes Iii, J. B.

H. C. Ko, M. P. Stoykovich, J. Song, V. Malyarchuk, W. M. Choi, C.-J. Yu, J. B. Geddes Iii, J. Xiao, S. Wang, Y. Huang, and J. A. Rogers, “A hemispherical electronic eye camera based on compressible silicon optoelectronics,” Nature 454, 748 (2008).
[Crossref] [PubMed]

Giessen, H.

T. Gissibl, S. Thiele, A. Herkommer, and H. Giessen, “Two-photon direct laser writing of ultracompact multi-lens objectives,” Nat. Photonics 10, 554–560 (2016).
[Crossref]

S. Thiele, K. Arzenbacher, T. Gissibl, H. Giessen, and A. M. Herkommer, “3d-printed eagle eye: Compound microlens system for foveated imaging,” Sci. Adv.3 (2017).
[Crossref] [PubMed]

Gissibl, T.

T. Gissibl, S. Thiele, A. Herkommer, and H. Giessen, “Two-photon direct laser writing of ultracompact multi-lens objectives,” Nat. Photonics 10, 554–560 (2016).
[Crossref]

S. Thiele, K. Arzenbacher, T. Gissibl, H. Giessen, and A. M. Herkommer, “3d-printed eagle eye: Compound microlens system for foveated imaging,” Sci. Adv.3 (2017).
[Crossref] [PubMed]

Heeger, A. J.

G. Yu, J. Wang, J. McElvain, and A. J. Heeger, “Large-area, full-color image sensors made with semiconducting polymers,” Adv. Mater. 10, 1431–1434 (1998).
[Crossref]

Herkommer, A.

T. Gissibl, S. Thiele, A. Herkommer, and H. Giessen, “Two-photon direct laser writing of ultracompact multi-lens objectives,” Nat. Photonics 10, 554–560 (2016).
[Crossref]

Herkommer, A. M.

S. Thiele, K. Arzenbacher, T. Gissibl, H. Giessen, and A. M. Herkommer, “3d-printed eagle eye: Compound microlens system for foveated imaging,” Sci. Adv.3 (2017).
[Crossref] [PubMed]

Huang, Y.

Y. M. Song, Y. Xie, V. Malyarchuk, J. Xiao, I. Jung, K.-J. Choi, Z. Liu, H. Park, C. Lu, R. H. Kim, R. Li, K. B. Crozier, Y. Huang, and J. A. Rogers, “Digital cameras with designs inspired by the arthropod eye,” Nature 497, 95 (2013).
[Crossref] [PubMed]

H. C. Ko, M. P. Stoykovich, J. Song, V. Malyarchuk, W. M. Choi, C.-J. Yu, J. B. Geddes Iii, J. Xiao, S. Wang, Y. Huang, and J. A. Rogers, “A hemispherical electronic eye camera based on compressible silicon optoelectronics,” Nature 454, 748 (2008).
[Crossref] [PubMed]

Iba, S.

T. Someya, Y. Kato, S. Iba, Y. Noguchi, T. Sekitani, H. Kawaguchi, and T. Sakurai, “Integration of organic fets with organic photodiodes for a large area, flexible, and lightweight sheet image scanners,” IEEE Trans. Electron Devices 52, 2502–2511 (2005).
[Crossref]

Izadi, S.

Jung, I.

Y. M. Song, Y. Xie, V. Malyarchuk, J. Xiao, I. Jung, K.-J. Choi, Z. Liu, H. Park, C. Lu, R. H. Kim, R. Li, K. B. Crozier, Y. Huang, and J. A. Rogers, “Digital cameras with designs inspired by the arthropod eye,” Nature 497, 95 (2013).
[Crossref] [PubMed]

Kato, Y.

T. Someya, Y. Kato, S. Iba, Y. Noguchi, T. Sekitani, H. Kawaguchi, and T. Sakurai, “Integration of organic fets with organic photodiodes for a large area, flexible, and lightweight sheet image scanners,” IEEE Trans. Electron Devices 52, 2502–2511 (2005).
[Crossref]

Kawaguchi, H.

T. Someya, Y. Kato, S. Iba, Y. Noguchi, T. Sekitani, H. Kawaguchi, and T. Sakurai, “Integration of organic fets with organic photodiodes for a large area, flexible, and lightweight sheet image scanners,” IEEE Trans. Electron Devices 52, 2502–2511 (2005).
[Crossref]

Khorasaninejad, M.

W. T. Chen, A. Y. Zhu, V. Sanjeev, M. Khorasaninejad, Z. Shi, E. Lee, and F. Capasso, “A broadband achromatic metalens for focusing and imaging in the visible,” Nat. Nanotechnol. 13, 220 (2018).
[Crossref] [PubMed]

M. Khorasaninejad, W. T. Chen, R. C. Devlin, J. Oh, A. Y. Zhu, and F. Capasso, “Metalenses at visible wavelengths: Diffraction-limited focusing and subwavelength resolution imaging,” Science 352, 1190–1194 (2016).
[Crossref] [PubMed]

Kim, R. H.

Y. M. Song, Y. Xie, V. Malyarchuk, J. Xiao, I. Jung, K.-J. Choi, Z. Liu, H. Park, C. Lu, R. H. Kim, R. Li, K. B. Crozier, Y. Huang, and J. A. Rogers, “Digital cameras with designs inspired by the arthropod eye,” Nature 497, 95 (2013).
[Crossref] [PubMed]

Ko, H. C.

H. C. Ko, M. P. Stoykovich, J. Song, V. Malyarchuk, W. M. Choi, C.-J. Yu, J. B. Geddes Iii, J. Xiao, S. Wang, Y. Huang, and J. A. Rogers, “A hemispherical electronic eye camera based on compressible silicon optoelectronics,” Nature 454, 748 (2008).
[Crossref] [PubMed]

Koppal, S. J.

S. J. Koppal, “A survey of computational photography in the small: Creating intelligent cameras for the next wave of miniature devices,” IEEE Signal Process. Mag. 33, 16–22 (2016).
[Crossref]

Koppelhuber, A.

Korvink, J. G.

V. Saile, U. Wallrabe, O. Tabata, and J. G. Korvink, LIGA and its Applications, vol. 7 (John Wiley & Sons, 2009).

Kuo, G.

G. Kuo, N. Antipa, R. Ng, and L. Waller, “Diffusercam: diffuser-based lensless cameras,” in Computational Optical Sensing and Imaging, (Optical Society of America, 2017).

Lee, E.

W. T. Chen, A. Y. Zhu, V. Sanjeev, M. Khorasaninejad, Z. Shi, E. Lee, and F. Capasso, “A broadband achromatic metalens for focusing and imaging in the visible,” Nat. Nanotechnol. 13, 220 (2018).
[Crossref] [PubMed]

Li, R.

Y. M. Song, Y. Xie, V. Malyarchuk, J. Xiao, I. Jung, K.-J. Choi, Z. Liu, H. Park, C. Lu, R. H. Kim, R. Li, K. B. Crozier, Y. Huang, and J. A. Rogers, “Digital cameras with designs inspired by the arthropod eye,” Nature 497, 95 (2013).
[Crossref] [PubMed]

Liu, Z.

Y. M. Song, Y. Xie, V. Malyarchuk, J. Xiao, I. Jung, K.-J. Choi, Z. Liu, H. Park, C. Lu, R. H. Kim, R. Li, K. B. Crozier, Y. Huang, and J. A. Rogers, “Digital cameras with designs inspired by the arthropod eye,” Nature 497, 95 (2013).
[Crossref] [PubMed]

Lu, C.

Y. M. Song, Y. Xie, V. Malyarchuk, J. Xiao, I. Jung, K.-J. Choi, Z. Liu, H. Park, C. Lu, R. H. Kim, R. Li, K. B. Crozier, Y. Huang, and J. A. Rogers, “Digital cameras with designs inspired by the arthropod eye,” Nature 497, 95 (2013).
[Crossref] [PubMed]

Malyarchuk, V.

Y. M. Song, Y. Xie, V. Malyarchuk, J. Xiao, I. Jung, K.-J. Choi, Z. Liu, H. Park, C. Lu, R. H. Kim, R. Li, K. B. Crozier, Y. Huang, and J. A. Rogers, “Digital cameras with designs inspired by the arthropod eye,” Nature 497, 95 (2013).
[Crossref] [PubMed]

H. C. Ko, M. P. Stoykovich, J. Song, V. Malyarchuk, W. M. Choi, C.-J. Yu, J. B. Geddes Iii, J. Xiao, S. Wang, Y. Huang, and J. A. Rogers, “A hemispherical electronic eye camera based on compressible silicon optoelectronics,” Nature 454, 748 (2008).
[Crossref] [PubMed]

McElvain, J.

G. Yu, J. Wang, J. McElvain, and A. J. Heeger, “Large-area, full-color image sensors made with semiconducting polymers,” Adv. Mater. 10, 1431–1434 (1998).
[Crossref]

Nayar, S. K.

D. C. Sims, Y. Yue, and S. K. Nayar, “Towards flexible sheet cameras: Deformable lens arrays with intrinsic optical adaptation,” in Proceedings of IEEE International Conference on Computational Photography, (IEEE, 2016), pp. 1–11.

Necula, S.

N. Antipa, S. Necula, R. Ng, and L. Waller, “Single-shot diffuser-encoded light field imaging,” in Proceedings of IEEE International Conference on Computational Photography, (IEEE, 2016), pp. 1–11.

Ng, R.

N. Antipa, S. Necula, R. Ng, and L. Waller, “Single-shot diffuser-encoded light field imaging,” in Proceedings of IEEE International Conference on Computational Photography, (IEEE, 2016), pp. 1–11.

G. Kuo, N. Antipa, R. Ng, and L. Waller, “Diffusercam: diffuser-based lensless cameras,” in Computational Optical Sensing and Imaging, (Optical Society of America, 2017).

Noguchi, Y.

T. Someya, Y. Kato, S. Iba, Y. Noguchi, T. Sekitani, H. Kawaguchi, and T. Sakurai, “Integration of organic fets with organic photodiodes for a large area, flexible, and lightweight sheet image scanners,” IEEE Trans. Electron Devices 52, 2502–2511 (2005).
[Crossref]

Oh, J.

M. Khorasaninejad, W. T. Chen, R. C. Devlin, J. Oh, A. Y. Zhu, and F. Capasso, “Metalenses at visible wavelengths: Diffraction-limited focusing and subwavelength resolution imaging,” Science 352, 1190–1194 (2016).
[Crossref] [PubMed]

Park, H.

Y. M. Song, Y. Xie, V. Malyarchuk, J. Xiao, I. Jung, K.-J. Choi, Z. Liu, H. Park, C. Lu, R. H. Kim, R. Li, K. B. Crozier, Y. Huang, and J. A. Rogers, “Digital cameras with designs inspired by the arthropod eye,” Nature 497, 95 (2013).
[Crossref] [PubMed]

Rogers, J. A.

Y. M. Song, Y. Xie, V. Malyarchuk, J. Xiao, I. Jung, K.-J. Choi, Z. Liu, H. Park, C. Lu, R. H. Kim, R. Li, K. B. Crozier, Y. Huang, and J. A. Rogers, “Digital cameras with designs inspired by the arthropod eye,” Nature 497, 95 (2013).
[Crossref] [PubMed]

H. C. Ko, M. P. Stoykovich, J. Song, V. Malyarchuk, W. M. Choi, C.-J. Yu, J. B. Geddes Iii, J. Xiao, S. Wang, Y. Huang, and J. A. Rogers, “A hemispherical electronic eye camera based on compressible silicon optoelectronics,” Nature 454, 748 (2008).
[Crossref] [PubMed]

Saile, V.

V. Saile, U. Wallrabe, O. Tabata, and J. G. Korvink, LIGA and its Applications, vol. 7 (John Wiley & Sons, 2009).

Sakurai, T.

T. Someya, Y. Kato, S. Iba, Y. Noguchi, T. Sekitani, H. Kawaguchi, and T. Sakurai, “Integration of organic fets with organic photodiodes for a large area, flexible, and lightweight sheet image scanners,” IEEE Trans. Electron Devices 52, 2502–2511 (2005).
[Crossref]

Sanjeev, V.

W. T. Chen, A. Y. Zhu, V. Sanjeev, M. Khorasaninejad, Z. Shi, E. Lee, and F. Capasso, “A broadband achromatic metalens for focusing and imaging in the visible,” Nat. Nanotechnol. 13, 220 (2018).
[Crossref] [PubMed]

Sankaranarayanan, A.

M. S. Asif, A. Ayremlou, A. Veeraraghavan, R. Baraniuk, and A. Sankaranarayanan, “Flatcam: Replacing lenses with masks and computation,” in Proceedings of IEEE International Conference on Computer Vision Workshop, (IEEE, 2015), pp. 663–666.

Schedl, D.

Sekitani, T.

T. Someya, Y. Kato, S. Iba, Y. Noguchi, T. Sekitani, H. Kawaguchi, and T. Sakurai, “Integration of organic fets with organic photodiodes for a large area, flexible, and lightweight sheet image scanners,” IEEE Trans. Electron Devices 52, 2502–2511 (2005).
[Crossref]

Sheikh, H. R.

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13, 600–612 (2004).
[Crossref] [PubMed]

Shi, Z.

W. T. Chen, A. Y. Zhu, V. Sanjeev, M. Khorasaninejad, Z. Shi, E. Lee, and F. Capasso, “A broadband achromatic metalens for focusing and imaging in the visible,” Nat. Nanotechnol. 13, 220 (2018).
[Crossref] [PubMed]

Simoncelli, E. P.

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13, 600–612 (2004).
[Crossref] [PubMed]

Sims, D. C.

D. C. Sims, Y. Yue, and S. K. Nayar, “Towards flexible sheet cameras: Deformable lens arrays with intrinsic optical adaptation,” in Proceedings of IEEE International Conference on Computational Photography, (IEEE, 2016), pp. 1–11.

Someya, T.

T. Someya, Y. Kato, S. Iba, Y. Noguchi, T. Sekitani, H. Kawaguchi, and T. Sakurai, “Integration of organic fets with organic photodiodes for a large area, flexible, and lightweight sheet image scanners,” IEEE Trans. Electron Devices 52, 2502–2511 (2005).
[Crossref]

Song, J.

H. C. Ko, M. P. Stoykovich, J. Song, V. Malyarchuk, W. M. Choi, C.-J. Yu, J. B. Geddes Iii, J. Xiao, S. Wang, Y. Huang, and J. A. Rogers, “A hemispherical electronic eye camera based on compressible silicon optoelectronics,” Nature 454, 748 (2008).
[Crossref] [PubMed]

Song, Y. M.

Y. M. Song, Y. Xie, V. Malyarchuk, J. Xiao, I. Jung, K.-J. Choi, Z. Liu, H. Park, C. Lu, R. H. Kim, R. Li, K. B. Crozier, Y. Huang, and J. A. Rogers, “Digital cameras with designs inspired by the arthropod eye,” Nature 497, 95 (2013).
[Crossref] [PubMed]

Stoykovich, M. P.

H. C. Ko, M. P. Stoykovich, J. Song, V. Malyarchuk, W. M. Choi, C.-J. Yu, J. B. Geddes Iii, J. Xiao, S. Wang, Y. Huang, and J. A. Rogers, “A hemispherical electronic eye camera based on compressible silicon optoelectronics,” Nature 454, 748 (2008).
[Crossref] [PubMed]

Tabata, O.

V. Saile, U. Wallrabe, O. Tabata, and J. G. Korvink, LIGA and its Applications, vol. 7 (John Wiley & Sons, 2009).

Thiele, S.

T. Gissibl, S. Thiele, A. Herkommer, and H. Giessen, “Two-photon direct laser writing of ultracompact multi-lens objectives,” Nat. Photonics 10, 554–560 (2016).
[Crossref]

S. Thiele, K. Arzenbacher, T. Gissibl, H. Giessen, and A. M. Herkommer, “3d-printed eagle eye: Compound microlens system for foveated imaging,” Sci. Adv.3 (2017).
[Crossref] [PubMed]

Veeraraghavan, A.

M. S. Asif, A. Ayremlou, A. Veeraraghavan, R. Baraniuk, and A. Sankaranarayanan, “Flatcam: Replacing lenses with masks and computation,” in Proceedings of IEEE International Conference on Computer Vision Workshop, (IEEE, 2015), pp. 663–666.

Waller, L.

G. Kuo, N. Antipa, R. Ng, and L. Waller, “Diffusercam: diffuser-based lensless cameras,” in Computational Optical Sensing and Imaging, (Optical Society of America, 2017).

N. Antipa, S. Necula, R. Ng, and L. Waller, “Single-shot diffuser-encoded light field imaging,” in Proceedings of IEEE International Conference on Computational Photography, (IEEE, 2016), pp. 1–11.

Wallrabe, U.

V. Saile, U. Wallrabe, O. Tabata, and J. G. Korvink, LIGA and its Applications, vol. 7 (John Wiley & Sons, 2009).

Wang, J.

G. Yu, J. Wang, J. McElvain, and A. J. Heeger, “Large-area, full-color image sensors made with semiconducting polymers,” Adv. Mater. 10, 1431–1434 (1998).
[Crossref]

Wang, S.

H. C. Ko, M. P. Stoykovich, J. Song, V. Malyarchuk, W. M. Choi, C.-J. Yu, J. B. Geddes Iii, J. Xiao, S. Wang, Y. Huang, and J. A. Rogers, “A hemispherical electronic eye camera based on compressible silicon optoelectronics,” Nature 454, 748 (2008).
[Crossref] [PubMed]

Wang, Z.

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13, 600–612 (2004).
[Crossref] [PubMed]

Xiao, J.

Y. M. Song, Y. Xie, V. Malyarchuk, J. Xiao, I. Jung, K.-J. Choi, Z. Liu, H. Park, C. Lu, R. H. Kim, R. Li, K. B. Crozier, Y. Huang, and J. A. Rogers, “Digital cameras with designs inspired by the arthropod eye,” Nature 497, 95 (2013).
[Crossref] [PubMed]

H. C. Ko, M. P. Stoykovich, J. Song, V. Malyarchuk, W. M. Choi, C.-J. Yu, J. B. Geddes Iii, J. Xiao, S. Wang, Y. Huang, and J. A. Rogers, “A hemispherical electronic eye camera based on compressible silicon optoelectronics,” Nature 454, 748 (2008).
[Crossref] [PubMed]

Xie, Y.

Y. M. Song, Y. Xie, V. Malyarchuk, J. Xiao, I. Jung, K.-J. Choi, Z. Liu, H. Park, C. Lu, R. H. Kim, R. Li, K. B. Crozier, Y. Huang, and J. A. Rogers, “Digital cameras with designs inspired by the arthropod eye,” Nature 497, 95 (2013).
[Crossref] [PubMed]

Yu, C.-J.

H. C. Ko, M. P. Stoykovich, J. Song, V. Malyarchuk, W. M. Choi, C.-J. Yu, J. B. Geddes Iii, J. Xiao, S. Wang, Y. Huang, and J. A. Rogers, “A hemispherical electronic eye camera based on compressible silicon optoelectronics,” Nature 454, 748 (2008).
[Crossref] [PubMed]

Yu, G.

G. Yu, J. Wang, J. McElvain, and A. J. Heeger, “Large-area, full-color image sensors made with semiconducting polymers,” Adv. Mater. 10, 1431–1434 (1998).
[Crossref]

Yue, Y.

D. C. Sims, Y. Yue, and S. K. Nayar, “Towards flexible sheet cameras: Deformable lens arrays with intrinsic optical adaptation,” in Proceedings of IEEE International Conference on Computational Photography, (IEEE, 2016), pp. 1–11.

Zhu, A. Y.

W. T. Chen, A. Y. Zhu, V. Sanjeev, M. Khorasaninejad, Z. Shi, E. Lee, and F. Capasso, “A broadband achromatic metalens for focusing and imaging in the visible,” Nat. Nanotechnol. 13, 220 (2018).
[Crossref] [PubMed]

M. Khorasaninejad, W. T. Chen, R. C. Devlin, J. Oh, A. Y. Zhu, and F. Capasso, “Metalenses at visible wavelengths: Diffraction-limited focusing and subwavelength resolution imaging,” Science 352, 1190–1194 (2016).
[Crossref] [PubMed]

Adv. Mater. (1)

G. Yu, J. Wang, J. McElvain, and A. J. Heeger, “Large-area, full-color image sensors made with semiconducting polymers,” Adv. Mater. 10, 1431–1434 (1998).
[Crossref]

Appl. Opt. (1)

Astrophys. J. (1)

R. Dicke, “Scatter-hole cameras for x-rays and gamma rays,” Astrophys. J. 153, L101 (1968).
[Crossref]

IEEE Signal Process. Mag. (1)

S. J. Koppal, “A survey of computational photography in the small: Creating intelligent cameras for the next wave of miniature devices,” IEEE Signal Process. Mag. 33, 16–22 (2016).
[Crossref]

IEEE Trans. Electron Devices (1)

T. Someya, Y. Kato, S. Iba, Y. Noguchi, T. Sekitani, H. Kawaguchi, and T. Sakurai, “Integration of organic fets with organic photodiodes for a large area, flexible, and lightweight sheet image scanners,” IEEE Trans. Electron Devices 52, 2502–2511 (2005).
[Crossref]

IEEE Trans. Image Process. (1)

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13, 600–612 (2004).
[Crossref] [PubMed]

Nat. Nanotechnol. (1)

W. T. Chen, A. Y. Zhu, V. Sanjeev, M. Khorasaninejad, Z. Shi, E. Lee, and F. Capasso, “A broadband achromatic metalens for focusing and imaging in the visible,” Nat. Nanotechnol. 13, 220 (2018).
[Crossref] [PubMed]

Nat. Photonics (1)

T. Gissibl, S. Thiele, A. Herkommer, and H. Giessen, “Two-photon direct laser writing of ultracompact multi-lens objectives,” Nat. Photonics 10, 554–560 (2016).
[Crossref]

Nature (2)

H. C. Ko, M. P. Stoykovich, J. Song, V. Malyarchuk, W. M. Choi, C.-J. Yu, J. B. Geddes Iii, J. Xiao, S. Wang, Y. Huang, and J. A. Rogers, “A hemispherical electronic eye camera based on compressible silicon optoelectronics,” Nature 454, 748 (2008).
[Crossref] [PubMed]

Y. M. Song, Y. Xie, V. Malyarchuk, J. Xiao, I. Jung, K.-J. Choi, Z. Liu, H. Park, C. Lu, R. H. Kim, R. Li, K. B. Crozier, Y. Huang, and J. A. Rogers, “Digital cameras with designs inspired by the arthropod eye,” Nature 497, 95 (2013).
[Crossref] [PubMed]

Opt. Eng. (1)

M. J. DeWeert and B. P. Farm, “Lensless coded-aperture imaging with separable doubly-toeplitz masks,” Opt. Eng. 54, 023102 (2015).
[Crossref]

Opt. Express (7)

Proc. IEEE (1)

O. Bimber and A. Koppelhuber, “Toward a flexible, scalable, and transparent thin-film camera,” Proc. IEEE 105, 960–969 (2017).
[Crossref]

Science (1)

M. Khorasaninejad, W. T. Chen, R. C. Devlin, J. Oh, A. Y. Zhu, and F. Capasso, “Metalenses at visible wavelengths: Diffraction-limited focusing and subwavelength resolution imaging,” Science 352, 1190–1194 (2016).
[Crossref] [PubMed]

Other (6)

V. Saile, U. Wallrabe, O. Tabata, and J. G. Korvink, LIGA and its Applications, vol. 7 (John Wiley & Sons, 2009).

G. Kuo, N. Antipa, R. Ng, and L. Waller, “Diffusercam: diffuser-based lensless cameras,” in Computational Optical Sensing and Imaging, (Optical Society of America, 2017).

N. Antipa, S. Necula, R. Ng, and L. Waller, “Single-shot diffuser-encoded light field imaging,” in Proceedings of IEEE International Conference on Computational Photography, (IEEE, 2016), pp. 1–11.

S. Thiele, K. Arzenbacher, T. Gissibl, H. Giessen, and A. M. Herkommer, “3d-printed eagle eye: Compound microlens system for foveated imaging,” Sci. Adv.3 (2017).
[Crossref] [PubMed]

D. C. Sims, Y. Yue, and S. K. Nayar, “Towards flexible sheet cameras: Deformable lens arrays with intrinsic optical adaptation,” in Proceedings of IEEE International Conference on Computational Photography, (IEEE, 2016), pp. 1–11.

M. S. Asif, A. Ayremlou, A. Veeraraghavan, R. Baraniuk, and A. Sankaranarayanan, “Flatcam: Replacing lenses with masks and computation,” in Proceedings of IEEE International Conference on Computer Vision Workshop, (IEEE, 2015), pp. 663–666.

Figures (5)

Fig. 1 Prototypes: I: MAA with 3D-printed square aperture cells [22] (a–c); II: 300 μm thick MAA with hexagonal cells embedded in a PMMA wafer by means of 3D laser lithography (d–f); III: MLAA with 3D-printed hexagonal cells and round lens inlays (g–i). Thin-film camera concept with stacked LC and MAA/MLAA layers (a,d,g), implemented prototypes (b,e,h), and optics geometry (c,f,i).

Fig. 2 Curved shapes: Geometric properties of positively/negatively curved MAAs (a,b) and MLAAs (c,d).

Fig. 3 Grid topologies and scaling: Optical simulation of light-gathering ability L over depth of field (collimation angle α / f-number). Since the structural dimensions of the grid (H and w) remain constant for each plot, only the aperture diameter d varies to achieve a desired α.

Fig. 4 MLAA vs. MAA: Optical simulations of light-gathering ability L over depth of field (collimation angle α / f-number). Grid-structure dimensions (H and w) of the prototypes (I,II,III) remain constant for each plot, while the aperture diameter d varies to achieve a desired α. Note that plot IV simulates a thin MLAA with the smallest structure (H = 300 μm, w = 5 μm) that we currently consider practical. The black dots indicate the optical performance of the implemented prototypes.

Fig. 5 Image reconstruction results: Ground truth images and reconstruction results achieved with the three prototypes (I,II,III; Fig. 1). The SSIM value is shown below each image; numbers in the top row indicate the chosen exposure times.
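The reconstruction quality in Fig. 5 is scored with SSIM (Wang et al., 2004). As a minimal illustrative sketch, the single-window (global) form of the SSIM index can be computed directly from the formula; the function name `ssim_global` and the example images are our own, and practical evaluations usually apply SSIM over local sliding windows instead:

```python
import numpy as np

def ssim_global(x: np.ndarray, y: np.ndarray, data_range: float = 1.0) -> float:
    """Global (single-window) SSIM index of Wang et al. (2004)."""
    c1 = (0.01 * data_range) ** 2          # stabilizing constants
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()            # mean luminance
    vx, vy = x.var(), y.var()              # variances (contrast)
    cxy = ((x - mx) * (y - my)).mean()     # covariance (structure)
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / (
        (mx**2 + my**2 + c1) * (vx + vy + c2)
    )

# Toy images: a gradient and a brightened, clipped copy of it.
img = np.linspace(0.0, 1.0, 64).reshape(8, 8)
noisy = np.clip(img + 0.05, 0.0, 1.0)

print(round(ssim_global(img, img), 4))     # identical images give 1.0
print(ssim_global(img, noisy) < 1.0)       # any distortion lowers the score
```

A windowed implementation (e.g., `skimage.metrics.structural_similarity`) additionally averages this index over local patches, which is closer to how SSIM is normally reported.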

Tables (1)


Table 1 Properties of the prototypes used in our experiments.

Equations (13)


(1) $l = T_{\mathrm{PSF}}\, p + \eta,$

(2) $p = (T_{\mathrm{PSF}})^{-1}(l - \eta).$

(3) $\alpha = 2\tan^{-1}\!\left(\frac{d}{H}\right),$

(4) $d' = \begin{cases} \dfrac{d - H\frac{d}{2(r+H)}}{\sqrt{1 - \left(\frac{d}{2(r+H)}\right)^{2}}} & \text{if } r \neq 0; \\ 0 & \text{if } r = 0, \end{cases}$

(5) $f = \dfrac{d}{d+w}\,\dfrac{1}{\alpha/2}\int_{0}^{\alpha/2}\dfrac{d - f(\theta)}{d}\, d\theta,$

(6) $f(\theta) = \begin{cases} \dfrac{H}{2}\,\dfrac{\sin\theta\cos\gamma}{\cos\theta} & \text{if } \theta < \gamma \\ \dfrac{H\sin(\theta+\gamma)}{\cos\theta} & \text{if } \gamma \le \theta \le \frac{\alpha}{2}; \end{cases}$

(7) $f(\theta) = \begin{cases} 0 & \text{if } \theta < \gamma, \\ \dfrac{H\sin(\theta-\gamma)}{\cos\theta} & \text{if } \gamma \le \theta \le \frac{\alpha}{2}, \end{cases}$

(8) $\gamma = \pm\sin^{-1}\!\left(\frac{d}{2(r+H)}\right).$

(9) $L \propto \mathrm{NA}^{2} \propto f^{-2}.$

(10) $\alpha = 2\tan^{-1}\!\left(\frac{d}{2H}\right).$

(11) $d' = \begin{cases} 2\,\dfrac{d - H\frac{d}{2(r+H)}}{\sqrt{1 - \left(\frac{d}{2(r+H)}\right)^{2}}} - d & \text{if } r \neq 0; \\ 0 & \text{if } r = 0. \end{cases}$

(12) $f(\theta) = \begin{cases} 0 & \text{if } \theta \le \frac{\beta}{2}, \\ \dfrac{H\left(\tan\theta - \tan\frac{\beta}{2}\right)}{\sec\gamma} & \text{if } \frac{\beta}{2} < \theta \le \frac{\alpha}{2}, \end{cases}$

(13) $\dfrac{\beta}{2} = \tan^{-1}\!\left(\dfrac{\frac{d}{2} \pm H\tan\gamma}{H}\right).$
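The linear image-formation model and the collimation-angle relation above can be exercised with a toy numerical example. This is an illustrative sketch only: the transport matrix, noise term, and all dimensions below are hypothetical example values, not the authors' calibrated data.

```python
import numpy as np

def collimation_angle(d_um: float, H_um: float) -> float:
    """Full acceptance angle (degrees) of a straight aperture cell
    of diameter d and height H, alpha = 2 * atan(d / H)."""
    return np.degrees(2.0 * np.arctan(d_um / H_um))

rng = np.random.default_rng(0)

n = 64                                          # toy problem size
p_true = rng.random(n)                          # unknown image p
T_psf = np.eye(n) + 0.1 * rng.random((n, n))    # hypothetical light-transport matrix
eta = 0.01 * np.ones(n)                         # constant noise/bias term

l = T_psf @ p_true + eta                        # forward model: l = T_PSF p + eta
p_rec = np.linalg.solve(T_psf, l - eta)         # inversion: p = T_PSF^{-1} (l - eta)

# Example geometry in the spirit of the prototypes: H = 300 um collimator height.
alpha_narrow = collimation_angle(42.0, 300.0)   # small aperture, strong collimation
alpha_wide = collimation_angle(84.0, 300.0)     # doubling d widens the acceptance cone
```

In practice the transport matrix is not known analytically and is measured (or ill-conditioned), so regularized inversion would replace the exact solve; the sketch only makes the roles of $T_{\mathrm{PSF}}$, $\eta$, and $\alpha$ concrete.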
