Abstract
Imaging three-dimensional (3D) objects has been realized by methods such as binocular stereo vision and multi-view imaging. These methods, however, need multiple cameras or multiple shots to obtain elemental images. In this paper, we develop a single-shot multi-view imaging technique that utilizes the natural randomness of scattering media. By exploiting the memory effect and the uncorrelated point spread functions (PSFs) among scattering media, we demonstrate that both stereo imaging with large disparity and up to seven-view imaging of a 3D object can be reconstructed from only one speckle pattern by deconvolution. The elemental images are consistent with the 3D object projection and with images taken by multi-shot imaging. Our technique provides a feasible way to capture multi-view images with short acquisition time and easy calibration.
© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
1. Introduction
Imaging is one of the most important technologies to characterize or visualize objects because a large amount of information can be captured in a single image. In addition to two-dimensional images, three-dimensional (3D) imaging provides depth information about the scene. Currently, there are various 3D imaging technologies such as holographic photography, light-field photography, time-of-flight imaging and binocular stereo imaging [1–6], which are important not only for scientific characterization, but also for glasses-free 3D displays. Compared to binocular stereo vision, which is analogous to human eyes [7,8], multi-view displays provide strong depth impression, large disparity and seamless viewing change simultaneously [4,9,10]. Capturing multi-view images requires multiple shots with a single camera or multiple cameras, which entail a challenging camera calibration process for displaying all the elemental images. A single-shot multi-view imaging technique, where all the elemental images are multiplexed in a single camera image, would be the ideal solution.
Unlike conventional imaging with a transparent optical lens, computational imaging techniques can transform scattering media such as frosted glass or optical diffusers into a scattering lens. Besides non-invasive imaging through naturally occurring scattering obstacles for real applications in astrophysics [11–15] or biomedical imaging [16–19], researchers have recently used thin scattering media as passive optical components. The principle is based on the optical memory effect [20,21], in which the point spread function (PSF) of the scattering media is linear shift-invariant [20–22]. The object can be revealed by deconvolution of the speckle images with the speckle-type PSF associated with the scattering media [23–27]. Utilizing the spectral decorrelation of the PSF, one can even extract multi-spectral information from a single monochrome speckle image [28], turning a piece of frosted glass into a scattering lens and multiple band-pass color filters at the same time purely by computation. The performance of scattering lenses keeps improving with ongoing research on enlarging the field of view [24,26], improving the depth of focus [27] and enhancing resolution to the diffraction limit [19] or even to super resolution [29].
Here, we utilize scattering media with a computational approach to introduce a single-shot multi-view imaging technique. The natural randomness of scattering media guarantees that no two scattering regions are identical; thus, their corresponding PSFs are uncorrelated. Single-shot multi-view imaging is made feasible by dividing the scattering media into an aperture array. In a single-shot camera image, multiple elemental images are multiplexed naturally through the multiple PSFs corresponding to the multiple apertures. Each elemental image is then obtained by deconvolution of the single speckle image with the corresponding PSF. The disparity of a 3D object is defined by the multiple apertures on the scattering media, which are static in relation to the imaging sensor position. The calibration for viewing angle becomes very simple, as only the relative aperture positions are required.
2. Principle
Multi-view imaging is a 3D imaging method that uses $N$ elemental images to reconstruct the object. For each viewing direction $i$, the 3D object $O$ is viewed as the 2D image $O_i$, which is an elemental image containing different disparity information. The image captured through scattering media is typically a seemingly random speckle pattern. Within the memory effect region of the scattering media, the $PSF$ is a linear shift-invariant speckle pattern. Therefore, the aperture of viewing direction $i$ with corresponding $PSF_i$ will generate the speckle image $I_i$ as:

$$I_i = O_i \ast PSF_i, \qquad (1)$$
where $\ast$ denotes convolution and $i=1,2,\ldots ,N$ indexes the viewing directions, corresponding to the viewing apertures. One can easily derive the elemental images $O_i$ by deconvolution of $I_i$ with respect to $PSF_i$. With an aperture array, multiple speckle images $I_i$ are superposed on a single imaging sensor to generate a single image $I$ as:

$$I = \sum_{i=1}^{N} I_i = \sum_{i=1}^{N} O_i \ast PSF_i. \qquad (2)$$

Because of the natural randomness of the scattering media, the PSFs of different apertures are uncorrelated with each other provided that there is no overlap among the apertures. Mathematically, this can be expressed as:

$$PSF_i \otimes PSF_j \approx 0, \quad i \neq j, \qquad (3)$$

where $\otimes$ represents correlation and $i,j=1,2,\ldots ,N$. By exploiting Eqs. (1)–(3), single-shot multi-view imaging can be achieved: $N$ elemental images of a 3D object can be reconstructed from just one speckle pattern by deconvolution as below, similar to the mathematical expression in [28]:

$$O_i^{\prime} = \mathrm{deconv}(I, PSF_i) \qquad (4)$$

$$\phantom{O_i^{\prime}} = O_i + \sum_{j\neq i} \mathrm{deconv}(O_j \ast PSF_j, PSF_i) \approx O_i. \qquad (5)$$

3. Experiment
To illustrate the principle of uncorrelated scattering media, an experiment is performed as illustrated in Fig. 1(a). A point source, created by illuminating a $100\mu m$ pinhole with an incoherent light source (Cold light L150-A), is placed at the center of the object plane. An aperture ($D=3mm$) is fixed in front of the scattering media (DG20-600 optical diffuser, Thorlabs). A $PSF$ is recorded by the camera as the speckle pattern generated by light from the point source passing through the optical diffuser. A set of $PSF_0, PSF_1,\ldots , PSF_{30}$ is captured by moving the diffuser in $0.1mm$ steps. After that, the object's speckle pattern $I$ is recorded with the diffuser in its initial position. The model of our 3D object is shown in Fig. 1(b). Two round holes ($2R=0.83mm$) and one square hole ($L=1.0mm$) are cut from $0.1mm$ thick steel slices; the two slices are folded at an angle $\alpha$ and illuminated by the same light source to form a bright 3D object.
Figure 1(c) presents the cross-correlation of all $PSFs$ with $PSF_0$. The insets show some corresponding reconstruction images obtained by deconvolution of the single speckle image $I$ with different $PSFs$. The reconstruction quality decreases as the $PSF$ cross-correlation gets lower. In other words, the scattering media for $PSF$ measurement and for speckle imaging become uncorrelated gradually because the overlapping area between the scattering media decreases. When the diffuser's movement reaches $3mm$, the scattering media are physically separated and image reconstruction is not successful (excessive noise). It should be noted that when the scattering media is moved, the decorrelation of $PSFs$ arises not only from uncorrelated scattering media, but also from the limited memory effect of the optical diffuser. With a limited memory effect, the correlation coefficient decreases as the angle $\theta$ increases following $C(\theta ,L)=k\theta L/\sinh(k\theta L)$, where $k=2\pi /\lambda$ is the magnitude of the wave vector and $L$ is the effective thickness of the scattering media [22]. The combination of the two effects might explain the sudden drop in cross-correlation at a $1.0mm$ movement of the scattering media. Nevertheless, the speckle pattern decorrelates completely when there is no physical overlap between the scattering apertures.
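The memory-effect contribution to the correlation drop can be evaluated directly from the formula of Feng et al. [22]. The sketch below computes $C(\theta ,L)$; the wavelength and effective thickness are illustrative values only, not measured parameters of our diffuser:

```python
import numpy as np

def memory_effect_correlation(theta, wavelength=550e-9, L=50e-6):
    """C(theta, L) = k*theta*L / sinh(k*theta*L), after Feng et al. [22].

    theta      -- tilt angle in radians (scalar or array)
    wavelength -- illumination wavelength in meters (illustrative value)
    L          -- effective thickness of the medium in meters (illustrative)
    """
    k = 2.0 * np.pi / wavelength              # magnitude of the wave vector
    x = np.atleast_1d(np.asarray(k * L * theta, dtype=float))
    out = np.ones_like(x)                     # limit x -> 0 gives C = 1
    nz = x != 0.0
    out[nz] = x[nz] / np.sinh(x[nz])
    return out if out.size > 1 else float(out[0])

# Correlation is near unity on axis and falls off as the angle grows
angles = np.array([0.0, 1e-3, 5e-3])          # radians
# memory_effect_correlation(angles) -> approx [1.0, 0.95, 0.33]
```

With these assumed parameters the correlation is still about 0.95 at 1 mrad but only about 0.33 at 5 mrad, illustrating how a thicker effective scattering path $L$ shrinks the usable memory effect region.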
The schematic of single-shot multi-view imaging by a scattering lens is shown in Fig. 2(a). Along the z-axis, there are a reference object plane (ROP), a bandpass filter, an array of apertures, scattering media, a lens and a camera. The distance from the ROP to the scattering lens is $u=125mm$, and from the scattering media to the camera is $v=50mm$. The “scattering lens” is a lens ($f=40mm$) with a diffuser (600 grit) on the plano side (ACL5040U-DG6-A, Thorlabs). The optical lens simply collects more light onto the camera. A thin black plastic aperture film (made by 3D printing) is placed on the surface of the scattering lens to form a scattering media array, as shown in Fig. 2(b). One can also make a 2D array of apertures to capture different viewing angles. The speckle pattern is recorded by a camera (Andor Zyla 5.5, 2560×2160, pixel size $6.5\mu m$). A bandpass filter (FB550-10) is used to enhance the speckle contrast.
Adjusting the diameter $D$, pitch $p$ and maximum baseline $b$ of the scattering aperture array presented in Fig. 2(b) by using different aperture films allows us to define the resolution and disparity ($b/u$) of our multi-view imaging technique. A point source is placed at the center of the ROP to measure all $PSFs$ separately, i.e. only one aperture is open each time, as presented in Fig. 2(e). Each $PSF$ forms a memory effect region around the center of the ROP. From a practical point of view, these multiple $PSF$ measurements need to be done only once, provided the positions of the scattering media and the aperture array are fixed relative to the camera sensor. After measuring the $PSFs$, we put a 3D object at the ROP center and take multi-shot speckle patterns with only one aperture ‘opened’ each time (Fig. 2(c)), and a single-shot speckle pattern with all apertures ‘opened’ (Fig. 2(d)). According to our principle (Eqs. (4)–(5)), deconvolving the series of speckle images in Fig. 2(c) or the single speckle image in Fig. 2(d) with the individual $PSFs$ in Fig. 2(e) reconstructs the same results, as presented in Fig. 2(f), with clear disparities among the elemental images.
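The multiplexing and per-view deconvolution can be sketched numerically. Below is a minimal simulation of Eqs. (1)–(5), assuming idealized random-dot PSFs as a crude stand-in for speckle and a Wiener-style Fourier-domain deconvolution; the grid size, object shapes, per-view shifts and regularization constant are illustrative choices, not the experimental values:

```python
import numpy as np
from numpy.fft import fft2, ifft2

rng = np.random.default_rng(0)
n, N = 128, 3  # grid size (pixels) and number of viewing apertures

def conv(a, b):
    # circular convolution via FFT, standing in for O_i * PSF_i
    return np.real(ifft2(fft2(a) * fft2(b)))

def wiener_deconv(img, psf, eps=1e-3):
    # Wiener-style deconvolution with a fixed regularization constant eps
    H = fft2(psf)
    return np.real(ifft2(fft2(img) * np.conj(H) / (np.abs(H) ** 2 + eps)))

# Uncorrelated random-dot PSFs, one per aperture (crude speckle model)
psfs = [(rng.random((n, n)) < 0.01).astype(float) for _ in range(N)]
psfs = [p / p.sum() for p in psfs]

# Elemental images O_i: the same bright square with a per-view shift
# that mimics disparity between viewing apertures
objs = []
for i in range(N):
    o = np.zeros((n, n))
    o[54:74, 40 + 10 * i:60 + 10 * i] = 1.0
    objs.append(o)

# Single multiplexed speckle image I = sum_i O_i * PSF_i
I = sum(conv(o, p) for o, p in zip(objs, psfs))

# Each view is recovered by deconvolving I with its own PSF; the other
# terms act only as background noise because the PSFs are uncorrelated
recon = [wiener_deconv(I, p) for p in psfs]
```

Each `recon[i]` correlates much more strongly with its own elemental image than with the neighboring views, which is the numerical counterpart of the multi-shot versus single-shot comparison in Fig. 2(f).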
We first perform single-shot two-view imaging with large disparity. As shown in Fig. 3(a), an optical blocker with two apertures ($D=3mm$, $b=20mm$, left (L) and right (R)) at the surface of the scattering lens is used for stereo imaging. Two $PSFs$ of the scattering lens are pre-recorded with a $100 \mu m$ point source and the two scattering apertures (one at a time). We adjust the angle $\alpha =60^{\circ }, 90^{\circ }, 120^{\circ }$ to create different 3D objects. The single-shot speckle pattern is used to reconstruct two-view elemental images, as presented in the first row of Figs. 3(b)–3(d). We use the Wiener-deconvolution approach to obtain images from a single speckle pattern with the two $PSFs$. The second row of images in Figs. 3(b)–3(d) are also two-view elemental images, however, taken by the conventional multi-shot method, i.e. deconvolution of individual speckle patterns with the $PSFs$ corresponding to each scattering aperture. We use image entropy to quantify reconstruction artefacts; adding artefacts to an image increases its entropy. The multi-shot technique has average entropy values of 1.20, 2.37, 3.07 versus 1.34, 2.55, 3.31 for the single-shot technique at $\alpha =60^{\circ }, 90^{\circ }, 120^{\circ }$, respectively. The higher entropy of the single-shot technique is due to the limited dynamic range and photon capacity of the camera pixels. These practical limitations not only cause some small artificial correlation among the $PSFs$, affecting the validity of Eq. (3), but also decrease the speckle contrast. However, the advantages of the single-shot technique are clear: short acquisition time and validity of the elemental images for 3D reconstruction. Nevertheless, these pairs of images clearly show the disparity of the 3D objects. First, each elemental image consists of one rectangle and two ellipses, which is consistent with the 3D object projection. Second, in each pair, the left-view image has a larger rectangle and smaller ellipses compared to those in the right-view image. Third, these differences decrease as $\alpha$ increases; note that there would be no difference at $\alpha =180^{\circ }$, as both elemental images would then be front views of a 2D object. The experimental results demonstrate our single-shot multi-view imaging technique for stereo imaging.
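The entropy metric used above can be computed from the image's intensity histogram. A minimal sketch follows; the bin count and the use of the Shannon entropy of the normalized histogram are our assumptions about the implementation details, which the paper does not specify:

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy (bits) of an image's intensity histogram.

    Reconstruction artefacts spread intensity over more histogram bins,
    which raises the entropy; cleaner reconstructions score lower.
    """
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins (0 * log 0 := 0)
    return float(-np.sum(p * np.log2(p)))

# Example: additive noise raises the entropy of a simple scene
rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[20:40, 20:40] = 1.0             # a plain bright square
noisy = clean + 0.2 * rng.random(clean.shape)
```

On this toy scene, `image_entropy(noisy)` exceeds `image_entropy(clean)`, mirroring how the noisier single-shot reconstructions score higher entropy than their multi-shot counterparts.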
We extend our scattering apertures to an array by using an aperture mask as shown in Fig. 4(a) ($D=2mm, p=3mm, b=18mm$). All seven scattering apertures lie along the $x$-axis, and their corresponding $PSFs$ are pre-recorded. We take a single shot of the 3D object ($\alpha =60^{\circ }$) with $N$ = 3, 5 and 7 apertures ‘opened’, respectively. The results of Wiener-deconvolution with the $PSFs$ are shown in Figs. 4(b)–4(d). We can see a slight disparity between any two adjacent elemental images; the distinction is very obvious when comparing the first and the last elemental images of each group, in which the rectangle and the ellipses are deformed.
When comparing the same elemental images (same views) with different aperture numbers, such as the center-column images in Figs. 4(b)–4(d), we observe that noise increases as the aperture number increases (entropy increases from 0.76 to 0.87 and 1.31 for $N$ = 3, 5 and 7, respectively). Again, due to the limited dynamic range and photon capacity of each camera pixel, the speckle contrast decreases when we multiplex more scattering apertures into a single speckle pattern. This leads to lower image reconstruction quality. Nevertheless, up to seven views can be captured and then reconstructed from a single speckle pattern.
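The contrast drop with the number of multiplexed apertures can be illustrated with a simple statistical model. The sketch below assumes fully developed speckle, i.e. negative-exponential intensity statistics; this statistical model is our assumption for illustration, not a measured property of our setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def speckle_contrast(intensity):
    """Speckle contrast C = sigma / mean of an intensity pattern."""
    return float(intensity.std() / intensity.mean())

# Fully developed speckle from one aperture has contrast ~ 1.  Summing N
# independent patterns on one sensor lowers the contrast as ~ 1/sqrt(N),
# which degrades deconvolution quality as more views are multiplexed.
n = 256
contrasts = {}
for N in (1, 3, 5, 7):
    patterns = rng.exponential(scale=1.0, size=(N, n, n))
    contrasts[N] = speckle_contrast(patterns.sum(axis=0))
# contrasts[N] is approximately 1/sqrt(N), i.e. about 1.0, 0.58, 0.45, 0.38
```

Under this model the seven-aperture pattern retains only about 38% of the single-aperture contrast, consistent with the trend of rising entropy from $N$ = 3 to $N$ = 7 reported above.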
4. Discussion and conclusion
The key principle of our technique is based on Eq. (3), which requires physically separated scattering media to ensure zero correlation, a property inherited from the truly random nature of the scattering media. The image reconstruction still relies on the memory effect of the scattering media, and the field of view (FOV) is limited by the memory effect. To build a single-shot multi-view camera, super-thin, strongly scattering media are therefore preferred. One can enlarge the FOV by multiplexing multiple memory effect regions in a single-shot image [26]. One can also multiplex multi-spectral $PSFs$ into a single-shot image to enable multi-spectral multi-view imaging [28]. These techniques all multiplex, then demultiplex, multiple pieces of information in a single 2D speckle pattern. We can increase the information capacity of a single 2D speckle image by employing a camera with a higher dynamic range and a higher resolution. A higher dynamic range allows us to resolve speckles from a low-contrast pattern; with that, we can increase the aperture size, which enhances optical resolution and SNR. A higher-resolution camera allows us to capture more speckles, minimizing the digital artefacts of measurement and ensuring the precision of deconvolution. However, the limited information capacity of a single 2D speckle pattern and the requirement of high speckle contrast for deconvolution set a limit on our multiplexing capability. There is a trade-off between image quality, the spatial and spectral sparsity of the image, and the number of views.
In summary, we present a technique for single-shot multi-view imaging that utilizes the natural randomness of a scattering lens. An optical blocking layer with multiple apertures physically separates multiple sections of an optical diffuser, creating uncorrelated scattering media with their own $PSFs$. This allows us to multiplex multi-view images in a single-shot speckle image. The cross-correlation of the scattering media is analyzed, and multi-view imaging demonstrations are performed with both the single-shot and multi-shot techniques. Both techniques show the same disparity of elemental images; however, the single-shot technique is preferred for its significantly reduced image acquisition time. We demonstrate up to seven views captured simultaneously in a single speckle image. These multi-view elemental images could be used to reconstruct a 3D model of the object or be directly projected to a multi-view display.
Funding
Ministry of Education - Singapore (MOE2017-T1-002-142).
Acknowledgments
We thank the Singapore Ministry of Education for financial support through AcRF Tier 1 grant MOE2017-T1-002-142.
Disclosures
The authors declare that there are no conflicts of interest related to this article.
References
1. F. Yaraş, H. Kang, and L. Onural, “Circular holographic video display system,” Opt. Express 19(10), 9147–9156 (2011). [CrossRef]
2. A. Stern and B. Javidi, “Three-dimensional image sensing, visualization, and processing using integral imaging,” Proc. IEEE 94(3), 591–607 (2006). [CrossRef]
3. A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3(1), 745 (2012). [CrossRef]
4. K. Muller, P. Merkle, and T. Wiegand, “3-d video representation using depth maps,” Proc. IEEE 99(4), 643–656 (2011). [CrossRef]
5. S. D. Cochran and G. Medioni, “3-d surface description from binocular stereo,” IEEE Trans. Pattern Anal. Mach. Intell. 14(10), 981–994 (1992). [CrossRef]
6. M. Martinez-Corral and B. Javidi, “Fundamentals of 3d imaging and displays: a tutorial on integral imaging, light-field, and plenoptic systems,” Adv. Opt. Photonics 10(3), 512–566 (2018). [CrossRef]
7. F. L. Kooi and A. Toet, “Visual comfort of binocular and 3d displays,” Displays 25(2-3), 99–108 (2004). [CrossRef]
8. J. Kim, D. Kane, and M. S. Banks, “The rate of change of vergence-accommodation conflict affects visual discomfort,” Vision Res. 105, 159–165 (2014). [CrossRef]
9. R. Tomer, K. Khairy, F. Amat, and P. J. Keller, “Quantitative high-speed imaging of entire developing embryos with simultaneous multiview light-sheet microscopy,” Nat. Methods 9(7), 755–763 (2012). [CrossRef]
10. J.-Y. Son and B. Javidi, “Three-dimensional imaging methods based on multiview images,” J. Disp. Technol. 1(1), 125–140 (2005). [CrossRef]
11. J. C. Dainty, Laser speckle and related phenomena (Springer, 1984).
12. J. W. Goodman, Speckle phenomena in optics: theory and applications (Roberts and Company Publishers, 2007).
13. A. W. Lohmann, G. Weigelt, and B. Wirnitzer, “Speckle masking in astronomy: triple correlation theory and applications,” Appl. Opt. 22(24), 4028 (1983). [CrossRef]
14. G. R. Ayers, M. J. Northcott, and J. C. Dainty, “Knox-thompson and triple-correlation imaging through atmospheric turbulence,” J. Opt. Soc. Am. A 5(7), 963–985 (1988). [CrossRef]
15. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21(15), 2758–2769 (1982). [CrossRef]
16. J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491(7423), 232–234 (2012). [CrossRef]
17. O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8(10), 784–790 (2014). [CrossRef]
18. O. Katz, E. Small, Y. Guan, and Y. Silberberg, “Noninvasive nonlinear focusing and imaging through strongly scattering turbid layers,” Optica 1(3), 170–174 (2014). [CrossRef]
19. A. K. Singh, G. Pedrini, M. Takeda, and W. Osten, “Scatter-plate microscope for lensless microscopy with diffraction limited resolution,” Sci. Rep. 7(1), 10687 (2017). [CrossRef]
20. I. Freund, “Looking through walls and around corners,” Phys. A: Stat. Mech. Appl. 168(1), 49–65 (1990). [CrossRef]
21. I. Freund, M. Rosenbluh, and S. Feng, “Memory effects in propagation of optical waves through disordered media,” Phys. Rev. Lett. 61(20), 2328–2331 (1988). [CrossRef]
22. S. Feng, C. Kane, P. A. Lee, and A. D. Stone, “Correlations and fluctuations of coherent wave transmission through disordered media,” Phys. Rev. Lett. 61(7), 834–837 (1988). [CrossRef]
23. O. Katz, E. Small, and Y. Silberberg, “Looking around corners and through thin turbid layers in real time with scattered incoherent light,” Nat. Photonics 6(8), 549–553 (2012). [CrossRef]
24. A. K. Singh, D. N. Naik, G. Pedrini, M. Takeda, and W. Osten, “Exploiting scattering media for exploring 3d objects,” Light: Sci. Appl. 6(2), e16219 (2017). [CrossRef]
25. T. Wu, J. Dong, X. Shao, and S. Gigan, “Imaging through a thin scattering layer and jointly retrieving the point-spread-function using phase-diversity,” Opt. Express 25(22), 27182–27194 (2017). [CrossRef]
26. D. Tang, S. K. Sahoo, V. Tran, and C. Dang, “Single-shot large field of view imaging with scattering media by spatial demultiplexing,” Appl. Opt. 57(26), 7533–7538 (2018). [CrossRef]
27. X. Xie, H. Zhuang, H. He, X. Xu, H. Liang, Y. Liu, and J. Zhou, “Extended depth-resolved imaging through a thin scattering medium with psf manipulation,” Sci. Rep. 8(1), 4585 (2018). [CrossRef]
28. S. K. Sahoo, D. Tang, and C. Dang, “Single-shot multispectral imaging with a monochromatic camera,” Optica 4(10), 1209–1213 (2017). [CrossRef]
29. D. Wang, S. K. Sahoo, and C. Dang, “Noninvasive super-resolution imaging through scattering media,” arXiv preprint arXiv:1906.03823 (2019).