Point spread function for diffuser cameras based on wave propagation and projection model

Open Access

Abstract

Compared to traditional imaging systems, the diffuser camera is an easily built imaging system that captures the light field with a small form factor. Its imaging target can be reconstructed by deconvolving the sensor image with the point spread function (PSF) at the corresponding depth. However, existing methods to obtain the PSFs generally rely on measuring point-source responses at several depths, which presents high complexity and low reliability. In this paper, we propose a theoretical PSF model for the diffuser camera: the diffuser's phase is estimated with a projection model, and the image response at any depth is derived by forward Fourier optics, enabling reconstruction of the object at any depth. The experimental results demonstrate the effectiveness of the proposed model in terms of the correlation between the captured PSFs and our model-derived PSFs, the correlation between the ground-truth image and the images reconstructed at different depths, and the correlation between the images reconstructed with real captured PSFs and those reconstructed with our model-derived PSFs. Objects at different depths can be correctly reconstructed by the theoretically derived PSFs, which benefits the application of the diffuser camera at much lower complexity.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Light field data are widely used in many applications such as biomedical imaging, robotics, and industrial inspection. The standard light field capturing device (light field camera) [1] proposed by R. Ng is made by placing a microlens array in front of a traditional digital camera's sensor, which makes it possible to obtain different view angles from a single shot. However, the resolution of the captured light field is limited by the number of microlenses. The coded-aperture based light field capturing systems [2,3] use a low-cost film mask instead of a microlens array, which can achieve higher spatial resolution; however, the reconstructed light field is darker because the mask severely limits the light throughput. Tajima et al. [4] proposed a lensless light-field camera composed of a multi-phased Fresnel zone aperture placed in front of the image sensor. Although it presents the same problem as mask-based light field imaging systems, this system has a very small overall size and a low-cost image reconstruction procedure. Singh et al. [5] discussed the possibility of using a diffuser as an imaging device with variable focal length and magnification; however, the proposed reconstruction method could only reconstruct the object at a fixed depth. Antipa et al. [6] proposed an imaging system that uses only a thin diffuser, which can also be used in communication [7,8] and cryptography [9] applications, and a camera sensor to perform 3D volumetric object reconstruction. Its light throughput is higher than that of mask-based light field imaging systems because of the transparency of the diffuser, and its simple structure gives it a very small form factor. However, object reconstruction in this system is very complex, as it requires capturing a large number of PSFs at different depths a priori, which limits its practical application. Xie et al. [10] proposed a geometric scaling method to obtain the PSFs at different depths from one single direct measurement and showed that the scaling method can extend the depth of field (DOF) of the diffuser camera. However, a single measurement is prone to large errors as the distance from the measured PSF increases.

It is of vital importance to obtain the PSFs in an easier and more reliable way. The direct capturing methods [11,12] require a large number of captures, which is very time consuming, especially when the PSFs are spatially variant; it is not practical to capture all the point-source responses. Antipa et al. [13] calibrated the diffuser-based light field imaging system via phase retrieval using the transport of intensity. The retrieved phase is used to reconstruct the height map of the diffuser, and then ray optics is used to model the system response. This method reduces the calibration time compared to direct capturing methods, but it still needs many through-focus images and the reconstruction algorithm is very time consuming. There are also methods [14] of obtaining the PSFs for light field devices based on ray tracing and geometric optics. However, when the optical element is a scattering element such as a diffuser, ray-tracing methods cannot model the scattering effect unless the diffuser's transmission function can be obtained, which is very difficult to realize. Inspired by wave propagation [15] and computational Fourier optics [16], it is possible to construct a theoretical wave-optics model for light field propagation. Such analyses have already been carried out for the plenoptic camera 1.0 [17] and plenoptic camera 2.0 [18] systems; however, such an analysis is lacking for the diffuser camera.

In this paper, we propose to derive the diffuser camera's PSFs at different depths by exploiting a forward propagation model. The model permits the estimation of PSFs at different depths, which reduces the work of capturing PSFs. We will show that the only unknown variable in this model is the diffuser's phase, which can be estimated with a projection model. The correctness and effectiveness of the proposed model are demonstrated by evaluating the correlation between the theoretically derived PSFs and the measured PSFs, and by comparing the reconstruction quality of 2D objects at several depths. Experimental results show that the reconstructions generated by the proposed model present objective and subjective quality comparable to those generated by the real captured PSFs, at much lower complexity, which will greatly benefit the application of the diffuser camera.

The paper is organized as follows. The proposed model is described in detail in Section 2. Section 3 provides experimental results and discussions. Section 4 concludes the paper.

2. Proposed PSF model for the diffuser camera

2.1 Wave propagation modeling for the diffuser camera

The optical configuration of a diffuser camera is shown in Fig. 1. The light rays emitted from the object pass through an aperture and are then redirected by the diffuser toward the sensor.

Fig. 1 Optical configuration of a diffuser camera.

According to scalar diffraction theory and the Fresnel diffraction approximation [15], denoting the complex field at position (x1, y1) on the object plane and that at position (x2, y2) on the aperture plane by U1(x1, y1) and U2(x2, y2), respectively, the field propagation from the object plane to the plane just before the aperture can be formulated as a convolution integral:

$$U_2(x_2,y_2)=\iint U_1(x_1,y_1)\,h_1(x_2-x_1,y_2-y_1)\,dx_1\,dy_1, \tag{1}$$

where h1 is the impulse response of propagation from the object plane to the front of the aperture plane, given by

$$h_1(x_1,y_1,x_2,y_2)=\frac{\exp(jkz_1)}{j\lambda z_1}\exp\!\left\{\frac{jk}{2z_1}\left[(x_2-x_1)^2+(y_2-y_1)^2\right]\right\}, \tag{2}$$
where k is the wave number; z1 is the distance between the object plane and the aperture plane, as shown in Fig. 1; and λ is the wavelength of the light source. Assuming that the aperture has no thickness, the light field on the back side of the aperture is given by

$$U_2'(x_2,y_2)=C(x_2,y_2)\,U_2(x_2,y_2), \tag{3}$$

where

$$C(x_2,y_2)=\begin{cases}1, & \text{if } (x_2^2+y_2^2)^{1/2}\le r\\ 0, & \text{otherwise,}\end{cases} \tag{4}$$

and r is the radius of the circular aperture. The aperture is used to reduce the interference of stray light. Assuming that the front side of the diffuser has no separation from the back side of the aperture plane (the aperture serves as the pupil function in this case), the field on the front side of the diffuser has the same distribution as given in Eq. (3). Since the diffuser is very thin, we treat it as a pure phase modulation element with phase distribution θd, and we assume the diffuser introduces no off-axis aberration or other high-angle phenomena. The light field on the back side of the diffuser is then given by

$$U_2''(x_2,y_2)=U_2'(x_2,y_2)\exp[j\theta_d(x_2,y_2)]. \tag{5}$$
Similarly, the light field propagating from the diffuser plane to the sensor plane follows the same principle as that from the object plane to the diffuser plane. Denoting the field on the sensor plane by U3(x3, y3), the free-space impulse response h2 from the diffuser plane to the sensor plane is given by

$$h_2(x_2,y_2,x_3,y_3)=\frac{\exp(jkz_2)}{j\lambda z_2}\exp\!\left\{\frac{jk}{2z_2}\left[(x_3-x_2)^2+(y_3-y_2)^2\right]\right\}. \tag{6}$$

So the response at the sensor plane can be modeled as two separate Fresnel propagations plus one phase change. By substituting an impulse function for U1 in Eq. (1) (i.e., considering a point object), the system PSF is derived:

$$h(x_1,y_1,x_3,y_3)=\frac{\exp[jk(z_1+z_2)]}{-\lambda^2 z_1 z_2}\iint C(x_2,y_2)\exp\!\left\{\frac{jk}{2z_1}\left[(x_2-x_1)^2+(y_2-y_1)^2\right]\right\}\exp\!\left\{\frac{jk}{2z_2}\left[(x_3-x_2)^2+(y_3-y_2)^2\right]\right\}\exp[j\theta_d(x_2,y_2)]\,dx_2\,dy_2. \tag{7}$$
The imaging response on the sensor plane is given by

$$I(x_3,y_3)=|U_3(x_3,y_3)|^2. \tag{8}$$

Once the PSF of the diffuser camera is derived, the imaging target can be reconstructed from the object response captured by the sensor. The main challenge is to find the diffuser phase distribution θd in Eq. (7), since it is the only unknown variable needed to derive the forward-propagation PSF.
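To make Eqs. (1)–(8) concrete, the following Python sketch numerically evaluates the PSF for an on-axis point source: a Fresnel propagation from the object plane to the aperture, multiplication by the aperture and the diffuser phase, and a second Fresnel propagation to the sensor. The grid size, sampling pitch, the helper fresnel_propagate, and the placeholder random phase are our own illustrative assumptions, not the authors' implementation; in practice θd comes from the retrieval procedure of Section 2.2.

```python
import numpy as np

# Illustrative parameters (assumed values, not the authors' configuration).
wavelength = 532e-9          # m
k = 2 * np.pi / wavelength   # wave number
dx = 5e-6                    # sampling pitch, m
N = 1024                     # grid size (pixels)
z1, z2 = 25e-3, 1e-3         # object-aperture and diffuser-sensor distances, m
r_aperture = 1.2e-3          # aperture radius, m

x = (np.arange(N) - N // 2) * dx
X, Y = np.meshgrid(x, x)

def fresnel_propagate(u, z):
    """Fresnel propagation by the transfer-function method."""
    fx = np.fft.fftfreq(N, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(1j * k * z) * np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(u) * H)

# On-axis point source: Fresnel (paraxial spherical) wave arriving at the aperture plane, Eq. (2).
u_aperture = np.exp(1j * k * z1) / (1j * wavelength * z1) \
             * np.exp(1j * k / (2 * z1) * (X**2 + Y**2))

# Circular aperture C(x2, y2), Eq. (4), and diffuser phase theta_d (placeholder random phase).
C = (X**2 + Y**2 <= r_aperture**2).astype(float)
theta_d = np.random.uniform(-np.pi, np.pi, (N, N))   # replace with the retrieved phase

# Eq. (5): phase modulation; second Fresnel propagation to the sensor plane.
u_diffuser_back = u_aperture * C * np.exp(1j * theta_d)
u_sensor = fresnel_propagate(u_diffuser_back, z2)

psf = np.abs(u_sensor) ** 2   # Eq. (8): intensity PSF on the sensor
```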

2.2 Diffuser phase estimation by projection model

Since the only unknown variable in the wave propagation model is the diffuser's phase, we propose a method to estimate it that is inspired by laser beam shaping theory [19]. Laser beam shaping adjusts the laser beam irradiance profile with a far-field beam shaper, called a light shaping diffuser, which is designed by adjusting its phase distribution under given far-field irradiance constraints. This idea can be borrowed in our work to estimate the unknown phase distribution of the diffuser from measurements of the near-field and far-field intensity distributions through the diffuser.

The configuration used to estimate the diffuser phase is shown in Fig. 2(a). The diffuser, limited by a circular aperture and placed at z = 0, is illuminated by a laser beam, and the sensor is placed at z = 0 and z = z', which correspond to the near-field and far-field configurations, respectively. Let (x2, y2) denote coordinates on the diffuser plane. With the diffuser and aperture removed, the near field of the laser beam at z = 0 is given by

$$U_{en}(x_2,y_2)=A(x_2,y_2)\,e^{j\theta_e(x_2,y_2)}=A(x_2,y_2)\,P(x_2,y_2), \tag{9}$$

where A(x2, y2) and θe are the amplitude and phase of the laser beam at z = 0, respectively, and P is the phase term of the laser beam. The far-field distribution Uef(x2', y2') of the laser beam at distance z = z' is proportional to the Fourier transform of the near-field distribution:

$$U_{ef}(x_2',y_2')=S(x_2',y_2')\,\mathcal{F}\{U_{en}(x_2,y_2)\}=S(x_2',y_2')\,A'(x_2',y_2')*\mathcal{F}\{P(x_2,y_2)\}, \tag{10}$$

where * denotes the convolution operation, A'(x2', y2') is the amplitude of the input beam after propagating a distance z', and S(x2', y2') is a multiplicative factor. Adding the diffuser with its pupil function (aperture) at the near-field position, we have

$$U_{dn}(x_2,y_2)=A(x_2,y_2)\,C(x_2,y_2)\,e^{j[\theta_e(x_2,y_2)+\theta_d(x_2,y_2)]}=K(x_2,y_2)\,P(x_2,y_2)\,D(x_2,y_2), \tag{11}$$

where θd is the diffuser's phase distribution; C(x2, y2) is the aperture associated with the diffuser; K(x2, y2) combines the beam field's amplitude with the aperture modulation; and D(x2, y2) is the phase term associated with the diffuser. The far-field distribution of the laser beam modulated by the diffuser is given by

$$U_{df}(x_2',y_2')=S(x_2',y_2')\,K'(x_2',y_2')*\mathcal{F}[P(x_2,y_2)]*\mathcal{F}[D(x_2,y_2)], \tag{12}$$

where K' is the propagated amplitude of the aperture modulation plus the laser beam.

Fig. 2 Experimental setup to estimate the diffuser's phase: (a) general architecture to obtain the near-field and far-field intensity distributions; (b) experimental setup to obtain the far-field intensity distribution.

Since the high-spatial-frequency components of the diffuser are much stronger than those of the laser beam, the shape of the energy envelope of the far-field intensity distribution is dominated by the phase function of the diffuser rather than by the divergence of the input beam. In other words, the far-field distribution of the laser beam modulated by the diffuser is related to the near field by a Fourier transform, and the diffuser's phase has the major influence on the final output pattern, largely independent of the shape of the input laser beam. Since the near-field and far-field intensity distributions can be measured, the phase of the diffuser, which corresponds to the near-field phase component, can be retrieved by state-of-the-art phase retrieval methods [20–22].

As it is difficult to measure the far-field distribution directly in a real experiment, we propose using a Fourier lens to perform the Fourier transformation, as shown in Fig. 2(b). The diffuser is placed at the front focal plane of the Fourier lens and the sensor captures the intensity distribution at the back focal plane. For this arrangement, the field at the back focal plane is given by

$$U_2'(x_2',y_2')=\frac{\exp(jkf)}{j\lambda f}\iint U_2(x_2,y_2)\,C(x_2+x_2',y_2+y_2')\exp\!\left[-j\frac{2\pi}{\lambda f}(x_2'x_2+y_2'y_2)\right]dx_2\,dy_2, \tag{13}$$

where U2 is the field at the front focal plane, f is the focal length of the Fourier lens, and C is the pupil function (aperture). It is easy to verify that U2' is a scaled Fourier transform of U2 windowed by the pupil function, so U2' corresponds to the far field and U2 corresponds to the near field.

The intensity distributions |U2|² and |U2'|² can be captured by a CMOS sensor. The diffuser's phase θd associated with U2 can then be calculated via phase retrieval methods, and the result is inserted into Eq. (7) to calculate the PSF.
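As one possible realization of the phase retrieval step, the following Python sketch implements a Gerchberg-Saxton loop that alternates between the measured near-field and far-field amplitude constraints, using the FFT as a stand-in for the scaled Fourier transform of Eq. (13). The function name, the zero-phase initialization, and the iteration count are illustrative assumptions, not the authors' code.

```python
import numpy as np

def gerchberg_saxton(I_near, I_far, n_iter=2000):
    """Estimate the near-field phase from measured near-/far-field intensities."""
    A_near = np.sqrt(I_near)          # near-field amplitude constraint
    A_far = np.sqrt(I_far)            # far-field amplitude constraint
    phase = np.zeros_like(A_near)     # initial phase guess

    for _ in range(n_iter):
        # Impose the near-field amplitude and go to the far field (Fourier-lens plane).
        u_near = A_near * np.exp(1j * phase)
        u_far = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(u_near)))
        # Replace the far-field amplitude with the measurement, keep the phase.
        u_far = A_far * np.exp(1j * np.angle(u_far))
        # Transform back and replace the near-field amplitude, keep the phase.
        u_near = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(u_far)))
        phase = np.angle(u_near)

    return phase   # estimate of theta_d (plus the slowly varying beam phase)
```

The returned phase estimate is then substituted for θd in Eq. (7), e.g. in the PSF sketch of Section 2.1.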

3. Experimental results

3.1 Simulated experiments

The proposed diffuser phase estimation is first verified using simulated near-field and far-field distributions, comparing the estimated phase with the ground truth and comparing the reconstruction results. The simulation grid is a 4000 × 4000-pixel plane with each pixel corresponding to a physical size of 5 μm. The diffuser occupies 512 × 512 pixels, and a circular aperture with a physical radius of 1.2 mm is added at the center of the diffuser.

The ground truth of the diffuser's phase is created from a 512 × 512 gray "cameraman" image scaled linearly to the range [−π, π] to represent the phase distribution, as shown in Fig. 3(a). The near-field intensity distribution, shown in Fig. 3(b), is a uniform intensity of 1. The far-field intensity distribution, shown in Fig. 3(c), which in a real experiment would be captured by the CMOS sensor, is obtained by applying the ground-truth near field to Eq. (13) to simulate the Fraunhofer propagation process and then taking the squared absolute value. The Fourier lens used here has a focal length of 5 cm. Using the near-field and far-field intensity distributions, the diffuser's phase is estimated by the Gerchberg-Saxton (GS) algorithm; the iterative process is illustrated in Fig. 4. Figure 3(d) shows the estimated phase after 2000 GS iterations. The central 300 × 300 pixels of both the estimated and ground-truth phase images are extracted for Pearson correlation calculation. The Pearson correlation is given by

$$P(A,B)=\frac{\sum_m\sum_n (A_{mn}-\bar{A})(B_{mn}-\bar{B})}{\sqrt{\left(\sum_m\sum_n (A_{mn}-\bar{A})^2\right)\left(\sum_m\sum_n (B_{mn}-\bar{B})^2\right)}}, \tag{14}$$

where A and B are 2D images with pixels indexed by m and n, and Ā and B̄ are the means of all values in A and B, respectively. This metric represents the linear dependency between the target phase and the estimated phase image. Calculating P between the red square regions of Figs. 3(a) and 3(d) yields 0.7890, which demonstrates a good correlation between the estimate and the ground-truth phase. The remaining error in the phase recovery is mainly caused by local minima and non-convergence problems [20].
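For reference, Eq. (14) can be computed directly; the function below is a straightforward transcription (the names are our own) and returns the same value as np.corrcoef applied to the flattened images.

```python
import numpy as np

def pearson(A, B):
    """Pearson correlation of Eq. (14) between two 2D images of equal size."""
    A = A - A.mean()
    B = B - B.mean()
    return np.sum(A * B) / np.sqrt(np.sum(A**2) * np.sum(B**2))
```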

Fig. 3 Phase estimation simulation: (a) Ground truth phase distribution; (b) Intensity distribution of the near field; (c) Far-field intensity distribution generated by Eq. (13); (d) Estimated phase.

Fig. 4 Illustration of the Gerchberg-Saxton algorithm adapted for diffuser phase estimation, where the known variables are the near-field and far-field intensities.

Then, we apply the estimated phase to the proposed PSF model and reconstruct the object by deconvolution. The test object, shown in Fig. 5(a), is placed at distances z1 = 25 mm and z2 = 1 mm (as defined in Fig. 1). The simulated image on the sensor is shown in Fig. 5(b); it is obtained by Fresnel propagation, assuming that the diffuser has the pure phase distribution given in Fig. 3(a). The ground-truth phase distribution and the ground-truth PSF generated from it are shown in the left-most column of Fig. 5(c). The ground-truth reconstruction result, shown in the same column, is obtained by deconvolving the sensor response with the PSF generated from the ground-truth phase. The deconvolution algorithm used here is the non-regularized Richardson-Lucy algorithm [23], an iterative algorithm that computes the maximum-likelihood estimate under Poisson statistics. One iteration step is given by

$$o^{(k+1)}=o^{(k)}\left[\left(\frac{I}{o^{(k)}*H}\right)*H^{*}\right], \tag{15}$$

where o^{(k)} is the estimate of the recovered object at iteration k, H is the PSF, H* is its flipped (adjoint) version, * denotes convolution, and I is the sensor response. The iteration number k should not be too large, as too many iterations amplify noise; for both the simulation and the real imaging experiments, k is set to 150, which gives good results. The second through right-most columns of Fig. 5(c) show the estimated phases at different GS iterations (with different phase errors), the PSFs derived by our model using the corresponding estimated phases, and the reconstruction results using our model-derived PSFs. The accuracy of the estimated phases is assessed by the mean square error (MSE) between the estimated and ground-truth phases, where a smaller value means a better estimate. The Pearson correlation between the ground-truth PSF and our model-derived PSF is also calculated, where a higher value indicates a better derived PSF. For the reconstruction results, the SSIM [24], which reflects structural similarity, and the Pearson correlation are both calculated to analyze the reconstruction quality objectively.
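A minimal sketch of the non-regularized Richardson-Lucy update of Eq. (15) is given below, written with SciPy's FFT-based convolution; the function name, the flat initialization, and the small epsilon guard against division by zero are our own assumptions. scikit-image's skimage.restoration.richardson_lucy provides an equivalent update if a library implementation is preferred.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(I, H, n_iter=150, eps=1e-12):
    """Richardson-Lucy deconvolution of sensor image I with PSF H (Eq. (15))."""
    I = I.astype(float)
    o = np.full(I.shape, I.mean(), dtype=float)   # flat initial estimate
    H_flipped = H[::-1, ::-1]                     # adjoint (flipped) PSF, H*
    for _ in range(n_iter):
        blurred = fftconvolve(o, H, mode='same') + eps
        ratio = I / blurred
        o = o * fftconvolve(ratio, H_flipped, mode='same')
    return o
```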

Fig. 5 (a) Simulated object for reconstruction with physical size 0.35 mm × 0.35 mm; (b) Simulated sensor response; (c) The first row corresponds to the estimated phases with different errors, the second row to the model-derived PSFs using the phases in the first row, and the last row to the reconstruction results using the PSFs in the second row.

From Fig. 5, it can be observed that the reconstruction quality increases with the quality of the recovered phase. However, once the MSE of the recovered phase reaches 0.1373 or lower, the reconstructions are visually indistinguishable and the improvement in objective quality, measured by SSIM and P, is marginal, which indicates that further improvement in phase quality does not noticeably benefit the reconstruction of the simple object "F". To illustrate how the reconstruction performs for a more complex object with a more realistic phase distribution, we created a "virtual" diffuser with a random phase distribution whose minimum feature size is 30 μm, corresponding to a 1-degree deflection angle, shown in the top-left corner of Fig. 6(c). The object is changed to a "flower", which is more complex than "F" and contains tiny holes, as shown in Fig. 6(a). The same experimental flow as in Fig. 5 is performed and the results are shown in Fig. 6(c). For the complex object, the positive correlation between the reconstruction quality and the phase quality still holds; however, the reconstruction is visually not as good as that of "F". Even when the error in the recovered phase drops to 0.1603, the reconstruction quality does not improve noticeably. This means that the influence of the phase error on the reconstruction quality differs between complex-texture objects and simple objects. Quantitatively determining the influence of the error for different objects and phase distributions remains an open problem.

Fig. 6 Simulation with a more complex object and phase distribution: (a) Simulated object for reconstruction with physical size 0.35 mm × 0.35 mm; (b) Simulated sensor response; (c) The first row corresponds to the estimated phases with different errors, the second row to the model-derived PSFs using the phases in the first row, and the last row to the reconstruction results using the PSFs in the second row.

3.2 Real imaging experiments

The performance of the proposed model is further tested on the real diffuser camera shown in Fig. 7(a). The diffuser used here is a 1-degree Luminit light shaping diffuser [25]. The camera sensor is a Point Grey CMOS sensor GS3-U3-123S6C-C with a pixel size of 3.45 μm and a resolution of 4096 × 3000. The illumination source for the phase estimation is a laser with a wavelength of 532 nm, passed through a beam expander to produce a parallel light source. The circular aperture has a clear radius of 2.3 mm, and the Fourier lens used to produce the far-field intensity distribution has a focal length of 5 cm. The captured far-field and near-field intensities are shown in Figs. 8(a) and 8(b), respectively. As the dynamic range between the near-field and far-field intensities is large, the exposure time is adjusted to obtain images with similar brightness. The background pixel value is also subtracted from both images, and they are further normalized by min-max normalization before entering the GS algorithm. The near-field intensity distribution is captured by placing the diffuser on the camera sensor, where the separation distance is approximately the thickness of the sensor's cover glass (0.5 mm). The far-field intensity distribution is captured according to Fig. 2(b). The estimated phase is shown in Fig. 8(c).
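The pre-processing described above (background subtraction followed by min-max normalization) can be sketched as follows; the function name and the epsilon guard are illustrative assumptions.

```python
import numpy as np

def preprocess(img, background):
    """Background subtraction and min-max normalization before phase retrieval."""
    img = img.astype(float) - np.asarray(background, dtype=float)
    img = np.clip(img, 0.0, None)                       # remove negative residuals
    return (img - img.min()) / (img.max() - img.min() + 1e-12)

# Example usage (hypothetical variable names):
# I_near = preprocess(near_capture, near_background)
# I_far  = preprocess(far_capture,  far_background)
# theta_d_estimate = gerchberg_saxton(I_near, I_far)    # see the sketch in Section 2.2
```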

Fig. 7 Diffuser camera system: (a) Vertical and side views of the diffuser camera system; (b) Test samples used for reconstruction.

Fig. 8 (a) Captured far-field intensity distribution; (b) Captured near-field intensity distribution; (c) Estimated phase distribution.

Using the estimated phase and Eq. (7), the PSF can be generated at any object distance. The generated PSF is cropped to its central 900 × 900 pixels, and the object is reconstructed by deconvolving the image response with the corresponding cropped PSF. The PSFs calculated by the proposed model from z1 = 22 mm to z1 = 30 mm, shown in the third row of Fig. 9, are compared in terms of correlation with the PSFs captured with point sources (real PSFs) using a 3D movable platform at z1 from 22 mm to 30 mm, shown in the first row of Fig. 9, where the distance z2 (distance from the cover glass) is adjusted according to the defocus level. The second row of Fig. 9 shows the PSFs obtained by the scaling method [10], where the captured PSF at z1 = 22 mm serves as the anchor and the corresponding scaling is applied to the other depths. As shown in the figure, the scaled PSFs present high correlation P(A, B) with the real captured ones only at z1 = 22 mm and 24 mm, since their input PSF is the one captured at z1 = 22 mm; as they are scaled to other depths, P(A, B) drops quickly because the errors are amplified. In comparison, our model-derived PSFs present high correlation with the real captured ones even as the distance increases, both in the visual distribution and in the Pearson correlation P(A, C). The increase in P(A, C) is mainly caused by the decrease in z2 (the distance between the diffuser and the sensor), which makes the captured PSF closer to the near-field distribution captured for phase estimation.

Fig. 9 Comparison between the real captured PSFs, the scaled PSFs, and our model-derived PSFs.

Then, the two real imaging objects in Fig. 7(b) are tested by recording sensor images at the corresponding object distances. The captured sensor images of each object are shown in the first rows of Figs. 10(a) and 10(b), respectively. The reconstruction results in the second, third, and fourth rows of Figs. 10(a) and 10(b) are generated by deconvolving the sensor images with the corresponding real captured PSFs (first row of Fig. 9), the corresponding PSFs generated by the scaling method [10] (second row of Fig. 9), and our model-derived PSFs (third row of Fig. 9), respectively.

Fig. 10 Reconstruction results comparison for: (a) Object 1; and (b) Object 2, using the captured PSFs, scaled PSFs, and our model-derived PSFs provided in Fig. 9 correspondingly.

It is observed that the subjective quality of the reconstructions generated by the proposed PSFs is comparable to that of the reconstructions generated by the captured PSFs, whereas the reconstructions generated by the scaling method [10] degrade as the scaling distance increases. The reason is that the single direct measurement of the PSF contains errors, and these errors propagate more strongly as the depth increases, as can be observed in Fig. 9, where the correlation index decreases quickly with the scaling distance. The experiment shows that our method covers a wider depth range (from z1 = 22 mm to z1 = 30 mm) than the scaling method (from z1 = 22 mm to z1 = 26 mm) because it is more robust to noise. The objective quality of the reconstructions is further compared in Table 1 using the structural similarity (SSIM) and the Pearson correlation between the reconstructed objects and the ground-truth images in Fig. 7(b). Our reconstructions also provide objective quality comparable to the reconstructions using the real captured PSFs, consistent with the subjective quality presented above. Furthermore, our method outperforms the scaling method in all metrics for z1 larger than 24 mm, which is also consistent with the subjective quality. The possible error sources for the proposed PSF lie in the measurement conditions, where there is no guarantee that the point source is placed exactly on-axis; non-ideal illumination conditions, the thickness of the diffuser (energy absorption), and sensor noise also contribute to the error. The real captured PSFs are also subject to measurement errors, but they serve as a good indicator of whether the proposed model is valid. The results demonstrate that the proposed PSF model is robust to different objects at different imaging distances and can provide reconstructions good enough for real applications while saving a huge number of acquisitions in retrieving real PSFs, which will greatly benefit the application of diffuser cameras.

Table 1. Objective quality comparison using the ground truths in Fig. 7(b) as the references.

4. Conclusion

In this paper, we proposed a theoretical model to derive the PSFs for diffuser camera systems. The phase of the diffuser is estimated by a projection model inspired by the laser beam shaping theory used for diffuser design. The experimental results demonstrate that the proposed model can generate the PSFs at different depths with high accuracy and robustness. Using only two captures to estimate the phase distribution, it can reconstruct the object at any depth configuration, and it provides reconstruction performance comparable to that of the real captured PSFs, which facilitates the application of diffuser cameras.

In future work, we will analyze quantitatively how the error in the phase estimation is manifested in the PSF error by establishing a statistical relationship between object complexity and reconstruction quality. Another direction is to extend the reconstruction from planar objects to 3D volumetric objects. In addition, the reconstruction quality at off-axis locations is affected by off-axis aberration and other high-angle phenomena, and few works have modeled these aberrations for diffuser camera systems; therefore, another of our future directions is to improve the reconstruction quality by modeling the off-axis aberration.

Funding

National Natural Science Foundation of China (NSFC) (61827804); Shenzhen Project (JCYJ20170817162658573), China.

References

1. R. Ng, M. Levoy, M. Bredif, G. Duval, M. Horowitz, and P. Hanrahan, "Light field photography with a hand-held plenoptic camera," Technical Report, Stanford University (2005).
2. A. Veeraraghavan, R. Raskar, A. Agrawal, A. Mohan, and J. Tumblin, "Dappled photography: mask enhanced cameras for heterodyned light fields and coded aperture refocusing," ACM SIGGRAPH, New York, NY, USA, Article 69 (2007).
3. A. Levin, R. Fergus, F. Durand, and W. T. Freeman, "Image and depth from a conventional camera with a coded aperture," ACM SIGGRAPH, New York, NY, USA, Article 70 (2007).
4. K. Tajima, T. Shimano, Y. Nakamura, M. Sao, and T. Hoshizawa, "Lensless light-field imaging with multi-phased Fresnel zone aperture," 2017 IEEE International Conference on Computational Photography (ICCP), Stanford, CA, pp. 1–7 (2017).
5. A. K. Singh, G. Pedrini, M. Takeda, and W. Osten, "Scatter-plate microscope for lensless microscopy with diffraction limited resolution," Sci. Rep. 7(1), 10687 (2017).
6. N. Antipa, G. Kuo, R. Heckel, B. Mildenhall, E. Bostan, R. Ng, and L. Waller, "DiffuserCam: lensless single-exposure 3D imaging," Optica 5(1), 1–9 (2018).
7. Y. T. Lu and S. Chi, "Fabrication of light-shaping diffusion screens," Opt. Commun. 214(1), 55–63 (2002).
8. J. Yao and G. C. K. Chen, "Holographic diffuser for diffuse infrared wireless home networking," Opt. Eng. 42(2), 1–8 (2003).
9. M. Singh and A. Kumar, "Optical encryption and decryption using a sandwich random phase diffuser in the Fourier plane," Opt. Eng. 46(5), 055201 (2007).
10. X. Xie, H. Zhuang, H. He, X. Xu, H. Liang, Y. Liu, and J. Zhou, "Extended depth-resolved imaging through a thin scattering medium with PSF manipulation," Sci. Rep. 8(1), 4585 (2018).
11. G. Kim, K. Isaacson, R. Palmer, and R. Menon, "Lensless photography with only an image sensor," Appl. Opt. 56(23), 6450–6456 (2017).
12. G. Kim and R. Menon, "Computational imaging enables a "see-through" lens-less camera," Opt. Express 26(18), 22826–22836 (2018).
13. N. Antipa, S. Necula, R. Ng, and L. Waller, "Single-shot diffuser-encoded light field imaging," 2016 IEEE International Conference on Computational Photography (ICCP), Evanston, IL, pp. 1–11 (2016).
14. C. S. Liu and P. D. Lin, "Computational method for deriving the geometric point spread function of an optical system," Appl. Opt. 49(1), 126–136 (2010).
15. J. W. Goodman, Introduction to Fourier Optics (Roberts and Company Publishers, 2005).
16. D. G. Voelz, Computational Fourier Optics: A MATLAB Tutorial (SPIE, 2011).
17. S. A. Shroff and K. Berkner, "Image formation analysis and high resolution image reconstruction for plenoptic imaging systems," Appl. Opt. 52(10), D22–D31 (2013).
18. X. Jin, L. Liu, Y. Chen, and Q. Dai, "Point spread function and depth-invariant focal sweep point spread function for plenoptic camera 2.0," Opt. Express 25(9), 9947–9962 (2017).
19. F. M. Dickey, Laser Beam Shaping: Theory and Techniques, 2nd ed. (CRC Press, 2017).
20. J. R. Fienup, "Phase retrieval algorithms: a comparison," Appl. Opt. 21(15), 2758–2769 (1982).
21. M. R. Teague, "Deterministic phase retrieval: a Green's function solution," J. Opt. Soc. Am. 73(11), 1434–1441 (1983).
22. V. Elser, "Phase retrieval by iterated projections," J. Opt. Soc. Am. A 20(1), 40–55 (2003).
23. W. H. Richardson, "Bayesian-based iterative method of image restoration," J. Opt. Soc. Am. 62(1), 55–59 (1972).
24. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Trans. Image Process. 13(4), 600–612 (2004).
25. Luminit light shaping diffusers, https://www.luminitco.com/products/light-shaping-diffusers.
