We experimentally demonstrate light-field moment microscopy (LFMM). The proposed technique employs a better estimation of the intensity derivative in solving the Poisson equation and can therefore significantly reduce the noise and error in the reconstructed light-field moment. The light field can then be reconstructed from the moment, enabling perspective views and depth estimation of the object. The proposed LFMM can be implemented with a standard commercial light microscope. This will open up new possibilities for standard microscopes in high-resolution light-field observation.
© 2015 Optical Society of America
Conventional cameras and microscopes capture a two-dimensional (2D) image of the scene, whereas the depth (z) information is missing because the acquisition essentially integrates the light field along the z direction. However, depth information is very important in many applications, such as entertainment, optical inspection, and biological imaging. In the last decade, a rapid, scanless three-dimensional (3D) imaging approach called light field imaging has received increasing attention because of its high compatibility with various instruments, ranging from photographic cameras [1] to microscopes [2, 3]. From the geometric-optics point of view, light field imaging simultaneously records the 2D spatial and 2D angular information of each light ray in the bundle of rays observed by the digital image sensor, thus allowing the generation of perspective views and focal stacks of the scene by computationally tracing the rays back to certain planes. Technically, the light field can be captured by using a microlens array with a standard camera [1, 2], a camera array [4, 5], or an absorbing mask [6]. Of these, microlens-array-based light field instruments receive particular interest because of their commercial success and potential application in light field microscopy [2, 3, 7]. However, the microlens array imposes an intrinsic trade-off between the spatial and the angular resolution: one must sacrifice spatial resolution in order to obtain angular resolution, which determines the number of perspective views, the number of focal stacks, and the accuracy of the depth-map estimation.
Alternatively, one can collect the angular moment of the light rays using the light field moment imaging (LMI) technique recently proposed by Orth and Crozier [8]. With LMI, one does not need to sacrifice spatial resolution to achieve perspective views, since the raw images are taken by standard cameras or light microscopes. Recently, LMI has also been used in Fourier hologram synthesis [9] and coherent diffractive imaging [10]. The principle of LMI is to extract the first-order angular moment of the light field by solving the Poisson equation, which connects the Laplacian of the scalar potential of the light field with the intensity derivative, in a way similar to the transport-of-intensity equation. We note that the Poisson solver is severely affected by how accurately the intensity derivative can be estimated. To obtain a good estimate, one may think, from a purely mathematical perspective, of taking the two intensity images at two lateral planes as close together as possible. However, as the two planes get closer to each other, sensor noise becomes increasingly significant and eventually submerges the intensity derivative, leading to a faulty reconstruction.
In this paper, we propose light field moment microscopy (LFMM), inspired by the work of Orth and Crozier [8], for the inspection of samples at the micro-scale. The significant difference between our LFMM and standard LMI is that we use a better method to estimate the intensity derivative in the Poisson solver. Both numerical and optical experiments are carried out to demonstrate our method. The paper is organized as follows: in Sec. 2, we review the basic principle of LMI. Then, we discuss the noise problem in LMI and describe our noise-reduction LMI method in Sec. 3. The simulation and experimental results of the proposed approach are described in Sec. 4 and Sec. 5, respectively. Finally, the conclusion is drawn in Sec. 6.
2. Light field moment imaging
According to the plenoptic function, the light field can be parameterized as a five-dimensional function L(x, y, z, tanθX, tanθY), where (x, y, z) are the spatial coordinates and (tanθX, tanθY) are the angular coordinates. The function L describes the position and propagation direction of each ray in the light field. The lateral intensity I(x, y) observed at a plane z can be expressed as the integral of L over the angular dimensions at that plane. The LMI connects the derivative of I(x, y; z) with respect to z to a scalar potential U(x, y; z) as:
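A sketch of this relation, following the standard LMI formulation of Orth and Crozier (reproduced here for reference; sign conventions vary between treatments):

```latex
% LMI transport equation and its Poisson form. M = (M_x, M_y) is the
% normalized first angular moment of the light field:
%   M_x = (1/I) \iint L \tan\theta_x \, d\tan\theta_x \, d\tan\theta_y.
\frac{\partial I(x,y;z)}{\partial z}
  + \nabla_{\perp}\cdot\left[ I(x,y;z)\,\mathbf{M}(x,y;z) \right] = 0,
\qquad
I\,\mathbf{M} \equiv \nabla_{\perp} U
\quad\Longrightarrow\quad
\nabla_{\perp}^{2} U(x,y;z) = -\,\frac{\partial I(x,y;z)}{\partial z}.
```

The substitution I M = ∇U is what turns the transport equation into a Poisson equation for the scalar potential U.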
It is clear that in the coherent limit, Eq. (1) reduces to the transport-of-intensity equation, and the scalar potential U becomes the phase of the field. Numerically, Eq. (1) can be solved using a Fourier-transform-based algorithm [8].
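A minimal numerical sketch of such an FFT-based Poisson solver, assuming the convention ∇²U = −∂I/∂z (the function and variable names are our own, not from the paper):

```python
import numpy as np

def solve_poisson_fft(dI_dz, dx=1.0):
    """Solve laplacian(U) = -dI/dz for U on a periodic grid via the FFT.

    The inverse filter 1 / (4 pi^2 (fx^2 + fy^2)) diverges at the DC
    term, so that component is set to zero (U is only defined up to an
    additive constant anyway).
    """
    ny, nx = dI_dz.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    denom = 4.0 * np.pi**2 * (FX**2 + FY**2)
    denom[0, 0] = np.inf                      # suppress the singular DC term
    U_hat = np.fft.fft2(dI_dz) / denom
    return np.real(np.fft.ifft2(U_hat))
```

The divergence of the inverse filter at low spatial frequencies is exactly the noise-sensitivity issue discussed in Sec. 3.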
Once the scalar potential U(x, y; z) is computed [Eq. (3)], we can reconstruct the angular moment vector field M via the definition of U(x, y; z):
Orth and Crozier have shown that σ² = NA² gives a good perspective-shifting effect [8]. By extracting 2D slices of the light field L at different (tanθX, tanθY) coordinates, we obtain different perspective views of the scene, similar to the perspective-shifting effect observed in light-field photography [1] and light-field microscopy [2].
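For reference, the moment extraction and the Gaussian light-field reconstruction can be sketched as follows, in the spirit of Orth and Crozier's formulation (the 1/(2πσ²) normalization shown is our assumption):

```latex
% Moment from the recovered potential, and the Gaussian angular ansatz
% used to rebuild the light field:
\mathbf{M}(x,y;z) = \frac{\nabla_{\perp} U(x,y;z)}{I(x,y;z)},
\qquad
L(x,y,\tan\theta_x,\tan\theta_y) =
  \frac{I(x,y)}{2\pi\sigma^{2}}
  \exp\!\left[
    -\frac{(\tan\theta_x - M_x)^{2} + (\tan\theta_y - M_y)^{2}}
          {2\sigma^{2}}
  \right],
\qquad \sigma^{2} = \mathrm{NA}^{2}.
```

Each angular slice of this Gaussian ansatz, taken at fixed (tanθx, tanθy), is one perspective view.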
3. Noise reduction LFMM
It is seen from Eq. (5) in Section 2 that the intensity derivative ∂I/∂z is estimated from the intensity measured at two neighboring planes. Ideally, the estimation accuracy is determined by the distance Δz between the two planes, according to finite-difference theory [14]. Thus, Δz should be as small as possible in order to obtain a good estimate; otherwise, the influence of the higher-order terms cannot be ignored, the linearity assumption behind the finite-difference approximation becomes invalid, and significant noise appears in the reconstructed light field. Therefore, the most direct way to increase the estimation accuracy of ∂I/∂z is to capture the two images at planes as close together as possible. However, owing to the dynamic range and noise characteristics of the camera in actual experiments, it becomes difficult to distinguish I1(x, y; z1) from I2(x, y; z2) when the two planes are too close, in particular when the object is weakly diffractive. As a result, significant noise appears in the difference I1(x, y; z1) − I2(x, y; z2). Furthermore, the Poisson solver described in Eq. (3) requires the construction of the filter function H(fx, fy) defined in Eq. (4). This filter takes extremely large values as (fx, fy) approaches (0, 0), and is therefore very sensitive to noise in the low-spatial-frequency region. This means that the reconstructed scalar potential U(x, y; z) is vulnerable to noise in this region as well, limiting the perspective-view effect in LMI.
It is now clear that the approach proposed by Orth and Crozier [8] involves a trade-off between the estimation accuracy of ∂I/∂z and noise. In this work, we propose to fit the intensity derivative using multiple images to address this problem. In this algorithm, the intensity derivative ∂I/∂z is calculated in a pixelwise manner. At each pixel (xi, yj), where i, j = 1,…,N, we first fit the intensity evolution along z to an mth-order polynomial. The intensity derivative at this pixel, ∂I(xi, yj)/∂z, is then obtained directly as the first-order coefficient of the polynomial. One can see that all the images contribute to the estimation of the intensity derivative, which helps improve the estimation accuracy. A similar method has been proposed in the field of phase retrieval for improving the accuracy of phase reconstruction.
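A compact sketch of this pixelwise fitting step (a NumPy illustration under our own naming, assuming an odd number of equally spaced planes centred on the plane of interest):

```python
import numpy as np

def fit_dI_dz(stack, dz, order=7):
    """Estimate dI/dz at the central plane by polynomial fitting.

    stack : (P, Ny, Nx) array of intensities at P equally spaced planes
            straddling the plane of interest (P odd, order < P).
    dz    : axial spacing between neighboring planes.

    Each pixel's intensity-vs-z curve is fit to a polynomial in z; with
    z = 0 at the central plane, the first-order coefficient of the fit
    is the intensity derivative there.
    """
    P, ny, nx = stack.shape
    z = (np.arange(P) - P // 2) * dz            # z = 0 at the central plane
    # polyfit accepts a 2D y argument: one independent fit per column,
    # i.e. per pixel, all sharing the same z samples.
    coeffs = np.polynomial.polynomial.polyfit(z, stack.reshape(P, -1), order)
    return coeffs[1].reshape(ny, nx)            # linear coefficient = dI/dz
```

Because every pixel shares the same axial sampling, the whole image stack is fit in one vectorized least-squares call rather than a Python loop over pixels.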
Now let us provide a more theoretical analysis of the problem. To begin, we take the Taylor series expansion of the intensity captured at a defocus distance Δz, and obtain Eq. (8)
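The expansion in question has the familiar form (reproduced here for reference):

```latex
% Taylor expansion of the defocused intensity about the plane z:
I(x,y;z+\Delta z) = I(x,y;z)
  + \frac{\partial I}{\partial z}\,\Delta z
  + \frac{1}{2!}\,\frac{\partial^{2} I}{\partial z^{2}}\,\Delta z^{2}
  + \cdots
  + \frac{1}{m!}\,\frac{\partial^{m} I}{\partial z^{m}}\,\Delta z^{m}
  + O\!\left(\Delta z^{m+1}\right).
```

The first-order difference [I(z+Δz) − I(z)]/Δz keeps only the linear term, so every higher-order term it discards becomes estimation error.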
Apparently, we need to account for the higher-order terms in order to improve the estimation accuracy of ∂I/∂z. It has been shown that the nth-order derivative of a function can be computed numerically from n + 1 or more equally spaced measurements using polynomial fitting. We can then use this more accurate estimate of ∂I/∂z in the Poisson solver, and Eq. (3) can now be written as
Solving this higher-order Poisson equation is expected to yield a better solution. In the following sections, we provide both numerical simulations and experimental results to demonstrate this method.
4. Numerical simulation
In the numerical simulation, we used a pure phase object to verify the usefulness of the higher-order terms, as this provides a more quantitative understanding of the method. For simplicity, we use a coherent imaging system in our simulation; as discussed above, in the coherent limit the light-field moment reduces to the spatial frequency. We first numerically construct a 4f imaging system based on wave optics, propagate the object wavefront to a number of axial planes on both sides of the focal plane using the Fresnel transform [16, 17], and store the intensity obtained at each of these planes. The distance between every two neighboring planes is set to the same value. Using this intensity data set, we can examine the evolution of the intensity at each pixel across these planes and fit it to a curve specified by a high-order polynomial. The noise-reduced ∂I/∂z is then obtained directly as the first-order coefficient of the polynomial.
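The propagation step can be sketched with a standard transfer-function implementation of the Fresnel transform (in the style of [16, 17]); the function below is our own illustration, not the paper's code:

```python
import numpy as np

def fresnel_propagate(field, wavelength, dz, dx):
    """Propagate a sampled complex field by dz via the Fresnel
    transfer function H(fx, fy) = exp(-i pi lambda dz (fx^2 + fy^2)).

    The constant on-axis phase exp(i k dz) is dropped, as is common;
    all lengths share the same unit (e.g. meters).
    """
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    H = np.exp(-1j * np.pi * wavelength * dz * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Applying this repeatedly with the same dz yields the equally spaced intensity stack described above.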
In the simulation, we obtained intensity images with an axial spacing of Δz = 1 μm using our numerical coherent imaging system. The simulation results are shown in Fig. 1 and Fig. 2. Figure 1(a) shows the axial evolution of the intensity at a certain pixel, fitted to 1st-, 7th- and 13th-order polynomials, respectively. It clearly shows that estimating the intensity derivative with the first-order difference [I(Δz) − I(0)]/Δz gives only a very rough estimate, with significant deviation from the true value (shown by the blue markers), whereas high-order polynomial fitting gives a far better estimate. To quantify the accuracy, we plot the fit error in Fig. 1(b); its values for the 1st-, 7th- and 13th-order polynomial fits are 0.0249, 0.0065 and 0.0008 RMS, respectively. We therefore expect to recover the moment M and the light field L using the better estimate of the intensity derivative, i.e., by solving Eq. (10) instead of Eq. (3).
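The gap between the first-order difference and high-order fitting is easy to reproduce on a toy signal (our own illustration, not the paper's data): for I(z) = exp(z), the true derivative at z = 0 is exactly 1.

```python
import numpy as np

# 15 equally spaced planes centred on z = 0.
dz = 0.2
z = (np.arange(15) - 7) * dz
I = np.exp(z)

# First-order forward difference, as in standard two-plane LMI.
fd_estimate = (np.exp(dz) - np.exp(0.0)) / dz

# 7th-order polynomial fit; the linear coefficient is dI/dz at z = 0.
coeffs = np.polynomial.polynomial.polyfit(z, I, 7)
poly_estimate = coeffs[1]

fd_error = abs(fd_estimate - 1.0)      # dominated by the ignored curvature
poly_error = abs(poly_estimate - 1.0)  # orders of magnitude smaller
```

In the noise-free case the improvement comes purely from absorbing the higher-order Taylor terms into the fit; with noisy data the fit additionally averages over all the planes.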
Poisson noise was added to the simulated intensity patterns in order to test the noise-reduction performance of our LFMM technique. The moment vectors M recovered using different orders of polynomial fitting are plotted in Fig. 2. The first column of Fig. 2 shows the s and t components of the ideal moment M; these two images serve as the ground truth for testing the proposed method. The images in the second column were computed by solving Eq. (3), and the images in the third and fourth columns were computed by solving the high-order LMI equation [Eq. (10)] with the intensity derivative fitted to the 7th and the 13th order, respectively.
It is clearly seen from Fig. 2 that estimating the intensity derivative using the finite difference gives the worst result, owing to the noise sensitivity of the LMI solver. The result is significantly improved when using the higher-order estimate of the intensity derivative. However, the third and fourth columns show that the angular moments recovered using the 13th-order polynomial fitting are worse than those recovered using the 7th-order fitting. The reason is that a higher-order polynomial may fit the noise as if it were signal while still yielding smaller RMS fit errors; this is the overfitting issue [18] encountered in curve-fitting problems.
5. Experimental results
We now set up an experimental system, shown in Fig. 3(a), to demonstrate the proposed LFMM technique. The system is simply a standard microscope (Nikon Ni) equipped with a 10×, 0.3 NA microscope objective (Nikon) and an sCMOS camera (PCO Edge 4.2 M). The sample was a mosquito's mouthparts, a common biological sample slide available on the market. We captured a stack of intensity images of the sample with an axial spacing of Δz = 1 μm by tuning the focus knob. Three of the intensity images are shown in Fig. 3(b).
It is clearly seen from Fig. 3(b) that the intensity images captured at adjacent planes are very similar to each other. This means that the corresponding intensity difference is easily corrupted by noise, as evidenced in Fig. 4(a). In contrast, the noise can be significantly suppressed by the proposed method, as shown in Figs. 4(b)-4(d), in which the intensity derivative was calculated by fitting the experimental data set with polynomials of the 5th, 7th and 13th order, respectively.
With these data at hand, we can recover the scalar potential U and the moment M by solving Eq. (10) and Eq. (6). The experimental results are plotted in Fig. 5. We observe that the potentials and moments recovered using our method (the second and third columns) contain more detailed and finer structures than the results recovered using the finite difference (the first column). This suggests that the proposed LFMM can significantly reduce the noise. In our experiments, we also observed the effect of overfitting in the recovered U and M, shown in the fourth column of Fig. 5 as well as in Fig. 4(d): noise is present in these images because fitting the data with a polynomial of too high an order takes the noise as signal.
After recovering the moment vector field M, we can construct the light field L of the scene according to Eq. (7). Different perspective views of the sample can then be obtained by extracting the corresponding 2D intensity slices from the constructed light field L. Examples of the extracted perspective views of the mosquito's mouthparts at the angular coordinates (0°, 5.8°) are shown in Fig. 6; the complete perspective views of the light field can be found in Visualization 1, Visualization 2, Visualization 3, and Visualization 4. The constructed light fields shown in Fig. 6 and the Visualizations suggest that the proposed LFMM technique has a much better overall noise performance than standard LMI. The noise performance of LFMM depends on the estimation accuracy of the intensity derivative. In our experiments, we observed that the intensity derivative estimated using the 7th-order fitting offers the best perspective-shifting impression (see Visualization 3). The other two either suffer from noise due to underfitting (see Visualization 2) or from degradation, especially in the marginal perspective views (see Visualization 4), owing to overfitting.
To summarize, we have described a noise-reduction LFMM technique. The proposed technique obtains a better estimate of the intensity derivative in solving the LMI equation by using polynomial fitting instead of the finite difference, resulting in a significant reduction of noise. We have performed both simulations and experiments to demonstrate the proposed technique. From the experimental results, we can clearly see light-field-like perspective views of a biological sample using a standard microscope, without the sacrifice of spatial resolution incurred in standard light field microscopy [2, 3, 7]. We believe that the proposed technique will open up new possibilities for standard microscopes in high-resolution light-field observation.
It is also necessary to discuss the computational complexity of the noise-reduction algorithm. Note that the computational complexity of polynomial fitting using the least-squares method is O(M²P), where M is the number of points and P is the order of the polynomial to be fit. So, for images of N × N pixels as in our case, the total computational cost is O(M²N²P). It is worth pointing out that the pixelwise nature of the algorithm lends itself to parallel computing, which can speed up the computation significantly. In a follow-up study, we will seek a better approach that reduces the noise with fewer images and a lighter computational load.
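Beyond parallelization, the per-pixel fits can also be collapsed into a single weighted sum over the planes, because every pixel shares the same z samples; a sketch of this shortcut (our own illustration, not from the paper):

```python
import numpy as np

def derivative_weights(P, dz, order):
    """Precompute per-plane weights w such that dI/dz = sum_k w[k] * I_k.

    Since all pixels share the same z samples, the least-squares fit
    reduces to one small pseudoinverse, computed once; the derivative
    map of the whole stack is then a single weighted sum over planes.
    """
    z = (np.arange(P) - P // 2) * dz
    V = np.vander(z, order + 1, increasing=True)  # (P, order+1) design matrix
    return np.linalg.pinv(V)[1]                   # row yielding the linear coeff

def fast_dI_dz(stack, dz, order=7):
    """Apply the precomputed weights to a (P, Ny, Nx) intensity stack."""
    w = derivative_weights(stack.shape[0], dz, order)
    return np.tensordot(w, stack, axes=1)         # (Ny, Nx) derivative map
```

The weight vector is identical to the linear coefficient of the pixelwise polynomial fit, so the asymptotic cost per image stack drops to one matrix-weighted sum plus a one-off O(P·order²) factorization.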
We thank Antony Orth and Zhilong Jiang for helpful discussions on LMI. This project was supported by the National Natural Science Foundation of China (Grants 61377005, 61327902, 61172178, 61371132, and 61471043), the Recruitment Program of Global Youth Experts, the Specialized Research Fund for the Doctoral Program of Higher Education (No. 20121101110022), the International Science and Technology Cooperation Program of China (No. 2014DFR10960), and the Shanghai Pujiang Program.
References and links
1. R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Stanford Tech. Report. CTSR2005-02 (2005).
2. M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz, “Light field microscopy,” ACM Trans. Graph. 25, 924–934 (2006). [CrossRef]
3. M. Levoy, Z. Zhang, and I. McDowall, “Recording and controlling the 4D light field in a microscope using microlens arrays,” J. Microscopy 235, 144–162 (2009). [CrossRef]
4. B. Wilburn, N. Joshi, V. Vaish, E.-V. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” ACM Trans. Graph. 24, 765–776 (2005). [CrossRef]
5. X. Lin, J. Wu, and Q. Dai, “Camera array based light field microscopy,” in Optical Molecular Probes, Imaging and Drug Delivery, paper JT3A.48, (Optical Society of America, 2015). [CrossRef]
6. A. Veeraraghavan, R. Raskar, A. Agrawal, A. Mohan, and J. Tumblin, “Dappled photography: Mask enhanced cameras for heterodyned light fields and coded aperture refocusing,” ACM Trans. Graph. 26, 69–80 (2007). [CrossRef]
7. R. Prevedel, Y.-G. Yoon, M. Hoffmann, N. Pak, G. Wetzstein, S. Kato, T. Schrodel, R. Raskar, M. Zimmer, and E. S. Boyden, “Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy,” Nature Methods 11, 727–730 (2014). [CrossRef] [PubMed]
8. A. Orth and K. B. Crozier, “Light field moment imaging,” Opt. Lett. 38, 2666–2668 (2013). [CrossRef]
9. N. Chen, J.-H. Park, J. Yeom, J. Kim, G. Li, and B. Lee, “Fourier hologram synthesis from two photographic images captured at different focal planes,” Signal Recovery and Synthesis, Paper JTu4A-6 (Optical Society of America, 2014). [CrossRef]
10. Z. Jiang, X. Pan, C. Liu, L. Wang, and J. Zhu, “Light field moment imaging with the ptychographic iterative engine,” AIP Advances 4, 107108 (2014). [CrossRef]
12. M. Levoy, “Light fields and computational imaging,” Computer 39(8), 46–55 (2006). [CrossRef]
13. Z. Zhang and M. Levoy, “Wigner distributions and how they relate to the light field,” IEEE International Conference on Computational Photography (ICCP), 1–10 (2009).
14. G. Strang, Computational Science and Engineering, vol. 1 (Wellesley-Cambridge Press, 2007).
16. J. W. Goodman, Introduction to Fourier Optics (Roberts and Company Publishers, 2005).
17. D. G. Voelz, Computational Fourier Optics: A MATLAB Tutorial (SPIE Press, 2011).
18. P. Flach, Machine Learning: The Art and Science of Algorithms that Make Sense of Data (Cambridge University Press, 2012). [CrossRef]