Phase sensitivity in differential phase contrast microscopy: limits and strategies to improve it

Open Access

Abstract

The phase sensitivity limit of Differential Phase Contrast (DPC) with partially coherent light is analyzed in detail. The parameters that tune phase sensitivity, such as the diameter of the illumination, the numerical aperture of the objective, and the noise of the camera, are taken into account to determine the minimum phase contrast that can be detected. We found that a priori information about the sample can be used to fine-tune these parameters to increase phase contrast. Based on this information, we propose a simple algorithm to predict the phase sensitivity of a DPC setup, which can be run before the setup is built. Experiments confirm the theoretical findings.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Differential phase contrast refers to a group of phase imaging methods that employ an asymmetry, either on the illumination or on the detection side, or by introducing offset apertures, such that the resulting intensity on the detector bears information regarding the phase gradient of the sample [1]. In this paper we focus on Partially Coherent DPC (PC-DPC), which employs multiple asymmetric sources of illumination (such as LEDs) to reconstruct a quantitative map of the sample’s phase distribution. This approach to DPC is of interest for microscopic imaging of transparent samples, with the advantage, compared to interferometry-based techniques, of not showing artifacts such as ringing and speckles. Numerous ex-vivo biological applications have been demonstrated over the years [2–10]. Moreover, its use has been proven in cases such as in-vivo imaging of the human retina [11], where interferometric techniques have limited capabilities in detecting structures with low reflectivity [12].

While several previous publications have explored the spatial resolution of this technique [13,14], not much attention has been given to the achievable phase sensitivity, the parameters affecting it, and its fundamental limit. A few experimental, setup-specific phase sensitivity values can be found in the literature [15–17], but a complete study of the parameters that dictate the limit of phase sensitivity is still missing. Ideally, any phase structure in the object generates an intensity pattern, whose minimum value could be a single photon. On the other hand, any source of noise limits detection to intensity values greater than the overall noise. The sensitivity limit of the DPC technique is thus given by the phase value that generates an intensity pattern whose amplitude equals that of the noise.

In this paper, we analyze the main factors that determine phase sensitivity in DPC detection. First, contrast trends are shown and discussed for different types of samples. Simulations of several DPC configurations are used to give a guideline as to how the contrast can be optimized for specific sample features. A simple approach is then described to evaluate the sensitivity performance of a DPC setup. The only information required for this estimation is the illumination profile, the sample shape, and an experimental model of Poissonian-Gaussian noise [18,19]. This simulation can be performed before the DPC setup is actually built, so it represents a powerful tool to explore the possible DPC configurations and choose the one best suited to the specific user's needs in terms of sensitivity and resolution. Experimental verification complements the simulations.

Finally, the sensitivity limit for common PC-DPC configurations is discussed and compared to other phase imaging techniques.

2. Theory

A thin, partially transparent object can be described by its two-dimensional transmission function $o({\vec{r}} )= {e^{ - \mu ({\vec{r}} )+ i\phi ({\vec{r}} )}}$, where $\vec{r}$ represents the transverse spatial coordinate and µ, ϕ are respectively the absorption and the phase of the sample. In a typical DPC setup, this object is illuminated by a set of plane waves propagating at angles ${\vec{u}_s}$ with respect to the optical axis, with amplitudes described by the source function $S({\vec{u}_s})$. The light exiting the sample is then collected by the objective and low-pass filtered by its pupil P. If the object has weak phase and absorption, i.e. its transmission can be approximated as $o(\vec{r}) \approx 1 - \mu (\vec{r}) + i\phi (\vec{r})$, it is possible to demonstrate that the Fourier transform of the image created on the detector is of the form [14]:

$$\tilde{I}({\vec{u}_{c}}) = B\delta ({\vec{u}_{c}}) + {H_{abs}}({\vec{u}_{c}})\tilde{\mu }({\vec{u}_{c}}) + {H_{ph}}({\vec{u}_{c}})\tilde{\phi }({\vec{u}_{c}})$$
where ${\vec{u}_c}$ represents the spatial frequency coordinate at the detector, δ is a Dirac delta, B is the DC component, Habs and Hph are the absorption and phase transfer functions, respectively, and $\tilde{\mu }$, $\tilde{\phi }$ are the Fourier transforms of the absorption and phase, respectively. For the scope of this paper we will consider samples with no absorption, as the theory can be easily extended to samples with weak absorption. The phase transfer function is proportional to [14,20]:
$${H_{ph}}({\vec{u}_{c}}) \propto i\left[ {\int\!\!\!\int {S({{\vec{u}}_{s}}){P^\ast }({{\vec{u}}_{s}})P( - {{\vec{u}}_{c}} + {{\vec{u}}_{s}}){d^2}{{\vec{u}}_{s}} - \int\!\!\!\int {S({{\vec{u}}_{s}})P({{\vec{u}}_{s}}){P^\ast }({{\vec{u}}_{c}} + {{\vec{u}}_{s}}){d^2}{{\vec{u}}_{s}}} } } \right]$$
where i is the imaginary unit and * denotes complex conjugation. Therefore, the two critical parameters are the source profile $S({\vec{u}_s})$ and the pupil function P. For a non-aberrated setup, the pupil $P(\vec{u})$ is a circular function of radius NAobj/(nλ) in frequency space, where NAobj is the objective’s numerical aperture, λ is the illumination wavelength and n is the refractive index of the medium in which the objective is immersed. The source profile can vary as long as it is asymmetric, in order to correctly perform DPC [14]. It has been shown that optimal results are obtained with a half ring illumination [13], whose external radius is αouterNAobj/(nλ) and internal radius is αinnerNAobj/(nλ), with αouter=1 and αinner<1. Throughout this manuscript, we will employ the half ring illumination. An example of the source, pupil, and resulting phase transfer function is shown in Fig. 1.

Fig. 1. (a) Pupil function: circle of radius NAobj/(nλ); (b) Source function: half ring with external radius αouterNAobj/(nλ) and internal radius αinnerNAobj/(nλ), with αouter=1 and αinner<1; (c) Normalized phase transfer function obtained with the pupil of Fig. 1(a) and the source of Fig. 1(b).
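As a concrete illustration, the pupil, the half-ring source, and the transfer function of Fig. 1 can be generated numerically with a few lines of MATLAB. The sketch below uses the FFT-based form of Eq. (2) given in Appendix A (up to constant factors); the grid size, pixel size, wavelength, NA, and αinner are arbitrary example values, not the parameters of our setup.

% Minimal sketch: pupil, half-ring source and phase transfer function (cf. Fig. 1).
% All numerical values below are example values, not those of our setup.
N      = 512;                 % grid size (pixels)
dxo    = 0.2e-6;              % object-space pixel size [m]
lambda = 660e-9;              % illumination wavelength [m]
n_med  = 1;                   % refractive index of the immersion medium
NAobj  = 0.5;                 % objective numerical aperture
a_in   = 0.6;                 % alpha_inner (alpha_outer = 1)

% Centered spatial-frequency grid
fx = (-N/2:N/2-1)/(N*dxo);
[FX,FY] = meshgrid(fx,fx);
FR = hypot(FX,FY);

% Pupil: circle of radius NAobj/(n*lambda); source: half ring, asymmetric along y
P = double(FR <= NAobj/(n_med*lambda));
S = double(FR <= NAobj/(n_med*lambda) & FR >= a_in*NAobj/(n_med*lambda) & FY > 0);

% Centered FFT pair
F  = @(x) fftshift(fft2(ifftshift(x)));
iF = @(x) fftshift(ifft2(ifftshift(x)));

% Phase transfer function (FFT form of Eq. (2), see Appendix A)
Hph = iF(imag(conj(F(P)) .* F(P.*S)));                      % purely imaginary, antisymmetric
imagesc(imag(Hph)/max(abs(Hph(:)))); axis image; colorbar;  % cf. Fig. 1(c)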

It is important to note that Eq. (1) and Eq. (2) are results obtained as a consequence of the Born approximation or, equivalently, of the Rytov approximation if the sample also has a low phase gradient [21]. If the sample does not satisfy these assumptions, the contrast of the images will be reduced [16,21–23]. In this paper, the samples used for experiments and simulations are all well within the regime in which the linear equations can be used. In particular, when testing the sensitivity limit, the samples are so weakly scattering that the modulation they introduce is only a small fraction of the background intensity. As a consequence, it is correct to use the linear approximation to investigate the sensitivity, as the non-linear terms would only become significant for samples well above the sensitivity limit.

2.1 Real samples: matching the frequency spectrum of the transfer function

As seen in Eq. (1), the phase profile of the sample is carried to the final image through a multiplication in Fourier space with a frequency-dependent transfer function. The spatial resolution will be mostly limited by the frequency cutoff of this transfer function, while the contrast depends on the total overlap between the spatial spectrum of the sample and the shape of the transfer function [24]: to obtain the real space image, it is necessary to inverse Fourier transform Eq. (1), obtaining a modulated integral of the product between ${H_{ph}}({\vec{u}_c})$ and $\tilde{\phi }({\vec{u}_c})$. Indeed, for a phase-only object (assuming for now unitary magnification):

$$I({\vec{r}_c}) = B + {\Im ^{ - 1}}\{{{H_{ph}}({{\vec{u}}_c})\tilde{\phi }({{\vec{u}}_c})} \}= B + \int\!\!\!\int {{H_{ph}}} ({\vec{u}_c})\tilde{\phi }({\vec{u}_c}){e^{i2\pi {{\vec{u}}_c} \cdot {{\vec{r}}_c}}}{d^2}{\vec{u}_c}$$
where ${{\Im }^{ - 1}}$ denotes an inverse Fourier transform. The contrast is then given by the difference between the maximum and minimum pixel values in the region of interest:
$$c = \max [{{{\Im }^{ - 1}}\{{{H_{ph}}({{\vec{u}}_c})\tilde{\phi }({{\vec{u}}_c})} \}} ]- \min [{{{\Im }^{ - 1}}\{{{H_{ph}}({{\vec{u}}_c})\tilde{\phi }({{\vec{u}}_c})} \}} ]$$
while the normalized contrast is obtained as:
$$Normalized\textrm{ }Contrast = \frac{{\max [{{{\Im }^{ - 1}}\{{{H_{ph}}({{\vec{u}}_c})\tilde{\phi }({{\vec{u}}_c})} \}} ]- \min [{{{\Im }^{ - 1}}\{{{H_{ph}}({{\vec{u}}_c})\tilde{\phi }({{\vec{u}}_c})} \}} ]}}{{\max [{{{\Im }^{ - 1}}\{{{H_{ph}}({{\vec{u}}_c})\tilde{\phi }({{\vec{u}}_c})} \}} ]+ \min [{{{\Im }^{ - 1}}\{{{H_{ph}}({{\vec{u}}_c})\tilde{\phi }({{\vec{u}}_c})} \}} ]}}$$
For this reason, an analysis of the sensitivity requires careful consideration of the sample’s shape.
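In practice, Eqs. (4) and (5) amount to a few array operations. The following MATLAB sketch assumes that the transfer function Hph and the phase map phi are given on matching, centered grids; the function name and variable names are ours, not part of the code of this work.

function [c, c_norm] = dpc_contrast(Hph, phi)
% Contrast (Eq. (4)) and normalized contrast (Eq. (5)) of the DPC modulation
% produced by a phase map phi through the phase transfer function Hph.
% Hph and phi must be sampled on matching, centered grids.
F  = @(x) fftshift(fft2(ifftshift(x)));
iF = @(x) fftshift(ifft2(ifftshift(x)));
m  = real(iF(Hph .* F(phi)));              % phase-induced intensity modulation
c      = max(m(:)) - min(m(:));            % Eq. (4)
c_norm = c / (max(m(:)) + min(m(:)));      % Eq. (5)
end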

The main characteristic that dictates the sample’s spectrum is its sharpness, i.e. the magnitude of its derivative. For example, a sharp object like a rectangle has a broad frequency spectrum, while a smoother object has a spectrum more localized around low frequencies. When evaluating the performance of a microscope system, it is common to use standard targets such as the USAF target, which is only made up of sharp rectangular shapes. In contrast, biological samples have much more variety in terms of shape. This difference is critical and should be accounted for, especially if the specific application aims at greater phase sensitivity. To show the effect of the smoothness of the sample on the contrast, one can use two exemplary shapes in the simulations. In this manuscript, we will use a sharp circle and a truncated sinusoid, as shown in Fig. 2(a, d). A cross section of their normalized spectra is shown in Fig. 2(c) and (f); the sharp object has higher amplitude in the high frequency tails, compared to the smooth object. This fact influences contrast via the transfer function, as shown in Eq. (4).
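For reference, the two test objects of Fig. 2 can be generated as follows; the grid size, radius, and the raised-cosine form chosen for the smooth object are our illustrative assumptions.

% Example phase objects: a sharp disc and a smooth (truncated-sinusoid) disc.
N = 512; R = 60;                               % grid size and radius [pixels]
[x,y] = meshgrid(-N/2:N/2-1);
r = hypot(x,y);
phi_sharp  = double(r <= R);                   % sharp circular object, cf. Fig. 2(a)
phi_smooth = cos(pi*r/(2*R)).^2 .* (r <= R);   % smooth object, cf. Fig. 2(d)

% Compare their spectra (cf. Fig. 2(c) and 2(f))
F = @(x) fftshift(fft2(ifftshift(x)));
spec_sharp  = abs(F(phi_sharp));  spec_smooth = abs(F(phi_smooth));
semilogy(spec_sharp(N/2+1,:)/max(spec_sharp(:)),  'DisplayName','sharp');  hold on;
semilogy(spec_smooth(N/2+1,:)/max(spec_smooth(:)),'DisplayName','smooth');
legend show; xlabel('spatial frequency (pixel index)'); ylabel('normalized spectrum');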

Fig. 2. (a) Sharp circular object, (b) a cross-section along the blue line, and (c) a cross section of its spectrum in log scale; (d) smooth circular object, (e) a cross-section along the blue line, and (f) a cross section of its spectrum in log scale.

As depicted in Fig. 1, the two main parameters that change the shape of the transfer function are NAobj and αinner. The false color maps of Fig. 3(a) and (b) show how the normalized contrast changes for the two classes of objects. The contrast of each object is normalized to 1 at its maximum value. To better understand the effect of the two parameters NAobj and αinner, one can look at the plots of Fig. 3(c,d). In Fig. 3(c) the normalized contrast is plotted against NAobj varying between 0.1 and 0.9 for αinner=0.6; in the maps of Fig. 3(a) and (b) this is shown as a blue line with circular markers for the sharp object and cross markers for the smooth object. Comparing the trend of contrast for the sharp and smooth objects as a function of NAobj, one can notice that for the sharp object the contrast is nearly unchanged, while for the smooth object, increasing the NA reduces contrast by almost 40%. Figure 3(d) shows the contrast trends for NAobj=0.6 and αinner varying between 0 and 0.9. In this case, while the absolute values of contrast change, the trend is the same for both objects.

Fig. 3. (a) Normalized contrast vs. NAobj and αinner for the object of Fig. 2(a) (shown in the inset). (b) Normalized contrast vs. NAobj and αinner for the object of Fig. 2(d) (shown in the inset). (c) Normalized contrast for αinner=0.6. The line with circle markers refers to the sharp object, while the cross markers refer to the smooth object. (d) Normalized contrast for NAobj = 0.6. The line with circle markers refers to the sharp object, while the cross markers refer to the smooth object. The lines in (a) and (b) show where the plots of (c) and (d) have been obtained.

As suggested previously in this section, the differences in contrast for smooth and sharp objects are easily explained in Fourier space. This is best illustrated in Fig. 4, where cross-sections of the spatial frequency spectra of the sharp and smooth object, ÕSharp and ÕSmooth, are compared to several phase transfer function cross-sections, normalized to their respective background values. In Fig. 4(a), the phase transfer functions are computed for two values of NAobj (0.1 and 0.6) at a fixed value of αinner=0.6. One can observe that, as expected, for the larger NAobj, the transfer function has a higher cutoff frequency, which would increase resolution. On the other hand, the amplitude of the transfer function at low frequencies is reduced. For the sharp object, the lower amount of information collected at low frequencies for larger NAobj is compensated by the integration of the information at higher frequencies. For the smooth object instead, the high frequencies have such small amplitude that the loss of information at low frequencies results in a lower contrast.

Fig. 4. (a) Cross-section of the normalized modulus of the phase transfer function Hph for two values of NAobj at fixed αinner, compared with the cross-section spatial frequency spectrum of the sharp and smooth object, ÕSharp and ÕSmooth respectively; assuming that the axis of asymmetry of illumination is in direction (0, uy), the cross-section is taken in the perpendicular direction (ux, 0). (b) Cross-section of the normalized modulus of the phase transfer function Hph for two values of αinner at fixed NAobj, compared with the spatial frequency spectrum of the sharp and smooth object.

In Fig. 4(b), the case of varying αinner (0.1 and 0.9) for fixed NAobj = 0.6 is shown instead. As αinner increases, the cutoff frequency does not change, while the transfer at low frequencies increases. This is beneficial both for the sharp and the smooth object, which indeed show an increasing contrast as shown in Fig. 3(d) [14].

2.2 Simulation and estimation of the sensitivity limit

In this section, the goal is to define a framework to simulate image formation for a given sample and DPC microscopy setup, and to estimate the minimum phase variation in the sample that can be detected. The standard definition of sensitivity is the magnitude of the quantity of interest for which the Signal-to-Noise Ratio (SNR) equals one. Typical DPC images present a strong background, so it is more appropriate to use the Contrast-to-Noise Ratio (CNR), which measures the ratio between the difference in intensity of two reference points in the image and the noise [25,26]. In DPC, an object appears with its edges highlighted in opposite grey-level polarity with respect to the background, so the maximum and minimum grey levels are considered to compute the CNR.

The first step is to simulate the DPC image. The phase transfer function ${H_{ph}}$ is computed using the complete forms of Eq. (1) and Eq. (2). Details on the method of simulation are provided in Appendix A. Given a priori knowledge of the general shape of the phase object of interest, or in other words of its spatial spectrum, we can then calculate the contrast as in Eq. (4):

$$c = \max [{{{\Im }^{ - 1}}\{{{H_{ph}} \cdot p \cdot {{\tilde{\phi }}_{01}}} \}} ]- \min [{{{\Im }^{ - 1}}\{{{H_{ph}} \cdot p \cdot {{\tilde{\phi }}_{01}}} \}} ]$$
where p is the phase magnitude and ${\tilde{\phi }_{01}}$ is the phase spectrum of the object previously normalized between 0 and 1.

Regarding the noise, we assume that the main contributions arise from the camera, in particular in the form of shot noise [27] and a signal-independent component. We model the Poissonian-Gaussian noise of the specific camera in use following the MATLAB algorithm developed by Foi et al. [18,19]. The overall standard deviation of the noise is defined as $\sigma (I) = \sqrt {aI(x) + b} $, where a and b are the parameters that define the Poissonian and the Gaussian noise, respectively. The algorithm is capable of characterizing the noise profile of the camera from an image, by segmenting it into areas of uniform intensity. The noise model is then fitted to the data points of intensity and standard deviation of each area, giving back the parameters a and b.
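Once a and b have been estimated for the camera in use, the fitted model can be applied to a noiseless simulated image. The short MATLAB sketch below only applies the σ(I)=√(aI+b) model; it does not reproduce the estimation algorithm of [18,19], and the clipping to an 8-bit range is our assumption.

function I_noisy = add_camera_noise(I_clean, a, b)
% Apply a fitted Poissonian-Gaussian noise model sigma(I) = sqrt(a*I + b)
% to a noiseless simulated image expressed in camera grey levels.
sigma   = sqrt(max(a*I_clean + b, 0));             % signal-dependent standard deviation
I_noisy = I_clean + sigma .* randn(size(I_clean));
I_noisy = min(max(I_noisy, 0), 255);               % clip to the 8-bit range (assumed)
end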

The sensitivity limit can be estimated as the phase magnitude p such that the CNR is equal to 1:

$$CNR = \frac{{\max [{{{\Im }^{ - 1}}\{{{H_{ph}} \cdot p \cdot {{\tilde{\phi }}_{01}}} \}} ]- \min [{{{\Im }^{ - 1}}\{{{H_{ph}} \cdot p \cdot {{\tilde{\phi }}_{01}}} \}} ]}}{{\sigma ({I_b})}} = 1$$
$${p_{sensitivity}} = \frac{{\sigma ({I_b})}}{{\max [{{{\Im }^{ - 1}}\{{{H_{ph}} \cdot {{\tilde{\phi }}_{01}}} \}} ]- \min [{{{\Im }^{ - 1}}\{{{H_{ph}} \cdot {{\tilde{\phi }}_{01}}} \}} ]}}$$
where $\sigma ({I_b})$ is the noise standard deviation calculated at the background intensity. Other sources of noise may be included as a more sophisticated upgrade of this simulation, without changing the main steps described here. Indeed, it is only necessary to find and apply the correct model for the noise to compute $\sigma ({I_b})$.
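The estimate of Eq. (8) then reduces to a few lines. The MATLAB sketch below takes the phase map of the object normalized between 0 and 1 (whose Fourier transform is the φ̃01 of Eq. (8)), the background grey level, and the fitted noise parameters; the function and variable names are ours.

function p_sens = dpc_sensitivity(Hph, phi01, B, a, b)
% Minimum detectable phase magnitude (Eq. (8)) for a DPC configuration with
% phase transfer function Hph, object phase map phi01 normalized between 0
% and 1, background grey level B and noise parameters a, b.
F  = @(x) fftshift(fft2(ifftshift(x)));
iF = @(x) fftshift(ifft2(ifftshift(x)));
m  = real(iF(Hph .* F(phi01)));                 % modulation for unit phase magnitude
sigma_b = sqrt(a*B + b);                        % noise std at the background intensity
p_sens  = sigma_b / (max(m(:)) - min(m(:)));    % Eq. (8)
end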

2.3 Reconstructing samples below the sensitivity limit

The sensitivity limit obtained with the calculations of Section 2.2 relates to a single DPC image. According to this definition, if the sample under observation were below the sensitivity limit, it would not be visible. On the other hand, DPC reconstructions can use multiple images to improve the fidelity of the retrieved phase. Normally, at least two images are recorded, with mirrored illumination profiles; the pixel-wise difference of the two images is then calculated, and this resulting image is used to reconstruct the phase. This digital subtraction process increases the modulation given by the sample by a factor of 2, while the noise standard deviation only increases by a factor of $\sqrt 2 $. Overall, the image used for inversion will have a CNR that is $\sqrt 2 $ times that of the single image. Using more axes of illumination, the coverage of the spatial frequency spectrum is improved [9,14], and if a sufficient number of images is used, the sample may be reconstructed even if the CNR of a single image is below one.
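Explicitly, if a single image has contrast c and background noise standard deviation $\sigma ({I_b})$, the difference image has contrast 2c and noise $\sqrt 2 \,\sigma ({I_b})$, so that
$$CN{R_{diff}} = \frac{{2c}}{{\sqrt 2 \,\sigma ({I_b})}} = \sqrt 2 \,CN{R_{single}}$$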

Nevertheless, it is important to be aware of the sensitivity limit of a single image in a given setup, since a set of higher quality single images will provide a better reconstruction. Moreover, if the reconstruction is performed offline, the user of the microscope will only be able to see a stream of single DPC images.

The choice of reconstruction algorithm and parameters is also influenced by the quality of the DPC images. Both iterative [28] and direct [14] inversion methods have been demonstrated for DPC data, with Tikhonov inversion being the most common. In this method, a regularization parameter is used to balance the effect of fitting noise in the data: if the regularization parameter is too small, the inversion process will fit the noise and the reconstruction will suffer from excessive oscillations; if the regularization parameter is too large, the data will be under-fitted and errors will arise. A good approach is to look for the smallest parameter that suppresses oscillations [29], but the optimal value depends on the quality of the data. Several approaches to automatically select the best parameter have been proposed [29–31], but often a manual approach is employed. In this case, the regularization parameter is chosen to be proportional to the reciprocal of the CNR. As a result, images with low CNR must be inverted with rather large regularization parameters, resulting in partially distorted reconstructions.
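As a concrete reference, a minimal MATLAB sketch of the direct Tikhonov inversion commonly used for multi-axis DPC, in the spirit of [14], is given below; it assumes that the background-normalized difference images and their transfer functions are already available, and the function and variable names are ours.

function phi_rec = dpc_tikhonov(I_dpc, H, reg)
% Direct Tikhonov inversion for multi-axis DPC (in the spirit of [14]).
% I_dpc : cell array of background-normalized DPC difference images
% H     : cell array of the corresponding phase transfer functions
% reg   : regularization parameter (e.g. proportional to 1/CNR)
F  = @(x) fftshift(fft2(ifftshift(x)));
iF = @(x) fftshift(ifft2(ifftshift(x)));
num = 0; den = 0;
for k = 1:numel(I_dpc)
    num = num + conj(H{k}) .* F(I_dpc{k});
    den = den + abs(H{k}).^2;
end
phi_rec = real(iF(num ./ (den + reg)));    % regularized least-squares solution
end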

In this paper, we show experimentally in Section 4.2 that, by performing a phase reconstruction with four images, the ground truth phase can be recovered and confirmed by the theoretical contrast values. For the phase reconstruction, we followed the manual approach, starting from the inverse of the CNR and then testing several values until we obtained a satisfactory trade-off between accuracy of the reconstructed shape and management of noise amplification. Our results are applied to single images for which there is a detectable phase contrast (CNR > 1). As noted earlier, it may still be possible to reconstruct the phase from single images which are below the phase contrast limit.

3. DPC setup

The DPC setup used in the experiments is shown in Fig. 5. Illumination from red LEDs (660 nm) is focused onto the sample with a 4f system. The light exiting the sample is collected by an objective (20x magnification, 0.5 NA) whose Back Focal Plane (BFP) is relayed with a 4f system. A variable aperture is located at the relayed BFP, and by changing its radius, the effective NA of the setup can be controlled. A beam splitter separates the light into two paths. On the sample arm, a tube lens forms an image onto a CMOS camera. On the Fourier arm, a second 4f system creates an image of the angular profile of the illumination onto a second CMOS camera. This image simultaneously contains information on the source profile $S({\vec{u}_s})$ and the pupil function P, which can be used for the calculation of the phase transfer function, ensuring that the simulations represent the actual configuration of the setup.

Fig. 5. The microscope setup. Yellow dashed planes are conjugated with the sample plane, while grey dashed planes are conjugated with the Fourier plane.

The illumination setup is shown in detail in Fig. 6. In order to obtain a uniform illumination of the sample plane with a half ring angular profile, a glass diffuser is located in the Fourier plane of the 4f system. The glass is partly covered with black tape, such that the profile is shaped as a half ring, where the outer radius corresponds to the radius of the glass diffuser. This partially obstructed diffuser serves both goals of uniform intensity and asymmetric illumination. By placing the sample plane exactly at the image plane of the illumination 4f system, we obtain an illumination profile whose Fourier transform looks like the half ring diffuser. To change the size of the half ring profile, it is sufficient to replace the second lens of the 4f system with one of a different focal length, as shown in Fig. 6. In the case on the left, the two lenses are equal, so the Fourier transform of the illumination at the sample plane has the same shape as the half ring diffuser. In the case on the right, a lens with a longer focal length is used, so the Fourier transform of the illumination at the sample plane is a shrunk version of the half ring diffuser. By switching between different lenses, we can obtain several ring sizes, while maintaining the ratio of the inner radius to the outer radius.
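Quantitatively, under the assumption that the diffuser lies in the front focal plane of the second lens (focal length f2), a point of the diffuser at radial distance ρ from the optical axis illuminates the sample with a plane wave at an angle θ such that
$$\sin \theta = \sin \left[ {\arctan \left( {\frac{\rho }{{{f_2}}}} \right)} \right] \approx \frac{\rho }{{{f_2}}}$$
so the outer and inner radii of the illumination ring, expressed in NA units, scale as 1/f2, while their ratio, set by the tape mask, is preserved.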

Fig. 6. The illumination configuration. The output of an LED is imaged onto the sample plane using a 4f system. A diffusing glass is located at the Fourier plane of this system. The glass is partially obstructed with black tape to form a half ring illumination. Thanks to the diffusing properties of the glass, the illumination at the sample plane is uniform. By using lenses with different focal lengths, it is possible to achieve several scaled versions of the same angular profile.

4. Experiments and results

4.1 Contrast trend for increasing NAobj

The setup described in the previous section was used to collect measurements of different samples under varying conditions of illumination and collection. The lenses in the illumination setup were used to create several illumination profiles of different angular aperture. The first lens and the diffusing glass were kept in a fixed relative position, while lenses of increasingly long focal length were used to focus the illumination on the sample. The illumination system was shifted vertically so that its focal plane would always correspond to the sample plane. At the same time, the aperture located in the relayed BFP of the objective was decreased to match the microscope NA to the maximum angle of the illumination. In this way, a set of measurements was obtained in which all the illumination and collection parameters were maintained, except for the NA.

For each configuration, two images were taken: one image of the sample on the sample camera and, after having removed the sample, one image of the illumination profile on the Fourier camera. In particular, two samples are considered here: a USAF target etched in glass, and glass microbeads in a layer of immersion oil (IMMOIL-F30CC, by Olympus). These samples are shown in Fig. 7. The USAF target falls in the category of sharp objects, while the glass microbeads are representative of the smooth object category. In all cases, for each illumination configuration the normalized contrast was computed, according to Eq. (5). These contrast values were further normalized to the maximum of each series, so that changes in contrast can be read in relative terms. The results are shown in Fig. 8 and Fig. 9.

Fig. 7. (a) USAF target etched in glass. (b) Glass microbeads immersed in index-matching oil. The blue dashed squares represent the regions of interest of each sample.

Fig. 8. Normalized contrast for varying NAobj for the USAF sample of Fig. 7(a). The blue line represents the simulated contrast, while the orange line represents the experimental contrast. Error bars for the experiment indicate the standard deviation of the contrast over several measurements. The ROI considered is shown in the inset.

Fig. 9. Normalized contrast for varying NAobj for the glass microbead sample of Fig. 7(b). The blue line represents the simulated contrast, while the orange line represents the experimental contrast. Error bars for the experiment indicate the standard deviation of the contrast over several measurements. The ROI considered is shown in the inset.

The data in Fig. 8 show that the contrast varies between approximately 85% and 100% of the maximum value. From the simulations of Fig. 3(c), one would expect an almost constant contrast, which is not the case in the experiment. This can be explained by the non-uniform thickness of the illumination half ring, which has a strong impact on the contrast as shown in Fig. 3(d). To verify the validity of this measurement, a corresponding simulation was performed. In order to faithfully reproduce the configuration of the experiment, we reconstructed the USAF target using an inversion algorithm with Tikhonov regularization [14], and we processed this image to obtain a thresholded mask in which we assigned the value of 0 mrad to the background and the nominal phase value of 685 mrad to the USAF structure. This nominal object was smoothed with a Gaussian filter of standard deviation σ=2 to suppress overshoot caused by the Gibbs effect, without excessive distortion of the nominal rectangular shape. Moreover, an image of the illumination profile obtained on the Fourier plane camera was used to calculate the phase transfer function, according to Eq. (2). The result of this simulation is also displayed in Fig. 8, and shows good agreement with the measurements: similar variations of contrast are present in both simulation and experiment. The use of a measured illumination profile for the simulations allowed us to correctly account for shape deviations from the half ring. The differences in exact values might be due to the estimation of the NA for each case: indeed, due to the polygonal shape of the variable aperture placed in the relayed BFP, it was necessary to calculate an average aperture radius.
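For reference, the construction of this nominal object amounts to thresholding the reconstruction and smoothing it; in the MATLAB sketch below, the threshold at half of the nominal phase and the interpretation of σ=2 in pixels are our assumptions.

function phi_obj = nominal_usaf(phi_rec, phi_nom)
% Nominal USAF phase object built from a reconstructed phase map phi_rec.
% phi_nom is the nominal phase (here 0.685 rad).
mask    = phi_rec > phi_nom/2;              % thresholded USAF mask (assumed threshold)
phi_obj = phi_nom * double(mask);           % 0 rad background, phi_nom on the structure
phi_obj = imgaussfilt(phi_obj, 2);          % Gaussian smoothing, sigma = 2 pixels
end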

Similarly, the experimental and simulation data for the glass microbead sample of Fig. 7(b) are shown in Fig. 9. In order to generate accurate simulations, it is necessary to have a nominal sample structure. Thanks to the known magnification, the diameter of the bead was measured directly from the image to be 39 µm. The exact refractive index of these beads is unknown, so it was estimated by taking several DPC images of the bead immersed in oil mixes with varying refractive indices. The refractive index of these oils is very sensitive to temperature changes, so all measurements were performed in a temperature-controlled room, and repeated several times. From each image, we measured the contrast, and we interpolated the results to find the refractive index at which the contrast fell to 0. With this approach, the refractive index of the beads was estimated to be 1.5197.

In this experiment, we reduced the phase difference introduced by this bead by immersing it in microscopy oil (IMMOIL-F30CC, by Olympus) whose refractive index at our wavelength of interest was estimated to be 1.5163. Thus, for the simulations, a spherical object with a nominal phase of 1.33 rad was used. The results of both simulation and experiment are shown in Fig. 9. The expected decreasing trend of contrast versus numerical aperture of Fig. 3(c) is clearly visible, and the experimental values match the corresponding simulation.
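The nominal phase corresponds to the maximum optical path difference, accumulated along the chord through the bead centre (a standard thin-sample estimate, with λ the vacuum wavelength and d the bead diameter):
$$\Delta {\phi _{max}} = \frac{{2\pi }}{\lambda }({n_{bead}} - {n_{oil}})\,d$$
The same relation, with the glass-air index contrast and the etch depth, gives the phase step of the USAF target used in Section 4.2.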

4.2 CNR simulation and sensitivity estimation

Given the agreement between simulations and experiment for the contrast in DPC images, it remains to verify whether the simulation algorithm is accurate also at very low phase values, and whether the noise model can provide the correct CNR values, for both types of samples. For this experiment, we used the same approach described in the previous section to record several DPC images for varying NA.

First we used a glass USAF target, which has a height of 12 nm as measured with AFM; given the refractive index of glass and air, at the wavelength of our LEDs, this sample introduces a phase difference of 55.34 mrad. An example of a single DPC image of this sample is shown in Fig. 10(a). The contrast of this image is stretched for better visualization, which makes the grainy noise pattern apparent. Nevertheless, with four images obtained with illumination shifted at 90° intervals (the remaining three are not shown here) it is still possible to reconstruct the phase object. The reconstructed phase was again used to draw a theoretical phase object, as described in Section 4.1, to use in the following simulations, shown in Fig. 10(c). The resulting simulated image, with added noise, is shown in Fig. 10(d). Due to the highly noisy nature of these images, in order to compute the grey levels of the maximum and minimum, we first averaged in the vertical direction to obtain a single low-noise cross section from which to extract the contrast. The noise was instead calculated as the standard deviation over a 100 × 100 pixel area where no features are present, highlighted in white in Fig. 10(a) [15, 16]. The area is chosen close to the region of interest, such that the background intensity is uniform. The resulting CNR values for several NAs are shown in the plot of Fig. 10(f), for both simulation and experiment. The results are in good agreement, and show that this sample is very close to the sensitivity limit, since it gives a CNR between 3.5 and 4 over the NAobj range.

Fig. 10. (a) Single DPC image of a USAF target with phase difference of 55.34 mrad, imaged in the setup with NAobj=0.19. (b) Corresponding phase reconstruction performed with 4-axis illumination. (c) Phase object used for simulation, representing the same portion of the USAF target from the experiment. (d) Simulated image for the same illumination and collection conditions as in (a). Noise is added using the algorithm proposed by Foi et al. [18,19]. (e) Phase reconstruction from 4-axis simulated DPC images. (f) CNR calculated for both simulation and experiment at several NAobj values. In order to calculate the CNR, a low noise cross-section was obtained by averaging the DPC image in the vertical direction, as shown in (a): the yellow square shows the ROI, and the dotted yellow line shows the direction of the cross section. The maximum and minimum of the cross section are used to compute the contrast. For the noise, the standard deviation of a featureless area was computed, shown in white in (a). The CNR is then calculated using Eq. (7). This process is repeated to obtain the data points in (f).

Similarly, we prepared a low-CNR smooth sample using glass beads. In this case, we immersed the bead in an oil with a refractive index of 1.519, to obtain a much lower phase. Given the bead diameter of approximately 30.29 µm and the refractive index difference, this sample should introduce a maximum phase difference of approximately 463.4 mrad. As for the previous sample, we measured the CNR as the ratio between the amplitude of the phase object and the standard deviation of an empty area, shown in Fig. 11(a). We verified that it was indeed possible to invert this object, and obtained a sphere of approximately 0.45 rad, shown in Fig. 11(b). Following the same steps as in the previous experiment, we generated a noisy image of this nominal sample; the nominal object, simulated image, and simulated reconstruction are shown in Fig. 11(c), (d), and (e), respectively. The CNR computed for both simulation and experiment is displayed in Fig. 11(f): the trend is correctly predicted by the simulation. The slight mismatch is likely due to an inaccurate assumption of the refractive index of the immersion medium. Indeed, it is well known that the refractive index of oils is temperature dependent: as an example, the refractive index liquid #1809 by Cargille, used to make the medium for this experiment, has a temperature coefficient dnD/dT of -0.000418/°C.

Fig. 11. (a) Single DPC image of a glass microbead immersed in oil, giving a maximum phase difference of 463.4 mrad, imaged in the setup with NAobj=0.375. (b) Corresponding phase reconstruction performed with 4-axis illumination. (c) Phase object used for simulation, representing the same bead size and refractive index mismatch from the experiment. (d) Simulated image for the same illumination and collection conditions as in (a). Noise is added using the algorithm proposed by Foi et al. [18,19]. (e) Phase reconstruction from 4-axis simulated DPC images. (f) CNR calculated for both simulation and experiment at several NAobj values. In order to calculate the CNR, a low noise cross-section was obtained by averaging the DPC image as shown in (a): the yellow square shows the ROI, and the dotted yellow line shows the direction of the cross section. The maximum and minimum of the cross section are used to compute the contrast. For the noise, the standard deviation of a featureless area was computed, shown in white in (a). The CNR is then calculated using Eq. (7). This process is repeated to obtain the data points in (f).

Given the refractive index difference between beads and medium in this experiment, a temperature drift of 1 °C would already cause a 90 mrad change in phase, which would account for a change in contrast. Moreover, as explained in Section 4.1, the refractive index of the beads has been estimated using similar oil mixes, thus the overall error on the nominal phase may equally come from an error on the beads' refractive index.

The noisy image simulation can now be used to extrapolate the sensitivity limit, according to the procedure described in Section 2.2. The sensitivity limit is computed for several NAs and for both samples, based on the CNR simulations. The results are displayed in Fig. 12. It is expected that the result should be different for different types of samples, based on their spatial spectrum. According to the simulations, USAF targets with phase differences lower than 20 mrad can be measured in this DPC setup, and the sensitivity remains roughly constant over the whole range. For the microbeads, the sensitivity is 85 mrad for the smallest NAobj, and up to 140 mrad for the highest NAobj value. In this case, the sensitivity for the smooth object is up to seven times worse than for the sharp object. This can be explained by how the frequency spectra of these two objects overlap with the phase transfer function, as detailed in Section 2.1.

Fig. 12. Sensitivity simulation at several NAobj values, for the USAF target and the glass microbead.

5. Conclusion

The first part of this manuscript focused on the parameters that influence the phase contrast in PC-DPC. Assuming a half ring illumination, it was observed that the NA of the system and the inner radius of the ring have an impact on the resulting phase contrast. With simulations and experiments, it was noted that this effect can be strikingly different based on the sample under observation: in particular, decreasing the NA can help increase the contrast for samples whose spatial frequency spectrum is mostly low-frequency, as observed in Fig. 3. This is an important factor to keep in mind when planning the parameters for a DPC setup. If the goal is to obtain highly sensitive measurements, it might be necessary to trade off contrast and resolution. A system with variable NA can provide the flexibility to adapt to the needs of each measurement.

The second goal of this study was to develop an approach to provide preliminary prediction of sensitivity for a given setup and sample. It was demonstrated that it is sufficient to know the main optical parameters of the setup, and to model the main source of noise [18, 19] to test the sensitivity performance of a system. Since no actual DPC images are necessary for this calculation of sensitivity, this algorithm can be used in the design stage of a DPC microscope, to help choose the best illumination profile and camera for the goal. A synthetic sample of choice can be used in the simulation to precisely evaluate the sensitivity in the specific use-case scenario, and the user can verify whether their expectations are realistic.

Our simulations showed that with a simple, 8-bit CMOS camera, it is possible to reach a sensitivity above 1 nm of Optical Path Length (OPL).

Other phase microscopy techniques, especially those based on interferometry, have demonstrated spatial phase sensitivities from 0.7 nm down to 0.14 nm [32–34]. On the other hand, these techniques require more sophisticated hardware and software, or are slower due to components like liquid crystal modulators [33].

Still, it is possible to increase sensitivity in PC-DPC, for example by employing higher-end cameras with a larger well capacity or with more bits for encoding, thus decreasing the impact of shot noise. This is because a background intensity close to the dynamic range limit of the camera maximizes the shot-noise-limited SNR ${I / {\sqrt I }}$. Indeed, the LEDs that we employed in our experiments had to be used well under their maximum rated driving current, as the camera pixels were otherwise saturating even at short exposure times. With a larger well capacity, more of this available power can be used without incurring saturation. For two cameras with different saturation levels ${I_{saturation,2}} > {I_{saturation,1}}$, where the subscripts 1,2 refer to the two cameras, if we illuminate both detectors close to saturation, we would gain a factor of $\sqrt {{{{I_{saturation,2}}} / {{I_{saturation,1}}}}} $. Using a thinner ring of illumination increases contrast, thus allowing better sensitivities to be reached, and for smooth samples, decreasing the NA can be a solution if it is compatible with the required resolution. As pointed out in Section 2.3, using multiple images at different illumination conditions also allows samples with single-image CNR<1 to be reconstructed. The conditions and number of images needed should be investigated with an approach similar to the one shown here for the forward problem.

Finally, the sensitivity simulation approach demonstrated here can be easily translated to other imaging techniques. Imaging systems that can be linearized, for example according to the Born approximation, can be analyzed in a similar manner, but other types of models can also be adapted, since only the forward model is used here [5]. Moreover, extensions to absorptive and 3D samples are interesting future applications [23].

Appendix A: quantitative DPC simulation

To perform the DPC simulations shown throughout this paper, a MATLAB algorithm was used. More details on this are given here for the interested reader.

It is first necessary to establish the equations that describe the forward problem of image formation in DPC. We report here the complete equation of image formation in DPC, including scaling due to the magnification of the system, assuming that the setup is as in Fig. 5:

$$\widetilde I({\vec{u}_c}) = \delta ({\vec{u}_c})B + \tilde{\mu }( - M{\vec{u}_c}){H_{abs}}({\vec{u}_c}) + \tilde{\phi }( - M{\vec{u}_c}){H_{ph}}({\vec{u}_c})$$
where ${\vec{u}_c}$ is the transverse spatial frequency coordinate at the camera, $\tilde{\mu }( - M{\vec{u}_c})$ and $\tilde{\phi }( - M{\vec{u}_c})$ are the scaled Fourier transforms of the absorption and phase profile of the object, respectively, and M is the magnification of the microscope. The background term and the transfer functions are, respectively:
$$B = {\left( {\frac{1}{{\lambda {f_c}M}}} \right)^2}{\int\!\!\!\int {\left|{P\left( {\frac{{{f_o}}}{{{f_c}}}{{\vec{r}}_s}} \right)} \right|} ^2}S({{{\vec{r}}_s}} )d{\vec{r}_s} = \frac{{|{{H_{abs}}(0,0)} |}}{{2{M^2}}}$$
$${H_{abs}}({{{\vec{u}}_c}} )={-} 2{M^2}{\Im ^{ - 1}}_{{{\vec{r}}_c} \to {{\vec{u}}_c}}\{{Re [{{\Im_{{{\vec{u}}_c} \to - {{\vec{r}}_c}}}{{\{{P({\lambda {f_t}{{\vec{u}}_c}} )} \}}^\ast }{\Im_{{{\vec{u}}_c} \to - {{\vec{r}}_c}}}\{{S({\lambda M{f_c}{{\vec{u}}_c}} )P({\lambda {f_t}{{\vec{u}}_c}} )} \}} ]} \}$$
$${H_{ph}}({{{\vec{u}}_c}} )= 2{M^2}{\Im ^{ - 1}}_{{{\vec{r}}_c} \to {{\vec{u}}_c}}\{{{\mathop{\rm Im}\nolimits} [{{\Im_{{{\vec{u}}_c} \to - {{\vec{r}}_c}}}{{\{{P({\lambda {f_t}{{\vec{u}}_c}} )} \}}^\ast }{\Im_{{{\vec{u}}_c} \to - {{\vec{r}}_c}}}\{{S({\lambda M{f_c}{{\vec{u}}_c}} )P({\lambda {f_t}{{\vec{u}}_c}} )} \}} ]} \}$$
where ${\vec{r}_c}$ is the transverse coordinate at the camera, ${\vec{r}_s}$ is the transverse coordinate at the source plane, ${f_o}$ is the focal length of the objective, ${f_t}$ is the focal length of the tube lens, and ${f_c}$ is the focal length of the condenser lens.

The operations to calculate the background, transfer functions, and image are straightforward and can be easily implemented in MATLAB using matrix multiplications and discrete Fourier transforms. Nevertheless, some care must be taken, as the transforms are performed in MATLAB over a generic system of coordinates, so appropriate factors must be used to retain the quantitative nature of the simulation [35]. As a consequence, the previous equations can be implemented in MATLAB as follows:

$$\begin{array}{l} \overline{\overline I} = B + \textrm{IFFT}\{{d{x_o}^2\textrm{FFT}\{{\bar{\bar{\mu }}} \}\cdot {{\overline{\overline H} }_{abs}}} \}+ \textrm{IFFT}\{{d{x_o}^2\textrm{FFT}\{{\bar{\bar{\phi }}} \}\cdot {{\overline{\overline H} }_{ph}}} \}\\ B = \frac{{\max |{{{\overline{\overline H} }_{abs}}} |}}{{2{M^2}}}\\ {\overline{\overline H} _{abs}} ={-} \frac{2}{{d{x_o}^2}}\textrm{IFFT}\{{Re [{\textrm{FFT}{{\{{\overline{\overline P} } \}}^\ast } \cdot \textrm{FFT}\{{\overline{\overline P} \cdot \overline{\overline S} } \}} ]} \}\\ {\overline{\overline H} _{ph}} = \frac{2}{{d{x_o}^2}}\textrm{IFFT}\{{{\mathop{\rm Im}\nolimits} [{\textrm{FFT}{{\{{\overline{\overline P} } \}}^\ast } \cdot \textrm{FFT}\{{\overline{\overline P} \cdot \overline{\overline S} } \}} ]} \}\end{array}$$
where $\overline{\overline {}} $ indicates a 2D matrix, FFT and IFFT are the 2D discrete Fourier transform and inverse discrete Fourier transform operators, $d{x_o}$ is the pixel size (assuming a square pixel) and ${\cdot} $ indicates an element-wise matrix multiplication. Further care should be taken when using the FFT and IFFT operators to ensure that the frequency coordinates are properly centered. In our simulations, the absorption profile of the sample $\tilde{\mu }$ is assumed to be null.

To obtain the quantitative simulated images, we used the measured illumination profile and the measured NA for $\overline{\overline P} $ and $\overline{\overline S} $ to calculate ${\overline{\overline H} _{abs}}$ and ${\overline{\overline H} _{ph}}$. For the simulation to yield the same result as the experiment, the background term B should be the same as the average background intensity in the experiment. Thus, we assign to B the value of the background intensity (in our experiments this value was usually between 170 and 190, on an 8-bit camera), and we use the relation between B and ${\overline{\overline H} _{abs}}$ in Eq. (13) to renormalize the transfer functions as follows:

$$\begin{array}{l} {\overline{\overline H} _{abs,\textrm{normalized}}} = \frac{{2{M^2}B}}{{\max |{{{\overline{\overline H} }_{abs}}} |}}{\overline{\overline H} _{abs}}\\ {\overline{\overline H} _{ph,\textrm{normalized}}} = \frac{{2{M^2}B}}{{\max |{{{\overline{\overline H} }_{abs}}} |}}{\overline{\overline H} _{ph}} \end{array}$$
In this way, the simulated image corresponds to the real pixel values of the experiments, including the background. This is very important for the last step: using the algorithm for noise modeling [18, 19], we map the intensity-dependent noise σ(I). Since the simulated image is consistent with the range of gray levels of the camera, the contrast in the simulated image can be directly compared with the standard deviation of the noise at the background intensity, σ(B).
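A compact MATLAB sketch of this pipeline (transfer functions of Eq. (13), renormalization of Eq. (14), forward image, and noise level at the background) is reported below. All numerical values are examples, and the analytic pupil and source masks would in practice be replaced by the measured ones.

% Quantitative DPC forward simulation: Eq. (13), renormalization (Eq. (14)),
% forward image of a weak phase object and noise std at the background.
N = 512; dxo = 0.3e-6; lambda = 660e-9; NAobj = 0.5; a_in = 0.6; M = 20;
B = 180;                          % measured background grey level (8-bit camera, example)
a = 0.12; b = 2.5;                % fitted Poissonian-Gaussian noise parameters (example)

fx = (-N/2:N/2-1)/(N*dxo); [FX,FY] = meshgrid(fx,fx); FR = hypot(FX,FY);
P = double(FR <= NAobj/lambda);                                    % pupil (n = 1)
S = double(FR <= NAobj/lambda & FR >= a_in*NAobj/lambda & FY > 0); % half-ring source

F  = @(x) fftshift(fft2(ifftshift(x)));
iF = @(x) fftshift(ifft2(ifftshift(x)));

% Transfer functions, Eq. (13)
Habs = -(2/dxo^2) * iF(real(conj(F(P)) .* F(P.*S)));
Hph  =  (2/dxo^2) * iF(imag(conj(F(P)) .* F(P.*S)));

% Renormalization to the measured background, Eq. (14)
scale = 2*M^2*B / max(abs(Habs(:)));
Habs  = scale*Habs;  Hph = scale*Hph;

% Forward image of a weak, smooth phase object (Eq. (13), mu = 0)
[x,y] = meshgrid(-N/2:N/2-1); r = hypot(x,y); R = 60;
phi   = 0.05 * cos(pi*r/(2*R)).^2 .* (r <= R);       % 50 mrad smooth test object
I_sim = B + real(iF(dxo^2 * F(phi) .* Hph));

% CNR against the noise std at the background intensity (the constant B
% cancels in the max-min difference, leaving the modulation contrast)
sigma_B = sqrt(a*B + b);
CNR = (max(I_sim(:)) - min(I_sim(:))) / sigma_B;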

This simulation allows quantitative CNR and sensitivity values to be obtained, given the knowledge of some basic optical parameters of the setup, namely the magnification, pixel size, NA, wavelength, and source profile (measured or analytical).

Disclosures

The authors declare no conflicts of interest.

References

1. D. K. Hamilton and C. J. R. Sheppard, “Differential Phase-Contrast in Scanning Optical Microscopy,” J. Microsc-Oxford 133(1), 27–39 (1984). [CrossRef]  

2. Z. F. Phillips, M. Chen, and L. Waller, “Single-shot quantitative phase microscopy with color-multiplexed differential phase contrast (cDPC),” PLoS One 12, e0171228 (2017). [CrossRef]  

3. Y.-H. Chuang, Y.-Z. Lin, S. Vyas, Y.-Y. Huang, J. A. Yeh, and Y. Luo, “Multi-wavelength quantitative differential phase contrast imaging by radially asymmetric illumination,” Opt. Lett. 44(18), 4542–4545 (2019). [CrossRef]  

4. D. Carpentras, T. Laforest, M. Künzi, and C. Moser, “Effect of backscattering in phase contrast imaging of the retina,” Opt. Express 26(6), 6785–6795 (2018). [CrossRef]  

5. Y. Fan, J. Sun, Q. Chen, J. Zhang, and C. Zuo, “Wide-field anti-aliased quantitative differential phase contrast microscopy,” Opt. Express 26(19), 25129 (2018). [CrossRef]  

6. L. Tian, Z. Liu, L.-H. Yeh, M. Chen, J. Zhong, and L. Waller, “Computational illumination for high-speed in vitro Fourier ptychographic microscopy,” Optica 2(10), 904–911 (2015). [CrossRef]  

7. P. Casteleiro Costa, P. Ledwig, A. Bergquist, J. Kurtzberg, and F. E. Robles, “Noninvasive white blood cell quantification in umbilical cord blood collection bags with quantitative oblique back-illumination microscopy,” Transfusion 60(3), 588–597 (2020). [CrossRef]  

8. W. Lee, J. H. Choi, S. Ryu, D. Jung, J. Song, J. S. Lee, and C. Joo, “Color-coded LED microscopy for quantitative phase imaging: Implementation and application to sperm motility analysis,” Methods 136, 66–74 (2018). [CrossRef]  

9. H.-H. Chen, Y.-Z. Lin, and Y. Luo, “Isotropic differential phase contrast microscopy for quantitative phase bio-imaging,” J. Biophotonics 11(8), e201700364 (2018). [CrossRef]  

10. C. Yurdakul, O. Avci, A. Matlock, A. J. Devaux, M. V. Quintero, E. Ozbay, R. A. Davey, J. H. Connor, W. C. Karl, L. Tian, and M. S. Ünlü, “High-Throughput, High-Resolution Interferometric Light Microscopy of Biological Nanoparticles,” ACS Nano 14(2), 2002–2013 (2020). [CrossRef]  

11. T. Laforest, M. Künzi, L. Kowalczuk, D. Carpentras, F. Behar-Cohen, and C. Moser, “Transscleral optical phase imaging of the human retina,” Nat. Photonics 14(7), 439–445 (2020). [CrossRef]  

12. R. S. Jonnal, O. P. Kocaoglu, R. J. Zawadzki, Z. Liu, D. T. Miller, and J. S. Werner, “A Review of Adaptive Optics Optical Coherence Tomography: Technical Advances, Scientific Applications, and the Future,” Invest. Ophthalmol. Vis. Sci. 57, OCT51 (2016). [CrossRef]  

13. Y. Fan, J. Sun, Q. Chen, X. Pan, L. Tian, and C. Zuo, “Optimal illumination scheme for isotropic quantitative differential phase contrast microscopy,” Photonics Res. 7(8), 890–904 (2019). [CrossRef]  

14. L. Tian and L. Waller, “Quantitative differential phase contrast imaging in an LED array microscope,” Opt. Express 23(9), 11394–11403 (2015). [CrossRef]  

15. W. Lee, D. Jung, S. Ryu, and C. Joo, “Single-exposure quantitative phase imaging in color-coded LED microscopy,” Opt. Express 25(7), 8398–8411 (2017). [CrossRef]  

16. H. Lu, J. Chung, X. Ou, and C. Yang, “Quantitative phase imaging and complex field reconstruction by pupil modulation differential phase contrast,” Opt. Express 24(22), 25345–25361 (2016). [CrossRef]  

17. Y. Ma, S. Y. Guo, Y. Pan, R. Fan, Z. J. Smith, S. Lane, and K. Q. Chu, “Quantitative phase microscopy with enhanced contrast and improved resolution through ultra-oblique illumination (UO-QPM),” J. Biophotonics 12(10), e201900011 (2019). [CrossRef]  

18. L. Azzari and A. Foi, “Gaussian-Cauchy Mixture Modeling for Robust Signal-Dependent Noise Estimation,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 5357–5361 (2014).

19. A. Foi, M. Trimeche, V. Katkovnik, and K. Egiazarian, “Practical Poissonian-Gaussian noise modeling and fitting for single-image raw-data,” IEEE Trans. on Image Process. 17(10), 1737–1754 (2008). [CrossRef]  

20. R. A. Claus, P. P. Naulleau, A. R. Neureuther, and L. Waller, “Quantitative phase retrieval with arbitrary pupil and illumination,” Opt. Express 23(20), 26672–26682 (2015). [CrossRef]  

21. M. H. Jenkins and T. K. Gaylord, “Quantitative phase microscopy via optimized inversion of the phase optical transfer function,” Appl. Opt. 54(28), 8566–8579 (2015). [CrossRef]  

22. M. Chen, D. Ren, H.-Y. Liu, S. Chowdhury, and L. Waller, “Multi-layer Born multiple-scattering model for 3D phase microscopy,” Optica 7(5), 394 (2020). [CrossRef]  

23. R. Ling, W. Tahir, H.-Y. Lin, H. Lee, and L. Tian, “High-throughput intensity diffraction tomography with a computational microscope,” Biomed. Opt. Express 9(5), 2130–2141 (2018). [CrossRef]  

24. D. K. Hamilton, C. J. R. Sheppard, and T. Wilson, “Improved imaging of phase gradients in scanning optical microscopy,” J. Microsc-Oxford 135(3), 275–286 (1984). [CrossRef]  

25. F. Timischl, “The contrast-to-noise ratio for image quality evaluation in scanning electron microscopy,” Scanning 37(1), 54–62 (2015). [CrossRef]  

26. H. Jiang, N. Lu, and L. Yao, “A High-Fidelity Haze Removal Method Based on HOT for Visible Remote Sensing Images,” Remote Sens. 8(10), 844 (2016). [CrossRef]  

27. P. Hosseini, R. J. Zhou, Y. H. Kim, C. Peres, A. Diaspro, C. F. Kuang, Z. Yaqoob, and P. T. C. So, “Pushing phase and amplitude sensitivity limits in interferometric microscopy,” Opt. Lett. 41(7), 1656–1659 (2016). [CrossRef]  

28. M. Chen, L. Tian, and L. Waller, “3D differential phase contrast microscopy,” Biomed. Opt. Express 7(10), 3940–3950 (2016). [CrossRef]  

29. P. C. Hansen, “The L-curve and its use in the numerical treatment of inverse problem,” in Computational Inverse Problems in Electrocardiology, (WIT Press, 2001), pp. 119–142.

30. G. H. Golub, P. C. Hansen, and D. P. O’Leary, “Tikhonov regularization and total least squares,” SIAM J. Matrix Anal. & Appl. 21(1), 185–194 (1999). [CrossRef]  

31. D. P. O’Leary, “Near-optimal parameters for Tikhonov and other regularization methods,” SIAM J. Sci. Comput. 23(4), 1161–1171 (2001). [CrossRef]  

32. G. Popescu, T. Ikeda, R. R. Dasari, and M. S. Feld, “Diffraction phase microscopy for quantifying cell structure and dynamics,” Opt. Lett. 31(6), 775–777 (2006). [CrossRef]  

33. Z. Wang, L. Millet, M. Mir, H. F. Ding, S. Unarunotai, J. Rogers, M. U. Gillette, and G. Popescu, “Spatial light interference microscopy (SLIM),” Opt. Express 19(2), 1016–1026 (2011). [CrossRef]  

34. R. Shang, S. Chen, and Y. Zhu, “High-Sensitivity Quantitative Phase Microscopy Using Spectral Encoding,” in Frontiers in Optics, 2014, paper FW4G.3.

35. D. Voelz, “Sampled Functions and the Discrete Fourier Transform,” in Computational fourier optics: a MATLAB tutorial, (SPIE Press, 2011).
