We describe various methods to process the data collected with a digital confocal microscope (DCM) in order to obtain more information than is available from a conventional confocal system. Different metrics can be extracted from the data collected with the DCM in order to produce images that reveal different features of the sample. The integrated phase of the scattered field allows for the three-dimensional reconstruction of the refractive index distribution. In a similar way, the integration of the field intensity yields the absorption coefficient distribution. The deflection of the digitally reconstructed focus reveals the sample-induced aberrations, and the RMS width of the focus gives an indication of the local scattering coefficient. Finally, in addition to the conventional confocal metric, which consists in integrating the intensity within the pinhole, the DCM allows for the measurement of the phase within the pinhole. This metric is close to the whole-field integrated phase and thus gives a qualitative image of the refractive index distribution.
© 2013 Optical Society of America
The original idea of the confocal microscope, proposed by M. Minsky in 1957 [1], included both the transmission and reflection geometries for the imaging of scattering samples, in which the scattering itself was the contrast agent. Since that time, the confocal microscope has been extensively analyzed in the literature [2, 3]. Nowadays, the confocal microscope is mostly used in reflection, primarily with one- or two-photon fluorescence. This is especially true in biological imaging, mainly because fluorophores can be functionalized to specific biological targets and because the fluorescence signal can easily be separated from the excitation light. In general, the backscattered light is weaker than the fluorescence, which is omnidirectional. The transmission geometry [4] is only rarely used, for the practical reason that the sample is caught between two objectives, the confocality of which has to be carefully adjusted. Several schemes have been proposed to improve the transmission mode [5–7], but these did not change the basic trend. Recently, we introduced digital confocal microscopy (DCM), a scanning holographic technique that can be used both in reflection and in transmission, greatly simplifying the implementation in the latter case. The principle is that a digital hologram of the scattered light is recorded as the sample is illuminated with a focused spot; the hologram is subsequently processed digitally. We have shown that the dynamic placement of the pinhole in the digital domain can dramatically improve the performance, particularly in the transmission geometry. In this paper we focus on another promising feature of DCM: its ability to provide a variety of metrics that can give us additional information about the object.
The schematic diagram of the DCM in the transmission geometry is shown in Fig. 1(a). The light is focused into the sample by lens MO1, and the object, which is mounted on a translation stage, is scanned in three dimensions. The transmitted and forward-scattered fields are collected by lens MO2, the back focal plane of which is imaged by lenses L3 and L4 onto a CCD camera, where an off-axis digital hologram is recorded. The extracted optical field is digitally propagated (Fourier-transformed) to focus again in the digital domain. In order to simulate the conventional confocal microscope, we place a digital pinhole at the image position of the illuminating focal point. The image of the object is then formed by assigning to each pixel a value proportional to the integrated intensity passing through the digital pinhole.
In our experiment, we scan by moving the sample in order to simplify the implementation of the optical apparatus. It is also possible to scan the beam, as in most commercial systems, which allows for faster frame rates. The consequence of scanning the beam in the DCM is that the spatial frequency of the interference fringes of the off-axis hologram changes according to the angle between the signal and the reference. It is possible to compensate digitally for this angle, which corresponds to a shift in the frequency domain. The scanning angle is bounded on the upper side by the spatial resolution of the detector and, on the lower side, by the minimal fringe frequency required to extract the full bandwidth of the signal. Alternatively, a scanned-beam microscope can be implemented with the modified apparatus shown in Fig. 1(b), where the reference and signal beams pass through the same optical elements. The two beams can be scanned together while the angle between them remains constant. The spectrum of the detected field thus remains fixed in the Fourier domain.
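A minimal sketch of this digital compensation (our own function and parameter names; we assume a simple phase-ramp model for the scan-induced tilt, which corresponds to the frequency-domain shift described above):

```python
import numpy as np

def compensate_scan_tilt(hologram_field, scan_angle_x, scan_angle_y,
                         wavelength, pixel_pitch):
    """Recenter the off-axis spectrum shifted by beam scanning.

    Multiplying by a conjugate phase ramp in real space shifts the
    spectrum back to its nominal position in the frequency domain.
    """
    ny, nx = hologram_field.shape
    x = (np.arange(nx) - nx // 2) * pixel_pitch
    y = (np.arange(ny) - ny // 2) * pixel_pitch
    X, Y = np.meshgrid(x, y)
    # spatial-frequency shift induced by the scan angle
    kx = 2 * np.pi * np.sin(scan_angle_x) / wavelength
    ky = 2 * np.pi * np.sin(scan_angle_y) / wavelength
    return hologram_field * np.exp(-1j * (kx * X + ky * Y))
```

Applied to each recorded hologram, this keeps the signal band centered before the pinhole placement step.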
In a confocal image, we would normally interpret dark regions as places in the cell with large absorption, reflectivity or structured scattering. It is not possible to distinguish which is which and, in fact, optical microscopy in general does not give a quantitative measurement of a particular optical property of the object. Things are even more complex for the confocal microscope. If a point on the image is dark, it may be because the light was absorbed, scattered, or reflected at the point of focus, but it could also be because the beam becomes aberrated (deflected, defocused) as it propagates through the object. Such aberrations can change the shape and the position of the digital focus, which, in turn, can change the amount of light that goes through the pinhole. This introduces additional ambiguity, since the brightness of a pixel depends not only on local information in the vicinity of the focused spot but also on the structures the beam traverses on its way to and from the focus.
In this paper we discuss approaches to resolve some of these ambiguities with the flexibility afforded by the DCM. Specifically, we demonstrate how, by extracting different metrics from the same data recorded in the DCM, we can form images that all have the basic appearance of the cell structure but which differ from the classic confocal image. We offer an interpretation of the differences between the images obtained with the different metrics, which provides qualitative information about the 3D properties of the object beyond what is possible with the conventional confocal microscope. As shown below, one of the metrics (the integrated phase) yields a quantitative estimate of the refractive index.
2. Digital data processing
The field in the back focal plane of the imaging lens is relayed onto a CCD camera, on which the digital hologram is recorded. The complex optical field is extracted from the digital off-axis hologram using well-known holography techniques [9]. The extracted field thus corresponds to the Fourier transform of the focused probe beam as it exits the sample. A fast Fourier transform (FFT) is applied to the extracted field in order to calculate the field in the focal plane. The optical field is then calculated in the 3D space around the focus point: the field is propagated from the focal plane over some distance along the optical axis (z coordinate), in the +z and −z directions, using a standard Fourier beam propagation method [10], as illustrated in Fig. 2(a). In Figs. 2(b) and 2(c), we show the reconstructed field amplitude near the focus for an ideal beam (unaffected by scattering) and an experimentally measured beam, respectively. Note that the physical values of x, y and z in the Fourier-transformed space depend on the focal length of the virtual tube lens of the digital microscope. Because this length is arbitrary, we simply consider that the imaging system has a magnification of one.
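The propagation step can be sketched with a standard angular-spectrum (Fourier beam propagation) implementation; this is our own code, with sampling parameters as assumptions, not the exact routine used in the experiment:

```python
import numpy as np

def angular_spectrum_propagate(field, dz, wavelength, pixel_pitch):
    """Propagate a 2D complex field by a distance dz (positive or
    negative) using the angular spectrum method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_pitch)
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)
    # squared axial spatial frequency; evanescent components are cut off
    arg = (1.0 / wavelength) ** 2 - FX ** 2 - FY ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * dz) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# a z-stack around the digital focus is then simply:
# stack = [angular_spectrum_propagate(focal_field, z, wl, pp) for z in z_list]
```

Propagating by +dz and then −dz returns the original field (within the propagating band), which is a convenient self-check.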
The position zopt of the effective focus along the z axis is where the 2D (x − y) correlation function c(z) between the ideal focus and the experimental beam reaches its maximum. In Fig. 2(d), we compare the c(z) function for an ideal beam and a scattered beam. The shift in z, Δz, is defined as the distance between the maxima of these functions and is, by definition, equal to zopt. The shifts in x and y, Δx and Δy, are defined by the position of the center of mass of the experimental focus intensity in the plane z = zopt.
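A minimal numerical sketch of these definitions (our own implementation; the cross-correlation form of c(z), the array layout, and the convention that the ideal focus sits at z = 0 are assumptions):

```python
import numpy as np

def focus_shift(stack, ideal_stack, z_axis, pixel_pitch):
    """Estimate (dx, dy, dz) of the reconstructed focus.

    stack, ideal_stack: 3D complex arrays (z, y, x) around the focus.
    c(z) is taken as the peak of the 2D cross-correlation between the
    ideal and measured amplitudes in each plane.
    """
    c = np.empty(len(z_axis))
    for i in range(len(z_axis)):
        A = np.fft.fft2(np.abs(ideal_stack[i]))
        B = np.fft.fft2(np.abs(stack[i]))
        c[i] = np.max(np.real(np.fft.ifft2(np.conj(A) * B)))
    i_opt = int(np.argmax(c))
    dz = z_axis[i_opt]  # assumes the ideal focus is at z = 0

    # center of mass of the intensity in the plane z = z_opt
    I = np.abs(stack[i_opt]) ** 2
    ny, nx = I.shape
    x = (np.arange(nx) - nx // 2) * pixel_pitch
    y = (np.arange(ny) - ny // 2) * pixel_pitch
    dx = (I.sum(axis=0) @ x) / I.sum()
    dy = (I.sum(axis=1) @ y) / I.sum()
    return dx, dy, dz
```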
Figure 3(b) shows the image obtained by plotting the x position of the focused spot with respect to the center of the nominal pinhole (i.e. the pinhole centered on an ideal beam not affected by scattering). Comparing Fig. 3(b) (the x-image) with Fig. 3(a), we can clearly see that the basic shape of the cell is reproduced. Similarly, the y-image obtained by plotting the y position and shown in Fig. 3(c) forms a recognizable shape of the cell. In Fig. 3(d) we show the image corresponding to Δz. The image in Fig. 3(e) is the d⊥-image obtained by plotting the Euclidean distance d⊥ = (Δx² + Δy²)^(1/2). Finally, Fig. 3(f) shows the d3D-image with d3D = (Δx² + Δy² + Δz²)^(1/2).
We gain insight by comparing the confocal image (Fig. 3(a)) with the d⊥-image. At first glance, they are contrast-inverted versions of each other. This can be explained as follows. In confocal microscopy, a region of the image is dark if the sample strongly absorbs, reflects or scatters light. In the cells imaged in this experiment, there is relatively little absorption or reflection. Therefore, dark areas are those that have high spatial frequency content and scatter light. These same areas are likely to have index gradients that steer the beam away from the detection pinhole, so a dark signal is plotted. On the d⊥-image, a large deflection from the pinhole translates into a bright pixel. Conversely, a clear sample leaves the beam unaffected to pass through the digital pinhole: a strong (bright) signal is recorded on the confocal image and a dark signal in the d⊥-image. This is generally true, but other effects also influence the system, so it is only approximately true that the d⊥-image is the contrast-inverted confocal image. The differences may contain interesting information. To explore this possibility, we plotted, in Fig. 4(a), the pixel values of the corresponding points in the d⊥-image versus the confocal image. In such a plot, if the two images were linear contrast-inverted versions of each other, the relationship would be a straight line d̂⊥ = α f̂DC + β (Eq. (7)). The data in Fig. 4(a) are used to fit this model. We get α = −2.13 and β = −α, as the line is forced through the point (f̂DC = 1, d̂⊥ = 0). The color-coded image shown in Fig. 4(d) shows the samples from the scatter diagram of Fig. 4(a) plotted back in the x − y coordinates of the image. The scatter diagram has been divided into three groups. The group above the regression line, in red, contains the pixels with a large beam deflection but only a small loss in the dynamic confocal signal. This suggests that, in these locations, the probe beam was only weakly scattered close to the focus but still deflected by larger scale structures. The group below the line, in blue, corresponds to locations where stronger scattering occurred close to the focus, thus leading to a weaker dynamic confocal signal.
The weakly affected pixels (plotted in black in Fig. 4(a)), most of which are located outside the cell, are defined as the pixels for which f̂DC > 0.93. In order to display the segregation of the data in a more continuous way, the data can be projected into a coordinate system (U, V) associated with the regression line. The variable U is the distance along the line from the point (f̂DC = 1, d̂⊥ = 0) and the variable V is the signed distance from the line (see Fig. 4(a)). Images in the U and V metrics are shown in Figs. 4(b) and 4(c), respectively. The U-image represents the amount of scattering, i.e. the extent to which the beam has been altered through the sample. The V-image gives information about the nature of the scattering, i.e. whether it consists in large scale deflection of the beam (e.g. close to the edge of the cell) or in smaller scale scattering close to the focus.
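The (U, V) projection can be sketched as follows (our own function; we assume the regression line is constrained through (f̂DC = 1, d̂⊥ = 0) with fitted slope alpha, −2.13 in our data):

```python
import numpy as np

def uv_projection(f_dc, d_perp, alpha):
    """Project (f_dc, d_perp) scatter points onto the regression line
    d_perp = alpha * (f_dc - 1), which passes through (1, 0).

    U: distance along the line from (1, 0), increasing as f_dc drops;
    V: signed perpendicular distance from the line.
    """
    # unit tangent along the line, pointing toward decreasing f_dc
    t = np.array([-1.0, -alpha]) / np.hypot(1.0, alpha)
    # unit normal to the line
    nvec = np.array([-t[1], t[0]])
    p = np.stack([np.asarray(f_dc) - 1.0, np.asarray(d_perp)], axis=-1)
    return p @ t, p @ nvec
```

Pixels above the line (weak confocal loss, large deflection) get positive V, and pixels below get negative V, reproducing the red/blue grouping of Fig. 4(a) in a continuous way.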
In Fig. 5, we have also plotted the focus width versus the dynamic confocal signal. These two measures are clearly anti-correlated, because the confocal signal naturally decreases as the beam gets wider. For an ideal Gaussian beam of normalized waist, the confocal signal is a simple decreasing function of the waist.
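As an illustrative model (an assumption we introduce here, since the exact expression depends on the pinhole model), the fraction of a unit-power Gaussian beam of waist w transmitted through a hard circular pinhole of radius a is

```latex
f(w) \;=\; \frac{2}{\pi w^{2}} \int_{0}^{a} e^{-2r^{2}/w^{2}}\, 2\pi r \,\mathrm{d}r
      \;=\; 1 - e^{-2a^{2}/w^{2}} ,
```

which decreases monotonically as w grows, consistent with the anti-correlation observed in Fig. 5.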
4. Tomographic interpretation of the data
An important feature of the digital confocal imaging system is that the recorded data are in complex form. We therefore have access to the phase, and it is possible to use it to extract information about the object. Images can be formed by unwrapping and integrating the phase, ϕ(x, y), of the detected 2D field for each recorded hologram.
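Explicitly, in our notation (the choice of integrating rather than averaging over the detector is a convention), the integrated-phase metric for the hologram recorded at scan position $\mathbf{x}_s$ is

```latex
\Phi_{\mathrm{int}}(\mathbf{x}_s) \;=\; \iint_{\mathrm{detector}} \phi(x, y;\, \mathbf{x}_s)\,\mathrm{d}x\,\mathrm{d}y .
```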
A 3D image can then be plotted with the value of each voxel corresponding to this integrated phase Φint, exactly as we do for the digital confocal images already discussed, with the integrated phase as a different metric. The main difference with conventional optical phase tomography [11–16] is that the measurement is done with a focused beam scanned in three dimensions rather than with a collimated beam scanned in angle. Dillon and Fainman proposed this concept in 2010 [17] and presented experimental results with an absorptive object (a fiber, and absorbing stripes on a reflecting surface) [18]. In the DCM, as not only the intensity but also the phase is recorded, we can quantitatively reconstruct the refractive index distribution. We consider that the light rays propagate straight through the sample and accumulate a phase shift and attenuation corresponding to the properties of the regions they traverse. The region close to the focus is common to all rays, whereas regions farther away affect only a restricted subset of the rays, as illustrated in Fig. 6. We reject the contribution of the out-of-focus regions by integrating (or averaging) the phase of the complex field in the detection plane. On average, the contribution from a specific region is reduced by a factor 1/r², where r is the distance from the focus. Integration of the amplitude gives the absorption at the focus and integration of the phase gives the refractive index at the focus.
The fact that the integration of the phase actually produces an image and corresponds to a tomographic measurement can be formalized in mathematical terms. This is done by introducing the X-ray transform (denoted by the operator 𝒳), also known as the John transform, an integral transform similar to the Radon transform [19]. The quantity we want to measure in the sample is described by the function f(x,y,z), which corresponds to the absorption coefficient in optical absorption tomography and to the refractive index in the case of optical phase tomography. The X-ray transform is defined as the integral of f(x,y,z) along rays that all pass through the point x = (x,y,z) where the probe beam is focused: 𝒳(f)(x, Θ) = ∫ f(x + tΘ) dt, with Θ the direction of the ray (Eq. (10)). The X-ray transform can be inverted by following a two-step procedure. The first step consists in integrating 𝒳(f)(x, Θ) over all the directions Θ, on the half sphere S². From this integration, we get a function g(x) that is the convolution of f with the kernel 1/|x|², a Riesz potential [20] (Eq. (13)). The second step is the deconvolution of g(x) by this kernel, i.e. a high-pass filtering operation in the Fourier domain (Eq. (14)). Using Eq. (14), the refractive index can then be calculated from Φint by applying this filtering operation.
This filtering operation corresponds to the classical tomography high-pass ramp filter. As opposed to phase tomography with parallel ray illumination, there is no need to project the data into the Radon space or to perform a filtered back-projection. Intuitively speaking, the back-projection is naturally performed by the focused probe beam itself. The two-step inversion procedure of the X-ray transform is exact provided that the straight-ray assumption is fulfilled and that the illumination light cones cover the whole sphere around the sample (2π solid angle on the illumination side and 2π on the detection side). The limited numerical aperture of our experimental system limits the resolution in a manner similar to what would happen in confocal imaging.
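The filtering step can be sketched as a 3D Fourier-domain multiplication by |k| (our own code; the normalization constants, which depend on the numerical aperture and on the exact definition of the integration metric, are deliberately omitted):

```python
import numpy as np

def ramp_filter_3d(phi_int, voxel_pitch):
    """Apply a high-pass ramp filter |k| to a 3D integrated-phase
    volume, as a sketch of the deconvolution that turns Phi_int into a
    quantity proportional to the refractive index distribution."""
    kx = np.fft.fftfreq(phi_int.shape[2], d=voxel_pitch)
    ky = np.fft.fftfreq(phi_int.shape[1], d=voxel_pitch)
    kz = np.fft.fftfreq(phi_int.shape[0], d=voxel_pitch)
    KZ, KY, KX = np.meshgrid(kz, ky, kx, indexing="ij")
    K = 2 * np.pi * np.sqrt(KX ** 2 + KY ** 2 + KZ ** 2)
    return np.real(np.fft.ifftn(np.fft.fftn(phi_int) * K))
```

Note that the DC component (the mean of the volume) is removed by the filter, so only refractive index variations, not the absolute index, are recovered without further calibration.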
The absolute phase of the scattered field is necessary to perform the tomographic reconstruction. The phase shift undergone by some part of the field may exceed 2π, in which case the detected two-dimensional phase function has to be unwrapped. Two-dimensional unwrapping is a difficult and time-consuming process that would have to be carried out for each pixel, which is not practical in most experimental conditions. Instead, the phase integration metric can be approximated by the phase at the center of the digital confocal pinhole. The principle of using the phase in confocal microscopy has been proposed previously [21]. The field at the center of the confocal pinhole, uc, is the value at the origin of the Fourier transform of the measured field u = exp[jϕ(x,y)], the amplitude of which we consider approximately constant; that is, uc is the integral of u over the detection plane.
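The connection to the integrated phase can be sketched as follows (our notation; we assume small phase excursions about the mean, with ⟨ϕ⟩ the area-averaged phase over the detection aperture S):

```latex
u_c \;=\; \iint_{S} e^{j\phi(x,y)}\,\mathrm{d}x\,\mathrm{d}y
    \;\approx\; |S|\left( 1 + j\langle\phi\rangle \right),
\qquad\text{so}\qquad
\arg u_c \;\approx\; \langle\phi\rangle \;\propto\; \Phi_{\mathrm{int}} .
```

Under this approximation, the confocal phase tracks the integrated phase without requiring any per-hologram unwrapping.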
Images of the confocal phase are shown in Fig. 7(d), where they are compared to the dynamic confocal images (Fig. 7(a)), the Δz-images (Fig. 7(b)) and the d⊥-images (Fig. 7(c)). The confocal phase images were obtained by first calculating the complex optical field at the position of the virtual pinhole and then by measuring the phase of the pixels that fall within the pinhole, as defined by Eq. (6). In our experiment, the pinhole was chosen to have the “optimum” diameter equal to one diffraction-limited spot.
The confocal phase is also subject to phase wrapping, which leads to artifacts in the image. This has been solved through the use of a two-dimensional phase-unwrapping algorithm on the final image [23]. The main advantage of the confocal phase metric is that the phase unwrapping can be applied once, on the final image, whereas with the integrated phase metric a two-dimensional phase map has to be unwrapped for each pixel.
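For smooth final images, a minimal numpy-only sketch of this single unwrapping pass is shown below (a simple row-then-column scheme of our own; robust algorithms such as the branch-cut method of [23] are preferable on noisy data):

```python
import numpy as np

def unwrap_image(wrapped):
    """Unwrap a 2D phase image by sequential 1D unwrapping along both
    axes. Adequate for smooth, low-noise phase maps; values are
    assumed to lie in (-pi, pi]."""
    return np.unwrap(np.unwrap(wrapped, axis=0), axis=1)
```

Because this runs once on the final confocal-phase image rather than once per scan position, it adds negligible cost to the reconstruction.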
The confocal phase images also display vertical streaks, which are artifacts due to fluctuations in the relative phase between the reference and the signal beam. In the confocal geometry, all the rays that contribute to the signal beam pass through the focus in the sample. Therefore, it is not possible to discriminate between a variation in the signal and a simple offset in the reference phase due, for example, to mechanical vibrations. The quality of the phase image therefore critically depends on the stability of the optical apparatus. In order to enhance the stability for the acquisition of phase images, the digital confocal microscope was modified as described in Fig. 1(b). In the modified configuration, the reference and signal beams pass through the same optical components thus reducing the relative optical path variations between them. In Fig. 7(d), the vertical streaks are a consequence of the remaining phase instability.
With the digital confocal microscope, for each scanning position, we can capture the entire optical field falling within the numerical aperture of the imaging objective. This is far more information than the signal from a conventional confocal microscope, in which the out-of-focus contribution is lost. As we have shown, multiple metrics can be obtained from the DCM data in order to produce images of the sample. Interestingly, all the metrics proposed, i.e. the confocal, the dynamic confocal (amplitude and phase), the focus deviations Δx, Δy, Δz, d⊥ and d3D, the focus width and the integration of the phase, yield a recognizable image of the cell with different features emphasized. In particular, the integrated phase represents approximately the local refractive index. A quantitative value can be calculated by applying a classical tomography high-pass ramp filter to the three-dimensional data. In a similar way, the local absorption coefficient can be derived from the integrated intensity. The confocal phase (i.e. the phase within the dynamic pinhole) is closely related to the integrated phase and yields similar images. The reason for this similar appearance is that the integration of the phase acts as an averaging process. The averaging itself is a low-pass filter operation similar to the effect of the pinhole in which the confocal phase is measured.
The position of the image of the focus can be measured in three dimensions. This can be done either by calculating the tilt and defocus Zernike factors on the scattered field or by calculating the three-dimensional field as shown above (Fig. 2). The deflection of the focus away from its ideal position is a measure of the overall sample-induced aberration. The width of the focus defined by Eq. (3) is a measure of the scattering properties of the sample, similarly to the dynamic confocal signal, to which it is strongly correlated (Fig. 5). In Fig. 4, we compare the confocal signal (or, equivalently, the inverse of the focus width) and the deflection of the focus. This gives an indication of the amount of scattering that occurs close to the focus as opposed to more global refraction effects. The different metrics and the physical properties that they reveal are summarized in Table 1.
All these metrics show some sectioning capability, as illustrated in Fig. 7, though some, such as the dynamic confocal, seem to offer better discrimination. The deflection of the focus (quantified by Δx, Δy and Δz) and the dynamic confocal signal (or the focus width, to which it is well correlated) characterize two distinct types of aberrations undergone by the probe beam as it propagates through the sample. A reduction of the confocal signal without major deflection of the beam is most likely associated with scattering close to the focus; in non-absorbing media, it is thus a measure of how scattering the sample is locally. Conversely, large deflections of the beam with minor loss in confocal signal suggest that the beam underwent only weak scattering but was refracted at some interface. This is corroborated by the correspondence between particular zones of the cell image shown in Fig. 4(d) and the different regions of the scatter diagram shown in Fig. 4(a). It is likely that the integrated phase, which corresponds to a tomographic back-projection, provides the most reliable images, especially regarding the refractive index distribution within the cell, which can be quantified with this method. There is a very clear distinction between the cytoplasm and the nucleus, which is known to have a slightly larger refractive index. The confocal images are more difficult to interpret than the tomographic images, as they are affected by a variety of spatial structures such as irregularities in the cell membrane, which are known to exist for this type of epithelial cell. Tomography, on the other hand, may be less sensitive to surface ripples, as it relies on line integrals along rays, which are not much affected by thin surface ripples that do not produce a substantial phase deviation on the transmitted ray.
References and links
1. M. Minsky, “Microscopy apparatus,” U.S. patent 3,013,467 (1961).
2. C. J. R. Sheppard and A. Choudhury, “Image formation in the scanning microscope,” Opt. Acta 24(10), 1051–1073 (1977).
3. C. Sheppard and D. Shotton, Confocal Laser Scanning Microscopy (BIOS Scientific Publishers, 1997).
4. A. E. Dixon, S. Damaskinos, and M. R. Atkinson, “A scanning confocal microscope for transmission and reflection imaging,” Nature 351, 551–553 (1991).
5. G. Barbastathis, M. Balberg, and D. J. Brady, “Confocal microscopy with a volume holographic filter,” Opt. Lett. 24, 811–813 (1999).
7. J. W. O’Byrne, P. W. Fekete, M. R. Arnison, H. Zhao, M. Serrano, D. Philp, W. Sudiarta, and C. J. Cogswell, “Adaptive optics in confocal microscopy,” in Proceedings of the 2nd International Workshop on Adaptive Optics for Industry and Medicine, G. D. Love, ed. (World Scientific, 1999).
9. E. Cuche, P. Marquet, and C. Depeursinge, “Spatial filtering for zero-order and twin-image elimination in digital off-axis holography,” Appl. Opt. 39, 4070–4075 (2000).
10. M. D. Feit and J. A. Fleck, “Beam nonparaxiality, filament formation, and beam breakup in the self-focusing of optical beams,” J. Opt. Soc. Am. B 5(3), 633–640 (1988).
11. G. N. Vishnyakov, G. G. Levin, V. L. Minaev, V. V. Pickalov, and A. V. Likhachev, “Tomographic interference microscopy of living cells,” Microscopy and Analysis 18, 15–17 (2004).
12. F. Charrière, A. Marian, F. Montfort, J. Kuehn, T. Colomb, E. Cuche, P. Marquet, and C. Depeursinge, “Cell refractive index tomography by digital holographic microscopy,” Opt. Lett. 31, 178–180 (2006).
13. F. Charrière, N. Pavillon, T. Colomb, and C. Depeursinge, “Living specimen tomography by digital holographic microscopy: morphometry of testate amoeba,” Opt. Express 14, 7005–7013 (2006).
15. W. Choi, C. Fang-Yen, K. Badizadegan, R. R. Dasari, and M. S. Feld, “Extended depth of focus in tomographic phase microscopy using a propagation algorithm,” Opt. Lett. 33, 171–173 (2008).
16. M. Debailleul, B. Simon, V. Georges, O. Haeberle, and V. Lauer, “Holographic microscopy and diffractive microtomography of transparent samples,” Meas. Sci. Technol. 19, 074009 (2008).
17. K. Dillon and Y. Fainman, “Depth sectioning of attenuation,” J. Opt. Soc. Am. A 27, 1347–1354 (2010).
18. K. Dillon and Y. Fainman, “Computational confocal tomography for simultaneous reconstruction of objects, occlusions and aberrations,” Appl. Opt. 49(13), 2529–2538 (2010).
19. S. Helgason, The Radon Transform, 2nd ed. (Birkhäuser, 1999).
20. N. S. Landkof, Foundations of Modern Potential Theory (Springer-Verlag, 1972).
21. S. Lai, R. A. McLeod, P. Jacquemin, S. Atalick, and R. Herring, “An algorithm for 3-D refractive index measurement in holographic confocal microscopy,” Ultramicroscopy 107, 196–201 (2007).
23. R. M. Goldstein, H. A. Zebker, and C. L. Werner, “Satellite radar interferometry: two-dimensional phase unwrapping,” Radio Sci. 23(4), 713–720 (1988).