This paper presents an experimental investigation of transverse resolution improvement combined with effective numerically focused 3D imaging in full-field swept-source optical coherence microscopy (OCM) using structured illumination and specific numerical post-processing. Transverse resolution improvement of the OCM coherence signal combined with numerical focusing is demonstrated by imaging a resolution test target in the optical focus and defocus regions. Numerically focused 3D imaging with high transverse resolution is further demonstrated by imaging a 3D phantom and a biological sample. The results demonstrate the feasibility and prospects of combining structured illumination and numerical focusing in Fourier domain OCM.
© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
For diagnostic and research purposes, optical coherence tomography/microscopy (OCT/OCM) techniques face a growing requirement of increased spatial resolution. For a given wavelength, the transverse resolution of an OCT/OCM system is determined by both the numerical aperture (NA) of the imaging system and the illumination properties, in both scanning confocal and full-field (FF) OCT/OCM [2,3]. Increasing the numerical aperture of illumination (NAi) allows for increasing the transverse resolution and improving other imaging properties, e.g. decreasing the shadowing of inner sample structures by strong scatterers above them.
Increasing the NA decreases the depth of focus, depriving high-speed Fourier domain (FD) imaging of its parallel acquisition advantage along depth. Numerical focusing with different algorithms helps to solve this problem for different kinds of OCT/OCM setups [5–8]. However, in the case of high NAi, the signal attenuation with defocus may hinder effective numerically focused imaging of the required sample volume [8,9]. Recently it has been shown [10,11] that this problem can be solved in Linnik-type OCT/OCM by using a specially structured illumination aperture, providing a discrete set of off-axis illumination directions, in combination with numerical focusing that takes the particular system parameters into account [8,9]. However, no experimental proof has been presented yet.
The principle of using structured off-axis illumination for improved transverse resolution has been used in a range of interference microscopy systems, e.g. [12–16]. In OCT/OCM, various systems with off-axis illumination have been proposed for specific purposes, such as improved imaging of directionally reflective structures or extending the depth of focus. In particular, scanning OCT imaging has been performed with a series of signals recorded with different illumination angles at each scanner position. Digital refocusing has been claimed, applying numerical phase correction between the individual signals for each illumination direction. However, numerical focusing requires a phase correction that takes into account both the illumination and imaging (detection) apertures, which questions the feasibility of this approach. To the best of our knowledge, numerically focused 3D OCM imaging combined with increased transverse resolution by structured illumination has not been experimentally demonstrated.
This paper presents an experimental investigation of transverse resolution improvement combined with effective numerically focused 3D imaging in FF swept-source (SS) OCM using structured illumination (SI) and specific numerical post-processing. The approach to SI FF SS OCM imaging is based on the theoretical papers [10,11], with modifications improving the experimental performance. With the SI FF SS OCM we demonstrate numerically focused imaging of a resolution test target, a 3D phantom, and a biological sample.
2. Experimental setup and numerical processing
2.1 Experimental setup and preliminary processing
Figure 1 presents the optical scheme of the experimental setup. The light from the swept-source laser (SS) (Superlum BroadSweeper 840HP) is directed via a single-mode fiber into a 1→4 fiber optic coupler (FC) (Thorlabs TWQ850HA). Three of the outputs of this coupler are extended with additional fibers (OF) of different lengths in order to make all four output beams mutually incoherent. The resultant four output fibers are fixed in a holder (FH), thereby realizing a set of four point illumination sources.
The light coming from the illumination system through the objectives MOS and MOR (Mitutoyo Plan Apo NIR5x with NA = 0.14) illuminates a sample and an optical wedge, which serves as a low-reflectance reference mirror (RM). The light reflected from RM and backscattered from the sample is recombined and passes through an achromatic tube lens (TL) with a focal length of 250 mm. A Photonfocus MV1-D1312I-160-CL-12 camera is used as a matrix photodetector (PD). For the data acquisition, a region of 512 × 512 pixels of the camera is used, providing a field of view of 655 × 655 μm.
The illumination points are imaged onto the aperture plane of the objective MOS, resulting in illumination plane waves impinging on the sample from four directions. For the numerical processing, the numerical aperture value corresponding to the distance from the optical axis to the centers of the illumination points was assumed to be NAic = 0.139. The illumination system field stop (FS) is optically conjugated with the front focal planes of the objectives MOS and MOR. MOR and RM are shifted as a whole in the direction of the beamsplitter in order to displace the coherence gate with respect to the focus position in the sample arm for the FD reconstruction.
In order to avoid artifacts related to the interaction of the signals originating from different illumination directions in the total signal, we use a combination of two recordings with two different illumination directions each. To this end, an additional screen (SC) blocks either the vertically or the horizontally aligned illumination points.
For obtaining a single 3D image, one wavelength scan from 878 to 800 nm with 1000 sampling points is performed for each set of illumination directions. A background wavelength scan, obtained without a sample, is subtracted from each wavelength scan recorded with a sample; at each transverse position the mean of the wavelength scan after the background subtraction is subtracted as well, and zero-padding to 1024 pixels is performed. Then interpolation and resampling for linearization in wavenumbers are performed, followed by apodization with a Hanning window.
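The per-pixel preprocessing steps above can be sketched as follows (a minimal illustration assuming uniform sampling in wavelength over the stated 878–800 nm sweep; the function and variable names are hypothetical, and the order of padding and resampling is simplified here):

```python
import numpy as np

def preprocess_scan(scan, background, n_pad=1024):
    """Preprocess one recorded wavelength scan at a single transverse position.

    scan, background: 1D arrays of 1000 samples, assumed uniform in wavelength
    over the 878-800 nm sweep. Returns an n_pad-long array linearized in
    wavenumber and apodized with a Hanning window (spectrum in the first
    1000 samples, zeros beyond).
    """
    n = scan.size
    s = scan - background             # background spectrum subtraction
    s = s - s.mean()                  # removal of the residual mean (DC term)

    # Linearization in wavenumber k = 2*pi/lambda by interpolation/resampling
    wavelengths = np.linspace(878e-9, 800e-9, n)
    k = 2 * np.pi / wavelengths       # monotonically increasing grid
    k_lin = np.linspace(k[0], k[-1], n)
    s = np.interp(k_lin, k, s)

    s = s * np.hanning(n)             # Hanning apodization

    out = np.zeros(n_pad)             # zero-padding to 1024 samples
    out[:n] = s
    return out
```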
As described previously, the abovementioned shift of MOR and RM leads to a phase modulation of the coherence signal at each circular temporal frequency, determined by the wavelength and the speed of light in vacuum, the back focal distance of the tube lens, and the transverse coordinates in the photodetector plane. It can be noted that the background phase modulation (not related to the sample) across the field of view can also be affected by other factors, e.g. a not perfectly symmetric beamsplitter. For removing the background parabolic phase modulation from the coherence signal, the obtained array was multiplied at each temporal (optical) frequency by an appropriate complex conjugate exponential phase function.
Finally, a fast Fourier transform (FFT) along the frequency (wavenumber) dimension is performed and a part of 512 pixels is selected from each depth scan (at each transverse position), excluding the half of the FFT result containing the complex conjugate signal and excluding large depths related to the initial signal interpolation. The absolute value of this array represents a 3D OCT image of the sample for the particular set of illumination directions, which we further refer to as the conventional OCT processing result. The absolute value of the sum of two such arrays with different sets of illumination directions provides the respective 3D OCT image with four illumination directions in total.
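In code, the depth reconstruction and the combination of the two recordings could look like this (a sketch under the same assumptions as above; names are hypothetical):

```python
import numpy as np

def conventional_oct(volume_k, n_keep=512):
    """FFT along the wavenumber dimension of a preprocessed data cube.

    volume_k: array of shape (nx, ny, 1024) after preprocessing.
    Returns the complex depth scans, keeping only the first n_keep pixels
    (the complex-conjugate half and the large interpolation-related depths
    are discarded).
    """
    ascans = np.fft.fft(volume_k, axis=-1)
    return ascans[..., :n_keep]

# Conventional OCT images:
# image_two_dirs  = np.abs(conventional_oct(cube_a))                            # one recording
# image_four_dirs = np.abs(conventional_oct(cube_a) + conventional_oct(cube_b)) # both recordings
```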
2.2 Numerically focused imaging
For numerically focused imaging, at each transverse position of the obtained complex array an inverse FFT is computed along depth, returning to the frequency dimension. 500 pixels corresponding to the actual spectrum width (without the part corresponding to the initial zero-padding from 1000 to 1024 pixels) are used for the further processing.
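A sketch of this step (hypothetical names; 500 ≈ 1000/1024 × 512 spectral samples correspond to the actual spectrum width):

```python
import numpy as np

def back_to_spectrum(ascans, n_spec=500):
    """Inverse FFT along depth at each transverse position, returning to the
    frequency (wavenumber) dimension; only the samples corresponding to the
    actual spectrum width (excluding the zero-padded part) are kept."""
    spectra = np.fft.ifft(ascans, axis=-1)
    return spectra[..., :n_spec]
```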
For the correction of the phase differences between the signals corresponding to different illumination directions, and for numerically focused imaging, we combine the approach proposed in [8] with an additional numerical processing procedure, as described further in this section. For each temporal frequency of the obtained complex signal, a 2D FFT along the transverse coordinates is taken after zero-padding to 1024 × 1024 pixels. Then for each longitudinal position the following procedure is performed: the obtained spatial spectra are multiplied by correction functions [8]. For this case of SI, the illumination aperture function for each set of illumination directions is considered simply as consisting of infinitely small illumination points (two points for each set of illumination directions). The value used in Eq. (2) is selected in such a way that multiplication of the signal along the frequency dimension with the phase modulation factor shifts the zero delay position to the focus position in the absence of a sample. It can be noted that if some defocus of RM is present in the experiment, the respective optical delay is effectively also included in this value, in addition to the doubled shift of MOR with RM.
Defocus of an imaged structure leads to a phase and, in some cases, an amplitude modulation of the transverse spatial spectrum of the coherence signal, with the phase modulation being primarily responsible for image blurring. The main purpose of multiplying the spatial spectra by the functions of Eq. (1) is the cancellation of this phase modulation for a particular imaging depth within the sample. Therefore these functions are computed specifically for each longitudinal position and for each temporal frequency. It can be noted that if the aperture function or the illumination aperture function has some phase modulation itself (e.g. caused by aberrations), this procedure can also help to correct the respective phase modulation of the spatial spectrum.
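As an illustration of this kind of phase correction, the following sketch builds a defocus-compensating function for a single off-axis plane-wave illumination direction under a simple free-space model. The actual correction functions of Eqs. (1) and (2) additionally account for the full illumination and detection aperture functions and the particular system parameters; the sign conventions and the spectrum-coordinate mapping used here are only assumptions, and all names are hypothetical:

```python
import numpy as np

def defocus_phase_correction(nx, ny, dx, k, z, k_illum=(0.0, 0.0)):
    """Phase correction for a defocus z and one plane-wave illumination direction.

    A scatterer defocused by z acquires the phase z*(kz_illum + kz_det), where
    kz_det depends on the detected transverse spatial frequency (shifted by the
    off-axis illumination wavevector k_illum). Multiplying the transverse
    spatial spectrum by the conjugate phase cancels this modulation; evanescent
    components are set to zero, which also gives an aperture-filtering effect.
    """
    qx = 2 * np.pi * np.fft.fftfreq(nx, dx)
    qy = 2 * np.pi * np.fft.fftfreq(ny, dx)
    QX, QY = np.meshgrid(qx, qy, indexing='ij')
    kix, kiy = k_illum
    kz_ill = np.sqrt(max(k**2 - kix**2 - kiy**2, 0.0))
    arg = k**2 - (QX + kix)**2 - (QY + kiy)**2   # squared axial detection wavevector
    propagating = arg > 0
    kz_det = np.sqrt(np.where(propagating, arg, 0.0))
    return np.where(propagating, np.exp(-1j * z * (kz_ill + kz_det)), 0.0)
```

The spatial spectrum at a given temporal frequency and longitudinal position would then be multiplied by such a function before the summation over temporal frequencies.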
As a first step for computing these correction functions, we assume all four illumination points in the two illumination aperture functions to be equidistant from the optical axis and use identical (unity) coefficients as their complex values.
After multiplication of the transverse spatial spectra of the signals at each temporal frequency by the appropriate correction functions, a sum along the temporal frequency dimension is computed (separately for each of the two recordings with different sets of illumination directions). This leads to two spatial spectrum arrays, corresponding to the two horizontally and the two vertically aligned illumination points, respectively.
In theory, 2D inverse FFTs of the two spatial spectrum arrays may produce two numerically focused images with different sets of illumination directions, and their sum may provide a numerically focused image with four illumination directions in total. However, in practice the setup adjustment is not perfect, the absolute values and phases of the illumination points of the illumination aperture function may differ from the initial assumption, and their positions are determined with finite accuracy, which may lead to errors in the phase correction functions. To overcome this effect we introduce an additional procedure analyzing the phase and absolute value differences of the signals corresponding to different illumination directions.
Each illumination point of the illumination aperture function produces a circle in the transverse spatial spectrum, resulting in two circles from each of the two recordings. Bearing this in mind, four regions (shown with orange dashed lines in Fig. 2(c)) are selected within the overlapping regions of adjacent spatial spectrum circles (the regions where more than two circles overlap are avoided).
For estimating the phase difference between the spatial spectrum circles 1 and 2 (indicated with orange digits in Fig. 2(c)), the point-wise products of the first spatial spectrum and the complex conjugate of the second are computed (where both are not equal to zero) and summed within the respective selected overlapping region. This sum is then divided by its absolute value, providing a complex exponential phase factor whose argument can be considered to correspond to the mean phase difference between the first and second spatial spectrum circles within the selected overlapping region. A similar analysis in the selected overlapping region of the spatial spectrum circles 3 and 4 provides a second phase factor, and a similar analysis in the other two selected overlapping regions (for circles 2 and 3 and circles 4 and 1) provides two further phase factors. Then the following coefficients are computed:
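The phase factor estimation for one pair of circles can be sketched as follows (hypothetical names: S1 and S2 are the spatial spectrum arrays of two circles, and mask selects their overlap region):

```python
import numpy as np

def mean_phase_factor(S1, S2, mask):
    """Unit-magnitude complex factor whose argument approximates the mean
    phase difference between two spatial spectrum circles in the overlap."""
    valid = mask & (S1 != 0) & (S2 != 0)   # points where both spectra are nonzero
    s = np.sum(S1[valid] * np.conj(S2[valid]))
    return s / np.abs(s)                   # normalize to a pure phase factor
```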
These coefficients can be used directly in this form or further normalized. Such normalization affects the mean phase without changing the signal absolute values.
For estimating the absolute value ratios between the spatial spectrum circles 1 and 2, the point-wise ratios of their absolute values are computed (where both are not equal to zero) and averaged within the respective selected overlapping region. A similar analysis in the selected overlapping region of the spatial spectrum circles 3 and 4 provides a second factor, and a similar analysis in the other two selected overlapping regions (for circles 2 and 3 and circles 4 and 1) provides two further factors. Then the following coefficients are computed:
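Analogously for the absolute values (same hypothetical conventions as for the phase factor above):

```python
import numpy as np

def mean_amplitude_ratio(S1, S2, mask):
    """Mean point-wise ratio of the absolute values of two spatial spectrum
    circles within their overlap region."""
    valid = mask & (S1 != 0) & (S2 != 0)
    return np.mean(np.abs(S1[valid]) / np.abs(S2[valid]))
```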
Then new correction functions are computed similarly to the first step of applying Eqs. (1) and (2), but now using the obtained coefficients as the complex values of the illumination points of the illumination aperture functions, instead of identical coefficients for all points. The initial transverse spatial spectra (without multiplication by the initial functions) are multiplied by these new functions for each temporal frequency, for each of the two recordings with different sets of illumination directions. Then a sum along the temporal frequency dimension is computed, followed by an inverse 2D FFT, for each recording. The sum of these two arrays provides the final complex numerically focused OCM signal with four illumination directions in total.
It can be noted that these coefficients contain not only the information on the illumination aperture functions. They may vary with depth, even with a constant illumination aperture function, e.g. as a result of inaccuracy of the initial determination of the illumination point positions, or due to a non-uniform sample structure when imaging in the depth of a volumetric sample.
It can be shown that in the case of plane-wave off-axis illumination (with the off-axis angle determined by NAic) and imaging of a mirror perpendicular to the optical axis in a medium with a given refractive index, the axial shift of the mirror corresponding to a one-pixel shift of the respective peak in the A-scans can be expressed through the total width of the temporal spectrum array (including the zero-padded part) in linear temporal frequencies before the FFT used for obtaining this A-scan. This relation was used to compute the geometrical distances in the longitudinal (axial) direction (such as the distance from the initial focus position to the sample surface and the distances from the surface position to particular depths within the sample) for use in the above described numerical processing algorithm.
3. Results and discussion
Figure 2 presents the results of imaging a USAF 1951 resolution test target with the SI FF SS OCM presented in Fig. 1 and with a conventional FF SS OCM with plane-wave illumination. The latter was implemented in the same setup by using only one illumination point centered on the optical axis, which corresponds to illumination of the sample with a plane wave propagating along the optical axis. Figures 2(a)-2(g) correspond to an optically focused resolution target; Figs. 2(h) and 2(i) correspond to an optically defocused resolution target.
Figures 2(a)-2(c) and 2(f) present the absolute values of the transverse spatial spectra of the coherence signals reconstructed after conventional OCT processing (without numerical focusing) at the longitudinal position of the resolution target: Figs. 2(a) and 2(b) correspond to the two horizontally and the two vertically aligned illumination points, respectively; Fig. 2(c) corresponds to the sum of these two signals and hence four illumination directions in total; Fig. 2(f) corresponds to a single illumination point centered on the optical axis (plane-wave illumination of the sample).
Figures 2(d), 2(e) and 2(g)-2(i) present the absolute values of the coherence signals (en face images) reconstructed at the resolution target longitudinal position in a fragment of the field of view of 328 × 328 µm. Figures 2(d)-2(e) and 2(h)-2(i) correspond to combining two recordings with four illumination directions in total; Fig. 2(g) corresponds to a single illumination point centered on the optical axis (plane-wave illumination of the sample). Figures 2(d), 2(g) and 2(h) correspond to conventional OCT processing (without numerical focusing); Figs. 2(e) and 2(i) correspond to the full numerical processing procedure described above. It should be mentioned, however, that in the case of Fig. 2(e) this numerical processing procedure with zero defocus does not act as numerical focusing, but as a correction of the spatial spectrum phase modulation caused by the phase differences between the signals corresponding to different illumination directions. The insets in Figs. 2(d), 2(e) and 2(g)-2(i) present magnified fragments with elements 4-6 of group 7 after the application of FFT-based interpolation to avoid pixelated images.
As can be seen from a comparison of Figs. 2(e) and 2(g), the SI provided a significant improvement of the transverse resolution, making the 5th and 6th elements of group 7 of the resolution target resolvable (with a line thickness of 2.19 µm for the 6th element), whereas they are not resolved in the respective image corresponding to conventional plane-wave illumination along the optical axis. The comparison of Figs. 2(h) and 2(i) demonstrates the feasibility of the numerical focusing approach.
For the further investigation of the applicability of this approach to 3D imaging of volumetric samples, we have imaged a phantom (iron oxide particles dispersed in resin). Figure 3 presents the B-scans (Figs. 3(a)-3(c)) and en face images (Figs. 3(d)-3(f)) obtained with conventional OCT processing (Figs. 3(a) and 3(d)), with an additional spatial Fourier filtration at each longitudinal position (by setting the transverse spatial spectra to zero at the points where the correction function for the maximum temporal frequency in the absence of defocus equals zero) (Figs. 3(b) and 3(e)), and with the whole processing described above (Figs. 3(c) and 3(f)) (assuming that the sample has a uniform refractive index of 1.5).
As has been noted previously, the filtering part of the correction functions can help with reducing some noise. This effect can be seen in Figs. 3(b) and 3(e), where a simplified Fourier filtration is applied in addition to conventional OCT processing. For estimating the effect of the Fourier filtration on the noise, the noise floor was estimated as follows: the standard deviation of the absolute values of the signal was computed for each of the three 3D arrays, corresponding to the different types of processing, within the indicated region (dashed rectangle in Fig. 3(a)) spanning 400 adjacent B-scans (where ideally no signal should be present). The ratio of the noise floors represented by the standard deviations computed in the cases of conventional OCT processing and processing with the additional Fourier filtration was approximately 1.7. The respective ratio for conventional processing and the whole numerical processing procedure (including numerical focusing) was approximately 1.8. The difference was presumably caused by a slightly stronger spatial Fourier filtration effect in the whole processing, which considers the correction function for each particular temporal frequency, rather than only the maximum one used in the simplified Fourier filtration.
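The noise floor estimate described above amounts to the following (a sketch; the region mask and names are hypothetical):

```python
import numpy as np

def noise_floor(volume, region_mask):
    """Noise floor: standard deviation of the signal magnitudes within a
    region that should ideally contain no sample signal."""
    return np.std(np.abs(volume[region_mask]))

# e.g. comparing processing variants as a ratio of noise floors:
# ratio = noise_floor(vol_conventional, roi) / noise_floor(vol_filtered, roi)
```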
The full processing combines the Fourier filtration effect with the numerical focusing of the signals. As can be seen from Figs. 3(c) and 3(f), this numerical processing procedure allowed for a numerical extension of the high transverse resolution region beyond the original focus depth.
Finally, to investigate the applicability of this approach to imaging biological samples, an orange slice has been imaged. Figures 3(g)-3(i) present the en face images reconstructed from a defocused plane with conventional OCT processing (Fig. 3(g)), with the additional spatial Fourier filtration (Fig. 3(h)), and with the whole numerical processing procedure (Fig. 3(i)) (assuming that the sample has a uniform refractive index of 1.4).
A current limitation of the SI FF SS OCM system considered in this paper is the sequential recording of the two orthogonal illumination patterns. Further parallelization of the detection of signals from different illumination directions is necessary for high-speed imaging and in vivo applications. Still, the paper demonstrates how to combine sequential recordings with the appropriate correction of phase and amplitude. Furthermore, the principle of numerically focused imaging with SI FD OCM is not restricted to a particular setup, but provides a wide field for further development and for designing setups with improved illumination, detection and processing. It is promising for high resolution numerically focused 3D OCM imaging, especially for retinal imaging applications, where the NA is limited by the eye and the SI provides an opportunity for a further increase of the transverse resolution while preserving the possibility of numerical focusing.
In this paper the possibility of numerically focused 3D OCM imaging with improved transverse resolution by structured illumination has been experimentally demonstrated. For obtaining a single 3D image with four illumination directions in total, a configuration with the acquisition of two wavelength (frequency) scans, each corresponding to two different illumination directions, has been implemented. Using the properties of such signals, an additional numerical processing procedure has been developed and integrated with the algorithm for numerically focused imaging. This procedure analyzes the phase and absolute value differences of the coherence signals corresponding to the different illumination directions. In the limit of NAic→NA, this SI approach utilizing illumination and imaging through the same objectives allows an up to two times increase in the width of the transverse spatial spectrum (and hence an up to two times improvement of the transverse resolution) compared to plane-wave illumination. The results obtained show the feasibility and prospects of the combination of structured illumination and numerical focusing in 3D FD OCM imaging, and its potential for future application to in vivo imaging.
The Austrian Federal Ministry for Digital and Economic Affairs and the National Foundation for Research, Technology and Development.
We acknowledge support from Carl Zeiss Meditec Inc. We also thank Michael Pircher for helpful discussions, Andreas Hodul for the workshop support, and Tilman Schmoll and Matthias Salas for assistance.
1. W. Drexler and J. G. Fujimoto, eds., Optical Coherence Tomography (Springer, 2008).
2. R. A. Leitgeb, M. Villiger, A. H. Bachmann, L. Steinmann, and T. Lasser, “Extended focus depth for Fourier domain optical coherence microscopy,” Opt. Lett. 31(16), 2450–2452 (2006). [CrossRef] [PubMed]
3. A. A. Grebenyuk and V. P. Ryabukho, “Theory of imaging and coherence effects in full-field optical coherence microscopy,” in Handbook of full-field optical coherence microscopy, A. Dubois ed. (Pan Stanford Publishing, 2016), pp. 53–89.
8. A. Grebenyuk, A. Federici, V. Ryabukho, and A. Dubois, “Numerically focused full-field swept-source optical coherence microscopy with low spatial coherence illumination,” Appl. Opt. 53(8), 1697–1708 (2014). [CrossRef] [PubMed]
9. A. A. Grebenyuk and V. P. Ryabukho, “Numerical reconstruction of 3D image in Fourier domain confocal optical coherence microscopy,” in Proceedings of the International Conference on Advanced Laser Technologies (ALT)2012 (Bern Open Publishing, 2013), pp. 1–5.
10. A. A. Grebenyuk and V. P. Ryabukho, “Illumination structure and three-dimensional imaging properties in optical coherence microscopy,” in Proceedings of the International School-Conference for Young Scientists and Specialists “Modern Problems of Physics” (B. I. Stepanov Institute of Physics of the National Academy of Sciences of Belarus, 2014), pp. 243–247 [in Russian].
11. A. A. Grebenyuk and V. P. Ryabukho, “Numerically focused optical coherence microscopy with structured illumination aperture,” Comput. Opt. 42(2), 248–253 (2018). [CrossRef]
14. J. R. Price, P. R. Bingham, and C. E. Thomas Jr., “Improving resolution in microscopic holography by computationally fusing multiple, obliquely illuminated object waves in the Fourier domain,” Appl. Opt. 46(6), 827–833 (2007). [CrossRef] [PubMed]
15. S. Chowdhury and J. Izatt, “Structured illumination quantitative phase microscopy for enhanced resolution amplitude and phase imaging,” Biomed. Opt. Express 4(10), 1795–1805 (2013). [CrossRef] [PubMed]
16. P. Lehmann, J. Niehues, and S. Tereschenko, “3-D optical interference microscopy at the lateral resolution,” Int. J. Optomechatronics 8(4), 231–241 (2014). [CrossRef]
17. A. Wartak, M. Augustin, R. Haindl, F. Beer, M. Salas, M. Laslandes, B. Baumann, M. Pircher, and C. K. Hitzenberger, “Multi-directional optical coherence tomography for retinal imaging,” Biomed. Opt. Express 8(12), 5560–5578 (2017). [CrossRef] [PubMed]
18. E. Bo, Y. Luo, S. Chen, X. Liu, N. Wang, X. Ge, X. Wang, S. Chen, S. Chen, J. Li, and L. Liu, “Depth-of-focus extension in optical coherence tomography via multiple aperture synthesis,” Optica 4(7), 701–706 (2017). [CrossRef]