Optica Publishing Group

Automated fast computational adaptive optics for optical coherence tomography based on a stochastic parallel gradient descent algorithm

Open Access

Abstract

The transverse resolution of optical coherence tomography is degraded by aberrations introduced by optical components and the tested samples. In this paper, an automated fast computational aberration correction method based on a stochastic parallel gradient descent (SPGD) algorithm is proposed for aberration-corrected imaging without extra adaptive optics hardware. A virtual phase filter constructed from a combination of Zernike polynomials is adopted to eliminate the wavefront aberration, and the polynomial coefficients are estimated stochastically in parallel through the optimization of image metrics. The feasibility of the proposed method is validated on a simulated resolution target image, for which the introduced aberration wavefront is estimated accurately and with fast convergence. The computation time for aberration correction of a 512 × 512 pixel image changes little as the correction expands from 7 to 12 Zernike terms, increasing from 2.13 s to 2.35 s. The proposed method is then applied to samples with different scattering properties, including a particle-based phantom, ex-vivo rabbit adipose tissue, and in-vivo human retina photoreceptors. Results indicate that diffraction-limited optical performance is recovered, and the maximum intensity increased nearly 3-fold for an out-of-focus plane in the particle-based tissue phantom. The SPGD algorithm shows great potential for aberration correction and improved run-time performance compared to our previous resilient backpropagation (Rprop) algorithm when correcting complex wavefront distortions. The fast computational aberration correction suggests that, after further optimization, our method can be integrated into future real-time clinical imaging applications.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Optical coherence tomography [1] (OCT) is a non-invasive imaging technique based on low coherence interferometry. The continued development of OCT over the past three decades has enabled real-time in-vivo high resolution tomography, with applications in ophthalmology, dermatology, and oncology, among many others [2-5].

In ophthalmology applications, different eye structures and inevitable eye movement introduce unknown aberrations, because the wavefront is distorted differently as light traverses different paths through the eye. These unknown aberrations change dramatically, leading to blurred or distorted OCT images and a reduction in signal-to-noise ratio (SNR). Therefore, as in incoherent imaging systems such as scanning laser ophthalmoscopy (SLO) [6], adaptive optics (AO) hardware and systems [7] are necessary for correcting the aberrations and obtaining a high resolution image. A widely used AO aberration correction approach is based on hardware systems that employ a Shack-Hartmann wavefront sensor to measure the wavefront distortion introduced by the human eye and correct the measured aberrations with deformable mirrors (DMs). This hardware-based AO (HAO) can correct aberrations dynamically during the imaging process due to the high-speed control of the DM using parallel schemes [8], but these systems are complex and costly.

In contrast to hardware-based wavefront sensing and correction, computational approaches based on phase-stable OCT systems have been developed to correct aberrations digitally [9-13]. In these systems, the original interference signal is acquired and post-processed by a numerical algorithm to remove aberrations and obtain high resolution images, similar to numerical aberration correction in digital holography [14,15]. Compared with traditional HAO, computational AO (CAO) [9] is a simple and effective technique implemented without any extra expensive optics or increased sample exposure time. The three common methods for CAO in OCT systems are sub-aperture correlation [16,17], the guide-star method [18], and the use of image metrics for determining the aberration correction filter [19]. For the sub-aperture correlation method, the cross-correlation accuracy depends on the sub-aperture size and the scattering properties of the sample, and correction may fail for highly scattering samples due to insufficiently resolved images. The iterative sub-aperture correlation method [17] was recently proposed for retinal imaging, but it still suffers from the trade-off between sub-aperture size and correlation accuracy. For the guide-star method, a suitable point-like structure in the object is necessary, but often unavailable in real biological or clinical imaging scenarios. Among these three methods, image-metrics-based CAO is the most widely applicable due to its correction ability. This method is similar to sensor-less AO [20,21]: by optimizing image quality metrics such as sharpness, Shannon entropy, spatial frequency content, and maximum intensity, among others [9,22,23], the wavefront aberrations, modeled as a linear combination of Zernike polynomials, can be estimated.
But instead of adjusting a DM or other wavefront correctors, this method updates the pupil function computationally for aberration correction. Therefore, the key issue for computational aberration-corrected imaging is to estimate the coefficients of the Zernike polynomials both rapidly and accurately.

Grid searching (GS) is the simplest method, performing a one-dimensional search of the Zernike coefficients to obtain the best image quality [24]. Although GS is effective for low-order aberration correction, one-dimensional searching is not suitable for the multi-dimensional optimization required by high-order aberrations [25,26]. Furthermore, the search order of the Zernike polynomials is inconclusive across applications [24,27,28], and the accuracy and efficiency of GS are greatly affected by the search boundary and step size for each coefficient. Meanwhile, a variety of methods for aberration estimation have been reported, such as neural network methods [19] and genetic algorithms [29]. However, in complex imaging systems with high-order aberrations, more degrees of freedom are needed for aberration correction, and the aforementioned methods result in slow convergence due to their computational complexity. The gradient descent (GD) method [30] can also be employed to recover high quality images: an incremental adjustment of the control parameters is estimated by calculating the gradient components from the entire dataset, which is time-consuming. To address this, stochastic gradient descent (SGD) [31] replaces the actual gradient over the entire dataset with a simplified gradient calculated from a randomly selected data subset. The critical issue for both GD and SGD is that the relationship between image metrics and Zernike coefficients is complicated for OCT systems, which increases the computational complexity of the actual gradient needed in typical gradient searching methods. Although a computationally efficient, closed-form expression for the gradient [22] has been proposed to enable efficient search algorithms for image optimization, its computational efficiency decreases dramatically for high-order aberrations with numerous Zernike terms.
To overcome this problem, we introduce the stochastic parallel gradient descent (SPGD) algorithm, which provides multichannel parallel signal processing for fast CAO aberration correction. SPGD is an optimization algorithm originally proposed for model-free supervised learning [32] and is now widely used for DM control in HAO systems [8,33,34]. Stochastic perturbation in each step circumvents the predetermination of Zernike terms when seeking the optimum in a multidimensional space, and parallel perturbations are applied simultaneously to all selected optimization channels. This technique can be regarded as a parallel analog implementation of the conventional gradient descent method [22,30,31,35], in which the computation of gradients is simplified by replacing the true gradient with a numerically calculated stochastic approximation. Due to the parallel optimization, the convergence of the SPGD adaptation process is almost independent of the number of control channels [36]. This feature of the SPGD algorithm is therefore a clear advantage for high-order aberration correction with many degrees of freedom in CAO. The SPGD algorithm is leveraged in this paper to correct complex aberrations.

In our SPGD-based algorithm, considering the multiple modes of high-order aberrations, the number and order of Zernike terms are chosen randomly in each iteration to find the optimal solution for complex aberrations. In addition, high-order aberration correction of samples with different scattering properties can be achieved at fast convergence speeds due to the parallel optimization, which suggests its potential use for real-time clinical applications.

2. Principle of CAO and SPGD algorithm

2.1 Principle of CAO

In Fourier-domain OCT, the depth-resolved spectral data obtained by the spectrometer are first pre-processed by a conventional OCT signal processing procedure [37], including DC offset elimination, numerical dispersion compensation, and Fourier transformation, to obtain complex en-face images of the sample at different depths. From coherent imaging theory, the en-face image $g(x,y)$ at a specific depth can be expressed as

$$g(x,y)=o(x,y)\otimes h(x,y),$$
where $o(x,y)$ is the ideal geometric image, $h(x,y)$ is the point spread function (PSF) of the system, $\otimes$ represents convolution, and $(x,y)$ are the cartesian coordinates of the image plane.

The convolution theorem can be utilized to rewrite Eq. (1) in the pupil plane as

$$G(u,v)=O(u,v)H(u,v),$$
where $G(u,v)$ and $O(u,v)$ are the Fourier transform of $g(x,y)$ and $o(x,y)$, respectively. $H(u,v)$ is the coherent transfer function (CTF), and $(u,v)$ are the coordinates of the pupil plane.
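Equations (1) and (2) can be checked numerically. The short sketch below (illustrative only, using NumPy's FFT and the circular convolution implied by the discrete Fourier transform) confirms that multiplying the spectra in the pupil plane is equivalent to convolving $o$ and $h$ in the image plane:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
o = rng.random((N, N))   # ideal geometric image o(x, y)
h = rng.random((N, N))   # point spread function h(x, y)

# Pupil plane: G(u, v) = O(u, v) H(u, v), Eq. (2)
G = np.fft.fft2(o) * np.fft.fft2(h)
g_fft = np.fft.ifft2(G).real

# Image plane: circular convolution g = o * h, Eq. (1), computed directly
g_direct = np.zeros((N, N))
for x in range(N):
    for y in range(N):
        for s in range(N):
            for t in range(N):
                g_direct[x, y] += o[s, t] * h[(x - s) % N, (y - t) % N]
# g_fft and g_direct agree to machine precision
```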

For the aberration-free system, the PSF $h(x,y)$ is the inverse Fourier transform of the pupil function $p(u,v)$. So, the CTF can be described as

$$H(u,v)=\mathcal{F}[h(x,y)]=\mathcal{F}\left \{ \mathcal{F}^{{-}1}[p(u,v)] \right \}=p(u,v).$$

For systems with aberration, we assume that the phase difference between the real wavefront and the ideal wavefront in the pupil plane can be expressed as $\varphi _{a}(u,v)$, then the generalized pupil function $P(u,v)$ is shown as

$$P(u,v)=p(u,v)\textrm{exp}[i\varphi _{a}(u,v)].$$
From Eq. (4), the CTF $H_{a}(u,v)$ for systems with aberration can be modeled as
$$H_{a}(u,v)=P(u,v)=p(u,v)\textrm{exp}[i\varphi _{a}(u,v)].$$

Compared with the aberration-free system, the effect of aberration is evident by introducing the wavefront distortion in the frequency band pass, which decreases the imaging quality. By combining Eq. (2) with Eq. (5), the aberrated image $G_{a}(u,v)$ in the pupil plane can be expressed as

$$G_{a}(u,v)=O(u,v)p(u,v)\textrm{exp}[i\varphi _{a}(u,v)].$$

Ideally, if $\varphi _{a}(u,v)$ is estimated, the aberration can be eliminated, and the high resolution image $G_{c}(u,v)$ can be recovered according to the principle of phase conjugation modeled as

$$G_{c}(u,v)=G_{a}(u,v)\textrm{exp}[{-}i\varphi _{a}(u,v)].$$
Therefore, the aberration-corrected image $g_{c}(x,y)$ in the image plane is given by
$$g_{c}(x,y)=\mathcal{F}^{{-}1}[G_{c}(u,v)].$$

The wavefront distortion $\varphi _{a}(u,v)$ is usually represented by a sum of weighted Zernike polynomials

$$\varphi _{a}(u,v)=\sum_{n=1}^{N}a_{n}Z_{n}(u,v)=\mathbf{A}\mathbf{Z}(u,v),$$
where $\mathbf {A}=[a_{1},a_{2},\ldots ,a_{N}]$ is the coefficient matrix of wavefront distortion, $\mathbf {Z}=[Z_{1},Z_{2},\ldots ,Z_{N}]^\textrm {T}$ are the Zernike polynomials, and N is the number of Zernike terms.
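For illustration, a few low-order terms of Eq. (9) can be evaluated on a Cartesian pupil grid. The helper below is a hypothetical sketch, not the authors' implementation; only Noll indices 4-8 (defocus, astigmatism, coma) are included, and the pupil is the unit disk:

```python
import numpy as np

def zernike_phase(coeffs, size=65):
    """Wavefront phi_a(u, v) = sum_n a_n Z_n(u, v), Eq. (9), on a unit pupil.

    `coeffs` maps a Noll index in {4..8} to its weight a_n; this small
    subset is for illustration only.
    """
    u, v = np.meshgrid(np.linspace(-1, 1, size), np.linspace(-1, 1, size))
    rho2 = u**2 + v**2
    Z = {
        4: np.sqrt(3) * (2 * rho2 - 1),        # defocus
        5: 2 * np.sqrt(6) * u * v,             # oblique astigmatism
        6: np.sqrt(6) * (u**2 - v**2),         # vertical astigmatism
        7: np.sqrt(8) * (3 * rho2 - 2) * v,    # vertical coma
        8: np.sqrt(8) * (3 * rho2 - 2) * u,    # horizontal coma
    }
    phi = np.zeros_like(u)
    for n, a in coeffs.items():
        phi += a * Z[n]
    phi[rho2 > 1] = 0.0                        # restrict to the pupil
    return phi
```

For example, `zernike_phase({4: 0.5, 7: -0.2})` returns the phase map of a mixed defocus-coma distortion.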

To estimate the coefficient matrix $\mathbf {A}$, the image entropy E is chosen as the evaluation function of wavefront distortion correction. For a normalized intensity image $i_{n}(x,y)$, the image entropy is modeled as:

$$E[i_{n}(x,y)]={-}\sum_{x,y}i_{n}(x,y)\mathrm{log}i_{n}(x,y).$$

When the entropy reaches the minimum, the phase aberration is accurately estimated, and the computational aberration correction of the en-face image is achieved. Therefore, the CAO process can be converted into the optimization of $[a_{1},a_{2},\ldots ,a_{N}]=\mathrm {argmin}(E)$.
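A minimal sketch of this correction pipeline, under assumed helper names (`image_entropy`, `apply_phase_filter`) and a phase map sampled on the same grid as the image spectrum:

```python
import numpy as np

def image_entropy(img):
    """Shannon entropy of the normalized intensity image, Eq. (10)."""
    i_n = np.abs(img) ** 2
    i_n = i_n / i_n.sum()
    nz = i_n[i_n > 0]                 # 0 * log(0) is taken as 0
    return -np.sum(nz * np.log(nz))

def apply_phase_filter(g, phi):
    """Multiply the spectrum by exp(-i*phi), Eq. (7), and transform
    back to the image plane, Eq. (8)."""
    G = np.fft.fftshift(np.fft.fft2(g))
    G_c = G * np.exp(-1j * phi)
    return np.fft.ifft2(np.fft.ifftshift(G_c))
```

When the applied `phi` exactly matches the distortion $\varphi_a$, the filter is a perfect phase conjugation and the aberration-free image is recovered; the SPGD search of Section 2.2 finds the coefficients that minimize `image_entropy`.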

A representative schematic of CAO aberration correction based on the SPGD algorithm is shown in Fig. 1.

Fig. 1. Schematic of the SPGD algorithm-based computational adaptive optics.

2.2 Aberration correction with SPGD algorithm

The SPGD algorithm is a model-free optimization technique suitable for complex systems with multiple variables because of its superior optimization ability and fast convergence. In this paper, the SPGD algorithm is used to estimate wavefront distortion, and then the aberrations are corrected directly by updating the general pupil function computationally.

The first step is to initialize the coefficient matrix $\mathbf {A}$ of the wavefront distortion, and then the random perturbation $\sigma$ and the gain coefficient $\gamma$ are set. These two values are determined empirically to obtain the highest image quality. Following Vorontsov’s derivation [36], the coefficient matrix in the $(\textit {m}+1)$th iteration is determined by

$$\mathbf{A}^{(m+1)}=\mathbf{A}^{(m)}+\gamma \delta E^{(m)}\delta \mathbf{A}^{(m)}, m=1,2,\ldots \textrm{M},$$
where M is the number of iterations. The term $\delta \mathbf {A}^{(m)}$ is the stochastic parallel-perturbation obeying the Bernoulli distribution ($p=0.5$) to ensure that the number and type of Zernike terms are chosen randomly at each iteration. If the n-th Zernike term is picked, $\delta \mathbf {A}^{(m)}(n)=\sigma$, otherwise, $\delta \mathbf {A}^{(m)}(n)=0$. The term $\delta E^{(m)}$ is the entropy variation caused by $\delta \mathbf {A}^{(m)}$, which can be expressed as
$$\delta E^{(m)}=E_{+}-E_{-}=E(\mathbf{A}^{(m)}+\delta \mathbf{A}^{(m)})-E(\mathbf{A}^{(m)}-\delta \mathbf{A}^{(m)}).$$
The search direction is determined by the sign of $\gamma$. When $\gamma >0$, the optimization goal is to find the maximum of the image entropy E, while $\gamma <0$ means the search proceeds toward the minimum of E. In this paper, aberration correction is achieved when E reaches its minimum, so we choose $\gamma <0$. As shown in Fig. 2, the optimization process of the SPGD algorithm can be described as follows.

Fig. 2. Flowchart for the stochastic parallel gradient descent (SPGD) algorithm.

First, the coefficient matrix $\mathbf {A}$ is initialized, and the random perturbation $\sigma$ and the gain coefficient $\gamma$ are set. Next, the Bernoulli stochastic perturbation generator is employed to randomly select Zernike terms and assign $\delta \mathbf {A}^{(m)}$ for each iteration. Then, the image entropies $E_{+}=E(\mathbf {A}^{(m)}+\delta \mathbf {A}^{(m)})$ and $E_{-}=E(\mathbf {A}^{(m)}-\delta \mathbf {A}^{(m)})$ under positive and negative perturbations are calculated, respectively. After that, the coefficient matrix is updated by $\mathbf {A}^{(m+1)}=\mathbf {A}^{(m)}+\gamma \delta E^{(m)}\delta \mathbf {A}^{(m)}$, and the optimization stops when the termination condition is satisfied. As the entropy decreases, the aberration is reduced and the image quality improves; the convergence of the entropy indicates that the aberration has been corrected. Finally, the CAO-corrected image is obtained.
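The loop above can be sketched as follows. This is an illustrative toy, not the authors' code: the quadratic objective stands in for the image entropy, and the perturbation uses signed ±σ values on a randomly selected subset of channels (a common SPGD choice, layered on the Bernoulli selection scheme described above):

```python
import numpy as np

def spgd_minimize(E, A0, sigma=0.1, gamma=-5.0, iters=300, seed=1):
    """SPGD update A <- A + gamma * dE * dA, Eq. (11), with
    dE = E(A + dA) - E(A - dA), Eq. (12); gamma < 0 minimizes E."""
    rng = np.random.default_rng(seed)
    A = np.array(A0, dtype=float)
    for _ in range(iters):
        select = rng.random(A.size) < 0.5            # Bernoulli(p=0.5) channel pick
        signs = rng.choice([-1.0, 1.0], size=A.size)
        dA = sigma * select * signs                  # stochastic parallel perturbation
        dE = E(A + dA) - E(A - dA)                   # two-sided metric probe
        A = A + gamma * dE * dA                      # all channels updated in parallel
    return A

# Toy objective standing in for the entropy: minimum at A = target
target = np.array([0.5, -1.0, 0.3])
A_hat = spgd_minimize(lambda A: np.sum((A - target) ** 2), np.zeros(3))
```

Because each iteration costs only two metric evaluations regardless of the number of channels, the run time grows very slowly with the number of Zernike terms, consistent with the behavior reported in Section 3.3.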

3. Numerical simulation

3.1 Correction performance for low-order aberration

In order to demonstrate the feasibility of the SPGD algorithm for CAO, we carried out a simulation analysis on a simulated resolution target image. Gaussian noise and speckle noise, with variance 0.005 and 0.1, respectively, were added to the simulated resolution chart to simulate a noisy image. We calculated the 2D Fourier transform of the noise-added image to propagate it to the pupil plane. Aberrations were introduced by adding wavefront distortion $\varphi _{a}(u,v)$ in the pupil plane, expressed as a weighted sum of 3rd-order Noll Zernike polynomials corresponding to 7 terms [38]. Here piston, tip, and tilt were ignored because they are considered misalignment errors rather than aberrations. The simulated aberrated image is shown in Fig. 3(a). Aberrations lead to image deterioration and loss of fine details, such as the central groups in the resolution target (groups 8-9 in Fig. 3(c)). The image after aberration correction is shown in Figs. 3(b) and 3(d), where the random perturbation $\sigma$ and the gain coefficient $\gamma$ were 4 and 1, respectively. The entropy during the iterative correction process is given in Fig. 3(g). Compared with the aberrated image, the entropy of the corrected image is greatly decreased, and the lateral resolution is enhanced significantly. The image entropy reached its minimum after about 20 iterations. The estimated coefficients of all Zernike terms, illustrated by the orange bars in Fig. 3(h), agree closely with the introduced coefficients indicated by the blue bars. The phase map of the introduced 3rd-order aberration is shown in Fig. 3(e). Using the Zernike coefficients calculated by the proposed method, the residual wavefront between the introduced phase and the SPGD-estimated phase is shown in Fig. 3(f). The root mean square (RMS) value of the residual wavefront is $2.7\times 10^{-3}$ ${\mathrm{\mu}}$m, corresponding to 0.32% of the original wavefront distortion, which demonstrates that SPGD-based CAO can accurately estimate the wavefront distortion.
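For reference, the noise model used in this simulation can be reproduced as below. The bar pattern is a hypothetical stand-in for the USAF chart, and modeling the speckle term as multiplicative Gaussian noise is an assumption:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stand-in for the resolution chart: a simple bar pattern
chart = np.zeros((128, 128))
chart[32:96, 40:88:8] = 1.0

# Additive Gaussian noise with variance 0.005
noisy = chart + rng.normal(0.0, np.sqrt(0.005), chart.shape)

# Multiplicative speckle-like noise with variance 0.1 (assumed Gaussian)
noisy = noisy * (1.0 + rng.normal(0.0, np.sqrt(0.1), chart.shape))

# Propagate to the pupil plane, where the aberration phase would be applied
spectrum = np.fft.fftshift(np.fft.fft2(noisy))
```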

Fig. 3. (a) Simulated aberrated USAF resolution target image with 3rd-order Zernike polynomials. (b) Corrected image after SPGD-based CAO. (c) and (d) Zoomed-in images of the central region illustrated by white squares in (a) and (b), respectively. (e) Phase map of the introduced 3rd-order aberration distortion. (f) Phase deviation between recovered and introduced wavefront aberration. (g) Image entropy during the correction process. (h) Coefficients of introduced and computed Zernike polynomials.

3.2 Robustness of the SPGD-based CAO

For every run of the proposed algorithm, the permutation of Zernike terms differs in each iteration step, corresponding to the stochastic optimization process. To assess the effect of this randomness on the final correction result, we executed the SPGD-based CAO aberration correction algorithm 500 separate times. The difference between the calculated Zernike coefficients and the introduced coefficients over these 500 random operations is shown in Fig. 4(a). For all 500 operations, the final coefficients are highly consistent with the introduced ones. The ratios of RMS$_{residual}$ to RMS$_{introduced}$ are shown in Fig. 4(b). Here RMS$_{residual}$ is obtained from the residual phase between the calculated and introduced wavefront distortion. The mean ratio over the 500 operations is 0.08%, which indicates that the distortion phase estimated by the proposed method is highly consistent with the introduced one. The final distribution of the corrected image entropy is shown in Fig. 4(c). The peak-to-valley (PV) value of the entropy for all 500 operations is 0.0524, and the standard deviation is 1.05%. Figures 4(d)-(f) show three example images selected from the 500 random runs. Their entropies are 3.7767, 3.801, and 3.8291, corresponding to the minimum, mean, and maximum of the entropy, respectively. Figure 4(g) is the zoomed-in image of the central group marked by the red square in Fig. 4(e). The profile of the elements in group #9 of Fig. 4(e) is shown in Fig. 4(h), illustrated by the blue solid line. The profile difference of group #9 between Figs. 4(d) and 4(e) is illustrated by the orange solid line, while the orange dotted line represents the profile difference between Figs. 4(f) and 4(e). There are only slight differences in the profiles, and all these images have nearly the same resolution, which means that the small entropy variance over the 500 random runs is immaterial to the final aberration correction performance. This demonstrates that although the permutation of Zernike terms differs for each correction event, the algorithm still produces a well-corrected image each time, confirming that the proposed algorithm is reproducible for computational aberration correction.

Fig. 4. Correction results of 500 random operations of the proposed algorithm. (a) Differences between the calculated and introduced coefficients of Zernike terms in each stochastic run. (b) Ratios of RMS$_{residual}$ to RMS$_{introduced}$. (c) Distribution of the entropy for the corrected images. (d)-(f) Representative corrected images with different entropies. (g) Zoomed-in image of the central group marked by red square in (e). (h) Profile of elements in group #9 after correction in (e) (illustrated by blue solid line); profile differences between (d) and (e) (illustrated by orange solid line) and between (f) and (e) (illustrated by orange dotted line).

3.3 Convergence for high-order aberration correction

To demonstrate the fast convergence of our method for high-order aberration correction, we expand the introduced aberration from 3rd order to 4th order, corresponding to 12 Zernike terms. The correction results are shown in Fig. 5. The blurred image shown in Fig. 5(a) includes the aberration expressed by 12 Zernike terms, and the corrected image after applying the proposed algorithm is illustrated in Fig. 5(b). The patterns on the resolution target become more clearly identifiable, making it possible to visualize the high-resolution groups. Figure 5(c) shows the change of the entropy during the aberration correction process. The entropy converges after about 35 iterations. As shown in Fig. 5(f), the estimated coefficients of the Zernike terms, illustrated by the orange bars, correspond closely with the introduced coefficients shown by the blue bars. The phase map of the introduced 4th-order aberration distortion is shown in Fig. 5(d), and the residual phase difference between the introduced aberration and the computed one is shown in Fig. 5(e). The RMS value of the residual phase is $6.5\times 10^{-3}$ ${\mathrm{\mu}}$m, corresponding to 0.67% of the introduced wavefront distortion, which verifies the high-order aberration correction ability of the proposed method.

Fig. 5. (a) Simulated aberrated USAF resolution target image with 4th-order Zernike polynomials. (b) Corrected image after SPGD-based CAO. (c) Image entropy during the correction process. (d) Phase map of the introduced 4th-order aberration distortion. (e) Phase deviation between recovered and introduced wavefront aberration. (f) Coefficients of introduced and computed Zernike polynomials. (g) Comparison of the computing time of different number of Zernike terms between the proposed SPGD method and the Rprop algorithm.

As shown in Fig. 5(g), a comparison of the computing time for different numbers of Zernike terms was performed between the proposed method and our previous Rprop algorithm [19]. The implementation of Rprop was stochastic, but not parallelized like SPGD. The histogram shows that for high-order aberration correction, the computing time of the proposed algorithm only increased from 2.13 s to 2.35 s, while that of the Rprop algorithm increased from 19.7 s to 79.5 s. The fast convergence of the proposed method is therefore demonstrated by this comparison, verifying that the proposed method has much better run-time performance even for high-order aberration correction. The correction process was run in MATLAB on a Windows desktop with an Intel Core i7 2.9 GHz CPU and 16 GB RAM, and the size of the simulated resolution target image is $512\times 512$ pixels.

4. Experimental results and discussions

Having demonstrated the feasibility of aberration correction using the proposed SPGD algorithm, we next apply this approach to a spectral-domain optical coherence microscopy (SD-OCM) system, described in detail in previous work [39]. Briefly, a superluminescent diode (SLD) centered at 860 nm with a bandwidth of 80 nm full-width-at-half-maximum (FWHM) was used as the light source. To show the aberration correction ability of the SPGD algorithm, we first imaged a low-scattering tissue phantom. Subsequently, to demonstrate the potential for clinical application, we used the algorithm to correct images of ex-vivo rabbit adipose tissue and in-vivo human photoreceptors in the retina.

4.1 Particle phantom

To demonstrate the proposed algorithm for point-like objects, it was first tested on a low-scattering phantom consisting of copper zinc iron oxide nanoparticles (mean diameter <100 nm) in PDMS. The complex en-face image, shown in Fig. 6(a), was extracted from a 3D dataset at 15.8 ${\mathrm{\mu}}$m above the focal plane, which corresponds to 4.2 Rayleigh ranges. The image was blurred and the SNR dropped due to aberrations. Figure 6(b) shows the image after applying the 3rd-order aberration correction ($\sigma =20,\gamma =3)$, where the point-like nanoparticles can now be clearly resolved. Figures 6(c) and 6(d) are the zoomed-in images of the areas indicated by the blue and yellow squares in Figs. 6(a) and 6(b), respectively. The intensity of the particle becomes more concentrated after aberration correction, and the maximum intensity increases about 3-fold (from $7.7\times 10^{-5}$ to $3.1\times 10^{-4}$). The line profiles of this particle before and after correction are shown in Fig. 6(e), showing a significant improvement in the PSF. The FWHM of the PSF is 0.8 ${\mathrm{\mu}}$m, which is consistent with the diffraction limit of the OCM system. The evolution of the Shannon entropy during the correction process is illustrated in Fig. 6(f), where the entropy converges to its minimum value after about 20 iterations (see Visualization 1).

Fig. 6. En-face images of a nanoparticle phantom (a) before and (b) after CAO. (e) 2D line-profiles of the particle indicated by the blue and yellow zoomed-in images (c) and (d). (f) Image entropy during the correction process. See Visualization 1.

4.2 Ex-vivo rabbit adipose tissue

Figure 7 shows the aberration correction results from imaging ex-vivo rabbit adipose tissue. The en-face image, located about 43.6 ${\mathrm{\mu}}$m (about 11.6 Rayleigh ranges) above the focal plane, is shown in Fig. 7(a). In the aberrated image, the structure of the adipose tissue is difficult to recognize. After the 3rd-order CAO aberration correction ($\sigma =20,\gamma =3)$, shown in Fig. 7(b), structural information was recovered across the field-of-view (FOV). The cell membrane boundaries appear sharper and the honeycomb structures are clearly reconstructed. The boundaries of the adipocytes become distinct and thinner in comparison with the original OCM image, and highly-scattering regions indicated by arrows become more focused and brighter after aberration correction. This demonstrates that CAO aberration correction can improve the resolution of cellular-level OCM imaging.

Fig. 7. Normalized en-face images of rabbit adipose tissue and individual adipocytes (a) before and (b) after 3rd-order Zernike polynomial CAO. (c) Image entropy during the correction process.

4.3 In-vivo human retina photoreceptors

Finally, in-vivo imaging of human retina photoreceptors was performed under Institutional Review Board protocols approved by the University of Illinois at Urbana-Champaign. The subject was a 28-year-old male volunteer with no known retinal pathologies. The cones between the perifoveal and parafoveal regions ($3.5^\circ$ to $4.7^\circ$ from the fovea) were imaged by the high-speed en-face OCT system introduced in [40]. The original en-face image is shown in Fig. 8(a), and Fig. 8(c) is the zoomed-in image of the region marked by the white square. The photoreceptors are clearly blurred by aberrations from both the OCT system and the subject's eye, and it is difficult to recognize individual photoreceptors in the zoomed-in image. Due to the high-order aberration introduced by the subject's eye, 4th-order Zernike polynomials, corresponding to 12 degrees of freedom, were applied for the aberration correction. Figures 8(b) and 8(d) are the reconstruction results after CAO aberration correction ($\sigma =0.3,\gamma =3)$. The 2D profiles of the photoreceptors illustrated by the green dashed lines in Figs. 8(b) and 8(d) are shown in Fig. 8(f). The phase map of the estimated aberration is shown in Fig. 8(e). After correction, the cone photoreceptors are distinctly visualized across the whole FOV. The shape of each photoreceptor is more clearly recognizable, and this high resolution image can be used for photoreceptor counting. The improvement in image quality is also validated by the identification of Yellott's ring [40]. The angularly averaged power spectrum of the photoreceptors in polar coordinates is shown in Fig. 8(g). A peak from Yellott's ring, indicated by the black arrow, reflects the regular arrangement of the cone photoreceptors. Meanwhile, the amplitude of the high spatial frequency content increases after correction, as shown in Fig. 8(g), which also demonstrates the improvement of the image resolution across the entire FOV.
Therefore, high quality imaging of human retina photoreceptors after CAO correction could provide valuable image-based data for the diagnosis of fundus diseases, and potentially for the observation of neural activity. The size of the image is $340\times 340$ pixels, and the correction time for 12 degrees of freedom is about 2 s. The computation time is mainly determined by the image size, and parallel computing on an FPGA could further accelerate the correction process, showing potential for real-time clinical imaging in the future.

Fig. 8. Normalized in-vivo images of the cone photoreceptor mosaic in the human retina (a) before and (b) after CAO. (c) and (d) are zoomed-in images of the regions indicated by the white squares in (a) and (b), respectively. (e) Phase map of the computed aberration. (f) 2D line profiles of the photoreceptors indicated by the green dashed lines in (c) and (d). (g) Plots of the radial average power spectra of the original image before CAO (a) and the corrected image after CAO (b). The black arrow indicates an enhanced spatial frequency indicative of Yellott’s ring. The scale bar is $0.25^\circ$.

4.4 Robustness for different samples

To demonstrate that the proposed aberration correction is robust for different samples, we ran the algorithm 100 times for each of the three samples described above. These images were acquired from different samples at different times, or on different imaging systems; thus, the aberrations and noise in these images differ from each other. The entropy distributions after 100 runs for the three samples are shown in Fig. 9. The entropies are highly concentrated, with standard deviations of $8.9\times 10^{-3}$, $2.1\times 10^{-2}$, and $3.3\times 10^{-3}$ for the particle phantom, ex-vivo rabbit adipose tissue, and in-vivo human retina photoreceptors, respectively. These results prove that, even for samples and systems with different aberrations and noise, the variance of the entropy remains consistently small, demonstrating that robust optimization results can be obtained for samples with different properties.


Fig. 9. Entropy distributions after 100 runs of the SPGD algorithm for (a) particle phantom, (b) ex-vivo rabbit adipose tissue, and (c) in-vivo human retina photoreceptors.


5. Conclusions

In this paper, we present an automated fast CAO aberration correction algorithm based on the SPGD method in optical coherence tomography. Aberration-corrected images can be obtained by CAO through ordinary phase-stable OCT systems without the addition of any extra hardware elements. The SPGD algorithm is a model-free method, and the convergence of the optimization process is nearly independent of the number of variables. This enables complex high-order aberration correction for dynamic samples. The SPGD-based CAO updates the phase filter in each iteration by stochastically perturbing the Zernike coefficients in parallel to keep the image entropy decreasing until it converges. The performance of the proposed SPGD-based CAO was first demonstrated on a simulated USAF resolution target image. The coefficients of the Zernike terms estimated through our method were equally accurate for both low-order and high-order aberrations. Importantly, the computing time for 12 aberration modes increases by only about 10% compared to correction with 7 terms. The feasibility of the SPGD algorithm was then demonstrated by correcting images of samples with different scattering properties: we conducted aberration correction experiments on a low-scattering particle phantom, ex-vivo rabbit adipose tissue with individual adipocytes, and in-vivo human retina photoreceptors. The images after aberration correction all showed a significant improvement in contrast and signal-to-noise ratio.
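The per-iteration update described above (Eqs. (11)–(12)) can be sketched as follows. This is a minimal illustration of the SPGD update rule under our own sign convention (a negative gain drives the metric down), not the authors' code; the gain and perturbation amplitude are arbitrary illustrative values.

```python
import numpy as np

def spgd_step(A, entropy_of, gamma=-0.5, delta=0.1):
    """One SPGD iteration (Eqs. (11)-(12)): perturb all Zernike coefficients A
    in parallel by a random bipolar amount +/-delta, measure the entropy
    difference between the +dA and -dA trial corrections, and update every
    coefficient at once. A negative gain gamma drives the metric downward."""
    dA = delta * np.random.choice([-1.0, 1.0], size=np.shape(A))
    dE = entropy_of(A + dA) - entropy_of(A - dA)   # Eq. (12)
    return A + gamma * dE * dA                      # Eq. (11)
```

Because all coefficients are perturbed simultaneously, each iteration costs only two metric evaluations regardless of the number of Zernike terms, which is consistent with the correction time growing only slightly from 7 to 12 terms.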

Compared to the Rprop algorithm, while maintaining good correction performance, the proposed SPGD method achieves better run-time performance due to its parallel computation. Moreover, because of the low computational complexity of updating the gain coefficient matrix, the correction time for high-order aberrations increases only slightly. The proposed method therefore shows great potential for future real-time aberration-corrected imaging in clinical applications.

Funding

National Key Research and Development Program of China (2019YFB2005500); National Natural Science Foundation of China (U1931120); Foundation of Key Laboratory of Optical System Advanced Manufacturing Technology, Chinese Academy of Sciences (KLOMT190201); Fundamental Research Funds for the Central Universities (30919011277, 30919011278); National Institutes of Health (R01CA213149, R01EB028615); Air Force Office of Scientific Research (FA9550-17-1-0387).

Acknowledgments

We thank the former members of the Biophotonics Imaging Laboratory for their contributions to the OCM system and en-face OCT system, including Y. Z. Liu, S. G. Adie, and N. D. Shemonski. The authors would also like to thank R. R. Iyer for his useful comments and discussions. Additional information can be found at http://biophotonics.illinois.edu.

Disclosures

The authors declare that there are no conflicts of interest related to this article.


Supplementary Material (1)

Visualization 1: Aberration correction performance of the particle phantom during iteration.



Figures (9)

Fig. 1. Schematic of the SPGD algorithm-based computational adaptive optics.

Fig. 2. Flowchart for the stochastic parallel gradient descent (SPGD) algorithm.

Fig. 3. (a) Simulated aberrated USAF resolution target image with 3rd-order Zernike polynomials. (b) Corrected image after SPGD-based CAO. (c) and (d) Zoomed-in images of the central region illustrated by white squares in (a) and (b), respectively. (e) Phase map of the introduced 3rd-order aberration distortion. (f) Phase deviation between recovered and introduced wavefront aberration. (g) Image entropy during the correction process. (h) Coefficients of introduced and computed Zernike polynomials.

Fig. 4. Correction results of 500 random operations of the proposed algorithm. (a) Differences between the calculated and introduced coefficients of Zernike terms in each stochastic run. (b) Ratios of RMS$_{residual}$ to RMS$_{introduced}$. (c) Distribution of the entropy for the corrected images. (d)-(f) Representative corrected images with different entropies. (g) Zoomed-in image of the central group marked by red square in (e). (h) Profile of elements in group #9 after correction in (e) (illustrated by blue solid line); profile differences between (d) and (e) (illustrated by orange solid line) and between (f) and (e) (illustrated by orange dotted line).

Fig. 5. (a) Simulated aberrated USAF resolution target image with 4th-order Zernike polynomials. (b) Corrected image after SPGD-based CAO. (c) Image entropy during the correction process. (d) Phase map of the introduced 4th-order aberration distortion. (e) Phase deviation between recovered and introduced wavefront aberration. (f) Coefficients of introduced and computed Zernike polynomials. (g) Comparison of the computing time of different number of Zernike terms between the proposed SPGD method and the Rprop algorithm.

Fig. 6. En-face images of a nanoparticle phantom (a) before and (b) after CAO. (e) 2D line-profiles of the particle indicated by the blue and yellow zoomed-in images (c) and (d). (f) Image entropy during the correction process. See Visualization 1.

Fig. 7. Normalized en-face images of rabbit adipose tissue and individual adipocytes (a) before and (b) after 3rd-order Zernike polynomial CAO. (c) Image entropy during the correction process.

Fig. 8. Normalized in-vivo images of the cone photoreceptor mosaic in the human retina (a) before and (b) after CAO. (c) and (d) are zoomed-in images of the regions indicated by the white squares in (a) and (b), respectively. (e) Phase map of the computed aberration. (f) 2D line profiles of the photoreceptors indicated by the green dashed lines in (c) and (d). (g) Plots of the radial average power spectra of the original image before CAO (a) and the corrected image after CAO (b). The black arrow indicates an enhanced spatial frequency indicative of Yellott’s ring. The scale bar is $0.25^\circ$.

Fig. 9. Entropy distributions after 100 runs of the SPGD algorithm for (a) particle phantom, (b) ex-vivo rabbit adipose tissue, and (c) in-vivo human retina photoreceptors.

Equations (12)

$$g(x,y) = o(x,y) \otimes h(x,y),$$
$$G(u,v) = O(u,v)\,H(u,v),$$
$$H(u,v) = \mathcal{F}[h(x,y)] = \mathcal{F}\{\mathcal{F}^{-1}[p(u,v)]\} = p(u,v).$$
$$P(u,v) = p(u,v)\exp[i\varphi_a(u,v)].$$
$$H_a(u,v) = P(u,v) = p(u,v)\exp[i\varphi_a(u,v)].$$
$$G_a(u,v) = O(u,v)\,p(u,v)\exp[i\varphi_a(u,v)].$$
$$G_c(u,v) = G_a(u,v)\exp[-i\varphi_a(u,v)].$$
$$g_c(x,y) = \mathcal{F}^{-1}[G_c(u,v)].$$
$$\varphi_a(u,v) = \sum_{n=1}^{N} a_n Z_n(u,v) = A\,Z(u,v),$$
$$E[i_n(x,y)] = -\sum_{x,y} i_n(x,y)\log i_n(x,y).$$
$$A(m+1) = A(m) + \gamma\,\delta E(m)\,\delta A(m), \quad m = 1, 2, \ldots, M,$$
$$\delta E(m) = E_{+} - E_{-} = E\big(A(m) + \delta A(m)\big) - E\big(A(m) - \delta A(m)\big).$$