Gabor optical coherence tomographic angiography (GOCTA) (Part I): human retinal imaging in vivo

Open Access

Abstract

Recently, parallel high A-line speed and wide-field imaging for optical coherence tomography angiography (OCTA) have become more prevalent, resulting in a dramatic increase in data quantity that poses a challenge for real-time imaging even when GPUs are used for data processing. In this manuscript, we propose a new OCTA processing technique, Gabor optical coherence tomographic angiography (GOCTA), for label-free human retinal angiographic imaging. In spectral domain optical coherence tomography (SDOCT), previous OCTA algorithms require k-space resampling and a Fourier transform (FFT) over the entire set of interference fringes to calculate blood flow information, which is computationally intensive. Because the anterior-posterior radius of the adult eye is nearly constant, only three A-scan lines need to be processed to obtain the gross orientation of the retina using a spherical model. The en face microvascular images can then be obtained with the GOCTA algorithm directly from the interference fringes, without the steps of k-space resampling, numerical dispersion compensation, FFT, and maximum (or mean) intensity projection, improving data processing speed by 4 to 20 times over existing methods. GOCTA is potentially suitable for SDOCT systems in en face preview applications requiring real-time microvascular imaging.

© 2017 Optical Society of America

1. Introduction

Optical coherence tomography (OCT) [1], proposed in the early 1990s, is an emerging imaging modality for medical diagnostics and treatment. Owing to its non-invasiveness, high resolution, and high imaging speed, OCT is widely used for various tissues, e.g., the human retina and brain, and in cardiology and dermatology. In addition to microstructural imaging, OCT-based microvascular imaging algorithms are also widely used in medical imaging and play an increasingly important role. The first algorithm for extracting blood flow information was optical Doppler tomography (ODT) [2, 3], or color Doppler OCT (CDOCT) [4], which calculates the axial velocity component of moving scattering particles.

Morphological OCT microvasculature imaging, collectively termed OCT angiography (OCTA), has also been developed. In general, the OCTA algorithms available now can be divided into two categories according to processing mode. The first is the inter-line mode, such as Doppler variance phase resolved (DVPR) [3, 5], intensity-based modified Doppler variance (IBDV) [6, 7], and optical micro-angiography (OMAG) [8]. In the inter-line mode, the blood flow information is extracted from one frame of interference fringes at each position. For DVPR and IBDV, the statistical information of a small window is calculated to contrast the microvasculature, which requires a high A-line density. For OMAG, a piezo-stage is used in the reference arm to modulate the interference fringes, which increases the complexity of the OCT setup.

The second processing mode is inter-frame, which extracts blood flow information from multiple frames of structural images at each position; examples include phase variance OCT (PVOCT) [9–11], speckle variance OCT (SVOCT) [12–15], correlation mapping OCT (cmOCT) [16–18], split-spectrum amplitude-decorrelation angiography (SSADA) [19], differential standard deviation of log-scale intensity (DSDLI) [20], and ultrahigh sensitivity optical micro-angiography (UHS-OMAG) [21–24]. In this mode, the sensitivity for microvasculature detection can be improved because the time interval between two frames is longer than that between two A-scans, but motion artifacts are also more significant because of the increased time interval. PVOCT, SVOCT, cmOCT, SSADA, and DSDLI obtain blood vessel contrast by calculating statistical information from either phase or intensity images in the spatial domain. PVOCT calculates the variance of the phase difference between two frames. SVOCT and DSDLI calculate the variances of the intensity and of the differential intensity between two frames, respectively. Both cmOCT and SSADA calculate decorrelation coefficients, but in SSADA the full spectrum is divided into four sub-bands to improve microvascular image quality. In UHS-OMAG, the OMAG algorithm is performed in the slow scanning direction and the blood flow signal is calculated from both amplitude and phase signals, resulting in improved sensitivity.

Recently, parallel imaging [25–27] and wide-field imaging [28] have become more prevalent, resulting in a dramatic increase in data quantity that poses a challenge for real-time imaging even when using GPUs for data processing. For all of the above-mentioned algorithms, blood flow information is obtained in the spatial domain; SDOCT systems therefore require k-space resampling, dispersion compensation, a Fourier transform (FFT), and maximum (or mean) intensity projection (MIP) to reconstruct en face microvascular images, which appear to be the most useful display mode for clinical applications. Some of these steps require long processing times. For clinical applications such as retinal imaging, we recognize that OCTA images are typically used as en face image sets for clinical decision making, such as identifying an area of microvascular abnormality, after which depth-resolved information, such as cross-sectional structural OCT images of the retina in that region, is reviewed. Therefore, rapid en face OCTA image display at the time of scanning may be advantageous for screening retinal pathology as well as for focusing detailed examination on a smaller region of interest. Such capability may also be useful for less cooperative patients, where motion artifacts degrade OCTA images; in such scenarios, rapid en face OCTA may allow immediate feedback and re-scanning.

It is thus interesting to reflect that most existing OCTA algorithms go through the computationally intensive steps of depth-resolved image processing and, in the last steps, perform intensity projection in the depth direction and discard the depth information. In this work, we propose a Gabor optical coherence tomographic angiography (GOCTA) algorithm that extracts blood flow information directly from the interference fringes without the time-consuming steps mentioned above, which can decrease data processing time substantially.

2. Method

Figure 1 illustrates the data processing steps of GOCTA. In SDOCT, the interference fringes between the light backscattered from the sample and the light reflected by the reference mirror are detected by the spectrometer camera. The three-dimensional (3D) data set of spectral fringes can then be acquired by scanning the x- and y-galvanometer mirrors. The direct component (DC) of the interference (the auto-correlation of the reference beam) can be measured by blocking the sample arm, and the auto-correlation of the sample beam is negligible. After subtracting the DC component, the captured signal can be simplified as

$$I(x,\lambda,y)=S(\lambda)\int R(x,z,y)\,R_r\,\gamma_s\gamma_r\cos\!\left(\frac{4\pi}{\lambda}nz+\phi(x,y)+\phi_{dis}(\lambda)\right)\mathrm{d}z,\tag{1}$$
where x and y represent the scanning directions of the two galvanometers, λ is the wavelength, S(λ) is the power spectral density of the light source, R(x,z,y) and Rr are the backscattering coefficient of the sample and the reflectivity of the reference mirror, respectively, γs and γr are the input powers in the sample and reference arms, n is the refractive index, z represents depth, and ϕ(x,y) and ϕdis(λ) are the initial phase and the dispersion mismatch between the sample and reference arms. In the case of moving particles, the amplitude and frequency of the fringes vary with time. For two consecutive B-scans acquired at the same position, the amplitude or frequency of the components corresponding to moving particles differs. By subtracting the two B-scans, the components corresponding to static tissue can be removed, and the resultant signal originates from the moving particles. The differential signal can be expressed as
$$I'(x,\lambda,y)=I(x,\lambda,y_1)-I(x,\lambda,y_2),\tag{2}$$
where I(x,λ,y1) and I(x,λ,y2) are two consecutive B-scans from the same position.
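
For illustration, a minimal NumPy sketch of this differencing step is given below; the array names, shapes, and the explicit DC-spectrum subtraction are assumptions for illustration rather than the authors' MATLAB implementation.

```python
import numpy as np

def differential_fringes(fringes_y1, fringes_y2, dc_spectrum):
    """Compute Eq. (2): the difference of two repeated B-scans of raw
    spectral fringes acquired at the same position.

    fringes_y1, fringes_y2 : arrays of shape (n_ascans, n_spectral_pixels)
    dc_spectrum            : reference-arm spectrum measured with the
                             sample arm blocked
    """
    i_y1 = fringes_y1 - dc_spectrum   # DC-subtracted fringes, first pass
    i_y2 = fringes_y2 - dc_spectrum   # DC-subtracted fringes, repeat pass
    return i_y1 - i_y2                # static-tissue components cancel
```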

Fig. 1 Processing flow chart of Gabor optical coherence tomographic angiography (GOCTA). The right side shows the three A-scans processed once to determine the approximate retinal surface location for the entire 3D data set and to provide the Gabor filter parameters for the B-scan processing on the left side.

As shown in Fig. 2, the human eye as an optical system has a curved image plane on the retina near the fovea, with the optic nerve head in the vicinity. The anterior-posterior (AP) diameter of an emmetropic human adult eye is approximately 22 to 24 mm, which is relatively invariant across sex and age groups [29]. While the transverse diameter varies more according to the width of the orbit, for the area near the fovea and the optic nerve head the curvature can be approximated by the AP diameter. For the work presented below, we have chosen an AP diameter of 22 mm (i.e., a radius of curvature of 11 mm) for the human eye.

Fig. 2 Structure of the human eye. For the region covered by the dashed box, the curvature of the retinal surface can be approximated by the anterior-posterior (AP) diameter.

In GOCTA, the orientation of the retina is needed to generate the Gabor filters; it is estimated with a spherical model, which can be expressed as

$$(x-x_0)^2+(y-y_0)^2+(z_s-z_0)^2=R^2,\tag{3}$$
where (x0, y0, z0) and R are the center position and the radius, respectively, and zs is the depth of the retinal surface in the structural images. To calculate the center location (x0, y0, z0), three sets of (x, y, zs) are needed. Here, we performed an FFT on the three A-scans at the corners of the scan and calculated the surface depth at each. The retinal surface zs(x, y) can then be obtained from Eq. (3), approximating the region marked by the dashed box in Fig. 2.
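
A possible NumPy/SciPy sketch of this surface estimation is shown below, assuming a fixed 11 mm radius of curvature and three corner surface points already extracted by FFT; the solver choice, units (mm), and the sign of the square-root branch are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import fsolve

R_EYE = 11.0  # assumed radius of curvature in mm (AP diameter of 22 mm)

def fit_sphere_center(corner_pts):
    """Solve Eq. (3) for the sphere centre (x0, y0, z0) given the fixed
    radius and three corner surface points (x, y, zs), each in mm."""
    corner_pts = np.asarray(corner_pts, dtype=float)   # shape (3, 3)

    def residual(center):
        # One equation per corner point: |p - c|^2 - R^2 = 0
        return np.sum((corner_pts - center) ** 2, axis=1) - R_EYE ** 2

    # Initial guess: centre roughly one radius deeper than the surface
    guess = corner_pts.mean(axis=0) + np.array([0.0, 0.0, R_EYE])
    return fsolve(residual, guess)

def surface_depth(x, y, center):
    """Evaluate zs(x, y) on the fitted sphere; taking the branch with the
    centre posterior to the surface is a geometric assumption here."""
    x0, y0, z0 = center
    return z0 - np.sqrt(R_EYE ** 2 - (x - x0) ** 2 - (y - y0) ** 2)
```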

Within the measured interference fringes, the sample information at different depths is modulated onto different frequency components. As the Gabor filter is a linear filter, the frequency components within a specific frequency range can be obtained directly by convolution [30], which is equivalent to multiplying by a Gaussian function in the spatial domain. For example, the Gaussian function g(z) = exp[−4 ln2 (z − δz)²/Δz²] can be used to extract the sample information within the depth range of δz − Δz/2 to δz + Δz/2, where δz and Δz are the center depth and the depth range, respectively. Taking the refractive index and the round-trip optical path into account, the filter can be obtained by performing an FFT on the above Gaussian function and is expressed as

$$G(x,k,y)=\exp\!\left[-\frac{\pi^2(n\Delta z)^2(k-k_0)^2}{\ln 2}\right]\cos\!\left[2\pi(k-k_0)\bigl(z_s(x,y)+2n\,\delta z\bigr)+\varphi_0\right],\tag{4}$$
where k and k0 are the wavenumber and the center wavenumber, respectively, and φ0 is the initial phase. The wavelength-based Gabor filter G(x,λ,y) is then calculated by performing a reverse sampling of G(x,k,y). By changing the values of Δz and δz, en face microvascular images at different depths and within different depth ranges can be obtained. By convolving the differential fringes with the Gabor filter, the new fringes are obtained as
$$I''(x,\lambda,y)=I'(x,\lambda,y)\otimes G(x,\lambda,y),\tag{5}$$
The GOCTA signal can then be obtained by calculating the standard deviation (STD) of the filtered differential fringes I''(x,λ,y), which is expressed as
$$\mathrm{GOCTA}(x,y)=\sqrt{\frac{1}{M}\sum_{n=1}^{M}\left[I''(x,\lambda_n,y)-I''_{mean}(x,y)\right]^2},\tag{6}$$
where M is the pixel number of the CCD and I''mean(x,y) is the mean value of each filtered A-scan. By calculating the GOCTA signal at each position in the 3D data set of spectral fringes, en face microvascular images can be obtained directly.
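
The core of the method can be summarized in a short NumPy sketch, reusing the surface model above; the wavenumber conversion, refractive index, depth window, and units are assumptions for illustration and not the authors' MATLAB code.

```python
import numpy as np

def gabor_filter(lam, zs_xy, dz_center, dz_range, n=1.38, phi0=0.0):
    """Eq. (4) sampled on the camera wavelength axis `lam` (the reverse
    sampling from k to lambda is approximated here by evaluating the
    filter directly at k = 1/lambda). `zs_xy` is the fitted surface depth
    at this (x, y); `dz_center` and `dz_range` are the depth window centre
    and width; all lengths in consistent units (e.g. mm)."""
    k = 1.0 / lam
    k0 = 1.0 / lam[lam.size // 2]           # centre wavenumber (assumed)
    envelope = np.exp(-np.pi**2 * (n * dz_range)**2 * (k - k0)**2 / np.log(2))
    carrier = np.cos(2 * np.pi * (k - k0) * (zs_xy + 2 * n * dz_center) + phi0)
    return envelope * carrier

def gocta_ascan(diff_ascan, gfilt):
    """Eqs. (5)-(6) for one A-scan: convolve the differential fringe with
    the Gabor filter along the spectral axis and take the standard
    deviation of the result as the GOCTA signal."""
    filtered = np.convolve(diff_ascan, gfilt, mode="same")
    return filtered.std()

def gocta_enface_row(diff_fringes, lam, zs_row, dz_center=0.175, dz_range=0.35):
    """One en face row of GOCTA values from a differential B-scan of shape
    (n_ascans, n_spectral_pixels); a 0-350 um window below the surface
    (centre 175 um) is an illustrative choice matching the paper."""
    return np.array([
        gocta_ascan(diff_fringes[i],
                    gabor_filter(lam, zs_row[i], dz_center, dz_range))
        for i in range(diff_fringes.shape[0])
    ])
```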

3. Results and discussion

3.1 OCT imaging system

We applied the GOCTA algorithm to data sets of healthy human eyes acquired with a commercial SDOCT system (AngioVue, OptoVue Inc.) to verify its performance. This system operates at a center wavelength of 840 nm with axial and lateral resolutions of ~5 μm and ~15 μm, respectively, and an A-scan rate of 70,000 A-scans per second. In this work, the scanning range was 3 × 3 mm and each position was scanned twice.

3.2 Human retina imaging

We performed retinal OCT scanning on ten healthy volunteers. Example data for two local regions (the optic nerve head region and the fovea region) are shown in Figs. 3 and 4, respectively. The scanning ranges were 3 × 3 mm with 608 × 304 A-scans. The SVOCT, UHS-OMAG, and SSADA algorithms were applied to the same data set to calculate microvascular images for comparison, and their en face images were obtained by mean projection over the same depth range as that covered by the Gabor filters. All of the en face microvascular images were calculated within a depth of 0–350 μm below the retinal surface. The signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) of the en face microvascular images were also calculated for quantitative comparison; SNR and CNR were calculated by

$$\mathrm{SNR}=\bar{I}_{dy}/\sigma_{bg},\tag{7}$$
and
$$\mathrm{CNR}=(\bar{I}_{dy}-\bar{I}_{bg})/\sigma_{bg},\tag{8}$$
where $\bar{I}_{dy}$ and $\bar{I}_{bg}$ represent the mean values within the dynamic flow region and the background region, respectively, and $\sigma_{bg}$ is the standard deviation within the background region.
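
A minimal sketch of these two metrics, assuming boolean masks for the dynamic-flow and background pixels (as in Figs. 3(c) and 4(c)):

```python
import numpy as np

def snr_cnr(enface, flow_mask, bg_mask):
    """Eqs. (7)-(8): SNR and CNR of an en face angiogram from boolean
    masks selecting dynamic-flow and background pixels."""
    i_dy = enface[flow_mask].mean()
    i_bg = enface[bg_mask].mean()
    sigma_bg = enface[bg_mask].std()
    return i_dy / sigma_bg, (i_dy - i_bg) / sigma_bg
```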

Fig. 3 Comparison of the microvascular images at the optic nerve head region. (a) The structural surface calculated using Eq. (3); the three corners marked by black circles were calculated by FFT. (b) The image output from the commercial system. (c) The mask for dynamic blood flow signals (red) and background (blue) on the local region marked by the dashed rectangles in (d)-(g). (d)-(g) The microvascular images obtained by GOCTA, SVOCT, UHS-OMAG, and SSADA, respectively. (h), (j), (l), and (n) Zoomed-in views of the local regions marked by the dashed white rectangles in (d)-(g), respectively. (i), (k), (m), and (o) Histograms of the intensity values covered by mask (c), where red and blue represent the dynamic flow signal and background, respectively. (b) and (d)-(g) share the scale bar.

Fig. 4 Comparison of the microvascular images at the fovea region. (a) The structural surface calculated using Eq. (3); the three corners marked by black circles were calculated by FFT. (b) The image output from the commercial system. (c) The mask for dynamic blood flow signals (red) and background (blue) on the local region marked by the dashed rectangles in (d)-(g). (d)-(g) The microvascular images obtained by GOCTA, SVOCT, UHS-OMAG, and SSADA, respectively. (h), (j), (l), and (n) Zoomed-in views of the local regions marked by the dashed white rectangles in (d)-(g), respectively. (i), (k), (m), and (o) Histograms of the intensity values covered by mask (c), where red and blue represent the dynamic flow signal and background, respectively. (b) and (d)-(g) share the scale bar.

To quantitatively assess the microvascular and background signals for comparison, we applied double thresholding to the marked regions to obtain masks for dynamic signals (red) and background (blue), as shown in Figs. 3(c) and 4(c); these masks include vessels of different sizes. The results demonstrate that GOCTA provides image quality comparable to the other three algorithms in the vicinity of both the optic nerve head and the fovea, as shown by the comparable SNRs and CNRs. The images output by the commercial SSADA algorithm use duplicated optical scanning in two directions (x and y), with post-processing applied to these two data sets to suppress motion artifacts. We measured SNRs and CNRs on all ten healthy volunteers' data sets; the mean ± standard deviation of the SNRs for GOCTA, SVOCT, UHS-OMAG, and SSADA were 25 ± 2, 23 ± 2, 23 ± 2, and 22 ± 1, respectively, and the corresponding CNRs were 14 ± 2, 9 ± 2, 11 ± 3, and 9 ± 2. These results demonstrate that the proposed GOCTA algorithm provides comparable image quality.
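
The mask generation can be sketched as below; the specific double-threshold rule and the threshold values are assumptions, since the paper does not state them explicitly.

```python
import numpy as np

def double_threshold_masks(enface_roi, t_low, t_high):
    """One possible double-threshold rule: pixels brighter than t_high are
    labelled dynamic flow, pixels darker than t_low are labelled
    background, and pixels in between are excluded from both masks."""
    flow_mask = enface_roi > t_high
    bg_mask = enface_roi < t_low
    return flow_mask, bg_mask
```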

3.3 Comparison of data processing time for the four algorithms

It is important to recognize that the commercial system's output SSADA images, shown in Figs. 3(b) and 4(b), are based on Figs. 3(f) and 4(f), upon which motion-artifact correction, sub-regional registration, and additional contrast enhancement are applied using the two scanning directions (x and y) of the data set. We note that the GOCTA image quality, both qualitatively and quantitatively, was at least on par with (and possibly superior to) the unidirectional-scan results of SSADA. These subsequent data processing techniques of the commercial system are proprietary and thus were not compared for computational complexity and time; in principle, they could be applied equivalently to any en face data set.

The main advantage of our newly proposed GOCTA algorithm is its processing speed. We processed the same data set on the same computer using the published SVOCT, UHS-OMAG, and SSADA algorithms implemented in MATLAB. Note that, in order to obtain the data sets used to post-process the commercial SSADA image, scanning in both the x and y directions was performed and the SSADA algorithm had to be repeated, which doubled its numerical processing time. The data processing was performed on a laptop (CPU: Intel i7-4720HQ, memory: 16 GB, GPU: NVIDIA GeForce GTX 970M, operating system: Windows 8.1). The data processing time for each pair of B-scans from the same position was measured, and the results are shown in Table 1.
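
A rough timing harness of the kind used for such a comparison is sketched below in Python; the repetition count and the interface are assumptions, and the MATLAB timings of Table 1 are not reproduced here.

```python
import time

def time_per_bscan_pair(process_fn, fringes_y1, fringes_y2, n_repeats=20):
    """Average wall-clock time of one angiography routine applied to a
    single repeated B-scan pair, analogous to the per-pair comparison in
    Table 1. `process_fn` is any routine taking the two fringe frames."""
    start = time.perf_counter()
    for _ in range(n_repeats):
        process_fn(fringes_y1, fringes_y2)
    return (time.perf_counter() - start) / n_repeats
```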

Table 1. Comparison of data processing time for each pair of B-scans acquired at the same position

GOCTA is the only algorithm that can directly provide en face microvascular images without an FFT. In SVOCT, UHS-OMAG, and SSADA, the steps of k-space resampling, dispersion compensation, and FFT are computationally costly, making them 6, 4, and 20 times slower than GOCTA, respectively. Since GOCTA does not require resampling, dispersion compensation, or FFT, the total processing time decreases dramatically.

Using a GPU-based parallel computing library in MATLAB, we also measured the data processing time for the entire 3D (608 × 2048 × 304) data set; the results are shown in Table 2. Note that these results are only illustrative, as additional GPU algorithm optimization is possible outside of the MATLAB programming environment; however, the comparison serves to show that the different components of the overall computation can be GPU-accelerated to different extents. We simulated the data processing on the GPU by assuming that the acquired interference fringes were transferred to the GPU frame by frame (2048 × 304) as each B-mode data set became available. For such a scheme to work, the real-time OCT imaging system must first acquire the three corner A-scans and obtain the spherical approximation of the retina, and then apply GOCTA (as well as the other processing algorithms for comparison). The processing time for these three A-scans is negligible with respect to the overall processing time.
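
The frame-by-frame scheme can be sketched as follows, reusing the earlier gocta_enface_row sketch; the generator interface is an assumption, and on a GPU the per-frame arithmetic would simply be moved to a device array library.

```python
import numpy as np

def stream_gocta(frame_source, lam, zs_rows):
    """Process repeated B-scan pairs as they arrive (one frame pair per
    slow-axis position) and build the en face GOCTA image row by row.
    `frame_source` is assumed to yield (row_index, fringes_y1, fringes_y2)
    with fringes of shape (n_ascans, n_spectral_pixels)."""
    rows = {}
    for row, f_y1, f_y2 in frame_source:
        diff = f_y1 - f_y2            # DC spectrum cancels in the difference
        rows[row] = gocta_enface_row(diff, lam, zs_rows[row])
    return np.stack([rows[r] for r in sorted(rows)])
```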

Table 2. Data processing time for the entire 3D (608 × 2048 × 304) data set on CPU and GPU

It is also important to note that the steps of k-space resampling, numerical dispersion compensation, and image alignment require adjustment of both matrix amplitudes and matrix indices using algorithms such as spline fitting, for which no GPU-based parallel computing MATLAB library was readily available. These steps were therefore kept as CPU operations in the current analysis, but they could be further improved outside of the MATLAB environment. Nevertheless, since the overall computational complexity of GOCTA is lower than that of SVOCT, OMAG, and SSADA, the above analysis shows that GOCTA is faster to compute under GPU acceleration. In this work, k-space resampling was accomplished using cubic spline interpolation.

4. Limitations and conclusion

In the proposed algorithm, the Gabor filter parameters were deliberately chosen such that a large number of zero-valued filter coefficients is produced, simplifying the computation and reducing the time needed for the convolution in digital filtering. In this work, the microvascular images within a depth range of 350 µm (10% of the total OCT ranging depth) below the spherically fitted retinal surface were calculated for analysis and comparison. In this case, the non-zero segment of the Gabor filter in Eq. (4) is only 16 pixels long, resulting in a substantial decrease in computational complexity.
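
This truncation can be sketched as below; the tolerance used to decide which filter coefficients are negligible is an assumption.

```python
import numpy as np

def truncate_filter(gfilt, rel_tol=1e-6):
    """Keep only the short segment of the Gabor filter whose coefficients
    are non-negligible (about 16 pixels with the parameters above), so the
    convolution in Eq. (5) runs over far fewer taps."""
    keep = np.flatnonzero(np.abs(gfilt) > rel_tol * np.abs(gfilt).max())
    return gfilt[keep[0]:keep[-1] + 1]
```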

Because the proposed GOCTA algorithm skips the calculation of depth-resolved structural images (in the z-direction), one of its limitations is that structural image alignment in the z-direction cannot be performed for motion artifact removal. However, x- and y-direction en face image registration and alignment can still be applied (e.g., [31, 32]). The curvature of the lens system can affect the accuracy of the estimated retinal orientation; for slight curvature, GOCTA images will not be affected, because the depth range of the Gabor filter is 10% of the total OCT ranging depth. In the case of significant curvature, the relative shift at each pixel can be obtained by scanning a mirror, and the estimated retinal orientation can be compensated in software. Another limitation is that GOCTA can only provide en face images; however, as our results show, such preview images can still provide useful diagnostic information for the ophthalmologist. Finally, we note that commercial systems may use algorithms other than cubic spline interpolation for k-space resampling to improve processing efficiency, with a trade-off in image quality.

As the results show, the SNRs and CNRs obtained by GOCTA are slightly higher than those of the other three algorithms. The reason might be that the proposed algorithm uses a large range of frequency components (the sample information within the depth range of δz − Δz/2 to δz + Δz/2 in the spatial domain) to calculate the blood flow information, which is more robust than the other three algorithms, in which only the sample information at a single depth is used before a maximum (or mean) projection is performed to generate the en face microvascular images.

In conclusion, we have proposed a novel Gabor optical coherence tomographic angiography (GOCTA) algorithm to obtain the microvasculature of the human retina on a standard ophthalmic SDOCT system. By obviating the need for resampling, dispersion compensation, and FFT, the proposed algorithm achieves data processing speeds 6, 4, and 20 times faster than SVOCT, UHS-OMAG, and SSADA, respectively. GOCTA is ideally suited for SDOCT systems used in wide-field scanning, ultra-high spectral resolution, or parallel high A-line speed applications, where large amounts of data are generated. For real-time imaging, GOCTA can also be performed on graphics processing units (GPUs) to increase the data processing speed further. We consider the proposed algorithm potentially useful for computing preview OCTA images as a first-line en face display for the clinician, improving the efficiency of disease screening and diagnosis in a busy clinical environment.

Acknowledgment

The authors thank Joel Ramjist of the Biophotonics and Bioengineering Laboratory, Ryerson University, for acquiring the image data, and Dr. Sandra Black of the Brain Sciences Program, Sunnybrook Research Institute, for a partial funding contribution to the OptoVue OCT system. This research work is supported by the Canada Research Chair program of the Natural Sciences and Engineering Research Council of Canada.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References and links

1. D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991). [CrossRef]   [PubMed]  

2. Z. Chen, T. E. Milner, S. Srinivas, X. Wang, A. Malekafzali, M. J. C. van Gemert, and J. S. Nelson, “Noninvasive imaging of in vivo blood flow velocity using optical Doppler tomography,” Opt. Lett. 22(14), 1119–1121 (1997). [CrossRef]   [PubMed]  

3. V. Yang, M. Gordon, B. Qi, J. Pekar, S. Lo, E. Seng-Yue, A. Mok, B. Wilson, and I. Vitkin, “High speed, wide velocity dynamic range Doppler optical coherence tomography (Part I): System design, signal processing, and performance,” Opt. Express 11(7), 794–809 (2003). [CrossRef]   [PubMed]  

4. J. A. Izatt, M. D. Kulkarni, S. Yazdanfar, J. K. Barton, and A. J. Welch, “In vivo bidirectional color Doppler flow imaging of picoliter blood volumes using optical coherence tomography,” Opt. Lett. 22(18), 1439–1441 (1997). [CrossRef]   [PubMed]  

5. Y. Zhao, Z. Chen, C. Saxer, Q. Shen, S. Xiang, J. F. de Boer, and J. S. Nelson, “Doppler standard deviation imaging for clinical monitoring of in vivo human skin blood flow,” Opt. Lett. 25(18), 1358–1360 (2000). [CrossRef]   [PubMed]  

6. G. Liu, L. Chou, W. Jia, W. Qi, B. Choi, and Z. Chen, “Intensity-based modified Doppler variance algorithm: application to phase instable and phase stable optical coherence tomography systems,” Opt. Express 19(12), 11429–11440 (2011). [CrossRef]   [PubMed]  

7. G. Liu, A. J. Lin, B. J. Tromberg, and Z. Chen, “A comparison of Doppler optical coherence tomography methods,” Biomed. Opt. Express 3(10), 2669–2680 (2012). [CrossRef]   [PubMed]  

8. R. K. Wang, S. L. Jacques, Z. Ma, S. Hurst, S. R. Hanson, and A. Gruber, “Three Dimensional Optical Angiography,” Opt. Express 15(7), 4083–4097 (2007). [CrossRef]   [PubMed]  

9. J. Fingler, D. Schwartz, C. Yang, and S. E. Fraser, “Mobility and transverse flow visualization using phase variance contrast with spectral domain optical coherence tomography,” Opt. Express 15(20), 12636–12653 (2007). [CrossRef]   [PubMed]  

10. D. Y. Kim, J. Fingler, J. S. Werner, D. M. Schwartz, S. E. Fraser, and R. J. Zawadzki, “In vivo volumetric imaging of human retinal circulation with phase-variance optical coherence tomography,” Biomed. Opt. Express 2(6), 1504–1513 (2011). [CrossRef]   [PubMed]  

11. I. Gorczynska, J. V. Migacz, R. J. Zawadzki, A. G. Capps, and J. S. Werner, “Comparison of amplitude-decorrelation, speckle-variance and phase-variance OCT angiography methods for imaging the human retina and choroid,” Biomed. Opt. Express 7(3), 911–942 (2016). [CrossRef]   [PubMed]  

12. J. Barton and S. Stromski, “Flow measurement without phase information in optical coherence tomography images,” Opt. Express 13(14), 5234–5239 (2005). [CrossRef]   [PubMed]  

13. A. Mariampillai, B. A. Standish, E. H. Moriyama, M. Khurana, N. R. Munce, M. K. K. Leung, J. Jiang, A. Cable, B. C. Wilson, I. A. Vitkin, and V. X. D. Yang, “Speckle variance detection of microvasculature using swept-source optical coherence tomography,” Opt. Lett. 33(13), 1530–1532 (2008). [CrossRef]   [PubMed]  

14. A. Mariampillai, M. K. K. Leung, M. Jarvi, B. A. Standish, K. Lee, B. C. Wilson, A. Vitkin, and V. X. D. Yang, “Optimized speckle variance OCT imaging of microvasculature,” Opt. Lett. 35(8), 1257–1259 (2010). [CrossRef]   [PubMed]  

15. C. Chen, K. H. Y. Cheng, R. Jakubovic, J. Jivraj, J. Ramjist, R. Deorajh, W. Gao, E. Barnes, L. Chin, and V. X. D. Yang, “High speed, wide velocity dynamic range Doppler optical coherence tomography (Part V): Optimal utilization of multi-beam scanning for Doppler and speckle variance microvascular imaging,” Opt. Express 25(7), 7761–7777 (2017). [CrossRef]   [PubMed]  

16. J. Enfield, E. Jonathan, and M. Leahy, “In vivo imaging of the microcirculation of the volar forearm using correlation mapping optical coherence tomography (cmOCT),” Biomed. Opt. Express 2(5), 1184–1193 (2011). [CrossRef]   [PubMed]  

17. C. Chen, W. Shi, and W. Gao, “Imaginary part-based correlation mapping optical coherence tomography for imaging of blood vessels in vivo,” J. Biomed. Opt. 20(11), 116009 (2015). [CrossRef]   [PubMed]  

18. C. Chen, J. Liao, and W. Gao, “Cube data correlation-based imaging of small blood vessels,” Opt. Eng. 54(4), 043104 (2015). [CrossRef]  

19. Y. Jia, O. Tan, J. Tokayer, B. Potsaid, Y. Wang, J. J. Liu, M. F. Kraus, H. Subhash, J. G. Fujimoto, J. Hornegger, and D. Huang, “Split-spectrum amplitude-decorrelation angiography with optical coherence tomography,” Opt. Express 20(4), 4710–4725 (2012). [CrossRef]   [PubMed]  

20. W. Shi, W. Gao, C. Chen, and V. X. D. Yang, “Differential standard deviation of log-scale intensity based optical coherence tomography angiography,” J. Biophoton. (2017).

21. L. An, J. Qin, and R. K. Wang, “Ultrahigh sensitive optical microangiography for in vivo imaging of microcirculations within human skin tissue beds,” Opt. Express 18(8), 8220–8228 (2010). [CrossRef]   [PubMed]  

22. L. An, T. T. Shen, and R. K. Wang, “Using ultrahigh sensitive optical microangiography to achieve comprehensive depth resolved microvasculature mapping for human retina,” J. Biomed. Opt. 16(10), 106013 (2011). [CrossRef]   [PubMed]  

23. Z. Zhi, Y. Jung, Y. Jia, L. An, and R. K. Wang, “Highly sensitive imaging of renal microcirculation in vivo using ultrahigh sensitive optical microangiography,” Biomed. Opt. Express 2(5), 1059–1068 (2011). [CrossRef]   [PubMed]  

24. S. Yousefi, J. Qin, and R. K. Wang, “Super-resolution spectral estimation of optical micro-angiography for quantifying blood flow within microcirculatory tissue beds in vivo,” Biomed. Opt. Express 4(7), 1214–1228 (2013). [CrossRef]   [PubMed]  

25. J. Barrick, A. Doblas, M. R. Gardner, P. R. Sears, L. E. Ostrowski, and A. L. Oldenburg, “High-speed and high-sensitivity parallel spectral-domain optical coherence tomography using a supercontinuum light source,” Opt. Lett. 41(24), 5620–5623 (2016). [CrossRef]   [PubMed]  

26. B. Grajciar, Y. Lehareinger, A. F. Fercher, and R. A. Leitgeb, “High sensitivity phase mapping with parallel Fourier domain optical coherence tomography at 512 000 A-scan/s,” Opt. Express 18(21), 21841–21850 (2010). [CrossRef]   [PubMed]  

27. O. P. Kocaoglu, T. L. Turner, Z. Liu, and D. T. Miller, “Adaptive optics optical coherence tomography at 1 MHz,” Biomed. Opt. Express 5(12), 4186–4200 (2014). [CrossRef]   [PubMed]  

28. J. Xu, W. Wei, S. Song, X. Qi, and R. K. Wang, “Scalable wide-field optical coherence tomography-based angiography for in vivo imaging applications,” Biomed. Opt. Express 7(5), 1905–1919 (2016). [CrossRef]   [PubMed]  

29. P. Riordan-Eva and J. P. Whitcher, Vaughan & Asbury's General Ophthalmology, 18th ed. (Lange/McGraw-Hill, 2011), pp. 254–273.

30. P. Meemon, J. Widjaja, and J. P. Rolland, “Spectral fusing Gabor domain optical coherence microscopy,” Opt. Lett. 41(3), 508–511 (2016). [CrossRef]   [PubMed]  

31. M. Heisler, S. Lee, Z. Mammo, Y. Jian, M. Ju, A. Merkur, E. Navajas, C. Balaratnasingam, M. F. Beg, and M. V. Sarunic, “Strip-based registration of serially acquired optical coherence tomography angiography,” J. Biomed. Opt. 22(3), 036007 (2017). [CrossRef]   [PubMed]  

32. P. Zang, G. Liu, M. Zhang, C. Dongye, J. Wang, A. D. Pechauer, T. S. Hwang, D. J. Wilson, D. Huang, D. Li, and Y. Jia, “Automated motion correction using parallel-strip registration for wide-field en face OCT angiogram,” Biomed. Opt. Express 7(7), 2823–2836 (2016). [CrossRef]   [PubMed]  
