
Noise and bias in optical coherence tomography intensity signal decorrelation


Abstract

Functional optical coherence tomography (OCT) imaging based on the decorrelation of the intensity signal has been used extensively in angiography and is finding use in flowmetry and therapy monitoring. In this work, we present a rigorous analysis of the autocorrelation function, introduce the concepts of contrast bias, statistical bias and variability, and identify the optimal definition of the second-order autocorrelation function (ACF) g(2) to improve its estimation from limited data. We benchmark different averaging strategies in reducing statistical bias and variability. We also developed an analytical correction for the noise contributions to the decorrelation of the ACF in OCT that extends the signal-to-noise ratio range in which ACF analysis can be used. We demonstrate the use of all the tools developed in the experimental determination of the lateral speckle size depth dependence in a rotational endoscopic probe with low NA, and we show the ability to more accurately determine the rotational speed of an endoscopic probe to implement NURD detection. We finally present g(2)-based angiography of the finger nailbed, demonstrating the improved results from noise correction and the optimal bias mitigation strategies.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

The coherent nature of optical coherence tomography (OCT) [1,2] imparts specific statistical properties to the speckle signal obtained in OCT tomograms [3]. The temporal statistics of the OCT signal, which evolves in time following a random process, is dictated by the dynamics of scatterers inside the resolution volume [4]. The analysis of these temporal statistics is generally carried out using autocorrelation analysis, although many techniques define their own correlation metrics. The measured decorrelation can be related to either changes in the microscopic configuration of the scatterers (such as in diffusion) or to the relative motion between an ensemble of scatterers and the OCT system (such as in translational motion). For a system with stationary optics, the most direct application of speckle decorrelation contrast is highlighting blood flow in angiographic techniques [5–9] and quantitative blood flow —flowmetry— [10–15]. Decorrelation arising from dynamic microscopic configurations of scatterers, such as during protein denaturation due to thermal damage, has been used to monitor thermal therapy [16–18]. For a system with moving optics, such as an endoscopic catheter, the decorrelation can be used to determine the motion of the catheter with respect to the tissue. In a rotating catheter with a flexible driveshaft and a distal rotation mechanism, dynamic changes in friction cause non-uniform rotation of the scanning optics. In this case decorrelation can be used to determine the rotation speed of the catheter and correct for non-uniform rotation distortion (NURD) to provide an accurate representation of the imaged tissue [19,20].

In autocorrelation analysis, the OCT intensity or complex-amplitude signal autocorrelation function (ACF) is estimated at different time points to determine the local dynamics of scatterers [see Figs. 1(a)–(c)]. Implicit to the definition of the ACF is the concept of ensemble averaging: the magnitude of the correlation can only be estimated when there exists an ensemble of observations. With finite sampling there is no direct access to the actual ACF, but merely an estimate of it. Although the first-order ACF $g^{(1)}$ (that is, of the complex-valued electric field) is universally defined, there exist multiple definitions of the second-order ACF $g^{(2)}$ (that is, of the intensity). Several definitions have been used in OCT [12,14] and in dynamic light scattering (DLS) literature [21]. Bias is a known problem in the estimation of any statistical parameter, and the second-order ACF is known to be biased [22,23]. In Figs. 1(d)–(e) we show how statistical bias modifies the shape of the ACF, with shorter time series exhibiting stronger bias. Bias has been studied in the context of time series used in stellar interferometers and DLS experiments, but both differ substantially from OCT experiments in terms of the number of samples, total sampling times and evaluated time lags; we thus expect bias to arise in regimes not yet identified in OCT applications. These experimental differences arise because, in general, the OCT signal is acquired at different lateral locations, each for a short period of time before scanning continues, to enable the creation of cross-sectional functional images. Thus, time series usually contain just a few samples in angiography and $\lesssim 100$ in flowmetry. However, to our knowledge, no systematic study has been carried out on the optimal approach for calculating the autocorrelation function in OCT: its optimal definition, its bias and approaches to reduce it.


Fig. 1. Autocorrelation analysis in OCT is used to determine local dynamics of scatterers, which adds functional contrast such as blood flowmetry. The evolution of the complex-amplitude OCT signal reveals presence of moving scatterers [upper blue curve in (a)], which can be quantified using the first-order ACF $g^{(1)}$ [blue curve in (b)]. When analyzing the OCT intensity signal [lower blue curve in (a)], the second-order ACF $g^{(2)}$ is used for quantification [blue curve in (c)]. In both cases, the decay of the ACF can be related to the scatterer dynamics —including diffusive and translational motion, indicated generically as $v$ in the figure. Noise in the OCT signal [green curves in (a)] is known to affect $g^{(1)}$ and $g^{(2)}$ [green curves in (b) and (c)], most commonly seen as a sharp initial drop in the ACF. Furthermore, $g^{(1)}(\tau =0)=1$ by definition, but $g^{(2)}(\tau =0)$ value is dependent on the contrast of the OCT intensity signal time series. Statistical bias manifests as a different estimated ACF decay for the same process when using time series with different lengths.


Apart from bias, noise is known to affect the temporal statistics of the OCT speckle signal [see Figs. 1(a)–(c)]. Makita et al. [24] developed a noise-corrected version of the complex-amplitude cross-correlation coefficient based on an additive noise model for the complex OCT signal. This model is straightforward to apply to the first-order ACF; however, no analytical expression is known for any of the definitions of the second-order ACF. Uribe-Patarroyo et al. [14] developed an empirical, numerical correction scheme for $g^{(2)}$. However, apart from requiring a cumbersome calibration procedure, its main drawback is the halving of the temporal resolution by dropping the first point in the autocorrelation function: this effectively halves the maximum decorrelation rate measurable with a given system. The new approach we present here does not suffer from any of these drawbacks.

In this work, we aim to study bias and the effect of noise in the second-order autocorrelation function, and to present strategies for correcting noise analytically and to prevent bias in the context of OCT. First, we introduce the model and identify the multiplicative nature of noise in the intensity signal. We then show an extension of the additive complex-amplitude noise model to calculate analytically the noise-corrected second-order ACF $g^{(2)}$ using three different commonly used definitions. We also introduce the concept of contrast bias and show the dependence of each definition on the contrast of a finite-sampled OCT speckle signal. Experimentally, we first validated the additive zero-mean complex circular Gaussian distribution model proposed by Makita et al. [24]. We then carried out a systematic study on the performance of the different definitions of $g^{(2)}$ and their statistical and contrast biases, and determined the best averaging strategy to reduce bias in $g^{(2)}$. Critically, we found that the ensemble size and averaging used can strongly determine the decay of the correlation and thus account for differences between experimental $g^{(2)}$ values and models. Finally, we show the impact of these optimal strategies in two exemplary applications: NURD detection in a rotational endoscopic system and angiographic contrast in a benchtop system. In the former, we used our results to determine the depth-resolved speckle size in a rotational-scan endoscopic probe, for which, to our knowledge, no model currently exists. As opposed to previous assumptions based on the behavior of benchtop raster-scan OCT systems [14,25], we show that the speckle size in a rotational configuration with low numerical aperture (NA) is roughly constant in depth in angular units, or linearly increasing in linear units. We make use of this result to show strategies to improve the robustness of NURD detection in endoscopic systems based on an ACF approach. In the second exemplary application, we show the improvement in image quality from the bias mitigation and noise correction in a $g^{(2)}$-based angiography metric in the finger nail vasculature of a healthy volunteer. For further information on the implementation of NURD correction —rather than just detection— and on the interpretation of angiographic contrast, we refer the reader to relevant publications in each corresponding section of this work. Together, these results will be valuable for the optimized implementation of many types of functional imaging which make use of the intensity fluctuations of the OCT signal, such as angiography, flowmetry and velocimetry.

2. Mathematical model

We first describe some generalities of the models that will be used in this work and then we present the noise-corrected versions of $g^{(1)}$ and the different definitions of $g^{(2)}$ in the following subsections. The detailed derivations for each case are presented in Appendix A.

In what follows, we define a coordinate system $(z,x,y)$, where $z$ represents depth along an A-line, $x$ denotes the in-plane lateral coordinate (within a B-scan) and $y$ denotes the out-of-plane lateral coordinate; $s$ represents any one or more of the spatial coordinates; $t$ represents time and $\tau$ a time lag.

2.1 Noise model

2.1.1 Noise in the complex OCT signal

We define the noisy complex-valued OCT signal $F$ as the addition of the noiseless signal $S$ and the noise $N$, which follows a zero-mean complex circular Gaussian distribution [24]

$$F=S+N.$$
Note that $S$ is also given by a zero-mean complex circular Gaussian distribution due to the speckle nature of the OCT signal [4,26], but in general has different distribution parameters from that of $N$ and different temporal dynamics. By definition, two random variables $X$ and $Y$ are uncorrelated if and only if $\langle XY \rangle =\langle X \rangle \langle Y \rangle$, where $\langle \cdots \rangle$ denotes an ensemble average. In terms of Eq. (1), we assume that noise and signal are uncorrelated and therefore
$$\langle S N \rangle =\langle S \rangle \langle N \rangle.$$
Note that being uncorrelated does not imply a zero-valued expected value of the cross-product above —the covariance is zero, but it is defined differently: $\langle \left (S - \langle S \rangle \right ) \left (N - \langle N \rangle \right )\rangle$. A discussion on temporal ($\langle \cdots \rangle _t$) and spatial ($\langle \cdots \rangle _s$) ensemble averaging is in order. Autocorrelation functions are defined in terms of ensembles of realizations of the signal. In practice, this corresponds to spatial averaging, where the signals from different scatterers undergoing the same dynamics are grouped into an ensemble. In an ergodic system, scatterers are not restricted in their configuration, and given enough time, a single group of scatterers will traverse the whole phase space and spatial averaging can be replaced by temporal averaging. In our model, we assume that the OCT signal represents that of an ergodic system. We also assume that $N$ corresponds to white noise: it has a flat power spectrum, and therefore $N$ is completely decorrelated across time and space. From the flat power spectrum of $N$ and its zero-mean complex circular Gaussian distribution, it follows that $\langle N \rangle _s=\langle N \rangle _t=0$ for sufficiently large ensemble sizes. We assume, to an excellent approximation, that all ensemble sizes used in practice are large enough to satisfy these equalities.

In terms of the signal $S$, $\langle S \rangle _s=\langle S \rangle _t=0$ by definition for sufficiently large ensembles as well. In this case the power spectrum is not flat, and thus the critical figure is not the number of samples within an ensemble, but the number of independent speckles. The speckle size is defined: in the axial dimension, by the zero-padding on the OCT fringe data; in the lateral coordinates, by the lateral sampling and its relation to the diffraction-limited resolution; and in the time dimension, by the dynamics of the signal. For these reasons, ensemble sizes for which $\langle N \rangle _s=\langle N \rangle _t=0$ holds do not guarantee the same for $S$. This is clear in applications such as angiography: static structures, such as non-perfused tissue, present a nearly constant value of $S$ and thus $\langle S \rangle _t\neq 0$. Similarly, in flowmetry, areas with slow flow will evolve only partially over the temporal extent of the ensemble and in general $\langle S \rangle _t \neq 0$. For these reasons, we do not make any assumptions on the statistics of the signal, in particular about the value of $\langle S \rangle _s$ or $\langle S \rangle _t$ in our model. Our model is thus general and it is not limited to specific applications. This aspect of the model also highlights an alternative point of view for the interpretation of autocorrelation analysis in OCT: in essence, the goal is to determine the speckle size in the time dimension at different locations in the sample, either considering complex-valued speckle with $g^{(1)}$ or its squared magnitude with $g^{(2)}$.

2.1.2 Noise in the intensity OCT signal

Moving to $g^{(2)}$, we now define the following intensities

$$T=\lvert S \rvert^2,\, Q=\lvert N \rvert^2,\, I=\lvert F \rvert^2;$$
importantly, the noisy tomogram intensity is given by
$$I = T + Q + 2\textrm{Re}{\{S^\ast N\}} = T + Q + 2\lvert S \rvert\lvert N \rvert\cos{\varphi},$$
where $\varphi$ is the relative phase between $S$ and $N$. Given the zero-mean complex circular Gaussian statistics of $N$, $Q$ —the additive component of the noise— is described by an exponential distribution with a given average value $\langle Q \rangle$, which is customarily defined as the “noise floor” in OCT. The signal-to-noise ratio (SNR) is traditionally calculated experimentally as $\hat {R}=\frac {\langle I \rangle }{\langle Q \rangle }$, although we will use the more rigorous definition ${R}=\frac {\langle T \rangle }{\langle Q \rangle }$ throughout this work. Given the nature of $N$, it is clear that the last term in Eq. (4) —the multiplicative component of the noise— is a rapidly oscillating term driven by the relative phase between noise and signal. Therefore, a given noiseless signal with intensity $T$ will be corrupted across different realizations of the noise by: the offset given by $Q$ and a rapidly fluctuating multiplicative term of magnitude $4|S||N|$. This implies that although the average contrast (ratio) of a signal with respect to the noise floor (noise in absence of signal) is indeed given by $R$, the noise of the signal itself (its fluctuations in time as a result of different realizations of the noise) is of magnitude $4|S||N|$, which is a more important metric when analyzing the temporal fluctuations of $I$ rather than the contrast of a static image. This noise scales with the square root of the signal, and we quantify it by means of the relative standard deviation (RSD) of the signal $\rho =\sigma _I/\langle I \rangle _t$, where $\sigma _I\equiv \sqrt {\langle I^2\rangle _t-\langle I \rangle _t^2}$ is the standard deviation in a time series. We note that given that $\sigma _{\cos {\varphi }}=\sqrt {2}/2$ when $\varphi$ is uniformly distributed, the model predicts
$$\rho=\sqrt{2}\frac{\langle |N|\rangle_t}{\langle |S|\rangle_t}.$$
Apart from the expectation that the RSD scales with the inverse square root of $I$, the fact that this component of the intensity noise is multiplicative makes it particularly deleterious to the calculation of the autocorrelation function of the signal and difficult to mitigate. We emphasize that we have not made any assumptions about the noise in the intensity signal, but rather propagated into the intensity the properties of the complex circular Gaussian noise $N$ defined in the complex-amplitude signal.
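
To make the multiplicative nature of this noise concrete, the following short numerical sketch (our own illustration, not part of the original analysis; the variable names are arbitrary) builds noisy intensities from the additive complex noise model of Eq. (1) and verifies the decomposition of Eq. (4), showing that for a static signal the cross term averages to zero but fluctuates far more strongly than the offset $\langle Q \rangle$:

```python
import numpy as np

rng = np.random.default_rng(0)
n_draws = 100_000                                     # independent noise realizations

S = 4.0 * np.exp(1j * rng.uniform(0, 2 * np.pi))      # fixed (static) complex amplitude
N = (rng.normal(size=n_draws) + 1j * rng.normal(size=n_draws)) / np.sqrt(2)

T = np.abs(S) ** 2                                    # noiseless intensity
Q = np.abs(N) ** 2                                    # additive noise component
cross = 2.0 * np.real(np.conj(S) * N)                 # multiplicative term 2|S||N|cos(phi)
I = np.abs(S + N) ** 2                                # noisy intensity

print(np.allclose(I, T + Q + cross))                  # the decomposition of Eq. (4) holds
print(Q.mean(), cross.mean(), cross.std())            # small offset vs. large fluctuations
```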

With the definition of the noise model considered above, we now move onto the development of analytical corrections to the different ACFs definitions to account for noise-induced decorrelation. We first consider that all ACFs compare the value of a function $K$ at different time points $K(t)$ and $K(t+\tau )$. We define the correlation window size $n_{\omega }$ as the number of time points contained in the time series without regard to the extent of the spatial ensemble; that is, the number of available time differences $\tau$ to calculate an ACF. We now clarify an important concept for which there are common misconceptions: although generally uniform time sampling is used when calculating ACFs, nothing in the definition of the different ACFs imposes this restriction. Non-uniform sampling is compatible with autocorrelation analysis as long as care is taken to select the correct sample pairs that share the same time difference $\tau$ before ensemble averaging. In uniform sampling, the number of members for temporal ensemble averaging $\langle \ \rangle _t$ for a given time difference $\tau$ is given by $n_t(n_{\omega }, \tau )=n_{\omega } - \tau$, which implies a decreasing ensemble size (and thus a less accurate estimation) with increasing $\tau$. Non-uniform time sampling will have a different function $n_t(n_{\omega }, \tau )$, which could be tailored to improve the ensemble averaging at the range of $\tau$ of interest while enabling, for instance, the parallelization of the lateral sampling, which has been demonstrated in custom decorrelation metrics [27]. Here we highlight the fact that such non-uniform acquisition approaches can also work in the context of traditional ACFs.
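
As an illustration of this point, the following sketch (a simplified example of our own, not the processing code used in this work) groups sample pairs by their time difference before ensemble averaging, returning for each lag the number of members and the ensemble average of the cross-product; the same routine works for uniform and non-uniform time sampling, and for uniform sampling the member count reduces to $n_t(n_{\omega},\tau)=n_{\omega}-\tau$:

```python
import numpy as np
from collections import defaultdict

def acf_numerator_by_lag(samples, timestamps):
    """Group products K(t)K(t+tau) by time difference tau, then ensemble-average each group."""
    groups = defaultdict(list)
    for i in range(len(samples)):
        for j in range(i, len(samples)):              # j == i contributes the tau = 0 term
            groups[timestamps[j] - timestamps[i]].append(samples[i] * samples[j])
    return {tau: (len(p), float(np.mean(p))) for tau, p in sorted(groups.items())}

rng = np.random.default_rng(1)
n_w = 8                                               # correlation window size n_omega
K = rng.exponential(size=n_w)                         # toy intensity time series

# Uniform sampling: n_t(n_omega, tau) = n_omega - tau members for each lag tau.
print(acf_numerator_by_lag(K, list(range(n_w))))

# Non-uniform sampling: only pairs sharing the same time difference enter each average.
print(acf_numerator_by_lag(K, [0, 1, 2, 4, 5, 7, 9, 12]))
```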

In a stationary process, the temporal statistics of $K$ are assumed to be constant, and therefore the autocorrelation does not depend on the specific time $t$ but only on the time difference $\tau$; in a dynamic process, the statistics evolve in time and the autocorrelation depends on both $t$ and $\tau$. In what follows, we define for simplicity all ACFs as those of a stationary process; extension to a dynamic process is straightforward. We also simplify the notation by using subscripts 1 and 2 to represent $(t)$ and $(t+\tau )$, respectively. We also denote all ACFs of the noisy signal with a tilde accent ${}^{\sim}$.

In certain cases, the signal $S$ may also include a static component $S_S$ in addition to the dynamic scattering component $S_D$ of interest, $S = S_S+S_D$. This is the case, for example, when imaging blood flow partially overlapping the vessel wall. As we do not make assumptions about $S$, all models and analysis still apply regardless of the constituents of $S$. If there is interest in separating different components from $S$, the reader is encouraged to read publications in which this is the subject matter [10,28].

2.2 First-order autocorrelation function

The first-order ACF of the noiseless and the noisy complex-amplitude signals are given by

$$g^{(1)}(\tau)=\frac{\langle S_1^* S_2\rangle}{\sqrt{\langle \lvert S_1\rvert^2\rangle \langle \lvert S_2\rvert^2\rangle}};\, \tilde{g}^{(1)}(\tau)=\frac{\langle F_1^* F_2\rangle}{\sqrt{\langle \lvert F_1\rvert^2\rangle \langle \lvert F_2\rvert^2\rangle}},$$
respectively. $g^{(1)}$ is defined classically in the range $[0, 1]$, with 1 indicating perfectly correlated signals and 0 completely decorrelated signals. Following Makita et al. [24] and Appendix A, it is possible to write
$$\tilde{g}^{(1)}(\tau)=g^{(1)}(\tau)\frac{1}{\sqrt{\left(1+R^{{-}1}_1\right) \left(1+R^{{-}1}_2\right)}},$$
where $R_j=\frac {\langle \lvert S_j \rvert ^2\rangle }{\langle \lvert N_j \rvert ^2\rangle }$ is the SNR. As clarified above, we define the SNR as the ratio between the noiseless signal and the noise, which is related to the ratio between the noisy signal and the noise as $\hat {R}_j = \frac {\langle \lvert F_j \rvert ^2\rangle }{\langle \lvert N_j \rvert ^2\rangle } = R_j+1$.
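
For reference, a minimal sketch (our own, with illustrative names) of how the relation above can be inverted in practice to recover the noiseless $g^{(1)}$ from a noisy estimate, assuming the SNRs $R_1$ and $R_2$ (in linear units) have already been estimated:

```python
import numpy as np

def correct_g1(g1_noisy, R1, R2=None):
    """Invert the relation above: recover the noiseless g1 from the noisy estimate."""
    R2 = R1 if R2 is None else R2
    return g1_noisy * np.sqrt((1.0 + 1.0 / R1) * (1.0 + 1.0 / R2))

R = 10 ** (20 / 10)                          # 20 dB SNR in linear units
print(correct_g1(np.array([0.9, 0.5]), R))   # about 1% larger than the noisy values
```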

2.3 Second-order autocorrelation functions

We will discuss three different definitions of $g^{(2)}$. First, the definition mostly used in the DLS literature [21], $g^{(2)}_{\textrm {DLS}}$. Second, a definition with symmetric normalization used in photon correlation experiments, in particular in intensity interferometers such as the stellar interferometer devised by Hanbury Brown and Twiss [22,23,29,30], $g^{(2)}_{\textrm {HBT}}$; and finally, the Pearson correlation coefficient $g^{(2)}_{\textrm {P}}$, which in rigorous terms is a normalized autocovariance, but has been used previously in the context of OCT [12,14,20].

2.3.1 Dynamic light scattering

In this definition, the second-order ACF of the noiseless and the noisy intensity signals are

$$g^{(2)}_{\textrm{DLS}}(\tau) = \frac{\langle T_1T_2\rangle}{\langle T_1\rangle^2};\, \tilde{g}^{(2)}_{\textrm{DLS}}(\tau) = \frac{\langle I_1I_2\rangle}{\langle I_1\rangle^2},$$
respectively. Note that $g^{(2)}_{\textrm {DLS}}(\tau =0) = \frac {\langle T_1^2\rangle }{\langle T_1\rangle ^2}$. Speckle has unity contrast $C=\frac {\sigma _T}{\langle T \rangle }=1$ for very large ensembles [4]. Now $\frac {\langle T_1^2\rangle }{\langle T_1\rangle ^2}=C_1^2+1$, which means that perfectly correlated speckle has $g^{(2)}_{\textrm {DLS}}=2$, and decorrelated signals have $g^{(2)}_{\textrm {DLS}}=1$. However $g^{(2)}_{\textrm {DLS}}(\tau =0)=2$ is only obtained for sufficiently large ensembles, and in practice it fluctuates —strongly for typical OCT ensemble sizes— following a log-normal distribution [31]. Therefore, comparing the autocorrelation of ensembles of different contrast will introduce a bias, which we call contrast bias, as opposed to the statistical bias discussed above. It is important to remark that this is not only contrast variability but a bias: due to the nature of the log-normal distribution, the average (and median) contrast for a given ensemble size is dependent on the ensemble size [31]. The smaller the ensemble size, the lower the contrast is, on average. In order to account for this bias and to be able to compare the autocorrelation function between ensembles with different speckle contrast, we propose two approaches. First, we define a contrast-normalized function
$$g^{(2)}_{\textrm{DLS,c}}(\tau) = \frac{g^{(2)}_{\textrm{DLS}}(\tau) - 1}{g^{(2)}_{\textrm{DLS}}(\tau=0) - 1} C_\infty^ 2 + 1,$$
where $C_\infty =1$ is the ideal contrast in large ensembles. This normalization considers that the signal is composed of pure speckle and that all fluctuations should be treated as such. Although rigorous, this type of normalization has unintended effects: mostly static tissue will present very small fluctuations in its signal, which are amplified to match those from moving scatterers. For this reason we also define a global-normalized function
$$g^{(2)}_{\textrm{DLS,g}}(\tau) = \frac{g^{(2)}_{\textrm{DLS}}(\tau)}{g^{(2)}_{\textrm{DLS}}(\tau=0)} \left[C_\infty^ 2 + 1\right],$$
which adjusts globally the range of the autocorrelation function to match that of a speckle signal of unity contrast. This normalization, defined empirically, does not amplify small fluctuations in mostly static tissue.
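
The two normalizations can be implemented directly from Eqs. (9) and (10); the short sketch below (our own illustrative code, with $C_\infty=1$) shows both applied to a toy low-contrast ACF estimate:

```python
import numpy as np

C_INF = 1.0                                       # ideal speckle contrast in large ensembles

def normalize_contrast(g2_dls):
    """Eq. (9): rescale the fluctuating part so that g2(tau=0) maps to C_inf^2 + 1."""
    return (g2_dls - 1.0) / (g2_dls[0] - 1.0) * C_INF**2 + 1.0

def normalize_global(g2_dls):
    """Eq. (10): rescale the whole ACF so that g2(tau=0) equals C_inf^2 + 1."""
    return g2_dls / g2_dls[0] * (C_INF**2 + 1.0)

g2 = np.array([1.7, 1.45, 1.2, 1.05, 0.98])       # toy low-contrast ACF estimate
print(normalize_contrast(g2))                     # amplifies small fluctuations
print(normalize_global(g2))                       # rescales the range globally
```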

In order to relate $g^{(2)}_{\textrm {DLS}}(\tau )$ with $\tilde {g}^{(2)}_{\textrm {DLS}}(\tau )$, for $\tau\;>\;0$ we make use of Eq. (42) in Appendix A to obtain

$$ \tilde{g}^{(2)}_{\textrm{DLS}}(\tau\;>\;0) = \dfrac{g^{(2)}_{\textrm{DLS}}(\tau\;>\;0) + R_{12}^{{-}1} + L_{21}R_1^{{-}1} + R_{1}^{{-}1}R_{12}^{{-}1} }{1 + R_1^{{-}2} + 2R_1^{{-}1}}. $$
Here $\frac {\langle T_1\rangle }{\langle Q_1\rangle }=R_1$, and we also define the cross SNR $\frac {\langle T_1\rangle }{\langle Q_2\rangle }=R_{12}$ and a signal-to-signal ratio (SSR) $\frac {\langle T_2\rangle }{\langle T_1\rangle }= L_{21}$. In practice, measurement times in OCT are relatively short and it is possible to assume, to an excellent approximation, that the SNR and SSR are stable during a single time series, which allows us to define a single SNR as $R=R_1=R_2$ and thus simplify these expressions to $\frac {\langle T_1\rangle }{\langle Q_2\rangle }=R$ and $\frac {\langle T_2\rangle }{\langle T_1\rangle }=1$, yielding
$$g^{(2)}_{\textrm{DLS}}(\tau\;>\;0) = \tilde{g}^{(2)}_{\textrm{DLS}}(\tau\;>\;0) [1 + G] - G,$$
with
$$G = \frac{1 + 2R}{R^2}.$$
For $\tau =0$ we write Eq. (46) as
$$\tilde{g}^{(2)}_{\textrm{DLS}}(\tau=0) = \dfrac{g^{(2)}_{\textrm{DLS}}(\tau=0) + 2R_1^{{-}2} + 4R_1^{{-}1}}{1 + R_1^{{-}2} + 2R_1^{{-}1}},$$
which using the same approximation becomes
$$g^{(2)}_{\textrm{DLS}}(\tau=0) = \tilde{g}^{(2)}_{\textrm{DLS}}(\tau=0) [1 + G] - 2G.$$
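
A minimal sketch of how these corrections can be applied in practice, under the approximation $R_1=R_2=R$ used above (illustrative code, not the authors' implementation; the input array is assumed to hold the noisy ACF with the $\tau=0$ value in its first element):

```python
import numpy as np

def correct_g2_dls(g2_noisy, R):
    """Remove noise-induced decorrelation from a noisy g2_DLS estimate (R in linear units)."""
    G = (1.0 + 2.0 * R) / R**2
    g2 = g2_noisy * (1.0 + G) - G                 # tau > 0 relation
    g2[0] = g2_noisy[0] * (1.0 + G) - 2.0 * G     # tau = 0 uses 2G instead of G
    return g2

R = 10 ** (15 / 10)                               # 15 dB SNR in linear units
g2_noisy = np.array([1.93, 1.55, 1.25, 1.08, 1.01])
print(correct_g2_dls(g2_noisy, R))
```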

2.3.2 Symmetric normalization (Hanbury Brown-Twiss)

In this definition, the second-order ACF of the noiseless and the noisy intensity signals are

$$g^{(2)}_{\textrm{HBT}}(\tau) = \frac{\langle T_1T_2\rangle}{\langle T_1\rangle\langle T_2\rangle};\, \tilde{g}^{(2)}_{\textrm{HBT}}(\tau) = \frac{\langle I_1I_2\rangle}{\langle I_1\rangle\langle I_2\rangle},$$
respectively. It shares the same limits of [1, 2] with $g^{(2)}_{\textrm {DLS}}$. For $\tau\;>\;0$, we can relate $g^{(2)}_{\textrm {HBT}}$ with $\tilde {g}^{(2)}_{\textrm {HBT}}$ using Eq. (49) in Appendix A as
$$ \tilde{g}^{(2)}_{\textrm{HBT}}(\tau\;>\;0) = \dfrac{g^{(2)}_{\textrm{HBT}}(\tau\;>\;0) + R_2^{{-}1} + R_1^{{-}1} + R_1^{{-}1}R_2^{{-}1}}{1 + R_2^{{-}1} + R_1^{{-}1} + R_1^{{-}1}R_2^{{-}1}}. $$
By replacing again $R_1=R_2=R$ to an excellent approximation we obtain
$$g^{(2)}_{\textrm{HBT}}(\tau\;>\;0) = \tilde{g}^{(2)}_{\textrm{HBT}}(\tau\;>\;0) [1 + G] - G.$$
For $\tau =0$ ($I_1=I_2$), we employ Eq. (50) in Appendix A to obtain
$$\tilde{g}^{(2)}_{\textrm{HBT}}(\tau=0) = \dfrac{g^{(2)}_{\textrm{HBT}}(\tau=0) + 2R_1^{{-}2} + 4R_1^{{-}1}}{1 + R_1^{{-}2} + 2R_1^{{-}1}},$$
which we approximate to
$$g^{(2)}_{\textrm{HBT}}(\tau=0) = \tilde{g}^{(2)}_{\textrm{HBT}}(\tau=0) [1 + G] - 2G.$$
This matches the DLS result when using the aforementioned approximations. The same two types of normalized autocorrelation functions defined in Eqs. (9) and (10) apply to HBT, which we denote by $g^{(2)}_{\textrm {HBT,c}}(\tau )$ and $g^{(2)}_{\textrm {HBT,g}}(\tau )$.

It is common in wavelength-swept OCT systems to have polarization-diverse detection and therefore two polarization channels with noisy intensities $I_{p_1}$ and $I_{p_2}$. In most clinical systems, only access to the sum of the intensities $I_T=I_{p_1}+I_{p_2}$ is available to the user. It is still possible to find a relation between $g^{(2)}_{\textrm {HBT}}$ and $\tilde {g}^{(2)}_{\textrm {HBT}}$ defined on $I_T$, but the expression is more involved. This special case is considered in Appendix B.

2.3.3 Pearson

The Pearson correlation coefficient of the noiseless and the noisy intensity signals are given by

$$g^{(2)}_{\textrm{P}}(\tau) = \frac{ \langle \left(T_1 - \langle T_1\rangle\right) \left(T_2 - \langle T_2\rangle\right) \rangle}{\sigma_{T_1}\sigma_{T_2}};\, \tilde{g}^{(2)}_{\textrm{P}}(\tau) = \frac{ \langle \left(I_1 - \langle I_1\rangle\right) \left(I_2 - \langle I_2\rangle\right) \rangle}{\sigma_{I_1}\sigma_{I_2}},$$
respectively. From Eq. (54) in Appendix A, it is possible to relate $g^{(2)}_{\textrm {P}}$ with $\tilde {g}^{(2)}_{\textrm {P}}$ as
$$g^{(2)}_{\textrm{P}}(\tau) = \tilde{g}^{(2)}_{\textrm{P}}(\tau) \dfrac{1}{\sqrt{1+\dfrac{1+2R_1}{2\tilde{R}_1^2-R_1^2}}} \dfrac{1}{\sqrt{1+\dfrac{1+2R_2}{2\tilde{R}_2^2-R_2^2}}},$$
where $\tilde {R}_j=\frac {\langle T_j^2\rangle }{\langle Q_j^2\rangle }$. Approximating $\tilde {R}=\tilde {R}_1=\tilde {R}_2$, we finally obtain
$$g^{(2)}_{\textrm{P}}(\tau) = \tilde{g}^{(2)}_{\textrm{P}}(\tau) \dfrac{1}{1+\dfrac{1+2R}{2\tilde{R}^2-R^2}}.$$
The Pearson correlation coefficient is $g^{(2)}_{\textrm {P}}(\tau =0) = 1$ by definition, and therefore its maximum value is not affected by the contrast of the signal. Uncorrelated signals have a Pearson correlation of 0. However, we now show that a $\tau$-dependent contrast bias deforms its shape. First, notice that from the definition of $C$ given before we have $\sigma_T=\sqrt {\langle T^2\rangle -\langle T \rangle ^2}=\langle T \rangle C$, thus
$$g^{(2)}_{\textrm{P}}(\tau) = \frac{ \langle T_1 T_2\rangle - \langle T_1\rangle\langle T_2\rangle} {C_1C_2\langle T_1\rangle\langle T_2\rangle} = \frac{1}{C_1C_2}\left[g^{(2)}_{\textrm{HBT}}(\tau) - 1 \right],$$
which for $\tau =0$ reduces to $g^{(2)}_{\textrm {P}}(\tau =0)=1$. Therefore, the Pearson correlation coefficient is a modified HBT autocorrelation function with a strong contrast bias in the form of $\frac {1}{C_1C_2}$. This contrast bias is not constant because the ensemble size on which $C_1$ and $C_2$ are calculated depends on $\tau$, which due to the ensemble-size bias of the contrast discussed above, implies that $C_j=C_j(\tau )$. Despite this, an important property of $g^{(2)}_{P}$ is that it is intrinsically normalized to lie within $[0, 1]$ —reaching a value of $-1$ for anticorrelation, which does not apply in this case— and thus does not require any of the aforementioned normalization approaches.
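
The relation between the Pearson and HBT definitions derived above can be checked numerically; the sketch below (our own, using synthetic correlated speckle intensities generated from a complex circular Gaussian field) confirms that the two sides agree when all sample moments are computed over the same ensemble:

```python
import numpy as np

rng = np.random.default_rng(2)
n, rho = 100_000, 0.8                            # ensemble size and field correlation g1

def ccg(size):
    """Zero-mean complex circular Gaussian samples with unit mean intensity."""
    return (rng.normal(size=size) + 1j * rng.normal(size=size)) / np.sqrt(2)

E1 = ccg(n)
E2 = rho * E1 + np.sqrt(1 - rho**2) * ccg(n)     # partially correlated field at t + tau
T1, T2 = np.abs(E1) ** 2, np.abs(E2) ** 2

g2_hbt = np.mean(T1 * T2) / (np.mean(T1) * np.mean(T2))
C1, C2 = np.std(T1) / np.mean(T1), np.std(T2) / np.mean(T2)
g2_p = np.mean((T1 - T1.mean()) * (T2 - T2.mean())) / (np.std(T1) * np.std(T2))

print(g2_p, (g2_hbt - 1.0) / (C1 * C2))          # the two values agree
```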

2.4 Averaging strategies and statistical bias

Statistical bias occurs when the average value of an estimator of a parameter differs from the true value. This is different from the variability of an estimator, which is a measure of the spread of the distribution of the estimator. In most cases, statistical bias is a consequence of finite sampling and, in general, it decreases asymptotically as the number of samples increases.

Estimation of the ACF in OCT presents great variability due to the limited sampling because, as mentioned above, generally at most $\sim 100$ samples per ACF are used in living tissue (larger acquisitions are possible in phantoms, such as the use of 1000–5000 time samples by Weiss et al. [32]), and as few as 2–5 for angiographic techniques. Almost all applications of decorrelation metrics in OCT make use of spatial averaging in order to reduce the statistical noise that is associated with such limited sampling, and averaging has been performed after individual ACFs have been calculated on the ACFs themselves [10,13,33] or on the decorrelation rate [11,14]. Here we will discuss three such strategies (illustrated in Fig. 2) and their implications with respect to statistical bias and noise correction. In the experimental results, a systematic study on each approach will be discussed.


Fig. 2. Illustration of the operation of averaging strategies to obtain a more robust estimation of the ACF of the time series in (a), where three temporal members are used for the temporal ensemble average. For the averaging strategies, we consider the goal of having six members of the ensemble in total: in (b) the number of time samples is higher ($t$-EA averaging), and in (c) and (d) there are time series for two spatial locations $x_1$ and $x_2$ to calculate the ACF using (c) $s$-EA and (d) ACC. As we are considering only the numerator of $g^{(2)}$ for illustration purposes, $\Pi (\tau )$ represents the product of the elements in the intersection of the time series with and without time-lag $\tau$: six elements for (b) and three elements for (a), (c) and (d).


In the following discussion, we assume the constraint of keeping the acquisition time constant; consequently, we consider different averaging strategies that do not increase the total acquisition time. We show in Fig. 2(a) an example of the calculation of the ACF for a time series at a spatial location $x_1$ and for a given $\tau$ such that there are three members in the temporal ensemble average; we then illustrate in Figs. 2(b)–(d) the operation of the averaging strategies using additional temporal or spatial information. The first approach, which we denote temporal-ensemble averaging ($t$-EA), is the straightforward increase of the number of time samples $n_{\omega }$ —this comes at the expense of reducing the number of lateral locations to keep the total acquisition time constant. $t$-EA is depicted in Fig. 2(b), where a longer time series is required compared to Fig. 2(a). Increasing $n_{\omega }$ to $n_{\omega }+\Delta n_{\omega }$, the number of members of the temporal ensemble increases by $n_t(n_{\omega } + \Delta n_{\omega }, \tau ) - n_t(n_{\omega }, \tau ) = \Delta n_{\omega }$ for every $\tau$, thus the relative improvement is significantly more pronounced when the time lag approaches $n_{\omega }$

$$\frac{n_t(n_{\omega} + \Delta n_{\omega}, \tau)}{n_t(n_{\omega}, \tau)} = 1 + \frac{\Delta n_{\omega}}{n_{\omega} - \tau}.$$
The second approach, which we call spatial-ensemble averaging ($s$-EA), consists of averaging spatial information by including in the ensemble average the samples within a local spatial window that is assumed to present similar dynamics. For instance, in Fig. 2(c) there are two time series for two spatial locations $x_1$ and $x_2$ with the same temporal extent as that in Fig. 2(a). In $s$-EA the window can be defined along any of the spatial dimensions $w_s(x,y,z)$ with size $N_z+1$, $N_x+1$ and $N_y+1$ in $z$, $x$ and $y$, respectively, which we denote by $N_{w_s} = \{N_z, N_x, N_y\}$ for simplicity. Assuming $w_s(x,y,z)$ is defined in such a way that the maximum value is equal to 1, the effective number of elements in the spatial window is defined as $n_s=\sum _{x,y,z} w_s(x,y,z)$ —which accounts for non-binary windows— and in this case the relative improvement in the number of members of the ensemble is
$$\frac{n_t(n_{\omega}, \tau, w_s)}{n_t(n_{\omega}, \tau)} = n_s,$$
and therefore is independent of $\tau$.

In the final approach, which we call autocorrelation compounding (ACC) [illustrated in Fig. 2(d)], the ACFs after calculation are spatially averaged with a spatial window $w_s(x,y,z)$. ACC is used extensively in the literature [10,13,33]. Note that for a given $\tau$, the total number of elements in the calculation, combining temporal and spatial averaging, is $n_{ts} = n_t(n_{\omega },\tau )n_s$. Our experimental results will show that there is a fundamental difference between $s$-EA and ACC in terms of tackling statistical bias. We note that in this case the noise correction can be performed on each individual ACF before or after compounding (averaging). There are advantages and disadvantages to each approach. Applying the noise correction before compounding permits the mixing of ACFs with dissimilar SNRs, but requires that each individual ACF has sufficiently large ensembles for an accurate estimation of the SNR (and associated $R$ values). Applying the noise correction after compounding precludes mixing SNRs, but averaging the individually calculated $R$ values should improve the SNR estimation and make the noise correction more robust. The specific numerical implementation of the averaging strategies described in this section is explained in Sec. 3.2.1, where they are put in the context of the scanning configurations used with our OCT system.
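
To make the difference in the order of operations explicit, the following simplified one-dimensional sketch (our own reading of the strategies above, not the exact operators of Sec. 3.2.1) contrasts pooling the spatial samples into the ensemble before the normalization ratio is formed, in the spirit of $s$-EA, with averaging individually computed ACFs, as in ACC:

```python
import numpy as np

def g2_hbt_1d(I_t, tau):
    """HBT ACF of a single time series at an integer lag tau >= 1."""
    return np.mean(I_t[:-tau] * I_t[tau:]) / (np.mean(I_t[:-tau]) * np.mean(I_t[tau:]))

def g2_hbt_pooled(I_xt, tau):
    """Ensemble-before-ratio: pool all locations of the window into one ensemble."""
    I1, I2 = I_xt[:, :-tau], I_xt[:, tau:]
    return np.mean(I1 * I2) / (np.mean(I1) * np.mean(I2))

def g2_hbt_acc(I_xt, tau):
    """Ratio-before-ensemble (ACC): one ACF per location, then compound (average)."""
    return np.mean([g2_hbt_1d(row, tau) for row in I_xt])

rng = np.random.default_rng(3)
I_xt = rng.exponential(size=(16, 32))    # toy intensities: 16 locations x 32 time samples
print(g2_hbt_pooled(I_xt, 2), g2_hbt_acc(I_xt, 2))   # the two orders generally differ
```

The two estimates generally differ for the small ensembles typical of OCT, which is precisely the distinction between $s$-EA and ACC examined in the experimental sections below.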

2.5 Speckle size in OCT

The axial speckle size in OCT tomograms is given by the degree of zero-padding and the shape of the windowing function prior to Fourier transformation of the fringes [3]. In some cases the axial speckle size can depend on other parameters, such as when strong sample dispersion or absorption is present [34], which we do not consider here. In a benchtop configuration —using galvanometer mirrors to perform raster scans— the lateral speckle size is determined by the diffraction-limited resolution of the imaging optics. When determining the size in terms of pixels, the lateral sampling has to be considered. When the raster scan is considered as sampling the speckle signal of a rigid sample moving at the constant speed of the galvanometer scan, this dependence has been shown to hold even when the sample is out of focus [15,25,35] and in single or multiple scattering regimes [36], and the speckle size is the same in the in-plane and out-of-plane axes. This fact will be used in the experiments below, by means of a raster-scan benchtop configuration scanning a solid scattering sample at a fixed scanning speed, which should yield the same speckle size regardless of location.

To the best of our knowledge, there are no published models for the case of an endoscopic system with rotational scanning. Focusing on the in-plane coordinate only, this configuration cannot be approximated by the raster scan configuration: even considering a small sample volume and slow rotation of the probe, the illumination and detection modes defined by the catheter optics not only move laterally during the scan, but the angle of incidence onto the sample volume changes due to the rotation. The magnitude of the change in angle is different throughout the depth, as scatterers close to the rotation axis are illuminated by a wider range of angles while scatterers away from the rotation axis are asymptotically illuminated by a single angle. However, we can make use of the numerical results by Marks et al. [37] on the topic of digital refocusing in rotationally scanned OCT probes. They studied the recoverable lateral resolution in a rotational endoscopic probe in several configurations, see Fig. 4 in Ref. [37]. Given that their model and processing methods enable the recovery of the diffraction-limited resolution throughout the sample, we can consider Fig. 4 as depicting the diffraction-limited resolution as a function of depth. Subplots (d) depict the lowest-NA probes considered by the authors, which in terms of a system with a $1.3$ µm central wavelength would have an 8 µm beam waist and a maximum working distance of 42 µm from the rotation axis. The diffraction-limited resolution is mostly linear with depth (numerically evaluated up to a depth of 52 µm). From the behavior of the other configurations, it is possible to predict that the diffraction-limited resolution in linear units (µm) should increase roughly linearly with depth, although some curves tend to show a slight decrease in curvature at depths away from the focal plane. In terms of angular units, this corresponds to a mostly constant speckle size, which has been observed in the past [38]. The out-of-plane axis, when implemented as a pullback, should behave in the same way as the raster scan described before. Knowledge of the expected speckle size as a function of depth in endoscopic systems is critical for the implementation of NURD detection and correction techniques based on speckle decorrelation for applications in which the tissue cannot be kept in place at a predictable depth —unlike the case of previous works [20], where a balloon catheter plays this role— such as in intravascular and pulmonary imaging.

3. Materials and methods

3.1 OCT systems

We used two OCT systems. For the benchtop experiments, we used a frequency-domain OCT system with a VCSEL wavelength-swept laser at a repetition rate of 100 kHz, with a 10-dB bandwidth of 100 nm centered at 1310 nm. Data was acquired using a $k$-clock signal from the laser and an Alazar Tech ATS9373 digitizer. The average sampling rate was approximately 400 MS/s with 3072 samples per sweep. The system had a polarization-diverse receiver and the signal from each channel was processed independently. The acquired fringes were zero-padded and Fourier transformed to reconstruct tomograms with a final A-line size of 2048 samples in the axial direction and an axial pixel size of 5.4 µm in air. In the phantom experiments, the shared illumination and detection paths used a 1.5 mm $e^{-2}$ beam diameter collimator (Thorlabs, F240APC-C) in conjunction with a 36 mm focal length scan lens (Thorlabs, LSM03) in a telecentric configuration. This resulted in a diffraction-limited $e^{-2}$ beam diameter of 57 µm at the focal plane ($e^{-2}$ diameter effective resolution of 40 µm after considering the confocal effect). In the angiography experiment, the collimator had a 3.4 mm $e^{-2}$ beam diameter (Thorlabs, F280APC-C) in conjunction with an 18 mm focal length scan lens (Thorlabs, LSM02), resulting in a diffraction-limited $e^{-2}$ beam diameter of 13 µm at the focal plane (9 µm after considering the confocal effect).

For the endoscopic experiments, we used a frequency-domain OCT system with a polygon-based wavelength-swept laser and a frequency shifter as described by Yun et al. [39]. The laser had a repetition rate of 102 kHz and a 134 nm sweep range, centered at 1285 nm. The signal was digitized at 180 MS/s with a Signatec PX14400A (DynamicSignals LLC, Lockport, IL USA) digitizer, recording 1600 samples per sweep. The acquired fringes were zero-padded and Fourier transformed to reconstruct tomograms with a final A-line size of 1024 samples in the axial direction and an axial pixel size of 5.9 µm in air. The system had a polarization-diverse receiver and the signal from each channel was processed independently. The system used a custom rotary junction and endoscopic probes, which provided an $e^{-1}$ beam diameter of the lateral point spread function (PSF) of approximately 30–36 µm in water (22–26 µm after the confocal effect). Considering that the system acquired each A-line for a given azimuthal angle, the raw B-scan data was in polar coordinates; this implies that the decay of the ACF represents the size of the speckle in angular units.

For both systems, the noise floor $Q$ was determined by blocking the sample arm light in front of the collimator of the sample arm and acquiring a B-scan (512 A-lines for the VCSEL system, 2048 A-lines for the endoscopic system). Such a B-scan would ideally have zero intensity; instead, its non-zero intensity reflects the cumulative noise contributions from the digitizer, the detector, reference arm shot noise and excess photon noise due to imperfect balancing. We averaged the intensity of all A-lines in this B-scan to obtain a depth-dependent noise floor $Q(z)$. We ignored contributions to the noise floor due to sample arm power.
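
As a concrete illustration of this procedure (a sketch with assumed array names and synthetic data, not the acquisition code), the depth-dependent noise floor is simply the A-line-averaged intensity of the blocked-arm B-scan:

```python
import numpy as np

def noise_floor(blocked_bscan):
    """Depth-dependent noise floor Q(z): mean blocked-arm intensity over all A-lines."""
    return blocked_bscan.mean(axis=1)

# Synthetic example: exponentially distributed noise intensity with a depth-dependent mean.
rng = np.random.default_rng(6)
q_true = np.linspace(1.0, 2.0, 2048)                          # assumed noise-floor profile
blocked = rng.exponential(q_true[:, None], size=(2048, 512))  # shape (depth, A-lines)
Q_z = noise_floor(blocked)                                    # recovers q_true closely
```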

3.2 Numerical methods

3.2.1 ACF computation and averaging

For the computation of ensemble averages, denoted as $\langle \cdots \rangle$, we define two scanning configurations for an OCT system: temporal or spatial scanning modes. In temporal mode, the system acquires M-mode data containing $N_t$ A-lines at each lateral location of the sample, repeated at different lateral locations and with $N_f$ repetition frames, also known as a hybrid M-mode–B-scan. This is the most general mode, in which lateral scanning is decoupled from time sampling. Before calculating the ACF, it is also possible to split each M-mode (with a sampling interval of length $N_t$) into $n_{\Omega }=\{0,1,\ldots ,N_{\Omega } - 1\}$ smaller correlation windows —which we call realizations— of size $n_{\omega } = N_t/N_{\Omega }$, in order to calculate an ACF in each and perform a temporal ACC, similar to Bartlett's method in time series analysis [40]. We also consider the possibility of acquiring multiple M-mode–B-scans at the same location, which we identify by the frame index. Therefore, in this mode the OCT intensity signal $I=I(z,x,t,n_f, p)$ is a function of depth $z$, in-plane lateral coordinate $x$, time $t$, frame $n_f$ and polarization channel $p = \{1,2\}$ of the polarization-diverse receiver. Dependency on the out-of-plane coordinate $y$ is omitted for simplicity. We now define the operator ${{}_{t} \langle \cdots \rangle }{^{t,t+\tau }_{n_{\omega }}}$ for temporal ensemble averaging of the cross-product of a function $K$ evaluated at $t$ and $t+\tau$ for a given time-delay $\tau$ and correlation window size $n_{\omega }$ as

$${}_{t} \langle K(z,x,t,n_f,p)\rangle^{t,t+\tau}_{n_{\omega}} = \frac{1}{n_{\omega} - \tau}\sum_{t=1}^{n_{\omega} - \tau} K(z,x,t,n_f,p) K(z,x,t+\tau,n_f,p),$$
where we remark that $\tau$ is an integer time-difference index, $n_t(n_{\omega }, \tau )= n_{\omega } - \tau$ for uniform sampling as discussed above, and $t$ is an integer index that goes from 1 to $n_{\omega } - \tau$, the number of available members of the temporal ensemble. Similarly, we define the operator ${{}_{t} \langle \cdots \rangle }{_{n_{\omega },\tau }}$ for temporal ensemble averaging of a single term of a function $K$ as
$${}_{t} \langle K(z,x,t,n_f,p)\rangle_{n_{\omega},\tau} = \frac{1}{n_{\omega} - \tau}\sum_{t=1}^{n_{\omega} - \tau} K(z,x,t,n_f,p).$$
Based on the above, computation of the first- and second-order ACFs given above is straightforward. For instance, $\tilde {g}^{(2)}_{\textrm {HBT}}$ is computed as
$$\tilde{g}^{(2)}_{\textrm{HBT}}(\tau) = \dfrac{{}_{t} \langle I(z,x,n_{\Omega} n_{\omega} + t,n_f,p)\rangle^{t,t+\tau}_{n_{\omega}}}{{}_{t} \langle I(z,x,n_{\Omega} n_{\omega} + t,n_f,p)\rangle_{n_{\omega},\tau}\; {}_{t} \langle I(z,x,n_{\Omega} n_{\omega} + t + \tau,n_f,p)\rangle_{n_{\omega},\tau}}.$$
For noise correction, the signal and noise floor are averaged in the same way, e.g. the SNR used to correct for noise in the above ACF is calculated, assuming $Q=Q(z,p)$ only, as
$$R = \frac{{{}_{t} \langle I(z,x,n_{\Omega} n_{\omega} + t,n_f,p)\rangle}{_{n_{\omega},0}}}{ Q(z,p)} - 1.$$
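
The sketch below (our own illustration with assumed array shapes, considering a single realization, frame and polarization channel) shows how these operators translate into array operations, together with the SNR estimate and the $\tau>0$ noise correction of Sec. 2.3.2:

```python
import numpy as np

def g2_hbt_temporal(I, n_omega, tau):
    """Noisy g2_HBT per (z, x) pixel from the first correlation window of size n_omega."""
    I_w = I[..., :n_omega]
    I1, I2 = I_w[..., :n_omega - tau], I_w[..., tau:n_omega]
    num = np.mean(I1 * I2, axis=-1)                      # <I(t) I(t+tau)>_t
    return num / (np.mean(I1, axis=-1) * np.mean(I2, axis=-1))

def snr_linear(I, n_omega, Q_z):
    """R = <I>_t / Q(z) - 1, with Q_z broadcast over the lateral dimension."""
    return np.mean(I[..., :n_omega], axis=-1) / Q_z[:, None] - 1.0

def correct_hbt(g2_noisy, R):
    """Analytical noise correction for tau > 0: g2 = g2_noisy (1 + G) - G."""
    G = (1.0 + 2.0 * R) / R**2
    return g2_noisy * (1.0 + G) - G

# Toy data: exponentially distributed speckle intensities plus a small noise offset.
rng = np.random.default_rng(4)
I = rng.exponential(size=(64, 128, 96)) + 0.05           # shape (n_z, n_x, N_t)
Q_z = np.full(64, 0.05)                                  # flat noise floor Q(z)

g2_noisy = g2_hbt_temporal(I, n_omega=64, tau=3)
g2 = correct_hbt(g2_noisy, snr_linear(I, 64, Q_z))
```
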
The temporal scanning mode allows studying the sample dynamics at different fixed positions and is used, for example, in flowmetry and angiography. However, in certain examples, particularly in some experiments that we detail below, there is an interest in studying the spatial evolution of a relatively static sample, dominated in principle by speckle decorrelation, using a spatial scanning mode, in which the system acquires B-scans capturing A-lines while scanning through lateral coordinate $x$. In such a case, assuming that the beam is scanned at a constant speed, time $t$ and in-plane coordinate $x$ are equivalent in terms of ensemble averaging as in an ergodic scatterer system; thus, in practical terms one can omit $x$ and only consider $t$ as the dimension used to calculate the ensembles.

Consequently, we define the operator ${{\vphantom{\langle \cdots \rangle}}_{s} \langle \cdots \rangle }{^{\hat {w}_s}_{N_{w_s}}}$ for spatial ensemble averaging of a single term of a function $K$ within a normalized window $\hat {w}_s(x,z)$ of depth and lateral size $N_{w_s} = \{N_z, N_x\}$ as

$${}_{s} \langle K(z,x,t,n_f,p)\rangle^{\hat{w}_s}_{N_{w_s}} = \frac{1}{(N_x+1)(N_z+1)} \sum_{n_x={-}\frac{N_x}{2}}^{\frac{N_x}{2}}\sum_{n_z={-}\frac{N_z}{2}}^{\frac{N_z}{2}}\hat{w}_s(n_x,n_z)K(z+n_z,x+n_x,t,n_f,p),$$
which considers spatial-ensemble averaging along depth and lateral dimensions, although this can be extended similarly to out-of-plane lateral coordinate, and where the normalized window $\hat {w}_s(x,z)$ is
$$\hat{w}_s(x,z) = \frac{w_s(x,z)}{\sum_{n_z, n_x}w_s(n_z,n_x)}.$$
Using this operator, computation of $g^{(2)}_{\textrm {HBT}}(\tau )$ via $s$-EA is performed as
$$\tilde{g}_{\textrm{HBT}}^{(2)}(\tau) = \frac{{}_{t}\left\langle {}_{s}\langle I(z,x,n_{\Omega} n_{\omega} + t,n_f,p)\rangle^{\hat{w}_s}_{N_{w_s}} \right\rangle^{t,t+\tau}_{n_{\omega}}}{{}_{t}\left\langle {}_{s}\langle I(z,x,n_{\Omega} n_{\omega} + t,n_f,p)\rangle^{\hat{w}_s}_{N_{w_s}} \right\rangle_{n_{\omega},\tau}\; {}_{t}\left\langle {}_{s}\langle I(z,x,n_{\Omega} n_{\omega} + t + \tau,n_f,p)\rangle^{\hat{w}_s}_{N_{w_s}} \right\rangle_{n_{\omega},\tau}}.$$
For this case, assuming $Q=Q(z,p)$ only, the SNR used to correct for noise is calculated as
$$R = \frac{{}_{t}\left\langle {}_{s}\langle I(z,x,n_{\Omega} n_{\omega} + t,n_f,p)\rangle^{\hat{w}_s}_{N_{w_s}}\right\rangle_{n_{\omega},0}}{{}_{s}\langle Q(z,p)\rangle^{\hat{w}_s}_{N_{w_s}}} - 1.$$
On the other hand, computation of $\tilde {g}^{(2)}_{\textrm {HBT}}(\tau )$ using ACC is done as
$$\tilde{g}^{(2)}_{\textrm{HBT}}(\tau) = {}_{s}\left\langle \frac{{}_{t}\langle I(z,x,n_{\Omega} n_{\omega}+t,n_f,p)\rangle^{t,t+\tau}_{n_{\omega}}}{{}_{t}\langle I(z,x,n_{\Omega} n_{\omega}+t,n_f,p)\rangle_{n_{\omega},\tau}\; {}_{t}\langle I(z,x,n_{\Omega} n_{\omega}+t+\tau,n_f,p)\rangle_{n_{\omega},\tau}}\right\rangle^{\hat{w}_s}_{N_{w_s}}.$$
For the noise correction in ACC, the correction can be applied to each ACF before compounding using the SNR from Eq. (28), or after compounding using the SNR calculated from Eq. (30).
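
The two orders of noise correction discussed for ACC can be sketched as follows (illustrative code with assumed names; the correction uses the $\tau>0$ relation of Sec. 2.3 with $G=(1+2R)/R^2$):

```python
import numpy as np

def correct(g2_noisy, R):
    """tau > 0 noise correction with G = (1 + 2R)/R^2 (Sec. 2.3)."""
    G = (1.0 + 2.0 * R) / R**2
    return g2_noisy * (1.0 + G) - G

def acc_correct_before(g2_per_loc, R_per_loc):
    """Noise-correct each ACF with its local SNR, then compound (average)."""
    return np.mean([correct(g, R) for g, R in zip(g2_per_loc, R_per_loc)], axis=0)

def acc_correct_after(g2_per_loc, R_per_loc):
    """Compound first, then correct once with the window-averaged SNR."""
    return correct(np.mean(g2_per_loc, axis=0), np.mean(R_per_loc))

rng = np.random.default_rng(5)
g2_per_loc = 1.0 + rng.uniform(0.0, 1.0, size=(9, 8))    # toy ACFs for a 3x3 window
R_per_loc = 10 ** (rng.uniform(5, 25, size=9) / 10)      # mixed SNRs between 5 and 25 dB
print(acc_correct_before(g2_per_loc, R_per_loc))
print(acc_correct_after(g2_per_loc, R_per_loc))
```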

3.2.2 Decorrelation rate calculation

In the experiments in which a decorrelation rate is reported, we defined a decorrelation threshold $g_c$, and calculated the time lag $\tau _c$ for which the ACF dropped below the threshold after linear interpolation. The decorrelation time in A-lines was converted into ms by using the A-line rate of the laser used. We also used a correction factor to account for the specific threshold used, expressing the decorrelation time as the equivalent decorrelation time at which the ACF reaches $\exp {(-1)}$. We inverted this corrected $\tau _c$ to calculate the decorrelation rate expressed as

$$\tau^{{-}1} = \frac{1}{\tau_c\sqrt{ -1 / \log(g_c)}}.$$
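
A minimal sketch of this calculation (our own; it assumes the ACF has already been normalized so that it decays from 1 towards 0 and that its first sample lies above the threshold):

```python
import numpy as np

def decorrelation_rate(g2, dt, g_c=0.5):
    """Decorrelation rate tau^-1 from an ACF sampled at lags 0, dt, 2*dt, ..."""
    below = np.nonzero(g2 < g_c)[0]
    if below.size == 0:
        return 0.0                                  # no decay below g_c within the window
    k = below[0]                                    # first sample below the threshold
    # Linear interpolation between samples k-1 and k to locate the crossing time tau_c.
    frac = (g2[k - 1] - g_c) / (g2[k - 1] - g2[k])
    tau_c = (k - 1 + frac) * dt
    # Rescale to the exp(-1)-equivalent decorrelation time and invert.
    return 1.0 / (tau_c * np.sqrt(-1.0 / np.log(g_c)))

g2 = np.array([1.0, 0.92, 0.74, 0.51, 0.33, 0.21])   # toy normalized ACF
print(decorrelation_rate(g2, dt=10e-6))              # dt: A-line period, here 10 us
```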

3.2.3 Motion correction in angiographic contrast

In the $g^{(2)}$-based angiography experiment, motion artifacts due to cardiac motion produced very high decorrelation values for some B-scans. In order to correct artificially high $\tau ^{-1}$ due to motion, we developed a motion correction procedure analogous to that used in Doppler OCT for bulk motion correction [41]. We calculated the averaged decorrelation rate $\bar {\tau }^{-1}$ for each B-scan, using a mask to consider only regions with $R\geq 20$ dB to guarantee a similar motion correction for all volumes with and without SNR correction. We then assumed that, in general, the motion will happen along a direction not parallel to the blood flow which gives rise to decorrelation inside vessel lumina. Based on intensity-based DLS-OCT [15], decorrelation contributions arising from different directions are added quadratically. We therefore subtracted the mean decorrelation rate $\bar {\tau }^{-1}$ in quadrature from each decorrelation rate in a frame to obtain the motion-corrected decorrelation

$$\hat{\tau}^{{-}1}(z,x,y) = \max{\left\{0, \textrm{Re}{\left\{\sqrt{\tau^{{-}2}(z,x,y) - \bar{\tau}^{{-}2}(y)}\right\}}\right\}},$$
where $\textrm{Re}\{\cdot \}$ is the real part and the $\max$ function is used to set $\hat {\tau }^{-1}=0$ when the square root argument is negative.
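
The following sketch (illustrative code with assumed array names and shapes) implements this correction per B-scan:

```python
import numpy as np

def motion_correct(tau_inv, R_db, snr_mask_db=20.0):
    """Quadrature subtraction of the per-frame bulk decorrelation rate.

    tau_inv: decorrelation rates, shape (n_z, n_x, n_y); R_db: SNR in dB, same shape.
    """
    corrected = np.empty_like(tau_inv)
    for y in range(tau_inv.shape[2]):                        # loop over B-scans (frames)
        mask = R_db[:, :, y] >= snr_mask_db                  # high-SNR mask for the mean
        tau_inv_bar = tau_inv[:, :, y][mask].mean()          # bulk decorrelation rate
        diff = tau_inv[:, :, y] ** 2 - tau_inv_bar ** 2
        corrected[:, :, y] = np.sqrt(np.maximum(diff, 0.0))  # Re{sqrt(.)} via clamping
    return corrected
```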

4. Experimental results

4.1 Noise model validation

For the validation of the noise model, we used the benchtop system and a solid polymer phantom. We turned off the galvanometer scanners and acquired an M-mode (continuous A-line acquisition with no lateral scanning) of 2048 A-lines. We calculated, for each depth, the mean intensity and the RSD from the fluctuations of the tomogram intensity. In addition, we determined the noise floor of the system by blocking the sample arm as explained above, obtaining a depth-resolved noise floor intensity $|N(z)|^2$, and used Eq. (5) to calculate the RSD predicted by the model based on the average signal and noise amplitudes. Figure 3 shows the results, where it can be noted that the RSD predicted with the model matches fairly well the experimentally determined RSD. We also include a dashed line with slope $-0.5$ —which corresponds to the RSD decreasing with the square root of the mean intensity signal—, but note that this relation only holds throughout a tomogram if the noise floor is not depth dependent. There are divergences between model and experiment at some specific data points, which we attribute to saturation artifacts: the noise magnitude determined by blocking the sample arm is lower than the noise magnitude during the experiment, which results in experimental RSD values higher than predicted. In summary, these results validate the noise model originally proposed by Makita et al. [24], on which all the noise corrections we developed for $g^{(2)}$ are based.


Fig. 3. Experimentally determined RSD in an M-mode acquisition of a static sample and comparison with the model prediction. The good match validates the noise model on which the rest of our results rely. The dashed line corresponds to the expected behavior for a system with a depth-independent noise floor, which our benchtop system follows closely.


4.2 $g^{(2)}$ normalization, variability, contrast bias, and statistical bias

For the study of the statistical bias, the contrast bias, the noise correction and the performance of the different definitions of $g^{(2)}$ and normalizations, we used a solid Intralipid-agarose phantom, using 1$\%$ agarose gel and 5$\%$ Intralipid in volume. We used galvanometer mirrors, acquiring 4096 A-lines per B-scan and 4 B-scans per volume. In all the analyses we omitted the first 128 A-lines of each B-scan to avoid artifacts from the scanner flyback and settling time. The scan range was 2.56 mm in-plane and 0.125 mm out-of-plane with the collimator-lens pair that provided $40$ µm lateral resolution. The in-plane scan range corresponded to an in-plane sampling of 0.625 µm, or roughly $33\times$ Nyquist sampling. This level of oversampling allowed us to study the ACF in a typical flowmetry regime, where the decay is expected to happen within $\sim 100$ A-lines. The out-of-plane scanning allowed us to have multiple speckle realizations of the same dynamics. This effectively gives us a very large data set to calculate the statistics related to bias and variability.

We start our analysis by comparing the performance of the two normalization approaches defined above for the DLS and HBT definitions. Figure 4 shows a comparison of the second-order ACF for a single depth with high SNR (28 dB), chosen so that the effects of noise can be neglected, calculated using $\tilde {g}^{(2)}_{\textrm {DLS,g}}$, $\tilde {g}^{(2)}_{\textrm {DLS,c}}$, $\tilde {g}^{(2)}_{\textrm {HBT,g}}$ and $\tilde {g}^{(2)}_{\textrm {HBT,c}}$. For this comparison we divided the sampling interval into $N_{\Omega } = 62$ correlation windows of size $n_{\omega } = 64$ samples [Fig. 4(a)] and $N_{\Omega } = 15$ with $n_{\omega } = 256$ samples [Fig. 4(b)]. For all cases, the ACF within each correlation window was calculated and averaged to obtain a single ACF per definition. Given the expected decay of $\sim 30$ A-lines, these two regimes were chosen in order to have a correlation window nearly $2\times$ the decay and $8\times$ the decay in size. Given that our goal was to determine the best approach to accurately estimate the decorrelation of the signal with the fewest number of samples possible, we used the case with $n_{\omega } = 64$ as a benchmark and the case with $n_{\omega } = 256$ as a reference. Although using larger $n_{\omega }$ is in principle possible, large-scale inhomogeneities in the signal back-scattered from the sample —due to an irregular surface or sample heterogeneity— can start to affect the calculation of the ACF and thus we avoided it in the benchtop experiments. Apart from the ACF average, for the case of $n_{\omega } = 64$ samples we also calculated their standard deviation; Figs. 4(c)–(d) show the average ACFs and their $\pm$ standard deviation in the shaded area.

Fig. 4. Average ACFs using global and contrast normalization of $\tilde {g}^{(2)}_{\textrm {DLS}}$ and $\tilde {g}^{(2)}_{\textrm {HBT}}$ for a signal with 28 dB SNR using a correlation window size $n_{\omega }$ of (a) 64 and (b) 256 samples. Variability of $\tilde {g}^{(2)}_{\textrm {DLS}}$ and $\tilde {g}^{(2)}_{\textrm {HBT}}$ with (c) global and (d) contrast normalization for $n_{\omega }=64$. Shaded area encloses the average value $\pm$ standard deviation.

Before discussing the effect of the two normalization approaches, we want to point out the form in which statistical bias manifests in the estimated ACF. In Fig. 4(a), the use of the small correlation window $n_{\omega } = 64$ makes the calculated average ACFs more susceptible to bias than those in Fig. 4(b) with $n_{\omega } = 256$. Statistical bias is seen in two main ways: 1) the initial decay of the ACF is altered (we have generally seen bias produce a stronger initial decay); and 2) the value of the ACF increases at some $\tau$. In some cases, rather than an increase in correlation with $\tau$, the ACF starts to decrease with $\tau$ more slowly and fails to decay to 1 at time lags where there is no correlation. From Fig. 4(b), the ACF for this experiment should reach a value of 1 around $\tau =25$–30 A-lines. In what follows, any departure from this behavior is a sign of statistical bias.

With respect to the normalization approaches, Fig. 4(a) reveals a lower bias for global normalization compared to contrast normalization for small correlation window sizes. The bias of contrast normalization appears both at low values of $\tau$, where the correlation drops more rapidly than in the $n_{\omega } = 256$ reference, and at high values of $\tau$, where both $\tilde {g}^{(2)}_{\textrm {DLS,c}}$ and $\tilde {g}^{(2)}_{\textrm {HBT,c}}$ have values consistently below 1 and then increase sharply towards $\tau =30$. This bias is reduced as $n_{\omega }$ is increased, resulting in a similar performance, although global normalization still delivers a correlation value closer to 1 for large $\tau$, while contrast normalization produces values consistently below 1. The variability seen in Figs. 4(c)–(d) also favors global normalization, especially for DLS. Given these results, we limit ourselves to the use of global normalization for the rest of this work, and we only use $\tilde {g}^{(2)}_{\textrm {DLS,g}}$ and $\tilde {g}^{(2)}_{\textrm {HBT,g}}$ to calculate the ACF.

We now move on to comparing the performance of the different definitions of $g^{(2)}$. Figure 5 shows a comparison of the three definitions of the ACF at two different depths corresponding to 28 dB and 38 dB SNR. Averages and variabilities were calculated in the same way as those in Fig. 4, i.e. $N_{\Omega } = 62$ with $n_{\omega } = 64$ in Fig. 5(a) and $N_{\Omega } = 15$ with $n_{\omega } = 256$ in Fig. 5(b). The ACFs for both depths should ideally be identical; we show two depths to give an idea of how the variability shows up in individual ACF estimations. In terms of variability they are, indeed, very similar for $\tilde {g}^{(2)}_{\textrm {HBT,g}}$ and $\tilde {g}^{(2)}_{\textrm {P}}$; however, $\tilde {g}^{(2)}_{\textrm {DLS,g}}$ presents a larger deviation between depths when using a small $n_{\omega }$ [Fig. 5(a)]. This deviation is reduced when using a larger $n_{\omega }$ [Fig. 5(b)], in which case the three definitions perform similarly. In Fig. 5(c), it is evident that the variability using $N_{\Omega } = 62$ with $n_{\omega } = 64$ is significantly higher for $\tilde {g}^{(2)}_{\textrm {DLS,g}}$, in agreement with Fig. 4. Variability for $\tilde {g}^{(2)}_{\textrm {HBT,g}}$ is slightly lower than for $\tilde {g}^{(2)}_{\textrm {P}}$. It is important to note that $\tilde {g}^{(2)}_{\textrm {DLS,g}}$ presents an increasing variability with $\tau$, whereas the variability for $\tilde {g}^{(2)}_{\textrm {HBT,g}}$ and $\tilde {g}^{(2)}_{\textrm {P}}$ tends to remain constant after the main decay.

Fig. 5. Average ACFs for the three definitions of $g^{(2)}$ at two depths corresponding to 28 dB and 38 dB SNR (see colorbar) using a correlation window size $n_{\omega }$ of (a) 64 and (b) 256 samples. Variability of each definition for (c) 28 dB and (d) 38 dB using $n_{\omega }=64$. Shaded area encloses the mean value $\pm$ standard deviation. To allow a direct comparison, we plot $g^{(2)}_{\textrm {P}}+1$ to match the range of the other ACFs.

The observations regarding variability can be explained by the fact that the DLS definition in Eq. (8) uses only half of the members of the ensemble for the denominator compared to the HBT definition. As the ensemble becomes smaller with increasing $\tau$, the ACF estimation therefore worsens more rapidly than for HBT. This is also seen in Figs. 4(a) and 5(a) as statistical bias: the reduced size of the ensemble in the denominator makes the statistical bias stronger, seen as a more apparent increase in correlation towards $\tau =30$ A-lines compared to HBT. Pearson also exhibits a significant bias in Fig. 5(a), which we attribute to the contrast bias discussed in Sec. 2.3.3. The implication is that the decay of the Pearson ACF differs significantly for different correlation window sizes.
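To make the ensemble-size argument concrete, the following sketch (ours, with illustrative parameters and an assumed AR(1) circular Gaussian speckle model) estimates both definitions over many short windows and compares their spread:

```python
import numpy as np

rng = np.random.default_rng(1)
n_w, n_trials, rho, tau = 64, 1000, 0.9, 32  # window size, trials, field correlation, lag

def speckle_intensity(n, rho, rng):
    """Intensity of an AR(1) circular complex Gaussian field with g1(tau) = rho**tau."""
    w = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    s = np.empty(n, complex)
    s[0] = w[0]
    for t in range(1, n):
        s[t] = rho * s[t - 1] + np.sqrt(1 - rho**2) * w[t]
    return np.abs(s) ** 2

dls, hbt = [], []
for _ in range(n_trials):
    i = speckle_intensity(n_w, rho, rng)
    num = np.mean(i[:-tau] * i[tau:])
    dls.append(num / np.mean(i[:-tau]) ** 2)                  # denominator: one sub-ensemble
    hbt.append(num / (np.mean(i[:-tau]) * np.mean(i[tau:])))  # denominator: both sub-ensembles
print(np.std(dls), np.std(hbt))  # the DLS estimate typically shows the larger spread
```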

Given its lower variability, lower bias, and independence from contrast, we conclude that the $\tilde {g}^{(2)}_{\textrm {HBT,g}}$ definition is the best estimator of the second-order ACF for OCT applications. For the remainder of the experiments we focus on the HBT definition given its clear superiority.

4.3 $g^{(2)}$ noise correction and averaging strategies

We used the same data acquired for the experiments discussed in Sec. 4.2, but now we focus on the analysis of the noise correction incorporating data from a large range of SNRs. Figure 6 shows $\tilde {g}^{(2)}_{\textrm {HBT,g}}$ [Eq. (15)] and $g^{(2)}_{\textrm {HBT,g}}$ [Eqs. (16) and (18)] for five depths corresponding to SNR values [32, 24, 16, 8, 4] dB. We calculated and averaged $g^{(2)}_{\textrm {HBT,g}}$ for $N_{\Omega } = 62$ with $n_{\omega } = 64$ [Fig. 6(a)] and $N_{\Omega } = 15$ with $n_{\omega } = 256$ [Fig. 6(b)], but rather than using exclusively $t$-EA as in the previous sections, we used $s$-EA and ACC averaging approaches. The kernel was defined as a normalized window of size $N_{w_s} = \{14, 0, 0\}$ (i.e. only depth averaging).

Fig. 6. Average ACFs before [$\tilde {g}^{(2)}_{\textrm {HBT,g}}$, left sides] and after noise correction [$g^{(2)}_{\textrm {HBT,g}}$, right sides] calculated using $s$-EA and ACC with $N_z = 8$ for five depths with corresponding SNR values coded in color (see colorbar) with a correlation window size $n_{\omega }$ of (a–b, top) 64 and (c–d, bottom) 256 samples. (e) Variability for $n_{\omega } = 64$. Shaded area encloses the mean value $\pm$ standard deviation.

We first focus on the performance of the noise correction. The sharp drop between $\tau =0$ and $\tau =1$ A-line is the most evident effect of the noise and a confirmation of its white power spectrum. However, merely re-scaling the drop is not enough, due to the change in the shape of the curve as predicted in Eq. (49). It is evident that the correction is working as intended, recovering the true decay between $\tau =0$ and $\tau =1$ A-line that was strongly affected by the noise-induced sharp drop in correlation, in contrast to our previous work [14]. We are able to recover the same shape of the ACF for a very wide range of SNRs. Figure 6(c) shows the variability analysis. In general, variability increases as SNR decreases, and it is consistently smaller for $s$-EA than ACC, but not by a large margin.

With respect to the two averaging strategies, we observe that bias is stronger for ACC compared to $s$-EA both for small $\tau$ (the shapes differ between the two correlation window sizes) and for large $\tau$, where the ACF has a tendency to increase in value for the smaller window. This is consistent with the discussion on the origin of the statistical bias: averaging ACFs that are biased due to the small correlation window size only reduces the variability of the estimation, not the bias itself. In contrast, $s$-EA increases the size of the ensemble for all $\tau$, reducing the bias and the variability at the same time.
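A minimal sketch contrasting the two strategies as we read them (not the authors' implementation): $s$-EA pools the numerator and denominator sums over the kernel before normalizing, whereas ACC normalizes each ACF first and then averages the resulting curves.

```python
import numpy as np

def g2_hbt(window, max_lag):
    """Numerator and denominator of the HBT ACF for one intensity window."""
    n = window.size
    num = np.array([np.mean(window[: n - t] * window[t:]) for t in range(max_lag)])
    den = np.array([window[: n - t].mean() * window[t:].mean() for t in range(max_lag)])
    return num, den

def average_acf(windows, max_lag, mode="s-EA"):
    """Combine the ACFs of several windows (e.g. the depths inside one kernel).

    's-EA' pools numerators and denominators over the kernel before dividing,
    enlarging the ensemble; 'ACC' normalizes each ACF and then averages the curves.
    """
    nums, dens = zip(*(g2_hbt(w, max_lag) for w in windows))
    if mode == "s-EA":
        return np.mean(nums, axis=0) / np.mean(dens, axis=0)
    return np.mean([n / d for n, d in zip(nums, dens)], axis=0)  # ACC
```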

Moving away from the statistical analysis of bias and variability, we show a practical example in Fig. 7 where only 2 correlation windows of size $n_{\omega } = 64$ were averaged after calculating $g^{(2)}_{\textrm {HBT,g}}$ and $\tilde {g}^{(2)}_{\textrm {HBT,g}}$ using the same $s$-EA and ACC averaging approaches described above. The intention is to show the behavior of the ACFs before and after noise correction in a setting representative of a practical application, where we assumed that the total number of samples $N_t$ available for the computation of the ACF is 128, typical of flowmetry. The initial decay due to noise decorrelation in the uncorrected ACF is compensated after SNR correction, even for low SNR values that present evident fluctuations. Note that bias and variability are evident in this case because of the low number of samples used. This result also suggests that approaches that fit the ACF to a function [10,13] may perform better after noise correction in low SNR regions in comparison to approaches that calculate a decorrelation time. It is expected that the fluctuations in the low-SNR ACFs will average out during the fitting procedure and that a relatively accurate decay rate will be obtained, compared to relying on the $\tau$ value after a correlation threshold is crossed [14,15,20].
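A sketch of the fitting alternative mentioned above, assuming a single-exponential model for the excess correlation (the models used in the cited fitting approaches may differ):

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_decorrelation_rate(g2, dt=1.0):
    """Fit 1 + A*exp(-tau/tau_c) to a noise-corrected ACF and return 1/tau_c.

    The single-exponential model is our assumption for illustration only. Because
    the fit pools all lags, it is less sensitive to point-to-point fluctuations
    than a single threshold crossing.
    """
    lags = np.arange(g2.size) * dt
    model = lambda t, amp, tau_c: 1.0 + amp * np.exp(-t / tau_c)
    popt, _ = curve_fit(model, lags, g2, p0=(max(g2[0] - 1.0, 0.1), lags[-1] / 4 + dt))
    return 1.0 / popt[1]
```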

Fig. 7. Exemplary autocorrelation analysis mimicking a flowmetry application; before [$\tilde {g}^{(2)}_{\textrm {HBT,g}}$, (a), left sides] and after [$g^{(2)}_{\textrm {HBT,g}}$, (b), right sides] noise correction for depths with different SNR values (see colorbar), using $s$-EA (top) and ACC (bottom) averaging with $N_z = 8$. We reproduced a typical flowmetry scenario with a total of 128 time samples using $N_{\Omega } = 2$ and $n_{\omega } = 64$.

So far, results have shown that bias is reduced when increasing $n_{\omega }$ ($t$-EA) and when using spatial averaging with $s$-EA. We now focus on comparing the effectiveness of each approach in reducing bias, in order to conclude whether it is more advantageous to increase $n_{\omega }$ and sacrifice the temporal resolution of each ACF evaluation, or to increase the kernel size in $s$-EA and sacrifice spatial resolution in the ACF. To answer this, we show in Fig. 8 multiple combinations of spatial window sizes ($n_s$) with depth averaging $N_{w_s} = \{N_z, 0, 0\}$ and correlation window sizes ($n_{\omega }$), in such a way that each combination results in an equal $n_{ts}(n_{\omega },\tau =30~\textrm {A-lines})$. We show average ACFs in all cases, thus focusing on bias and not variability. We consider three different total ensemble sizes of $n_{ts}=95,\ 160$ and $472$, which hold only for $\tau =30$ A-lines; we also show a case with a large $n_{ts}$ in black as a reference with no bias. At $\tau =30$ A-lines, bias is seen essentially as a deviation from $g^{(2)}=1$. Results show that, for a constant total ensemble size, increasing the correlation window ($t$-EA) is more effective at reducing the bias than increasing the spatial kernel size ($s$-EA). This means that $\tilde {g}^{(2)}_{\textrm {HBT,g}}$ offers a more robust estimation when increasing the number of members in the temporal ensemble averaging (i.e. the correlation window size $n_{\omega }$) than in the spatial ensemble averaging. However, $s$-EA seems to be as effective as $t$-EA at reducing the variability. We note that the blue curve in (b) and (e) has $N_z=160$, which is extremely large and therefore includes pixels from many physical locations in the sample and a large range of SNRs. The behavior of this set of parameters in (b) and (e) highlights the risk of using too large a spatial ensemble size: excessive spatial averaging can become detrimental to the accuracy of the calculated ACF.

Fig. 8. Average ACFs $\tilde {g}^{(2)}_{\textrm {HBT,g}}$ calculated using combinations of $t$- and $s$-EA averaging for a single depth with 28 dB SNR with different total ensemble sizes $n_{ts}$ of 95 (a), 160 (b) and 472 (c) at $\tau =30$ A-lines. The corresponding variabilities are in (d), (e) and (f); therein the shaded area encloses the mean value $\pm$ standard deviation. Each specific average uses a combination of $n_{\omega }$ (for $t$-EA) and $N_z$ (for $s$-EA) indicated in the legend in the form $n_{\omega }$;$N_z$. The black curve used as a reference ACF with no bias was calculated using $n_{\omega }=512$ and $N_z=80$ and is the same for all cases.

4.4 Determination of speckle size in a rotational endoscopic probe and improved NURD detection

For the evaluation of the speckle decorrelation rate as a function of depth in an endoscopic system we used a phantom fabricated with Intralipid-20%, diluted with water to a volume fraction of 0.1 and mixed with agarose (3 g/100 mL). We used agarose to create a solid phantom, to avoid movement due to Brownian motion and to ensure that the catheter would not move inside the phantom during the scan. We acquired tomograms with the catheter rotating at different rotational speeds and with no pullback. We acquired a continuous scan with 20480 A-lines for each rotational speed; we tested speeds of [10.0, 13.3, 16.7, 25.0, 33.3, 41.7, 50.0, 58.3, 66.7] rotations per second (RPS). We used correlation window sizes $n_{\omega }$ of [32, 64, 128, 256, 512, 1024, 2048], and a kernel size for $s$-EA of $N_{w_s} = \{14, 0, 0\}$ (i.e. only depth averaging). Because we punctured the phantom with the catheter for imaging, there were virtually no surface inhomogeneities in this case, which is why we were able to increase $n_{\omega }$ up to 2048. We used a correlation threshold $g_c=\exp {(-1)}$ to calculate the decorrelation rate $\tau ^{-1}$ according to Eq. (31).
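Since Eq. (31) is not reproduced in this excerpt, the following sketch states our assumption explicitly: the decorrelation time is taken as the lag at which the normalized excess correlation first falls below the threshold, and the rate is its inverse.

```python
import numpy as np

def decorrelation_rate(g2, g_c=np.exp(-1), dt=1.0):
    """Threshold-based estimate of the decorrelation rate 1/tau_c (our convention).

    g2:  noise-corrected ACF sampled at lags 0, 1, 2, ... (spacing dt, in A-lines).
    g_c: correlation threshold applied to (g2 - 1)/(g2[0] - 1).
    """
    decay = (g2 - 1.0) / (g2[0] - 1.0)
    below = np.flatnonzero(decay < g_c)
    if below.size == 0:
        return 0.0                      # no decorrelation observed within the window
    k = max(below[0], 1)
    # linear interpolation between the samples bracketing the threshold crossing
    frac = (decay[k - 1] - g_c) / (decay[k - 1] - decay[k])
    return 1.0 / ((k - 1 + frac) * dt)
```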

Figures 9(a)–(b) show the decorrelation rate determined at each rotational speed for $n_{\omega }=256$ samples using $\tilde {g}^{(2)}_{\textrm {HBT,g}}$ [Fig. 9(a)] and $g^{(2)}_{\textrm {HBT,g}}$ [Fig. 9(b)]. Before noise correction, there is a clear increase in $\tau ^{-1}$ when the depth approaches 1 mm. However, $g^{(2)}_{\textrm {HBT,g}}$ reveals that $\tau ^{-1}$ is nearly constant with depth, with just a slight increase [Fig. 9(b)]. In Fig. 9(c), the decorrelation rate as a function of rotational speed for multiple correlation window sizes shows the presence of bias, especially for small sizes. Since there was no pullback during imaging, we essentially imaged the same speckle pattern several times (dependent on the rotational speed) with only different realizations of the noise. The linear fit shown corresponds to $n_{\omega }=32$, assuming a zero intercept. The speckle pattern becomes decorrelated between consecutive A-lines at 75.0 RPS, which we believe is the reason for the strong fluctuations at 66.7 RPS: very small fluctuations in the strong decay of the ACF between $\tau =0$ and $\tau =1$ A-line produce large fluctuations after inversion to calculate the decorrelation rate.

Fig. 9. Lateral speckle size analysis in a rotational endoscopic probe using the decorrelation rate determined from the depth-resolved ACFs and $g^{(2)}_{\textrm {HBT,g}}$. Decorrelation rate as a function of depth for $\tilde {g}^{(2)}_{\textrm {HBT,g}}$ [before noise correction, (a)]; noise-corrected decorrelation rate for $g^{(2)}_{\textrm {HBT,g}}$ (b); and depth-averaged noise-corrected decorrelation rate as a function of rotational speed for different correlation window sizes $n_{\omega }$ [see colorbar, (c)]. Error bars in (c) denote the standard deviation in depth, seen as the fluctuations in (b). The near-constant decorrelation rate (i.e. lateral speckle size in angular units) as a function of depth is only evident after noise correction.

For the demonstration of the improved NURD detection, we performed measurements with the same catheter probe inside raw chicken breast. To obtain a registration mark that allowed us to easily visualize the effect of NURD during imaging, we imaged from within a glass capillary with a 1.6 mm diameter inserted in the tissue. This allowed us to reproduce the imaging configuration of, e.g., intravascular imaging, where the vessel lumen is a few mm in diameter and the catheter can image from an off-center location. This produces large differences in tissue depth within a single B-scan, which requires knowledge of the expected behavior of speckle size with depth to correctly interpret the obtained ACF and calculate an accurate rotation speed. The nominal rotation speed was set at 50 RPS; we induced NURD during imaging by pinching the catheter sheath at the proximal end. The correlation window size was $n_{\omega }=24$ and the kernel size for $s$-EA was $N_{w_s} = \{41, 21, 0\}$ (i.e. depth and in-plane averaging) with a Gaussian kernel with $e^{-2}$ diameters of [20, 10, 0] in ($z$, $x$, $y$), which provided us with 85 estimations of the ACF in the lateral dimension for each B-scan. We used the linear fit from Fig. 9(c) to convert the measured decorrelation rates into rotation speeds.
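The conversion from decorrelation rate to rotation speed can be sketched as follows, assuming a zero-intercept linear calibration of the kind shown in Fig. 9(c) (our own sketch of the conversion step, not the authors' code):

```python
import numpy as np

def rps_from_decorrelation(rate, rps_cal, rate_cal):
    """Convert decorrelation rates to rotation speed using a zero-intercept calibration.

    rps_cal, rate_cal: calibration pairs (rotation speed, measured decorrelation rate).
    The fit rate = m * rps with zero intercept gives m = sum(rps*rate)/sum(rps^2);
    the conversion is then rps = rate / m.
    """
    rps_cal, rate_cal = np.asarray(rps_cal, float), np.asarray(rate_cal, float)
    m = np.sum(rps_cal * rate_cal) / np.sum(rps_cal**2)
    return np.asarray(rate, float) / m
```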

Figure 10 shows the improved NURD detection after SNR correction in an endoscopic system. In Figs. 10(a) and 10(b) we show a decorrelation rate overlay: the intensity of the tomogram provides the luminance, and the rotation speed (in RPS) is encoded in the hue using a nearly-perceptually-isoluminant colormap based on the isolum colormap [42], which we made perceptually uniform [43] and for which we then increased the chroma to improve the contrast. Fluctuations in the rotational speed of the catheter within a B-scan during imaging induce NURD in the tomogram. A correction for NURD can be applied if the speed of the catheter rotation is known. Assuming, to an excellent approximation, that movement of the tissue during imaging is much slower than the scanning speed, the decorrelation is only the result of the catheter rotation speed. Given the results from Fig. 9, the decorrelation rate is expected to differ across azimuthal positions inside a B-scan, but is expected to be the same with depth. Figure 10(c) shows that the estimated rotation speed before SNR correction increases with depth, whereas after SNR correction it is fairly stable with depth. In addition, the standard deviation over the mean of the decorrelation rate per correlation window decreases significantly after SNR correction [Fig. 10(d)]. Averaging the rotation speed in depth provides an estimation of the rotation speed as a function of A-line inside the B-scan [Fig. 10(e)]. After SNR correction, we obtain a value that fluctuates around the nominal rotation speed, in contrast to the value obtained without SNR correction, which exhibits a positive skewness due to the artifactually high decorrelation rates estimated at large depths, where SNR is lower. This implies that the SNR correction will result in a more accurate determination of the decorrelation rate and, thereby, of the rotation speed of the catheter. In turn, this could result in a more accurate NURD correction with existing algorithms [20]; however, we consider this further step beyond the scope of the present work.

Fig. 10. Improvement of NURD detection in a tomogram of chicken breast obtained with an endoscopic system. A tomogram is presented with overlays of the decorrelation rates calculated before (a) and after (b) noise correction (the luminance is given by the tomogram intensity and the hue of the nearly-perceptually-isoluminant colormap by the decorrelation rate). After noise correction, the decorrelation rate varies much less with depth, as shown in (c) for a region with high rotation speed (violet squares in the tomograms). The standard deviation $(\sigma )$ over the mean $(\mu )$ of the decorrelation rate ($\tau ^{-1}$) decreases due to the noise correction, as visible in the histogram over all the correlation windows (d). (e) Depth-averaged rotation speed as a function of A-line. Without noise correction, the rotation speed is positively skewed with respect to the nominal value; after noise correction the rotation speed exhibits the expected fluctuation around the nominal 50 RPS value.

4.5 Improved intensity-based angiographic contrast

For the comparison between $s$-EA and ACC and the demonstration of the noise correction in angiography, we configured the benchtop system to scan a field of view of 1 mm$\times$1 mm with the collimator-lens pair that provided $9$ µm lateral resolution. We sampled at 384 in-plane lateral locations and 256 out-of-plane locations with 4 B-scan repetitions at each out-of-plane location. The kernel size for $s$-EA and ACC was $N_z = 21$, $N_x = 9$, $N_y = 9$ (i.e. depth, in-plane and out-of-plane averaging) with a Gaussian kernel with $e^{-2}$ diameters of [10, 4, 4] in ($z$, $x$, $y$). We note that, given the revisiting time of $3.84$ ms and the lateral resolution of 9 µm, full decorrelation is expected at flow speeds above 1.17 mm/s in a single-scattering regime. We expect that the signal from most vessels of small diameter will not be fully decorrelated and therefore the decorrelation rate is representative of flow rate. The noise correction for ACC was performed after spatial averaging because the SNR estimation for each ACF was so poor (due to the very limited ensemble size $n_{\omega }=4$) that the estimated SNR was negative for many ACFs, which produced erroneous noise-corrected ACFs.
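For reference, the quoted full-decorrelation speed is consistent with assuming that full decorrelation occurs once the transverse displacement between revisits reaches half the lateral resolution (an assumed criterion on our part, not a statement from the original analysis):
$$v \approx \frac{9~\textrm{µm}/2}{3.84~\textrm{ms}} \approx 1.17~\textrm{mm/s}.$$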

Figure 11 shows overlays of decorrelation rate (angiographic contrast) with the structural image of the finger nail vasculature of a healthy volunteer, before and after SNR correction. Figures 11(a) and 11(b) present en-face and B-scan views, respectively, before SNR correction with $s$-EA. Figures 11(c) and 11(d) show the same after SNR correction. Figures 11(e) and 11(f) present en-face and B-scan views, respectively, before SNR correction with ACC. Figures 11(g) and 11(h) show the same after SNR correction. The en-face view corresponds to the depth marked with the white dashed line in the B-scan view, a region with low SNR. It is clear that the decorrelation rate of static tissue is greatly affected by noise, and therefore the contrast of the vasculature is low. SNR correction provides much higher contrast. The contrast of the vasculature is also greater with $s$-EA compared to ACC, highlighting the benefits of the lower statistical bias of the former. Note that projection tail artifacts below the vasculature remain after SNR correction, given that they are intrinsic to the signal as a consequence of decorrelation due to strong forward multiple scattering by red blood cells coupled with scattering events arising from both moving and static tissue; these artifacts are not present for tissue bulk motion even in a multiple scattering regime [36], such as the bulk rotational motion in Fig. 10. Algorithms have been devised to suppress projection tail artifacts, and these may benefit from the SNR correction [44].

Fig. 11. Example of $g^{(2)}$-based angiography with and without SNR correction, using $s$-EA and ACC averaging strategies. All views correspond to overlays where the luminance is given by the tomogram intensity and the hue of the nearly-perceptually-isoluminant colormap by the decorrelation rate. B-scan views [(a)–(d)] and en-face views [(e)–(h)]. The dashed line in the B-scans (en-face views) shows the depth (out-of-plane location) corresponding to the en-face views (B-scans). Visually, $s$-EA shows a higher contrast between static tissue and blood vessels, such as in the tissue surrounding the two large blood vessels near the center of the tissue. Low-SNR tissue regions exhibit some degree of decorrelation, presumably due to noise, which became significantly lower after SNR correction; visually, the $s$-EA approach is more robust. Plots at the bottom show $g^{(2)}$ at the six locations A–F marked in (a) with high (A–C) and low (D–F) SNR: full decorrelation (A, D), partial decorrelation (B, E), and static tissue (C, F).

For a more detailed comparison, Fig. 11 also contains plots of $g^{(2)}$ at the locations A–F marked in Fig. 11(a), within vessels with full (A and D) and partial (B and E) decorrelation, and in static tissue (C and F). A–C are at depths with high SNR, and $g^{(2)}$ before and after SNR correction is almost identical for both $s$-EA and ACC, although the bias is evidently stronger for ACC. SNR correction becomes important at depths with low SNR (D–F) and partial decorrelation (E–F), particularly for static tissue (F), where artifactually high decorrelation rates are measured without SNR correction. We note that the SNR compensation in our previous work [14] would have failed in E because, after discarding the value at $\tau =0$, the ACF would have been estimated to be almost flat and thus to exhibit little decorrelation.

5. Discussion

We have carried out an exhaustive study of the performance of the different definitions of the second-order ACF of the OCT signal and its averaging strategies. From the analytical analysis and the experimental results, we concluded that the symmetric form of the ACF, $g^{(2)}_{\textrm {HBT}}$, is superior to all other definitions in the context of OCT. We also concluded that the empirical global normalization approach has the best performance in compensating for contrast bias in HBT. In terms of averaging strategies, temporal ensemble averaging ($t$-EA) is the most efficient way to reduce the statistical bias in $g^{(2)}$; however, its reliance on increasing the correlation window size $n_{\omega }$ implies a penalty in terms of acquisition time. Spatial ensemble averaging ($s$-EA) is second in terms of efficiency, and it is the most attractive choice when processing ACFs in time-constrained applications such as angiography. Finally, ACF compounding (ACC), the most commonly used approach in the literature, only tackles variability but does not reduce the bias of the ACF. We note that we have limited ourselves to the second-order ACF, and that the statistical bias strength and the optimal strategies may vary in the case of the first-order ACF; however, the latter is out of the scope of this work.

The statistical bias present in time series of typical size makes its mitigation of prime importance. Very importantly, discrepancies between model and experiment can be significant if bias is not reduced [45]. We believe that the current (intensity-based) DLS-OCT models should not be blindly applied without carefully considering statistical bias; a calibration carried out with the exact correlation window size and ensemble averaging strategy should ideally be performed. We also expect that, with knowledge of the expected shape of the ACF, it could be possible in future work to derive an analytical compensation for statistical bias. This could reduce bias in applications with small correlation window sizes; however, addressing the ACF variability via $s$-EA will still be necessary.

We also developed an analytical correction to compensate for noise in the OCT signal, extending the SNR range in which autocorrelation analysis can be used. This correction allowed us to accurately determine experimentally the lateral speckle size depth dependence in a rotational endoscopic probe with low NA. We demonstrated the expected, almost constant speckle size with depth in animal tissue ex vivo and showed the ability to more accurately determine the rotational speed of an endoscopic probe to implement NURD detection and potential correction.

Lastly, we presented $g^{(2)}$-based angiography of the finger nailbed, where the superior performance of $s$-EA over the commonly used ACC is evident, with better blood flow contrast and suppression of noise decorrelation artifacts.

6. Conclusion

Functional OCT imaging based on the decorrelation of the intensity signal has been used extensively in angiography and is finding use in flowmetry and therapy monitoring. However, despite the apparent simplicity of implementing the autocorrelation analysis, there are many subtleties and compromises imposed by the acquisition time constraints of clinical imaging and the minimization of motion artifacts while imaging in vivo. We presented a rigorous analysis of many aspects of the estimation of the autocorrelation function and presented experimental evidence for the best use of the acquired data to obtain the optimal decorrelation metrics for most OCT applications.

Appendix A

We define the noiseless signal as $S$, the noise as $N$ and the noisy signal as $F=S+N$. We use these to derive the first-order autocorrelation functions of the noiseless and the noisy signals, given by

$$g^{(1)}(\tau)=\frac{\langle S_1^* S_2\rangle}{\sqrt{\langle \lvert S_1\rvert^2\rangle \langle \lvert S_2\rvert^2\rangle}};\, \tilde{g}^{(1)}(\tau)=\frac{\langle F_1^* F_2\rangle}{\sqrt{\langle \lvert F_1\rvert^2\rangle \langle \lvert F_2\rvert^2\rangle}}.$$
In order to relate the two, we first write the term $\langle \lvert F \rvert ^2\rangle$ as
$$ \langle \lvert F \rvert^2\rangle = \langle \lvert S \rvert^2\rangle + \langle \lvert N \rvert^2\rangle + \langle S^\ast N \rangle + \langle SN^\ast \rangle, $$
where the last two terms are zero, assuming that $\langle N \rangle =0$ and that the noise and the signal are uncorrelated, i.e. $\langle SN \rangle =\langle S \rangle \langle N \rangle$, which allows us to write the normalization factor as
$$\begin{aligned} \sqrt{\langle \lvert F_1\rvert^2\rangle\langle \lvert F_2\rvert^2\rangle} &= \sqrt{\left(\langle \lvert S_1\rvert^2\rangle + \langle \lvert N_1\rvert^2\rangle\right)\left(\langle \lvert S_2\rvert^2\rangle + \langle \lvert N_2\rvert^2\rangle\right)}\\ &= \sqrt{\langle \lvert S_1\rvert^2\rangle\langle \lvert S_2\rvert^2\rangle} \sqrt{\left(1 + \dfrac{\langle \lvert N_1\rvert^2\rangle}{\langle \lvert S_1\rvert^2\rangle}\right)\left(1 + \dfrac{\langle \lvert N_2\rvert^2\rangle}{\langle \lvert S_2\rvert^2\rangle}\right)}. \end{aligned}$$
Then, we write the term $\langle F_1^* F_2\rangle$ as
$$\begin{aligned} \langle F^\ast_1F_2\rangle &= \langle (S_1^\ast{+} N_1^\ast)(S_2 + N_2)\rangle\\ &= \langle S_1^\ast S_2\rangle + \langle S_1^\ast N_2\rangle + \langle N_1^\ast S_2\rangle + \langle N_1^\ast N_2\rangle\\ &= \langle S_1^\ast S_2\rangle. \end{aligned}$$
Replacing Eqs. (34) and (35) in Eq. (33) it is possible to obtain
$$\begin{aligned}\tilde{g}^{(1)}(\tau) &= \dfrac{\langle S_1^* S_2\rangle}{\sqrt{\langle \lvert S_1\rvert^2\rangle \langle \lvert S_2\rvert^2\rangle}} \dfrac{1}{\sqrt{\left(1 + \dfrac{\langle \lvert N_1\rvert^2\rangle}{\langle \lvert S_1\rvert^2\rangle}\right)\left(1 + \dfrac{\langle \lvert N_2\rvert^2\rangle}{\langle\lvert S_2\rvert^2\rangle}\right)}},\\ &=g^{(1)}(\tau)\dfrac{1}{\sqrt{\left(1 + \dfrac{\langle \lvert N_1\rvert^2\rangle}{\langle \lvert S_1\rvert^2\rangle}\right)\left(1 + \dfrac{\langle \lvert N_2\rvert^2\rangle}{\langle \lvert S_2\rvert^2\rangle}\right)}}. \end{aligned}$$
On the other hand, the definitions of the second-order autocorrelation function make use of intensities, which we define as $T=\lvert S \rvert ^2$, $Q=\lvert N \rvert ^2$ and $I=\lvert F \rvert ^2$. The noisy intensity signal can be expanded as
$$\begin{aligned} I &= \lvert S + N \rvert^2 \\ &= \lvert S \rvert^2 + \lvert N \rvert^2 + 2\textrm{Re}{\{S^\ast N\}}, \end{aligned}$$
with ensemble average given by
$$\begin{aligned} \langle I \rangle &= \langle \lvert S \rvert^2\rangle + \langle \lvert N \rvert^2\rangle + 2\langle \textrm{Re}{\{S^\ast N\}}\rangle,\\ &= \langle T \rangle + \langle Q \rangle, \end{aligned}$$
where
$$2\langle \textrm{Re}{\{S^\ast N\}}\rangle = \langle S^\ast N + SN^\ast \rangle = \langle S^\ast N \rangle + \langle SN^\ast \rangle = 0.$$
In DLS, the second-order autocorrelation of the noiseless and the noisy intensity signals are given by
$$g^{(2)}_{\textrm{DLS}}(\tau) = \frac{\langle T_1T_2\rangle}{\langle T_1\rangle^2};\, \tilde{g}^{(2)}_{\textrm{DLS}}(\tau) = \frac{\langle I_1I_2\rangle}{\langle I_1\rangle^2}.$$
We make use of Eq. (37) to write the normalization factor as
$$\begin{aligned} \langle I_1\rangle^2 &= (\langle T_1\rangle + \langle Q_1\rangle)(\langle T_1\rangle + \langle Q_1\rangle)\\ &= \langle T_1\rangle^2 + \langle Q_1\rangle^2 + 2\langle T_1\rangle\langle Q_1\rangle, \end{aligned}$$
whereas the ensemble average in the numerator is obtained as
$$ \begin{aligned} I_1I_2 &= \lvert S_1\rvert^2\lvert S_2\rvert^2 + \lvert S_1\rvert^2\lvert N_2\rvert^2 + 2\lvert S_1\rvert^2\textrm{Re}{\{S_2^\ast N_2\}} + \lvert N_1\rvert^2\lvert S_2\rvert^2 + \lvert N_1\rvert^2\lvert N_2\rvert^2 +\cdots\\ & 2\lvert N_1\rvert^2\textrm{Re}{\{S_2^\ast N_2\}} + 2\textrm{Re}{\{S_1^\ast N_1\}}\lvert S_2\rvert^2 + 2\textrm{Re}{\{S_1^\ast N_1\}}\lvert N_2\rvert^2 + 4\textrm{Re}{\{S_1^\ast N_1\}}\textrm{Re}{\{S_2^\ast N_2\}}. \\ \langle I_1I_2\rangle &= \langle \lvert S_1\rvert^2\lvert S_2\rvert^2\rangle + \langle \lvert S_1\rvert^2\lvert N_2\rvert^2\rangle + 2\langle \lvert S_1\rvert^2\textrm{Re}{\{S_2^\ast N_2\}}\rangle + \langle \lvert N_1\rvert^2\lvert S_2\rvert^2\rangle + \langle \lvert N_1\rvert^2\lvert N_2\rvert^2\rangle + \cdots\\ &2\langle \lvert N_1\rvert^2\textrm{Re}{\{S_2^\ast N_2\}}\rangle + 2\langle \textrm{Re}{\{S_1^\ast N_1\}}\lvert S_2\rvert^2\rangle + 2\langle \textrm{Re}{\{S_1^\ast N_1\}}\lvert N_2\rvert^2\rangle + 4\langle \textrm{Re}{\{S_1^\ast N_1\}}\textrm{Re}{\{S_2^\ast N_2\}}\rangle.\end{aligned}$$
where several terms are zero similarly to Eq. (38), for instance
$$\begin{aligned} 2\langle \lvert S_1\rvert^2\textrm{Re}{\{S_2^\ast N_2\}}\rangle &= \langle \lvert S_1\rvert^2(S_2^\ast N_2 + S_2N_2^\ast)\rangle \\ &= \langle \lvert S_1\rvert^2S_2^\ast N_2 + \lvert S_1\rvert^2S_2N_2^\ast \rangle = 0, \end{aligned}$$
therefore,
$$\begin{aligned} \langle I_1I_2\rangle &= \langle \lvert S_1\rvert^2\lvert S_2\rvert^2\rangle + \langle \lvert S_1\rvert^2\lvert N_2\rvert^2\rangle + \langle \lvert N_1\rvert^2\lvert S_2\rvert^2\rangle + \langle \lvert N_1\rvert^2\lvert N_2\rvert^2\rangle\\ &= \langle T_1T_2\rangle + \langle T_1\rangle\langle Q_2\rangle + \langle Q_1\rangle\langle T_2\rangle+\langle Q_1\rangle\langle Q_2\rangle. \end{aligned}$$
Replacing Eqs. (40) and (41) in Eq. (39), we relate $g^{(2)}_{\textrm {DLS}}(\tau )$ with $\tilde {g}^{(2)}_{\textrm {DLS}}(\tau )$ for $\tau\;>\;0$ as
$$\begin{aligned} \tilde{g}^{(2)}_{\textrm{DLS}}(\tau\;>\;0) &= \frac{\langle T_1T_2\rangle + \langle T_1\rangle\langle Q_2\rangle + \langle Q_1\rangle\langle T_2\rangle + \langle Q_1\rangle\langle Q_2\rangle}{\langle T_1\rangle^2 + \langle Q_1\rangle^2 + 2\langle T_1\rangle\langle Q_1\rangle}\\ &= \dfrac{g^{(2)}_{\textrm{DLS}}(\tau\;>\;0) + \dfrac{\langle Q_2\rangle}{\langle T_1\rangle} + \dfrac{\langle T_2\rangle\langle Q_1\rangle}{\langle T_1\rangle^2} + \dfrac{\langle Q_1\rangle\langle Q_2\rangle}{\langle T_1\rangle^2} }{1 + \dfrac{\langle Q_1\rangle^2}{\langle T_1\rangle^2} + \dfrac{2\langle Q_1\rangle}{\langle T_1\rangle}}. \end{aligned}$$
For the case $\tau =0$ we have $I_1=I_2$, thus
$$\langle I_1^2\rangle = \langle \lvert S_1\rvert^2\lvert S_1\rvert^2\rangle + 2\langle \lvert S_1\rvert^2\lvert N_1\rvert^2\rangle + \langle \lvert N_1\rvert^2\lvert N_1\rvert^2\rangle + 4\langle \textrm{Re}^2{\{S_1^\ast N_1\}}\rangle,$$
where
$$\begin{aligned} 4\langle \textrm{Re}^2{\{S_1^\ast N_1\}}\rangle &= \langle S_1^{\ast2}N_1^2 + S_1^2N_1^{\ast2} + 2\lvert S_1\rvert^2\lvert N_1\rvert^2\rangle\\ &= 2\langle \textrm{Re}{\{S_1^{\ast2} N_1^2\}}\rangle + 2\langle \lvert S_1\rvert^2\lvert N_1\rvert^2\rangle\\ &= 2\langle \lvert S_1\rvert^2\rangle\langle \lvert N_1\rvert^2\rangle, \end{aligned}$$
(note that this term is zero for $I_1\neq I_2$) and replacing back in Eq. (43),
$$\begin{aligned} \langle I_1^2\rangle &= \langle \lvert S_1\rvert^2\lvert S_1\rvert^2\rangle + 4\langle \lvert S_1\rvert^2\lvert N_1\rvert^2\rangle + \langle \lvert N_1\rvert^2\lvert N_1\rvert^2\rangle,\\ &= \langle T_1^2\rangle + 2\langle Q_1\rangle^2 + 4\langle T_1\rangle\langle Q_1\rangle \end{aligned}$$
where the property $\langle Q_1^2\rangle = 2\langle Q_1\rangle ^2$ for the noise was used. We then relate $g^{(2)}_{\textrm {DLS}}(\tau )$ with $\tilde {g}^{(2)}_{\textrm {DLS}}(\tau )$ for $\tau =0$ as
$$\begin{aligned} \tilde{g}^{(2)}_{\textrm{DLS}}(\tau=0) &= \frac{\langle T_1^2\rangle + 2\langle Q_1\rangle^2 + 4\langle T_1\rangle\langle Q_1\rangle}{\langle T_1\rangle^2 + \langle Q_1\rangle^2 + 2\langle T_1\rangle\langle Q_1\rangle}\\ &= \dfrac{g^{(2)}_{\textrm{DLS}}(\tau=0) + \dfrac{2\langle Q_1\rangle^2}{\langle T_1\rangle^2} + \dfrac{4\langle Q_1\rangle}{\langle T_1\rangle}}{1 + \dfrac{\langle Q_1\rangle^2}{\langle T_1\rangle^2} + \dfrac{2\langle Q_1\rangle}{\langle T_1\rangle}}. \end{aligned}$$
In the Hanbury Brown-Twiss definition, the second-order autocorrelation of the noiseless and the noisy intensity signals are given by
$$g^{(2)}_{\textrm{HBT}}(\tau) = \frac{\langle T_1T_2\rangle}{\langle T_1\rangle\langle T_2\rangle};\, \tilde{g}^{(2)}_{\textrm{HBT}}(\tau) = \frac{\langle I_1I_2\rangle}{\langle I_1\rangle\langle I_2\rangle}.$$
Using Eq. (37), we can write the normalization factor as
$$\begin{aligned} \langle I_1\rangle\langle I_2\rangle &= (\langle T_1\rangle + \langle Q_1\rangle)(\langle T_2\rangle + \langle Q_2\rangle)\\ &= \langle T_1\rangle\langle T_2\rangle + \langle T_1\rangle\langle Q_2\rangle + \langle Q_1\rangle\langle T_2\rangle + \langle Q_1\rangle\langle Q_2\rangle, \end{aligned}$$
and this, in conjunction with Eq. (41), allows us to relate $g^{(2)}_{\textrm {HBT}}(\tau )$ with $\tilde {g}^{(2)}_{\textrm {HBT}}(\tau )$ for $\tau\;>\;0$ as
$$\begin{aligned} \tilde{g}^{(2)}_{\textrm{HBT}}(\tau\;>\;0) &= \frac{\langle T_1T_2\rangle + \langle T_1\rangle\langle Q_2\rangle + \langle Q_1\rangle\langle T_2\rangle + \langle Q_1\rangle\langle Q_2\rangle}{\langle T_1\rangle\langle T_2\rangle + \langle T_1\rangle\langle Q_2\rangle + \langle Q_1\rangle\langle T_2\rangle + \langle Q_1\rangle\langle Q_2\rangle}\\ \tilde{g}^{(2)}_{\textrm{HBT}}(\tau\;>\;0) &= \dfrac{g^{(2)}_{\textrm{HBT}}(\tau\;>\;0) + \dfrac{\langle Q_2\rangle}{\langle T_2\rangle} + \dfrac{\langle Q_1\rangle}{\langle T_1\rangle} + \dfrac{\langle Q_1\rangle\langle Q_2\rangle}{\langle T_1\rangle\langle T_2\rangle} }{1 + \dfrac{\langle Q_2\rangle}{\langle T_2\rangle} + \dfrac{\langle Q_1\rangle}{\langle T_1\rangle} + \dfrac{\langle Q_1\rangle\langle Q_2\rangle}{\langle T_1\rangle\langle T_2\rangle}}. \end{aligned}$$
For the case $\tau =0$, we have $I_1=I_2$ and DLS and HBT are equivalent,
$$\begin{aligned} \tilde{g}^{(2)}_{\textrm{HBT}}(\tau=0) &= \frac{\langle T_1^2\rangle + 2\langle Q_1\rangle^2 + 4\langle T_1\rangle\langle Q_1\rangle}{\langle T_1\rangle^2 + \langle Q_1\rangle^2 + 2\langle T_1\rangle\langle Q_1\rangle}\\ &= \dfrac{g^{(2)}_{\textrm{HBT}}(\tau=0) + \dfrac{2\langle Q_1\rangle^2}{\langle T_1\rangle^2} + \dfrac{4\langle Q_1\rangle}{\langle T_1\rangle}}{1 + \dfrac{\langle Q_1\rangle^2}{\langle T_1\rangle^2} + \dfrac{2\langle Q_1\rangle}{\langle T_1\rangle}}. \end{aligned}$$
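As an independent sanity check of the HBT relation for $\tau>0$ (our own numerical sketch, not part of the original work), the following simulation generates a correlated circular Gaussian field, adds circular Gaussian noise of known SNR, and compares the measured $\tilde{g}^{(2)}_{\textrm{HBT}}$ with the prediction for equal SNR at both time points:

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho, snr = 500_000, 0.9, 4.0  # samples, AR(1) field correlation, linear SNR <T>/<Q>

# Correlated circular complex Gaussian field with unit mean intensity, g1(tau) = rho**tau
w = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
s = np.empty(n, dtype=complex)
s[0] = w[0]
for t in range(1, n):
    s[t] = rho * s[t - 1] + np.sqrt(1 - rho**2) * w[t]

# Additive circular complex Gaussian noise with <|N|^2> = <|S|^2>/snr
noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) * np.sqrt(1 / (2 * snr))
i_clean, i_noisy = np.abs(s) ** 2, np.abs(s + noise) ** 2

tau = 3
g2 = np.mean(i_clean[:-tau] * i_clean[tau:]) / (i_clean[:-tau].mean() * i_clean[tau:].mean())
g2_tilde = np.mean(i_noisy[:-tau] * i_noisy[tau:]) / (i_noisy[:-tau].mean() * i_noisy[tau:].mean())

# Prediction for tau > 0 with equal SNR at both time points: Q/T = 1/snr
pred = (g2 + 2 / snr + 1 / snr**2) / (1 + 2 / snr + 1 / snr**2)
print(g2_tilde, pred)  # the two values should agree to within sampling error
```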
In terms of the Pearson correlation coefficient of the noiseless and the noisy intensity signals we have
$$g^{(2)}_{\textrm{P}}(\tau) = \frac{ \langle \left(T_1 - \langle T_1\rangle\right) \left(T_2 - \langle T_2\rangle\right) \rangle}{\sigma_{T_1}\sigma_{T_2}};\, \tilde{g}^{(2)}_{\textrm{P}}(\tau) = \frac{ \langle \left(I_1 - \langle I_1\rangle\right) \left(I_2 - \langle I_2\rangle\right) \rangle}{\sigma_{I_1}\sigma_{I_2}}.$$
First, using the definition of the standard deviation $\sigma _F = \sqrt {\langle F^2\rangle - \langle F \rangle ^2}$ and Eqs. (40) and (45),
$$\begin{aligned} \sigma_{I_1} = &~\sqrt{\langle I_1^2\rangle - \langle I_1\rangle^2}\\ = &~\sqrt{\langle T_1^2\rangle + 2\langle Q_1\rangle^2 + 4 \langle T_1\rangle\langle Q_1\rangle - (\langle T_1\rangle^2 + \langle Q_1\rangle^2 + 2\langle T_1\rangle\langle Q_1\rangle)}\\ = &~\sqrt{\langle T_1^2\rangle - \langle T_1\rangle^2 + \langle Q_1\rangle^2 + 2\langle T_1\rangle\langle Q_1\rangle}. \end{aligned}$$
Using Eqs. (41) and (48) we can write
$$\begin{aligned} \langle (I_1 - \langle I_1\rangle)(I_2 - \langle I_2\rangle)\rangle = &~\langle I_1I_2\rangle-\langle I_1\rangle\langle I_2\rangle\\ = &~\langle T_1T_2\rangle + \langle T_1\rangle\langle Q_2\rangle + \langle Q_1\rangle\langle T_2\rangle + \langle Q_1\rangle\langle Q_2\rangle - \cdots\\ &~(\langle T_1\rangle\langle T_2\rangle + \langle T_1\rangle\langle Q_2\rangle + \langle Q_1\rangle\langle T_2\rangle + \langle Q_1\rangle\langle Q_2\rangle)\\ = &~\langle T_1T_2\rangle - \langle T_1\rangle\langle T_2\rangle. \end{aligned}$$
Combining Eqs. (52) and (53) in Eq. (51) we relate $g^{(2)}_{\textrm {P}}$ with $\tilde {g}^{(2)}_{\textrm {P}}$ as
$$\begin{aligned} \tilde{g}^{(2)}_{\textrm{P}}(\tau) = &~\frac{\langle T_1T_2\rangle - \langle T_1\rangle\langle T_2\rangle}{\sqrt{\langle T_1^2\rangle - \langle T_1\rangle^2 + \langle Q_1\rangle^2 + 2\langle T_1\rangle\langle Q_1\rangle} \sqrt{\langle T_2^2\rangle - \langle T_2\rangle^2 + \langle Q_2\rangle^2 + 2\langle T_2\rangle\langle Q_2\rangle}}\\ = &~\frac{\langle T_1T_2\rangle - \langle T_1\rangle\langle T_2\rangle}{\sqrt{\langle T_1^2\rangle - \langle T_1\rangle^2}\sqrt{\langle T_2^2\rangle - \langle T_2\rangle^2}} \dfrac{1}{\sqrt{1 + \dfrac{\langle Q_1\rangle^2 + 2\langle T_1\rangle\langle Q_1\rangle}{\langle T_1^2\rangle-\langle T_1\rangle^2}}} \dfrac{1}{\sqrt{1 + \dfrac{\langle Q_2\rangle^2 + 2\langle T_2\rangle\langle Q_2\rangle}{\langle T_2^2\rangle-\langle T_2\rangle^2}}}\\ = &~g^{(2)}_{\textrm{P}}(\tau)\dfrac{1}{\sqrt{1 + \dfrac{1 + \dfrac{2\langle T_1\rangle}{\langle Q_1\rangle}}{\dfrac{\langle T_1^2\rangle}{\langle Q_1\rangle^2} - \dfrac{\langle T_1\rangle^2}{\langle Q_1\rangle^2}}}} \dfrac{1}{\sqrt{1 + \dfrac{1 + \dfrac{2\langle T_2\rangle}{\langle Q_2\rangle}}{\dfrac{\langle T_2^2\rangle}{\langle Q_2\rangle^2} - \dfrac{\langle T_2\rangle^2}{\langle Q_2\rangle^2}}}}. \end{aligned}$$
Note that for $\tau =0$, replacing $I_1=I_2$ in Eq. (51) yields $\tilde {g}^{(2)}_{\textrm {P}}=g^{(2)}_{\textrm {P}}=1$.

Appendix B

Now we consider the two polarization channels $p=\{p_1,p_2\}$ corresponding to the channels of the polarization-diverse receiver. In that case, we define the noiseless signals $S_{p_1}$ and $S_{p_2}$ and the total noiseless signal $S_T = S_{p_1} + S_{p_2}$, and the noises $N_{p_1}$, $N_{p_2}$ and total noise $N_T = N_{p_1} + N_{p_2}$. We can therefore define the intensities $T_{p_j} = \lvert S_{p_j}\rvert ^2$, $Q_{p_j} = \lvert N_{p_j}\rvert ^2$ and $I_{p_j} = \lvert S_{p_j} + N_{p_j}\rvert ^2$, and the total intensities $T = \lvert S_{p_1}\rvert ^2 + \lvert S_{p_2}\rvert ^2$, $Q = Q_{p_1} + Q_{p_2}$ and $I_T = I_{p_1} + I_{p_2}$. First, we note that the noise is different in each polarization channel (i.e. $N_{p_1}\neq N_{p_2}$), but the signal is the same except for scaling factors $\alpha$, $\beta$, such that $S_{p_1}=\alpha S$ and $S_{p_2}=\beta S$, and therefore

$$\begin{aligned} I_T &= \lvert S_{p_1} + N_{p_1}\rvert^2 + \lvert S_{p_2} + N_{p_2}\rvert^2\\ &= \lvert\alpha \rvert^2 \lvert S \rvert^2 + \lvert N_{p_1}\rvert^2 + 2\textrm{Re}{\{\alpha S^\ast N_{p_1}\}} + \lvert\beta \rvert^2 \lvert S \rvert^2 + \lvert N_{p_2}\rvert^2 + 2\textrm{Re}{\{\beta S^\ast N_{p_2}\}}\\ &= \lvert S \rvert^2 + \lvert N_{p_1}\rvert^2 + \lvert N_{p_2}\rvert^2 + 2\textrm{Re}{\{\alpha S^\ast N_{p_1}\}} + 2\textrm{Re}{\{\beta S^\ast N_{p_2}\}}, \end{aligned}$$
where we used the property $\lvert \alpha \rvert ^2 + \lvert \beta \rvert ^2 = 1$. In this derivation, we will only consider the HBT definition for the second-order autocorrelation of the noiseless $T = T_{p_1} + T_{p_2} = \lvert S \rvert ^2$ and the noisy total intensity signals given by
$$g^{(2)}_{\textrm{HBT}}(\tau) = \frac{\langle T_1T_2\rangle}{\langle T_1\rangle\langle T_2\rangle};\, \tilde{g}^{(2)}_{\textrm{HBT}}(\tau) = \frac{\langle I_{T_1}I_{T_2}\rangle}{\langle I_{T_1}\rangle\langle I_{T_2}\rangle},$$
respectively. To calculate the ensemble average in the numerator, we first use Eq. (55) to write
$$\begin{aligned} I_{T_1}I_{T_2} &= \lvert S_1\rvert^2\lvert S_2\rvert^2 + \lvert S_1\rvert^2\lvert N_{p_1, 2}\rvert^2 + \lvert S_1\rvert^2\lvert N_{p_2, 2}\rvert^2 + 2\lvert S_1\rvert^2\textrm{Re}{\{\alpha S_2^\ast N_{p_1,2}\}} + \cdots\\ & 2\lvert S_1\rvert^2\textrm{Re}{\{\beta S_2^\ast N_{p_2,2}\}} + \lvert N_{p_1,1}\rvert^2\lvert S_2\rvert^2 + \lvert N_{p_1,1}\rvert^2\lvert N_{p_1, 2}\rvert^2 + \lvert N_{p_1,1}\rvert^2\lvert N_{p_2, 2}\rvert^2 + \cdots\\ & 2\lvert N_{p_1,1}\rvert^2\textrm{Re}{\{\alpha S_2^\ast N_{p_1,2}\}} + 2\lvert N_{p_1,1}\rvert^2\textrm{Re}{\{\beta S_2^\ast N_{p_2,2}\}} + \lvert N_{p_2,1}\rvert^2\lvert S_2\rvert^2 + \lvert N_{p_2,1}\rvert^2\lvert N_{p_1, 2}\rvert^2 + \cdots\\ & \lvert N_{p_2,1}\rvert^2\lvert N_{p_2, 2}\rvert^2 + 2\lvert N_{p_2,1}\rvert^2\textrm{Re}{\{\alpha S_2^\ast N_{p_1,2}\}} + 2\lvert N_{p_2,1}\rvert^2\textrm{Re}{\{\beta S_2^\ast N_{p_2,2}\}} + \cdots\\ & 2\lvert S_2\rvert^2\textrm{Re}{\{\alpha S_1^\ast N_{p_1,1}\}} + \lvert N_{p_1, 2}\rvert^2\textrm{Re}{\{\alpha S_1^\ast N_{p_1,1}\}} + \lvert N_{p_2, 2}\rvert^2\textrm{Re}{\{\alpha S_1^\ast N_{p_1,1}\}} + \cdots\\ & 4\textrm{Re}{\{\alpha S_1^\ast N_{p_1,1}\}}\textrm{Re}{\{\alpha S_2^\ast N_{p_1,2}\}} + 4\textrm{Re}{\{\alpha S_1^\ast N_{p_1,1}\}}\textrm{Re}{\{\beta S_2^\ast N_{p_2,2}\}} + \cdots\\ & 2\lvert S_2\rvert^2\textrm{Re}{\{\beta S_1^\ast N_{p_2,1}\}} + \lvert N_{p_1, 2}\rvert^2\textrm{Re}{\{\beta S_1^\ast N_{p_2,1}\}} + \lvert N_{p_2, 2}\rvert^2\textrm{Re}{\{\beta S_1^\ast N_{p_2,1}\}} + \cdots\\ & 4\textrm{Re}{\{\beta S_1^\ast N_{p_2,1}\}}\textrm{Re}{\{\alpha S_2^\ast N_{p_1,2}\}} + 4\textrm{Re}{\{\beta S_1^\ast N_{p_2,1}\}}\textrm{Re}{\{\beta S_2^\ast N_{p_2,2}\}} , \end{aligned}$$
where we recall that subscripts 1 and 2 represent $(t)$ and $(t+\tau )$, respectively, whereas $p_1$ and $p_2$ denote the polarization channels. Assuming that the noise in either channel and the signal are uncorrelated, i.e. $\langle S N_{p_j}\rangle =\langle S \rangle \langle N_{p_j}\rangle$, and that $\langle N_{p_j}\rangle = 0$, several terms in Eq. (57) vanish when calculating the ensemble average, similarly to Eq. (38), resulting in
$$\begin{aligned} \langle I_{T_1}I_{T_2}\rangle = &~\langle\lvert S_1\rvert^2\lvert S_2\rvert^2\rangle + \langle\lvert S_1\rvert^2\rangle\langle\lvert N_{p_1,2}\rvert^2\rangle + \langle\lvert S_1\rvert^2\rangle\langle\lvert N_{p_2,2}\rvert^2\rangle + \cdots\\ & \langle\lvert N_{p_1,1}\rvert^2\rangle\langle\lvert S_2\rvert^2\rangle + \langle\lvert N_{p_1,1}\rvert^2\rangle\langle\lvert N_{p_1,2}\rvert^2\rangle + \langle\lvert N_{p_1,1}\rvert^2\rangle\langle\lvert N_{p_2,2}\rvert^2\rangle + \cdots\\ & \langle\lvert N_{p_2,1}\rvert^2\rangle\langle\lvert S_2\rvert^2\rangle + \langle\lvert N_{p_2,1}\rvert^2\rangle\langle\lvert N_{p_1,2}\rvert^2\rangle + \langle\lvert N_{p_2,1}\rvert^2\rangle\langle\lvert N_{p_2,2}\rvert^2\rangle\\ = &~\langle T_1T_2\rangle + \langle T_1\rangle\langle Q_{p_1,2}\rangle + \langle T_1\rangle\langle Q_{p_2,2}\rangle + \cdots\\ & \langle Q_{p_1,1}\rangle\langle T_2\rangle + \langle Q_{p_1,1}\rangle\langle Q_{p_1,2}\rangle + \langle Q_{p_1,1}\rangle\langle Q_{p_2,2}\rangle + \cdots\\ & \langle Q_{p_2,1}\rangle\langle T_2\rangle + \langle Q_{p_2,1}\rangle\langle Q_{p_1,2}\rangle + \langle Q_{p_2,1}\rangle\langle Q_{p_2,2}\rangle. \end{aligned}$$
On the other hand, for the normalization factor of HBT we consider Eq. (55) again to write
$$\begin{aligned} \langle I_T \rangle =&~\langle\lvert S \rvert^2\rangle + \langle\lvert N_{p_1}\rvert^2\rangle + \langle\lvert N_{p_2}\rvert^2\rangle + 2\langle\textrm{Re}{\{\alpha S^\ast N_{p_1}\}}\rangle + 2\langle\textrm{Re}{\{\beta S^\ast N_{p_2}\}}\rangle\\ =&~\langle T \rangle + \langle Q_{p_1}\rangle + \langle Q_{p_2}\rangle \end{aligned}$$
therefore,
$$\begin{aligned} \langle I_{T_1}\rangle\langle I_{T_2}\rangle =&~\langle T_1\rangle\langle T_2\rangle + \langle T_1\rangle\langle Q_{p_1,2}\rangle + \langle T_1\rangle\langle Q_{p_2, 2}\rangle + \langle Q_{p_1,1}\rangle\langle T_2\rangle + \langle Q_{p_1,1}\rangle\langle Q_{p_1,2}\rangle + \cdots\\ & \langle Q_{p_1,1}\rangle\langle Q_{p_2, 2}\rangle + \langle Q_{p_2,1}\rangle\langle T_2\rangle + \langle Q_{p_2,1}\rangle\langle Q_{p_1,2}\rangle + \langle Q_{p_2,1}\rangle\langle Q_{p_2, 2}\rangle. \end{aligned}$$
Replacing Eqs. (58) and (60) in Eq. (56) it is possible to obtain
$$ \tilde{g}^{(2)}_{\textrm{HBT}}(\tau\;>\;0) = \frac{g^{(2)}_{\textrm{HBT}}(\tau\;>\;0) + R_{T_1}^{{-}1} + R_{T_2}^{{-}1} + R_{T_{12}}}{1 + R_{T_1}^{{-}1} + R_{T_2}^{{-}1} + R_{T_{12}}}, $$
where
$$ R_{T_j} = \dfrac{\langle T_j \rangle}{\langle Q_{p_1,j}\rangle + \langle Q_{p_2,j}\rangle} = \frac{\langle T_j \rangle}{\langle Q_j \rangle} $$
is the SNR of the total intensity signal, and after several extensive steps,
$$ R_{T_{12}} = \dfrac{\left(R_{p_{11},1} + R_{p_{12},1} + R_{p_{21},1} + R_{p_{22},1}\right)\left(R_{p_{11},2} + R_{p_{12},2} + R_{p_{21},2} + R_{p_{22},2}\right)}{\left(R_{p_{11},1} + R_{p_{21},1}\right) \left(R_{p_{12},1} + R_{p_{22},1}\right) \left(R_{p_{11},2} + R_{p_{21},2}\right) \left(R_{p_{12},2} + R_{p_{22},2}\right)}, $$
where
$$ R_{p_{jk},i} = \frac{\langle T_{p_j,i}\rangle}{\langle Q_{p_k,i}\rangle} $$
is the polarization cross SNR, which compares the noiseless signal of polarization channel $p_j$ with the noise of polarization channel $p_k$. We can assume, to a good approximation, that $R_{T_1}=R_{T_2}=R_T$ and $R_{p_{jk},1} = R_{p_{jk},2} = R_{p_{jk}}$, which allows us to write
$$ \tilde{g}^{(2)}_{\textrm{HBT}}(\tau\;>\;0) = \frac{g^{(2)}_{\textrm{HBT}}(\tau\;>\;0) + 2R_T^{{-}1} + \dfrac{\left(R_{p_{11}} + R_{p_{12}} + R_{p_{21}} + R_{p_{22}}\right)^2}{\left(R_{p_{11}} + R_{p_{21}}\right)^2 \left(R_{p_{12}} + R_{p_{22}}\right)^2}}{1 + 2R_T^{{-}1} + \dfrac{\left(R_{p_{11}} + R_{p_{12}} + R_{p_{21}} + R_{p_{22}}\right)^2}{\left(R_{p_{11}} + R_{p_{21}}\right)^2 \left(R_{p_{12}} + R_{p_{22}}\right)^2}} $$
and finally, $\tilde {g}^{(2)}_{\textrm {HBT}}(\tau )$ is related to $g^{(2)}_{\textrm {HBT}}(\tau )$ for $\tau\;>\;0$ by
$$ g^{(2)}_{\textrm{HBT}}(\tau\;>\;0) = \tilde{g}^{(2)}_{\textrm{HBT}}(\tau\;>\;0)[1 + G_{T_{12}}] - G_{T_{12}}. $$
with
$$G_{T_{12}} = 2R_T^{{-}1} + \dfrac{\left(R_{p_{11}} + R_{p_{12}} + R_{p_{21}} + R_{p_{22}}\right)^2}{\left(R_{p_{11}} + R_{p_{21}}\right)^2 \left(R_{p_{12}} + R_{p_{22}}\right)^2}.$$
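As a quick consistency check (ours, not part of the original derivation): for balanced channels with $\langle T_{p_1}\rangle = \langle T_{p_2}\rangle$ and $\langle Q_{p_1}\rangle = \langle Q_{p_2}\rangle$, every $R_{p_{jk}}$ reduces to $R_T$ and
$$G_{T_{12}} = \frac{2}{R_T} + \frac{\left(4R_T\right)^2}{\left(2R_T\right)^2\left(2R_T\right)^2} = \frac{2}{R_T} + \frac{1}{R_T^2},$$
which recovers the single-channel HBT correction of Appendix A for equal SNR at both time points.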
For $\tau = 0$ we have $I_{T_1} = I_{T_2}$ and, similarly to Eqs. (58) and (60), it is possible to obtain
$$\begin{aligned} \langle I_{T_1}^2\rangle =&~\langle\lvert S_1\rvert^4\rangle + 2\langle\lvert S_1\rvert^2\lvert N_{p_1,1}\rvert^2\rangle + 2\langle\lvert S_1\rvert^2\lvert N_{p_2,1}\rvert^2\rangle + \langle\lvert N_{p_1,1}\rvert^4\rangle + \langle\lvert N_{p_2,1}\rvert^4\rangle + \cdots\\ & 2\langle\lvert N_{p_1,1}\rvert^2\rangle\langle\lvert N_{p_2,1}\rvert^2\rangle + 4\langle\textrm{Re}^{2}{\{\alpha S_1^\ast N_{p_1,1}\}}\rangle + 4\langle\textrm{Re}^{2}{\{\beta S_1^\ast N_{p_2,1}\}}\rangle\\ =&~\langle T_1^2\rangle + \langle Q_{p_1,1}^2\rangle + \langle Q_{p_2,1}^2\rangle + 2\langle Q_{p_1,1}\rangle\langle Q_{p_2,1}\rangle + 4\langle T_1\rangle\langle Q_{p_1,1}\rangle + 4\langle T_1\rangle\langle Q_{p_2,1}\rangle\\ =&~\langle T_1^2\rangle + 2\langle Q_{p_1,1}\rangle^2 + 2\langle Q_{p_2,1}\rangle^2 + 2\langle Q_{p_1,1}\rangle\langle Q_{p_2,1}\rangle + 4\langle T_1\rangle\langle Q_1\rangle\\ =&~\langle T_1^2\rangle + 2\left(\langle Q_{p_1,1}\rangle + \langle Q_{p_2,1}\rangle\right)^2 - 2\langle Q_{p_1,1}\rangle\langle Q_{p_2,1}\rangle + 4\langle T_1\rangle\langle Q_1\rangle, \end{aligned}$$
where we used $4\langle \textrm{Re}^{2}{\{\alpha S_1^\ast N_{p_j,1}\}}\rangle = 2\langle T_1\rangle \langle Q_{p_j,1}\rangle$, which is proved similarly to Eq. (44). For $\tau = 0$, normalization factor is obtained using Eq. (59) as
$$\begin{aligned} \langle I_{T_1}\rangle^2 =&~\langle T_1\rangle^2 + 2\langle T_1\rangle\langle Q_{p_2,1}\rangle + \langle Q_{p_1,1}\rangle^2 + \langle Q_{p_2,1}\rangle^2 + 2\langle Q_{p_1,1}\rangle\langle Q_{p_2,1}\rangle + 2\langle T_1\rangle\langle Q_{p_1,1}\rangle\\ =&~\langle T_1\rangle^2 + \langle Q_{p_1,1}\rangle^2 + \langle Q_{p_2,1}\rangle^2 + 2\langle Q_{p_1,1}\rangle\langle Q_{p_2,1}\rangle + 2\langle T_1\rangle\langle Q_1\rangle\\ =&~\langle T_1\rangle^2 + \left(\langle Q_{p_1,1}\rangle + \langle Q_{p_2,1}\rangle\right)^2 + 2\langle T_1\rangle\langle Q_1\rangle. \end{aligned}$$
Using Eqs. (62) and (63) it is possible to write
$$ \tilde{g}^{(2)}_{\textrm{HBT}}(\tau=0) = \frac{g^{(2)}_{\textrm{HBT}}(\tau=0) + 4R_T^{{-}1} + 2R_T^{{-}2} - 2R_p^{{-}1}}{1 + 2R_T^{{-}1} + R_T^{{-}2}}, $$
with
$$ R_p = \frac{\langle T_1\rangle}{\langle Q_{p_1,1}\rangle}\frac{\langle T_1\rangle}{\langle Q_{p_2,1}\rangle} = \frac{\langle T_{p_1}\rangle+\langle T_{p_2}\rangle}{\langle Q_{p_1,1}\rangle}\frac{\langle T_{p_1}\rangle+\langle T_{p_2}\rangle}{\langle Q_{p_2,1}\rangle} = \left(R_{p_{11}} + R_{p_{21}}\right)\left(R_{p_{12}} + R_{p_{22}}\right), $$
where $R_T$ and $R_{p_{jk}}$ are defined as previously, and $R_{p_j} = \frac {\langle T_1\rangle }{\langle Q_{p_j,1}\rangle }$. Finally, $\tilde {g}^{(2)}_{\textrm {HBT}}(\tau )$ is related to $g^{(2)}_{\textrm {HBT}}(\tau )$ for $\tau =0$ by
$$ g^{(2)}_{\textrm{HBT}}(\tau=0) = \tilde{g}^{(2)}_{\textrm{HBT}}(\tau=0)[1 + G_{T_{11}}] - 2G_{T_{11}} + 2R_p^{{-}1}, $$
with
$$G_{T_{11}} = \frac{1+2R_T}{R_T^2}.$$

Funding

National Institute of Biomedical Imaging and Bioengineering (K25 EB024595, P41 EB015903); Universidad EAFIT (881-000010).

Acknowledgments

ALP would like to thank BioPM, Amsterdam University Fund and Blue Ribbon (Netherlands) for providing travel support. SRL thanks SPIE Optics and Photonics Education Scholarship for providing travel support.

Disclosures

BEB: Terumo Corporation (P, F); NUP: Terumo Corporation (P).

References

1. D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991). [CrossRef]  

2. J. Fujimoto and W. Drexler, “Introduction to Optical Coherence Tomography,” in Optical Coherence Tomography: Technology and Applications, W. Drexler and J. G. Fujimoto, eds., Biological and Medical Physics, Biomedical Engineering (Springer Berlin Heidelberg, Berlin, Heidelberg, 2008), pp. 1–45.

3. A. Curatolo, B. F. Kennedy, D. D. Sampson, and T. R. Hillman, “Speckle in Optical Coherence Tomography,” Adv. Biophotonics p. 68 (2016).

4. J. Goodman, Speckle Phenomena in Optics: Theory and Applications (Roberts & Company, 2007).

5. A. Mariampillai, B. A. Standish, E. H. Moriyama, M. Khurana, N. R. Munce, M. K. Leung, J. Jiang, A. Cable, B. C. Wilson, I. A. Vitkin, and V. X. D. Yang, “Speckle variance detection of microvasculature using swept-source optical coherence tomography,” Opt. Lett. 33(13), 1530–1532 (2008). [CrossRef]  

6. Y. Jia, O. Tan, J. Tokayer, B. Potsaid, Y. Wang, J. J. Liu, M. F. Kraus, H. Subhash, J. G. Fujimoto, J. Hornegger, and D. Huang, “Split-spectrum amplitude-decorrelation angiography with optical coherence tomography,” Opt. Express 20(4), 4710–4725 (2012). [CrossRef]  

7. Y. Hong, S. Makita, M. Yamanari, M. Miura, S. Kim, T. Yatagai, and Y. Yasuno, “Three-dimensional visualization of choroidal vessels by using standard and ultra-high resolution scattering optical coherence angiography,” Opt. Express 15(12), 7538 (2007). [CrossRef]  

8. J. Barton and S. Stromski, “Flow measurement without phase information in optical coherence tomography images,” Opt. Express 13(14), 5234–5239 (2005). [CrossRef]  

9. A. S. Nam, I. Chico-Calero, and B. J. Vakoc, “Complex differential variance algorithm for optical coherence tomography angiography,” Biomed. Opt. Express 5(11), 3822 (2014). [CrossRef]  

10. J. Lee, W. Wu, J. Y. Jiang, B. Zhu, and D. A. Boas, “Dynamic light scattering optical coherence tomography,” Opt. Express 20(20), 22262–22277 (2012). [CrossRef]  

11. B. K. Huang and M. A. Choma, “Resolving directional ambiguity in dynamic light scattering-based transverse motion velocimetry in optical coherence tomography,” Opt. Lett. 39(3), 521–524 (2014). [CrossRef]  

12. X. Liu, Y. Huang, J. C. Ramella-Roman, S. A. Mathews, and J. U. Kang, “Quantitative transverse flow measurement using OCT speckle decorrelation analysis,” Opt. Lett. 38(5), 805–807 (2013). [CrossRef]  

13. N. Weiss, T. G. van Leeuwen, and J. Kalkman, “Localized measurement of longitudinal and transverse flow velocities in colloidal suspensions using optical coherence tomography,” Phys. Rev. E 88(4), 042312 (2013). [CrossRef]  

14. N. Uribe-Patarroyo, M. Villiger, and B. E. Bouma, “Quantitative technique for robust and noise-tolerant speed measurements based on speckle decorrelation in optical coherence tomography,” Opt. Express 22(20), 24411–24429 (2014). [CrossRef]  

15. N. Uribe-Patarroyo and B. E. Bouma, “Velocity gradients in spatially-resolved laser Doppler flowmetry and dynamic light scattering with confocal and coherence gating,” Phys. Rev. E 94(2), 022604 (2016). [CrossRef]  

16. B. J. Vakoc, G. J. Tearney, and B. E. Bouma, “Real-time microscopic visualization of tissue response to laser thermal therapy,” J. Biomed. Opt. 12(2), 020501 (2007). [CrossRef]  

17. W. C. Y. Lo, N. Uribe-Patarroyo, A. S. Nam, M. Villiger, B. J. Vakoc, and B. E. Bouma, “Laser thermal therapy monitoring using complex differential variance in optical coherence tomography,” J. Biophotonics 10(1), 84–91 (2017). [CrossRef]  

18. W. C. Y. Lo, N. Uribe-Patarroyo, K. Hoebel, K. Beaudette, M. Villiger, N. S. Nishioka, B. J. Vakoc, and B. E. Bouma, “Balloon catheter-based radiofrequency ablation monitoring in porcine esophagus using optical coherence tomography,” Biomed. Opt. Express 10(4), 2067–2089 (2019). [CrossRef]  

19. N. Weiss, T. G. van Leeuwen, and J. Kalkman, “Doppler-based lateral motion tracking for optical coherence tomography,” Opt. Lett. 37(12), 2220–2222 (2012). [CrossRef]  

20. N. Uribe-Patarroyo and B. E. Bouma, “Rotational distortion correction in endoscopic optical coherence tomography based on speckle decorrelation,” Opt. Lett. 40(23), 5518 (2015). [CrossRef]  

21. B. J. Berne and R. Pecora, Dynamic Light Scattering: With Applications to Chemistry, Biology, and Physics (Dover Publications, 2000).

22. K. Schätzel, M. Drewel, and S. Stimac, “Photon Correlation Measurements at Large Lag Times: Improving Statistical Accuracy,” J. Mod. Opt. 35(4), 711–718 (1988). [CrossRef]  

23. K. Schätzel, “Noise on photon correlation data. I. Autocorrelation functions,” Quantum Opt. 2(4), 287–305 (1990). [CrossRef]  

24. S. Makita, K. Kurokawa, Y.-J. Hong, M. Miura, and Y. Yasuno, “Noise-immune complex correlation for optical coherence angiography based on standard and Jones matrix optical coherence tomography,” Biomed. Opt. Express 7(4), 1525–1548 (2016). [CrossRef]  

25. I. Popov, A. S. Weatherbee, and A. Vitkin, “Impact of velocity gradient in Poiseuille flow on the statistics of coherent radiation scattered by flowing Brownian particles in optical coherence tomography,” J. Biomed. Opt. 24(9), 097001 (2019). [CrossRef]  

26. M. Almasian, T. G. van Leeuwen, and D. J. Faber, “OCT Amplitude and Speckle Statistics of Discrete Random Media,” Sci. Rep. 7(1), 14873 (2017). [CrossRef]  

27. N. Mohan and B. Vakoc, “Principal-component-analysis-based estimation of blood flow velocities using optical coherence tomography intensity signals,” Opt. Lett. 36(11), 2068–2070 (2011). [CrossRef]  

28. V. J. Srinivasan, H. Radhakrishnan, E. H. Lo, E. T. Mandeville, J. Y. Jiang, S. Barry, and A. E. Cable, “OCT methods for capillary velocimetry,” Biomed. Opt. Express 3(3), 612 (2012). [CrossRef]  

29. P. Zakharov and F. Scheffold, “Advances in dynamic light scattering techniques,” in Light Scattering Reviews 4: Single Light Scattering and Radiative Transfer, A. A. Kokhanovsky, ed., Springer Praxis Books (Springer, Berlin, Heidelberg, 2009), pp. 433–467.

30. Y. Bromberg, Y. Lahini, E. Small, and Y. Silberberg, “Hanbury Brown and Twiss interferometry with interacting photons,” Nat. Photonics 4(10), 721–726 (2010). [CrossRef]  

31. D. D. Duncan, S. J. Kirkpatrick, and R. K. Wang, “Statistics of local speckle contrast,” J. Opt. Soc. Am. A 25(1), 9–15 (2008). [CrossRef]  

32. N. Weiss, T. G. van Leeuwen, and J. Kalkman, “Simultaneous and localized measurement of diffusion and flow using optical coherence tomography,” Opt. Express 23(3), 3448 (2015). [CrossRef]  

33. J. Lee, H. Radhakrishnan, W. Wu, A. Daneshmand, M. Climov, C. Ayata, and D. A. Boas, “Quantitative imaging of cerebral blood flow velocity and intracellular motility using dynamic light scattering–optical coherence tomography,” J. Cereb. Blood Flow Metab. 33(6), 819–825 (2013). [CrossRef]  

34. T. R. Hillman and D. D. Sampson, “The effect of water dispersion and absorption on axial resolution in ultrahigh-resolution optical coherence tomography,” Opt. Express 13(6), 1860–1874 (2005). [CrossRef]  

35. I. Popov, A. Weatherbee, and I. A. Vitkin, “Statistical properties of dynamic speckles from flowing Brownian scatterers in the vicinity of the image plane in optical coherence tomography,” Biomed. Opt. Express 8(4), 2004–2017 (2017). [CrossRef]  

36. N. G. Ferris, T. M. Cannon, M. Villiger, B. E. Bouma, and N. Uribe-Patarroyo, “Forward multiple scattering dominates speckle decorrelation in whole-blood flowmetry using optical coherence tomography,” Biomed. Opt. Express 11(4), 1947–1966 (2020). [CrossRef]  

37. D. L. Marks, T. S. Ralston, P. S. Carney, and S. A. Boppart, “Inverse scattering for rotationally scanned optical coherence tomography,” J. Opt. Soc. Am. A 23(10), 2433–2439 (2006). [CrossRef]  

38. M. L. Villiger and B. E. Bouma, “Physics of Cardiovascular OCT,” in Cardiovascular OCT Imaging, I.-K. Jang, ed. (Springer International Publishing, Cham, 2015), pp. 23–38.

39. S. Yun, G. Tearney, J. de Boer, and B. Bouma, “Removing the depth-degeneracy in optical frequency domain imaging with frequency shifting,” Opt. Express 12(20), 4822–4828 (2004). [CrossRef]  

40. M. S. Bartlett, “Periodogram analysis and continuous spectra,” Biometrika 37(1-2), 1–16 (1950). [CrossRef]  

41. B. R. White, M. C. Pierce, N. Nassif, B. Cense, B. H. Park, G. J. Tearney, B. E. Bouma, T. C. Chen, and J. F. de Boer, “In vivo dynamic human retinal blood flow imaging using ultra-high-speed spectral domain optical Doppler tomography,” Opt. Express 11(25), 3490 (2003). [CrossRef]  

42. M. Geissbuehler and T. Lasser, “How to display data by color schemes compatible with red-green color perception deficiencies,” Opt. Express 21(8), 9862–9874 (2013). [CrossRef]  

43. P. Kovesi, “Good Colour Maps: How to Design Them,” arXiv:1509.03700 [cs] (2015).

44. M. Zhang, T. S. Hwang, J. P. Campbell, S. T. Bailey, D. J. Wilson, D. Huang, and Y. Jia, “Projection-resolved optical coherence tomographic angiography,” Biomed. Opt. Express 7(3), 816–828 (2016). [CrossRef]  

45. K. C. Zhou, B. K. Huang, H. Tagare, and M. A. Choma, “Improved velocimetry in optical coherence tomography using Bayesian analysis,” Biomed. Opt. Express 6(12), 4796–4811 (2015). [CrossRef]  


Figures (11)

Fig. 1. Autocorrelation analysis in OCT is used to determine the local dynamics of scatterers, which adds functional contrast such as blood flowmetry. The evolution of the complex-amplitude OCT signal reveals the presence of moving scatterers [upper blue curve in (a)], which can be quantified using the first-order ACF $g^{(1)}$ [blue curve in (b)]. When analyzing the OCT intensity signal [lower blue curve in (a)], the second-order ACF $g^{(2)}$ is used for quantification [blue curve in (c)]. In both cases, the decay of the ACF can be related to the scatterer dynamics (including diffusive and translational motion, indicated generically as $v$ in the figure). Noise in the OCT signal [green curves in (a)] is known to affect $g^{(1)}$ and $g^{(2)}$ [green curves in (b) and (c)], most commonly seen as a sharp initial drop in the ACF. Furthermore, $g^{(1)}(\tau =0)=1$ by definition, but the value of $g^{(2)}(\tau =0)$ depends on the contrast of the OCT intensity time series. Statistical bias manifests as a different estimated ACF decay for the same process when using time series with different lengths.
Fig. 2. Illustration of the operation of the averaging strategies used to obtain a more robust estimation of the ACF of the time series in (a), where three temporal members are used for the temporal ensemble average. For the averaging strategies, we consider the goal of having six ensemble members in total: in (b) the number of time samples is higher ($t$-EA averaging), and in (c) and (d) there are time series at two spatial locations $x_1$ and $x_2$, from which the ACF is calculated using (c) $s$-EA and (d) ACC. As we consider only the numerator of $g^{(2)}$ for illustration purposes, $\Pi (\tau )$ represents the product of the elements in the intersection of the time series with and without time lag $\tau$: six elements for (b) and three elements for (a), (c) and (d).
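As a rough illustration of the averaging strategies in Fig. 2, the sketch below computes the HBT-normalized ACF with a plain temporal ensemble average ($t$-EA), a simplified $s$-EA-style estimate (time samples from neighboring pixels pooled before forming the ratio), and an ACC-style estimate (per-pixel ACFs averaged afterwards). It ignores the spatial weights $w_s$ and the exact averaging order used in the paper; array and function names are ours.

```python
import numpy as np

def g2_hbt_tea(ts, tau):
    """t-EA: HBT-normalized ACF at lag tau > 0 from one intensity time series."""
    i1, i2 = ts[:-tau], ts[tau:]
    return np.mean(i1 * i2) / (np.mean(i1) * np.mean(i2))

def g2_hbt_sea(patch, tau):
    """s-EA (simplified): pool time samples from neighbouring pixels
    (rows of `patch`) into one larger ensemble before forming the ratio."""
    i1, i2 = patch[:, :-tau], patch[:, tau:]
    return np.mean(i1 * i2) / (np.mean(i1) * np.mean(i2))

def g2_hbt_acc(patch, tau):
    """ACC: compute the ACF per pixel, then average the correlation values."""
    return np.mean([g2_hbt_tea(ts, tau) for ts in patch])

# toy example: 5 neighbouring pixels, 64 time samples each
rng = np.random.default_rng(0)
patch = rng.exponential(scale=1.0, size=(5, 64))   # speckle-like intensities
print(g2_hbt_sea(patch, 3), g2_hbt_acc(patch, 3))
```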
Fig. 3. Experimentally determined RSD in an M-mode acquisition of a static sample and comparison with the model prediction. The good agreement validates the noise model on which the rest of our results rely. The dashed line corresponds to the expected behavior for a system with a depth-independent noise floor, which our benchtop system follows closely.
Fig. 4. Average ACFs using global and contrast normalization of $\tilde {g}^{(2)}_{\textrm {DLS}}$ and $\tilde {g}^{(2)}_{\textrm {HBT}}$ for a signal with 28 dB SNR using a correlation window size $n_{\omega }$ of (a) 64 and (b) 256 samples. Variability of $\tilde {g}^{(2)}_{\textrm {DLS}}$ and $\tilde {g}^{(2)}_{\textrm {HBT}}$ with (c) global and (d) contrast normalization for $n_{\omega }=64$. Shaded area encloses the average value $\pm$ standard deviation.
Fig. 5. Average ACFs for the three definitions of $g^{(2)}$ at two depths corresponding to 28 dB and 38 dB SNR (see colorbar) using a correlation window size $n_{\omega }$ of (a) 64 and (b) 256 samples. Variability of each definition for (c) 28 dB and (d) 38 dB SNR using $n_{\omega }=64$. Shaded area encloses the mean value $\pm$ standard deviation. To allow a direct comparison, we plot $g^{(2)}_{\textrm {P}}+1$ to match the range of the other ACFs.
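For completeness, the three definitions compared in Fig. 5 can be written down directly. This is a single-window sketch for $\tau>0$ with variable names of our choosing; note that in the figure $g^{(2)}_{\textrm P}+1$ is plotted so that the curves share the same range.

```python
import numpy as np

def g2_all(ts, tau):
    """DLS-, HBT- and Pearson-normalized second-order ACFs at lag tau > 0."""
    i1, i2 = ts[:-tau], ts[tau:]
    g2_dls = np.mean(i1 * i2) / np.mean(i1) ** 2                  # DLS
    g2_hbt = np.mean(i1 * i2) / (np.mean(i1) * np.mean(i2))       # HBT
    g2_p = np.mean((i1 - i1.mean()) * (i2 - i2.mean())) \
           / (i1.std() * i2.std())                                # Pearson
    return g2_dls, g2_hbt, g2_p
```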
Fig. 6. Average ACFs before [$\tilde {g}^{(2)}_{\textrm {HBT,g}}$, left sides] and after noise correction [$g^{(2)}_{\textrm {HBT,g}}$, right sides], calculated using $s$-EA and ACC with $N_z = 8$ for five depths, with SNR values coded in color (see colorbar), using a correlation window size $n_{\omega }$ of (a–b, top) 64 and (c–d, bottom) 256 samples. (e) Variability for $n_{\omega } = 64$. Shaded area encloses the mean value $\pm$ standard deviation.
Fig. 7. Exemplary autocorrelation analysis mimicking a flowmetry application; before [$\tilde {g}^{(2)}_{\textrm {HBT,g}}$, (a), left sides] and after [$g^{(2)}_{\textrm {HBT,g}}$, (b), right sides] noise correction for depths with different SNR values (see colorbar), using $s$-EA (top) and ACC (bottom) averaging with $N_z = 8$. We reproduced a typical flowmetry acquisition with a total of 128 time samples using $N_{\Omega } = 2$ and $n_{\omega } = 64$.
Fig. 8. Average ACFs $\tilde {g}^{(2)}_{\textrm {HBT,g}}$ calculated using combinations of $t$- and $s$-EA averaging for a single depth with 28 dB SNR, with different total ensemble sizes $n_{ts}$ of 95 (a), 160 (b) and 472 (c) at $\tau =30$ A-lines. The corresponding variabilities are in (d), (e) and (f); therein the shaded area encloses the mean value $\pm$ standard deviation. Each specific average uses a combination of $n_{\omega }$ (for $t$-EA) and $N_z$ (for $s$-EA), indicated in the legend in the form $n_{\omega }$;$N_z$. The black curve, used as a bias-free reference ACF, was calculated using $n_{\omega }=512$ and $N_z=80$ and is the same in all cases.
Fig. 9. Lateral speckle size analysis in a rotational endoscopic probe using the decorrelation rate determined from the depth-resolved ACFs and $g^{(2)}_{\textrm {HBT,g}}$. Decorrelation rate as a function of depth for $\tilde {g}^{(2)}_{\textrm {HBT,g}}$ [before noise correction, (a)]; noise-corrected decorrelation rate for $g^{(2)}_{\textrm {HBT,g}}$ (b); and depth-averaged noise-corrected decorrelation rate as a function of rotational speed for different correlation window sizes $n_{\omega }$ [see colorbar, (c)]. Error bars in (c) denote the standard deviation over depth, seen as the fluctuations in (b). The near-constant decorrelation rate (i.e., lateral speckle size in angular units) as a function of depth is only evident after noise correction.
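The decorrelation rate used in Figs. 9 and 10 can be illustrated with a simple threshold-crossing estimator applied to the (noise-corrected) ACF. This is only one possible estimator and not necessarily the exact definition used in the paper; the names below are ours.

```python
import numpy as np

def decorrelation_rate(g2, dt=1.0, level=np.exp(-1)):
    """Estimate a decorrelation rate 1/tau_c from a second-order ACF sampled at
    lags 0, dt, 2*dt, ...: tau_c is the first lag at which the fluctuation
    g2 - 1, normalized to its value at the first nonzero lag, drops below `level`."""
    fluct = np.asarray(g2, dtype=float) - 1.0
    fluct = fluct / fluct[1]                  # normalize to the first nonzero lag
    below = np.nonzero(fluct < level)[0]
    below = below[below > 0]                  # ignore the tau = 0 sample
    if below.size == 0:
        return 0.0                            # no decay within the window
    return 1.0 / (below[0] * dt)              # rate in 1/time (or 1/A-line)
```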
Fig. 10. Improvement of NURD detection in a tomogram of chicken breast obtained with an endoscopic system. A tomogram is presented with overlays of the decorrelation rates calculated before (a) and after (b) noise correction (the luminance is given by the tomogram intensity and the hue of the nearly-perceptually-isoluminant colormap by the decorrelation rate). After noise correction, the decorrelation rate varies much less with depth, as shown in (c) for a region with high rotation speed (violet squares in the tomograms). The standard deviation $(\sigma )$ over the mean $(\mu )$ of the decorrelation rate ($\tau ^{-1}$) decreases with noise correction, as visible in the histogram over all correlation windows (d). Depth-averaged rotation speed as a function of A-line (e). Without noise correction, the rotation speed is positively skewed with respect to the nominal value; after noise correction it fluctuates around the nominal 50 RPS value, as expected.
Fig. 11. Example of $g^{(2)}$-based angiography with and without SNR correction, using $s$-EA and ACC averaging strategies. All views correspond to overlays where the luminance is given by the tomogram intensity and the hue of the nearly-perceptually-isoluminant colormap by the decorrelation rate. B-scan views [(a)–(d)] and en-face views [(e)–(h)]. The dashed lines in the B-scans (en-face views) show the depth (out-of-plane location) corresponding to the en-face views (B-scans). Visually, $s$-EA shows a higher contrast between static tissue and blood vessels, such as in the tissue surrounding the two large blood vessels near the center. Low-SNR tissue regions exhibit some degree of decorrelation, presumably due to noise, which becomes significantly lower after SNR correction; visually, the $s$-EA approach is more robust. Plots at the bottom show $g^{(2)}$ at the six locations A–F marked in (a), with high (A–C) and low (D–F) SNR: full decorrelation (A, D), partial decorrelation (B, E), and static tissue (C, F).
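The overlay visualization described in the captions of Figs. 10 and 11 (tomogram intensity as luminance, decorrelation rate as hue) can be sketched with a plain HSV construction. The paper uses a nearly-perceptually-isoluminant colormap rather than the raw HSV hue wheel, so the function below is only an illustrative approximation with names of our choosing.

```python
import numpy as np
from matplotlib.colors import hsv_to_rgb

def decorrelation_overlay(log_intensity, decorr_rate, rate_max):
    """Overlay in the spirit of Figs. 10 and 11: the (log-scaled) tomogram sets
    the value (luminance) and the decorrelation rate sets the hue.
    Both inputs must have the same shape; returns an RGB image."""
    span = np.ptp(log_intensity) + 1e-12
    value = np.clip((log_intensity - log_intensity.min()) / span, 0.0, 1.0)
    hue = 0.7 * np.clip(decorr_rate / rate_max, 0.0, 1.0)   # limit the hue range
    sat = np.ones_like(hue)
    return hsv_to_rgb(np.stack([hue, sat, value], axis=-1))
```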
