Optica Publishing Group

Interferometric detection of 3D motion using computational subapertures in optical coherence tomography

Open Access

Abstract

Doppler optical coherence tomography (OCT) quantifies axial motion with high precision, whereas lateral motion cannot be detected by a mere evaluation of phase changes. This problem was solved by the introduction of three-beam Doppler OCT, which, however, entails a high experimental effort. Here, we present the numerical analogue to this experimental approach. Phase-stable complex-valued OCT datasets, recorded with full-field swept-source OCT, are filtered in the Fourier domain to limit imaging to different computational subapertures. These are used to calculate all three components of the motion vector with interferometric precision. As known from conventional Doppler OCT for axial motion only, the achievable accuracy exceeds the actual imaging resolution by orders of magnitude in all three dimensions. The feasibility of this method is first demonstrated by quantifying micro-rotation of a scattering sample. Subsequently, a potential application is explored by recording the 3D motion vector field of tissue during laser photocoagulation in ex-vivo porcine retina.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Phase-sensitive or Doppler optical coherence tomography (OCT) is probably the most important functional extension to OCT imaging developed so far [1]. The phases of complex OCT signals are evaluated to visualize motion and micro-structural changes on a nanometer scale. Doppler OCT, however, is not capable of detecting the entire three-dimensional motion vector; instead, it is only sensitive to motion along the optical axis, typically denoted as the z-axis. Motion in the x- and y-direction does not change the optical path length of the backscattered light and therefore has hardly any usable effect on the phases of a complex-valued OCT volume. In special cases, the entire three-dimensional motion can nevertheless be deduced. It is, for example, a justifiable assumption that blood always moves along its vessel. Hence, the three-dimensional velocity vector $\mathbf{v}$ of flowing blood may be estimated by determining just the z-component $v_z$ and the flow direction $\mathbf{v}/\sqrt{v_x^2 + v_y^2 + v_z^2}$. Often, however, the direction of motion is not that readily deducible. One example is the motion of retinal tissue during photocoagulation, which therefore has to be determined using additional methods [2,3]. Speckle tracking is used to determine lateral motion, but is not as precise as a phase evaluation for axial motion [4,5]. Moreover, the widely used assumption that a speckle pattern is “locked” to the micro-structure of a sample and strictly follows any motion the sample might perform is actually not entirely correct. Instead, even a small defocussing error or other aberrations may transform axial motion into apparent lateral motion and therefore lead to false results [6,7].

Recently, a three-beam Doppler OCT technique was introduced for an interferometric detection of 3D motion. The sample is imaged using three different apertures, each sensitive to a certain component of the three-dimensional motion vector [8–10]. If these three directions correspond to linearly independent vectors, the actual motion can be determined without relying on any assumptions about the motion direction. Measurement of blood flow in retinal vessels was successfully demonstrated [8–10]. However, the experimental effort is substantial and a precise synchronization and alignment of the three foci, jointly scanning the sample, is required.

The introduction of full-field swept-source OCT (FF-SS-OCT) increased the possible imaging speed by orders of magnitude [11], resulting in phase stability over an entire volume of a few mm size. Hence, the wave field in the aperture plane is numerically accessible by a two-dimensional Fourier transform in the x- and y-dimension [12]. Afterwards, arbitrary regions of the aperture plane may be selected to include only light that passed the aperture in that particular region in the reconstruction of amplitude and phase of the scattered wave field. Different imaging conditions can be simulated after the data has actually been acquired. Hence, different apertures can be computed to change the apparent direction the sample is imaged from. One OCT dataset, which was acquired at the maximum possible NA, can be processed into any number of computational-subaperture datasets. This way it is possible to successively extract different components of the present motion to finally reconstruct the entire three-dimensional motion vector field. The experimental effort of three-beam Doppler-OCT is hence replaced by an equivalent computational procedure.
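The Fourier-domain masking described above can be illustrated with a short numpy sketch. This is a simplified illustration under stated assumptions, not the authors' implementation: the lateral FFT grid is assumed to span exactly the physical aperture, so the subaperture center and radius are expressed as fractions of k·NA, and the function name is hypothetical.

```python
import numpy as np

def subaperture_field(field_xy, eps, theta):
    """Filter one en-face plane of a phase-stable complex OCT volume
    with a circular computational subaperture of eccentricity eps and
    azimuth theta. Assumes the FFT grid spans exactly the physical
    aperture, i.e., normalized |k_xy| = 1 at the aperture edge."""
    ny, nx = field_xy.shape
    # normalized lateral spatial-frequency coordinates in [-1, 1)
    fx = np.fft.fftfreq(nx, d=0.5)
    fy = np.fft.fftfreq(ny, d=0.5)
    FX, FY = np.meshgrid(fx, fy)
    # subaperture center; the maximum allowed radius is 1 - eps so the
    # subaperture stays inside the physical aperture
    cx, cy = eps * np.cos(theta), eps * np.sin(theta)
    mask = (FX - cx) ** 2 + (FY - cy) ** 2 <= (1.0 - eps) ** 2
    # mask in the Fourier plane, then return to the image plane
    return np.fft.ifft2(np.fft.fft2(field_xy) * mask)
```

Applying this function with different (eps, theta) to the same acquired data yields the different computational-subaperture reconstructions discussed in the text.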

2. Theory

2.1. Interferometric detection of lateral motion by computational subapertures

Given a three-dimensional, complex-valued representation of the light backscattered from the tissue volume, the corresponding wave field in the back focal or Fourier plane is calculated by a two-dimensional Fourier transform along the x- and y-direction. An example of such a Fourier transform is provided in Fig. 1(a). The spatial frequency spectrum $(k_x, k_y)$ transferred by the imaging optics is limited to $|(k_x, k_y)| \le k\,\mathrm{NA}$ by the physical aperture. By masking, it is possible to choose arbitrary sub-regions from inside this aperture and narrow down the detection of scattered light to the fraction that passed the Fourier plane in that specific subaperture. This subaperture may be chosen of arbitrary shape, size, and position. Assuming that all subapertures $S(r, \epsilon, \theta)$ are circular, they are specified by their radius $r$, their eccentricity $\epsilon$, and their azimuth angle $\theta$ between the x-axis and the line connecting the center of the physical aperture to the center of the computational subaperture. In such a subaperture the spatial frequencies $k_x = \epsilon k\,\mathrm{NA}\cos\theta$ and $k_y = \epsilon k\,\mathrm{NA}\sin\theta$ are detected, while $k_z$ is given by $k_z = \sqrt{k^2 - k_x^2 - k_y^2} = k\sqrt{1 - \epsilon^2\mathrm{NA}^2}$. The corresponding wavenumber vector of the detected light therefore equals

$$\mathbf{k}_\mathrm{out}(\epsilon, \theta) = k \begin{pmatrix} \epsilon\,\mathrm{NA}\cos\theta \\ \epsilon\,\mathrm{NA}\sin\theta \\ \sqrt{1 - \epsilon^2\mathrm{NA}^2} \end{pmatrix}. \tag{1}$$
The inter-volume phase difference induced by a motion of $\Delta\mathbf{r} = (\Delta x, \Delta y, \Delta z)^T$ between the acquisition of the volumes is determined by the phase change of the illumination with $\mathbf{k}_\mathrm{in} = (0, 0, -k)^T$ and the phase change of the detection along $\mathbf{k}_\mathrm{out}$ through the computational subaperture:
$$\Delta\phi = n\,(\mathbf{k}_\mathrm{out} - \mathbf{k}_\mathrm{in}) \cdot \Delta\mathbf{r} = nk \left[ \begin{pmatrix} \epsilon\,\mathrm{NA}\cos\theta \\ \epsilon\,\mathrm{NA}\sin\theta \\ \sqrt{1 - \epsilon^2\mathrm{NA}^2} \end{pmatrix} \cdot \begin{pmatrix} \Delta x \\ \Delta y \\ \Delta z \end{pmatrix} + \Delta z \right] \tag{2}$$
Considering the difference $\Delta_\theta^{\theta+\pi}\Delta\phi$ between inter-volume phase changes reconstructed in two opposite subapertures $S(\epsilon, \theta)$ and $S(\epsilon, \theta + \pi)$, and the symmetry properties of sine and cosine, leads to:
$$\Delta_\theta^{\theta+\pi}\Delta\phi = 2nk \begin{pmatrix} \epsilon\,\mathrm{NA}\cos\theta \\ \epsilon\,\mathrm{NA}\sin\theta \\ 0 \end{pmatrix} \cdot \begin{pmatrix} \Delta x \\ \Delta y \\ \Delta z \end{pmatrix}. \tag{3}$$
Hence, the high sensitivity of the inter-volume phase changes is now shifted from the z-axis to an arbitrary direction perpendicular to z. Particularly suitable for further calculations are the x-direction (θ = 0) and the y-direction (θ = π/2), at which one of sine and cosine vanishes while the other equals one:
$$\Delta_0^\pi\Delta\phi = 2nk\epsilon\,\mathrm{NA}\,\Delta x \quad\Longleftrightarrow\quad \Delta x = \frac{\Delta_0^\pi\Delta\phi}{2nk\epsilon\,\mathrm{NA}} \tag{4}$$
$$\Delta_{\pi/2}^{3\pi/2}\Delta\phi = 2nk\epsilon\,\mathrm{NA}\,\Delta y \quad\Longleftrightarrow\quad \Delta y = \frac{\Delta_{\pi/2}^{3\pi/2}\Delta\phi}{2nk\epsilon\,\mathrm{NA}} \tag{5}$$
Axial motion is calculated using a centered aperture ($\epsilon = 0$) and the well-known relation
$$\Delta z = \frac{\Delta\phi}{2nk}. \tag{6}$$
Thereby, a 3D motion vector field may be detected with interferometric precision in every direction.
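Equations (4)–(6) are simple rescalings of the measured phase differences. The following sketch (illustrative helper functions of our own naming, not the authors' code) evaluates them with example parameters taken from later in the text (841 nm central wavelength, n = 1.33, NA = 0.2, ∊ = 1/2):

```python
import numpy as np

def lateral_displacement(dphi_theta, dphi_theta_pi, n, k, eps, na):
    """Eq. (4)/(5): difference of inter-volume phase changes in two
    opposite subapertures, rescaled into a lateral displacement."""
    return (dphi_theta - dphi_theta_pi) / (2 * n * k * eps * na)

def axial_displacement(dphi, n, k):
    """Eq. (6): conventional Doppler relation, full centered aperture."""
    return dphi / (2 * n * k)

# example parameters from the text
k = 2 * np.pi / 841e-9      # vacuum wavenumber at 841 nm
n, na, eps = 1.33, 0.2, 0.5

# phase changes of +/-0.1 rad in opposite subapertures -> about 100 nm
dx = lateral_displacement(0.1, -0.1, n, k, eps, na)
```

A phase-change difference of 0.2 rad between opposite subapertures thus corresponds to a lateral displacement of roughly 0.1 µm at these parameters, far below the optical resolution.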


Fig. 1 Principle of computational-subaperture processing for a determination of lateral motion. (a) Lateral Fourier transform of phase-stable complex-valued OCT data showing a superposition of the physical aperture and four computational subapertures. (b) Relation between detection angle and position of the subaperture in the Fourier plane: The sample is illuminated by a plane wave $\mathbf{k}_\mathrm{in}$ of wavenumber k propagating along the z-direction. The detection of scattered light is computationally limited to light that passed the aperture in a certain sub-region specified by its radius $r$, its eccentricity $\epsilon$, and its azimuthal angle $\theta$.


2.2. Configuration of subapertures

While axial motion may be determined most precisely by using the full available aperture, the optimum number, size, and arrangement of computational subapertures for determining lateral motion is not obvious. In principle, three subapertures whose detection directions are linearly independent vectors are sufficient. This sufficient number, however, is not necessarily the optimum number of subapertures.

Using computational instead of physical subapertures changes some boundary conditions of the problem of finding the optimum configuration. First of all, any number of subapertures may be computed with little extra effort, whereas each additional physical aperture requires additional hardware. Moreover, light detected by one physical subaperture is lost for all other physical subapertures, so a set of apertures has to be found that provides optimum results for all three directions simultaneously. Using computational subapertures, each component can instead be determined by a set of apertures that provides the best results for just that direction. Hence, axial motion may be determined by evaluating the full physical aperture. Afterwards, a set of apertures has to be found for an optimum determination of motion in the x-direction. For symmetry reasons, the optimum configuration for the y-direction will be a rotated copy of those subapertures. This suggests that the optimum number of computational subapertures is at least four, plus the full physical aperture for the determination of axial motion. Determining the optimum size and position of these four or more subapertures without simplifying assumptions is not trivial. To simplify the discussion, only circular subapertures are considered, although in principle any shape would be possible.

First of all, it appears likely that large subapertures should be favorable. However, a computational subaperture cannot exceed the border of the physical aperture. Hence, the maximum possible radius $r_\mathrm{max}$ of a subaperture with eccentricity $\epsilon$ equals $r_\mathrm{max} = k\,\mathrm{NA}(1 - \epsilon)$. The amount of detected light is proportional to the area of the subaperture, so it decreases like $r_\mathrm{max}^2 \propto (1 - \epsilon)^2$ when the eccentricity is increased. The number of independent pixels of the OCT image, however, is reduced by the same factor. Therefore, the number of detected photons per pixel in the resulting image, and hence the signal-to-noise ratio (SNR), is not changed and does not depend on the size of a subaperture. It can be shown that the phase noise $\sigma_\phi$ of OCT data equals $1/\sqrt{2\,\mathrm{SNR}}$ [13,14]. According to Gaussian error propagation, the phase change between two OCT volumes is therefore superimposed with a noise of $\sigma_{\Delta\phi} = 1/\sqrt{\mathrm{SNR}}$. To determine the lateral motion Δx or Δy, respectively, inter-subaperture differences of inter-volume phase changes are calculated, which increases the noise again by a factor of $\sqrt{2}$ to

$$\sigma_{\Delta_0^\pi\Delta\phi} = \sigma_{\Delta_{\pi/2}^{3\pi/2}\Delta\phi} = \sqrt{\frac{2}{\mathrm{SNR}}}.$$
The noise of the phase differences is hence independent of the chosen subaperture eccentricity. The prefactor used in Eqs. (4) and (5) to rescale the phase differences $\Delta_0^\pi\Delta\phi$ and $\Delta_{\pi/2}^{3\pi/2}\Delta\phi$ into lateral displacements Δx and Δy (e.g., in nanometers), however, depends on the eccentricity. The noise of the displacement in lateral directions is then equal to
$$\sigma_{\Delta x} = \sqrt{\frac{2}{\mathrm{SNR}}}\;\frac{1}{2kn\epsilon\,\mathrm{NA}}. \tag{7}$$
For the determination of the optimum eccentricity of computational subapertures, however, it is not sufficient to consider only the noise of the determined lateral motion. This motion can in fact not be determined at full resolution δx0 and δy0, i.e., not for every individual voxel of the original OCT data set. Depending on the size of the subaperture, the number of independent voxels and therefore the lateral resolution δx and δy is correspondingly reduced. For a fair comparison of different eccentricities this has to be taken into account. To combine the noise $\sigma_{\Delta x}$ and the lateral resolution δx into one quantity, we make use of the fact that the noise of a highly resolved representation of lateral motion can be reduced by averaging N × N adjacent pixels. This reduces the noise as well as the lateral resolution by a factor of $\sqrt{N^2} = N$. Therefore, we convert the present noise at reduced resolution into a corresponding effective noise $\sigma_{\Delta x}^\mathrm{eff}$ at full resolution δx0. The effective noise $\sigma_{\Delta x}^\mathrm{eff}$ of the (hypothetical) motion field detected at the original resolution is a factor of $\delta x/\delta x_0 = 1/(1 - \epsilon)$ higher than the actual noise $\sigma_{\Delta x}$. Hence, it equals
$$\sigma_{\Delta x}^\mathrm{eff} = \sqrt{\frac{2}{\mathrm{SNR}}}\;\frac{1}{2kn\epsilon\,\mathrm{NA}}\;\frac{1}{1 - \epsilon}. \tag{8}$$
The first factor describes the noise of the inter-subaperture inter-volume phase changes; the second factor accounts for the prefactor used to rescale phase changes into lateral translation according to Eqs. (4) and (5). Finally, the third factor takes the reduced lateral resolution into account.

Given the above simplifying assumptions, the minimum noise is realized by subapertures at $\epsilon = 1/2$. Hence, the highest accuracy in determining lateral motion is reached if two opposite, non-overlapping subapertures with half the radius of the original aperture are used, as shown in Fig. 1(a). Assuming, e.g., a very high SNR of 60 dB, an NA of 0.2, a refractive index of n = 1.33, and a wavenumber of k = 2π/841 nm, lateral motion below 1 nm may theoretically be detected. This is a factor of 4000 better than the actual lateral resolution of 4 μm at these imaging parameters. Even at a realistic SNR for high-speed retinal imaging of 20 dB, lateral motion below 100 nm may be detected, which is still a factor of 40 better than the lateral resolution. Due to the reduced aperture size, the lateral resolution of these motion vector fields would be half of the original resolution of the acquired OCT dataset.
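The eccentricity dependence of Eq. (8) can be checked numerically; the following sketch (our own, using the 60 dB example parameters from the text) scans the effective noise over ∊ and confirms that the minimum of the 1/(∊(1 − ∊)) dependence lies at ∊ = 1/2:

```python
import numpy as np

n, na = 1.33, 0.2
k = 2 * np.pi / 841e-9            # wavenumber at 841 nm
snr = 10 ** (60 / 10)             # 60 dB SNR, example from the text

eps = np.linspace(0.05, 0.95, 181)
sigma_dx = np.sqrt(2 / snr) / (2 * k * n * eps * na)   # Eq. (7)
sigma_eff = sigma_dx / (1 - eps)                       # Eq. (8)

best = eps[np.argmin(sigma_eff)]  # minimum of 1/(eps*(1-eps)) at 1/2
```

At ∊ = 1/2 the noise of Eq. (7) evaluates to well below a nanometer for these parameters, in line with the sub-nanometer estimate given above.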

The configuration of computational subapertures may not only be modified to optimize the signal-to-noise ratio: As in conventional Doppler or phase-sensitive OCT, strong motion may induce phase changes below −π or above +π. This causes phase wrapping into the interval between −π and +π, and some effort may be necessary to reverse this effect by phase-unwrapping algorithms. Without such a procedure, the maximum detectable lateral motion between two consecutively acquired datasets is limited to the range

$$\Delta x_\mathrm{max/min} = \pm\frac{\pi}{2nk\epsilon\,\mathrm{NA}}. \tag{9}$$
It is hence a factor of $(\epsilon\,\mathrm{NA})^{-1}$ higher than the maximum detectable axial motion. The maximum detectable lateral motion may not only be tuned by the imaging speed, as is the case for axial motion, but also by choosing a proper eccentricity of the subapertures.
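The wrap-free ranges can be made concrete with the example parameters used above (our own numeric sketch of Eq. (9) and the axial analogue from Eq. (6)):

```python
import numpy as np

n, na, eps = 1.33, 0.2, 0.5
k = 2 * np.pi / 841e-9

dx_max = np.pi / (2 * n * k * eps * na)   # Eq. (9): lateral wrap-free range
dz_max = np.pi / (2 * n * k)              # axial wrap-free range, from Eq. (6)
ratio = dx_max / dz_max                   # equals 1/(eps*NA)
```

At ∊ = 1/2 and NA = 0.2 the lateral range exceeds the axial one by exactly the factor (∊NA)⁻¹ = 10, illustrating why lateral phase wrapping is a much weaker constraint.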

3. Material and methods

3.1. Setup

The FF-SS-OCT setup used for the experiments is identical to the setup described in detail in previous publications [11,15]. It is based on the Mach-Zehnder-type interferometer shown in Fig. 2. The light of a swept laser source (Superlum Broadsweeper BS 840-1, 51 nm sweep range, 841.5 nm central wavelength) is split into a sample illumination and a reference wave. The sample illumination beam contains 5.2 mW of radiant power. The sample is illuminated by a collimated beam. Light backscattered by the sample is limited by an adjustable aperture in the back focal plane and then imaged onto the sensor of a high-speed camera (FASTCAM SA-Z, Photron), where it is superimposed with the reference wave. The central 896 × 368 pixels of the camera are read out at a frame rate of 60 kHz, acquiring 512 images during one wavelength sweep. That way, an entire OCT volume is acquired in about 8.5 ms, corresponding to an effective A-scan rate of 39 MHz. The lateral phase stability required for the computation of subapertures is provided by the high lateral parallelization of the data acquisition [11]. Motion-induced axial phase errors cannot be excluded. They are, however, of low polynomial order and equal in all A-scans of a volume. Therefore, they can effectively be estimated and corrected by an optimization of image quality [11].


Fig. 2 Full-field swept-source OCT setup used for the phase-stable acquisition of volume datasets. The sample is positioned in a cuvette containing saline solution. It is illuminated with a collimated beam and then imaged onto the sensor of a high-speed camera, where it is superimposed with a reference wave. An Arduino Uno microcontroller board synchronizes the swept laser source, the high-speed camera, and a green laser diode used to heat a 200 μm sized spot of the sample.


3.2. Measuring in-plane rotation for a proof of principle

To demonstrate the sensitivity of lateral motion detection, a cardboard sample was rotated by 0.2° between the acquisition of two consecutive OCT volumes using a manually operated rotation stage. At the border of the field of view, 2 mm from the center of rotation, this corresponds to a motion of a few micrometers. A rotational movement was chosen because it contains motion in all possible lateral directions. Moreover, the magnitude of the motion is zero at the center of rotation and increases linearly with the distance from the center. Mathematically, the in-plane rotational motion of a scatterer located at a distance ρ from the center of rotation and at an initial azimuth angle ϕ0 is described by

$$\mathbf{r}(t) = \begin{pmatrix} x(t) \\ y(t) \\ z(t) \end{pmatrix} = \begin{pmatrix} \rho\cos(\omega t + \phi_0) \\ \rho\sin(\omega t + \phi_0) \\ z_0 \end{pmatrix} \tag{10}$$
The velocity $\mathbf{v}(t)$ of this particular scatterer is the temporal derivative $\mathrm{d}/\mathrm{d}t$ of its position $\mathbf{r}(t)$:
$$\mathbf{v}(t) = \frac{\mathrm{d}\mathbf{r}(t)}{\mathrm{d}t} = \begin{pmatrix} -\rho\omega\sin(\omega t + \phi_0) \\ \rho\omega\cos(\omega t + \phi_0) \\ 0 \end{pmatrix} = \begin{pmatrix} -\omega y(t) \\ \omega x(t) \\ 0 \end{pmatrix} \tag{11}$$
Hence, the lateral motion in the x-direction is a linear gradient in the y-direction and vice versa. Axial motion is expected to be zero if the axis of rotation is aligned exactly along the optical axis. Using subapertures, the magnitude and direction of the sample motion were determined from the phase and compared to the theoretically expected values.
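For a small rotation angle, integrating Eq. (11) over the rotation gives the expected displacement field directly; the following sketch (our own, for a 0.2° rotation over a ±2 mm field of view) illustrates the gradients described above:

```python
import numpy as np

phi = np.deg2rad(0.2)                 # intended rotation angle
x = np.linspace(-2e-3, 2e-3, 5)       # lateral positions, meters
X, Y = np.meshgrid(x, x)

# small-angle limit of Eq. (11): dx = -phi*y, dy = phi*x
dx, dy = -phi * Y, phi * X
m = np.hypot(dx, dy)                  # displacement magnitude
```

The magnitude m vanishes at the center of rotation and grows linearly with the distance from it, reaching a few micrometers 2 mm from the center, consistent with the numbers quoted above.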

3.3. Retinal photocoagulation ex-vivo

To explore a potential clinical or scientific application, the setup was additionally equipped with a laser for the irradiation of ex-vivo porcine retina (see Fig. 2). The green diode laser (LDM-520-1000-C, Lasertack, 520 nm wavelength, 1 W output power) was coupled into the sample illumination arm via a cold mirror (50 × 50 mm, 45° AOI, Cold Mirror, Edmund Optics). A neutral density filter was used to reduce the laser power to 20 mW on the retina. Synchronous triggering of the photocoagulation laser, the swept laser source for OCT imaging, and the high-speed camera was done by an Arduino Uno microcontroller board. A long-pass filter (4″ × 5″, Optical Cast Plastic IR Longpass Filter, Edmund Optics) placed directly in front of the camera sensor prevented stray light of the coagulation laser from influencing the OCT imaging.

To ensure reproducible imaging conditions for ex-vivo porcine retina, a specially designed measuring cuvette was used. The enucleated porcine eye was cut open equatorially. After removal of the vitreous body, it was stabilized using an aluminum mount and positioned in a cuvette containing physiological saline solution (Ringer's solution, B. Braun). Imaging was then performed through an achromatic lens embedded in the wall of the cuvette. A 1300 μm × 540 μm sized field of the retina was imaged onto the sensor of the high-speed camera at a numerical aperture (NA) of 0.285 (±0.005). OCT volumes were acquired at a volume rate of 100 Hz. The photocoagulation laser was switched on for 200 ms during a total measurement time of 500 ms. The diameter of the spot irradiated by the coagulation laser was approximately 200 μm with a Gaussian-like beam profile.

3.4. Data processing

Acquired data were processed as described in [11,15]. The actual OCT reconstruction consists of a Fourier transform of the 3D data set I(x, y, k) along the wavenumber axis. Axial phase errors caused by motion and group velocity dispersion are corrected by multiplication with a polynomial phase factor; the coefficients were determined by an optimization of image quality [11]. In addition, artifacts due to reflections were identified and filtered by their directional backscattering. Biological tissue, such as the retina, or other scattering samples show predominantly isotropic backscattering: scattered light is distributed more or less evenly over all solid angles. Specular reflections at optical interfaces, however, are confined to a small, well-defined range of angles. Hence, they are highly localized in Fourier space, i.e., their entire intensity is concentrated in a few spatial frequencies. Additionally, specular reflections from optical interfaces reach the camera at a fixed optical path length, so the corresponding artifact is reconstructed at a fixed depth. The artifacts are hence highly localized in lateral Fourier space and in axial position space [16]. The corresponding data points inside the (kx, ky, z)-dataset are first identified by their extremely high amplitude and then masked out to effectively eliminate the artifacts. This has no significant effect on the imaging of scattering structures, because their signal is distributed over the entire Fourier plane and the masked fraction is negligibly small (≈ 10−4). Then the dominant layers, i.e., the retinal pigment epithelium (RPE) of the porcine eye or the surface of the cardboard sample, respectively, were detected and aligned to a constant depth by shifting each A-scan accordingly using the Fourier shift theorem. The subsequent correction of aberrations was again based on an optimization of image quality and is also described in detail in [11].
To render the aberration correction for the porcine retina as precise as possible, the OCT volume was split into 8 × 4 tiles $T_{i,j}$ of 90 × 112 px each. Aberrations were then determined for every third depth layer l of each tile $T_{i,j}$ independently by optimizing the coefficients $C_{i,j,l,n,m}$ of the Zernike polynomials $Z_n^m$ up to the 6th order (n = 6, 25 degrees of freedom) to allow for local and in-depth variations of the aberrations. For each layer l and tile (i, j), the optimization provided a set of 25 Zernike coefficients. Individual correction in each sub-volume resulted in phase discontinuities at the borders between them. Therefore, the obtained coefficients $C_{i,j,l,n,m}$ of each Zernike polynomial were then fitted by a three-dimensional polynomial of second order in both lateral dimensions i, j and first order in depth l to ensure a correction as precise as possible while maintaining smooth tile boundaries. Subaperture processing and motion reconstruction were then applied to the entire volume instead of each tile separately.
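The tile-wise coefficient smoothing described above can be sketched as a linear least-squares fit. This is a simplified illustration for a single Zernike coefficient; the exact polynomial basis (here: constant, linear, and second-order lateral terms, each optionally multiplied by depth) is our assumption, not specified in detail by the text:

```python
import numpy as np

def smooth_coefficients(i, j, l, c):
    """Fit the per-tile, per-depth values c = C_{i,j,l} of one Zernike
    coefficient by a polynomial of 2nd order in the lateral tile
    indices (i, j) and 1st order in the depth index l, and return the
    smoothed values. Basis choice is illustrative."""
    A = np.stack([np.ones_like(i), i, j, i * j, i**2, j**2,
                  l, i * l, j * l], axis=1).astype(float)
    coef, *_ = np.linalg.lstsq(A, c, rcond=None)
    return A @ coef   # smoothed coefficient at each (i, j, l)
```

Evaluating the fitted polynomial instead of the raw per-tile coefficients removes the phase discontinuities at tile borders while preserving the slow spatial variation of the aberrations.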

The main purpose of this aberration correction was not to improve the lateral resolution – it was instead mandatory to obtain valid results in the subsequent subaperture phase analysis, because even a slight defocus may transform axial motion into apparent lateral motion [7].
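The reflection-artifact filtering described in the processing steps above can be sketched as a simple amplitude threshold in the (kx, ky, z) representation. The concrete threshold rule (mean plus several standard deviations) is our assumption; the text only states that the points are identified by their extremely high amplitude:

```python
import numpy as np

def suppress_reflection_artifacts(data_kxkyz, n_sigma=6.0):
    """Mask data points of extreme amplitude in the (kx, ky, z)
    representation, where specular reflections are highly localized.
    Threshold rule (mean + n_sigma * std) is an illustrative choice."""
    amp = np.abs(data_kxkyz)
    threshold = amp.mean() + n_sigma * amp.std()
    mask = amp > threshold
    out = data_kxkyz.copy()
    out[mask] = 0
    return out, mask.mean()   # filtered data, masked fraction
```

Because the scattering signal is spread over the entire Fourier plane, the masked fraction stays tiny (the text quotes ≈ 10⁻⁴), so the imaging of scattering structures is essentially unaffected.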

Lateral motion was determined by numerically limiting the reconstruction to the four computational subapertures shown in Fig. 1(a). The pairwise calculated differences of phase changes in opposite subapertures, $\Delta_0^\pi\Delta\phi$ and $\Delta_{\pi/2}^{3\pi/2}\Delta\phi$, were then processed according to Eqs. (4) and (5) to determine the motion in x- and y-direction with interferometric precision. Axial motion was determined according to Eq. (6) using the full physical aperture. Motion between consecutively acquired volumes was detected by evaluating the corresponding phase differences. For the porcine retina, a temporal integration of the inter-volume motion provided the temporal evolution of the 3D displacement. Phase unwrapping was necessary for the axial direction only.
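The per-volume evaluation described above reduces to two elementary operations, sketched here with illustrative function names of our own: the inter-volume phase change is taken from the conjugate product of two phase-stable complex reconstructions, and the displacement trace follows by temporal integration of the per-volume motion:

```python
import numpy as np

def intervolume_phase_change(f0, f1):
    """Phase change between two phase-stable complex reconstructions
    in the same (sub)aperture; np.angle of the conjugate product
    returns the result wrapped to (-pi, pi]."""
    return np.angle(f1 * np.conj(f0))

def displacement_trajectory(per_volume_motion):
    """Temporal integration of inter-volume motion (N x 3 array of
    (dx, dy, dz) per volume pair) into a cumulative 3D displacement."""
    return np.cumsum(per_volume_motion, axis=0)
```

The wrapping of the conjugate-product phase to (−π, π] is exactly why unwrapping becomes necessary once the inter-volume motion exceeds the ranges discussed in Section 2.2.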

4. Results

4.1. Proof of principle

En-face representations of the two acquired OCT volumes of the cardboard sample before and after a rotation by 0.2° are shown in Fig. 3. The three-dimensional motion calculated from these two OCT volumes is shown in Fig. 4. As expected, differently oriented gradients of a few micrometers were observed in x- and y-direction (Figs. 4(d)–4(f)). In Fig. 4(i) the micro-motion is visualized in a vector-based representation.


Fig. 3 OCT images of the rotated cardboard sample (en-face representation) and rotation profile. The sample was intended to be rotated by 0.2° between the first (a) and second (b) exposure. The rotation profile (c) corresponds to a rotation by 0.22°.



Fig. 4 Three-dimensional motion of the rotating cardboard sample. The lateral x- and y-components (d,e) were calculated by determining the difference of phase changes in opposite subapertures (a,b). The z-component (f) is given by the full-aperture inter-volume phase change (c). The resulting Cartesian representation of the x-, y-, and z-components of the motion vector field was transformed into the spherical components, i.e., magnitude m (g), azimuthal angle φ (h), and polar angle ϑ. Due to the in-plane motion, the latter is very close to zero in the whole field of view and therefore not shown. The in-plane motion is additionally visualized by the corresponding vector field (i).


4.2. Exploring a clinical application

To explore a potential clinical application, tissue changes during photocoagulation in ex-vivo porcine retina were investigated. A weak coagulation was induced by a laser power of 20 mW and an exposure time of 200 ms. Figure 5 shows the axial displacement Δz in a vertical cross-section through the center of the lesion. In the irradiated area, and slightly beyond, an upward motion of the entire neuronal retina is observed. Additionally, lateral motion was captured using the phase changes in the different subapertures. Figure 6 shows the measured three-dimensional motion of the nerve fiber layer in an en-face representation. Before the irradiation started, no motion was observed. The displacement in x-direction vanished on a line parallel to the y-axis through the center of the lesion. At larger distances from the lesion, almost no motion was observed either. To the right of the lesion, i.e., at positive x-coordinates, a positive, rightward motion was observed. The strongest motion was found at a distance of 100 μm from the center of the illuminated area. A similar but negative distribution is observed on the left side of the lesion. The y-component shows a corresponding behavior. The spatial distribution of the motion in z-direction is in accordance with the profile observed in the B-scan representation, where the strongest motion was observed in the center of the irradiated area.


Fig. 5 Tomographic representation of the detected axial motion of retinal tissue before and during the irradiation, overlaid with an averaged OCT B-scan of the porcine retina.



Fig. 6 En-face representation of the detected axial and lateral motion of the nerve fiber layer before and during the irradiation. The irradiated area is marked by the dashed circle.


5. Discussion

5.1. Rotating cardboard sample

The lateral motion observed in x-direction is a linear gradient in y-direction and vice versa, which is the expected behaviour for an in-plane rotation. As mentioned above, the small linear gradient in the axial motion is probably due to a small misalignment of the axis of rotation with respect to the optical axis. An angle of about 2.5° between those two axes might explain the linear increase of the axial motion by about 150 nm per mm over the entire field of view. This is in the expected range of precision achievable by a manual alignment of the sample stage based on visual estimation. The fact that axial motion is not zero in the center of rotation indicates that there is at least an additional non-rotational motion component of about 100 nm in axial direction. This might be due to insufficient mechanical stability of the sample stage or its connection to the setup.

For an evaluation of the observed lateral motion in x- and y-direction, it was transformed into the spherical coordinates of magnitude $m = \sqrt{\Delta x^2 + \Delta y^2}$ and direction $\varphi = \tan^{-1}(\Delta y/\Delta x)$, see Figs. 4(g) and 4(h). At the left border of the field of view, a motion of only 5 μm was observed, which is on the order of the lateral resolution of the imaging optics. However, even much smaller motion in the proximity of the center of rotation is still captured precisely, although it is far below the actual optical resolution. The noise in the lateral motion determination was approximately Gaussian distributed with a standard deviation of 220 nm at a minimum imaging SNR of 20 dB. The above theoretical considerations (see Eq. (8)) predicted a standard deviation of 190 nm. The additional noise is probably due to a decorrelation of the two speckle patterns caused by the rotation. This is consistent with the observation that the noise increases with the distance to the center of rotation. To further analyze the detected motion vector field, the angle of rotation was determined by evaluating the slope of the rotation profile, i.e., the linear relationship between the local magnitude m at a certain position and the distance d between this position and the center of rotation. The center of rotation was determined by the position of the vortex in Fig. 4(h). By fitting a linear function to the rotation profile shown in Fig. 3(c), the slope m/d is determined to be 3.84 × 10−3. This is equivalent to a rotation by (180° · m)/(πd) = 0.22°. The average deviation of a single data point from the fitted rotation profile equals 24 nm. The much larger 10% deviation from the intended rotation of 0.20° is presumably not due to an inaccurate measurement, but more likely due to the limited precision of the manually operated rotation stage. This result demonstrates that lateral micro-motion can indeed be measured with high accuracy using the Doppler phase in computational subapertures.
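The conversion from the fitted slope of the rotation profile to a rotation angle can be verified with a short, noise-free sketch (our own; the actual evaluation fits measured magnitudes, here replaced by an ideal linear profile at the reported 0.22°):

```python
import numpy as np

d = np.linspace(0.1e-3, 2e-3, 50)     # distance from rotation center (m)
angle = np.deg2rad(0.22)              # rotation angle reported in the text
m = angle * d                         # ideal, noise-free rotation profile

slope = np.polyfit(d, m, 1)[0]        # fitted slope m/d (dimensionless)
angle_deg = np.degrees(slope)         # slope in radians -> degrees
```

A slope of 3.84 × 10⁻³ indeed corresponds to a rotation of 0.22°, matching the numbers quoted above.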

5.2. 3D motion during photocoagulation

The tomographic representation of the displacement of retinal tissue during photocoagulation generated by scanning OCT in previous studies [2,3,17] could be reproduced in measurements with our FF-SS-OCT system. Due to thermal effects, the nerve fiber layer is displaced by 4.8 μm (see Fig. 5). By the end of the measurement, the displacement had reversed to 2.2 μm. Judged by visual impression alone, the quality of the recorded phase signals appears even better than the results previously achieved with scanning systems. This might be due to the full-field technology, which is superior at such high imaging rates and provides laterally phase-stable data. Axial phase errors are identical in all acquired A-scans of a volume and can therefore be effectively estimated and corrected by an approach based on an optimization of image quality [11]. Not only the temporal progression but also the spatial distribution of the axial motion is very similar to that extracted from previously published scanned data [17]. The full three-dimensional micro-motion of the nerve fiber layer was evaluated using the methods presented above. A 3D vector-based representation of the observed motion is shown in Fig. 7 and Visualization 1. Before the irradiation begins, the motion is approximately zero in the entire field of view. After the irradiation started, radially outward directed motion was observed. At the center of the coagulation, the lateral components disappear and a purely axial motion is detected. As time progressed, the displacement of the tissue increased until the laser was switched off. Afterwards, a partial reversal of the tissue motion was observed in both the lateral and axial directions.


Fig. 7 3D motion vector field of retinal tissue during photocoagulation. See Visualization 1 for the temporal evolution of the vector field.


In principle, this method is capable of detecting a full three-dimensional distribution of three-dimensional motion vectors. This provides access to arbitrarily oriented three-dimensional motion, such as blood flow in a vessel network, as well as to the volumetric expansion of tissue or other semitransparent media. It may find application in an OCT-based dosimetry of photocoagulation treatment, the volumetric observation of biological processes, or the visualization of phase transitions of proteins.

The technique is, however, vulnerable to strong axial movements, as these can affect the measurement in two different ways:

  1. In combination with slightly defocused imaging optics, local axial movement leads to apparent lateral motion [6,7]. For this reason, only the motion of the nerve fiber layer is shown here. The high-precision aberration correction needed to obtain valid results was challenging in the lower retinal layers because of their comparably weak backscattering, the lack of high-contrast structures favorable for the optimization of image quality, and wavefront distortions introduced by the nerve fiber layer.
  2. In swept-source OCT, axial movement causes a Doppler shift that displaces the image of a moving scatterer in the axial direction [18]. Although this has no influence on the phase of the signal itself, it increases the noise of the phase evaluation and, in the worst case, leads to the comparison of completely uncorrelated speckle patterns. This effect was also observed during the measurement of retinal photocoagulation. The strongest shifts occurred when the laser was switched on and off, and led to increased noise of the phase evaluation for axial as well as lateral motion. The Doppler shift artifact, however, affects not only the subaperture-based detection of lateral motion in particular, but all FF-SS-OCT measurements in general.

Hence, the presented method is most robust for samples moving predominantly in lateral directions, and special care has to be taken if strong axial motion is present, as is the case, e.g., during photocoagulation.

6. Conclusion

The presented method is an extension of Doppler OCT. Given a sufficiently high SNR, the possibility to numerically correct for aberrations, and sufficiently homogeneous or low axial motion, it enables the detection of a three-dimensional field of three-dimensional motion vectors in biological samples and other semi-transparent media, thereby expanding the possible fields of application for Doppler OCT. Contrary to the corresponding experimental approach known as three-beam Doppler OCT, it does not require the physical alignment or synchronization of three or more foci created by different apertures. The setup is technically simple, robust, and contains no movable parts. A necessary precondition is, however, that the volumetric OCT data are acquired in a phase-stable manner, which places high demands on the OCT imaging speed. Moreover, a precise alignment of the focus into the evaluated depth is required to obtain valid results. This restriction, however, also applies to corresponding methods based on experimental subapertures and even to speckle tracking [6,7]. Since the data have to be acquired phase-stable anyway, the computational-subaperture method benefits from the possibilities of numerical refocussing.

Funding

German Federal Ministry of Education and Research (BMBF), project iCube (13GW0043C and 13GW0043E); German Research Foundation (DFG), project Holo-OCT (HU 629/6-1).

Acknowledgments

The authors thank Reginald Birngruber and Alfred Vogel for support and inspiring discussions, Reinhard Schulz for precision mechanical components and advice, and Superlum Diodes Ltd. for technical support.

References

1. R. A. Leitgeb, R. M. Werkmeister, C. Blatter, and L. Schmetterer, "Doppler optical coherence tomography," Prog. Retin. Eye Res. 41, 26–43 (2014).

2. K. Kurokawa, S. Makita, Y.-J. Hong, and Y. Yasuno, "Two-dimensional micro-displacement measurement for laser coagulation using optical coherence tomography," Biomed. Opt. Express 6, 170–190 (2015).

3. K. Kurokawa, S. Makita, and Y. Yasuno, "Investigation of thermal effects of photocoagulation on retinal tissue using fine-motion-sensitive dynamic optical coherence tomography," PLoS ONE 11, 1–12 (2016).

4. S. Wang and K. V. Larin, "Optical coherence elastography for tissue characterization: a review," J. Biophotonics 8, 279–302 (2015).

5. N. D. Shemonski, S. S. Ahn, Y.-Z. Liu, F. A. South, P. S. Carney, and S. A. Boppart, "Three-dimensional motion correction using speckle and phase for in vivo computed optical interferometric tomography," Biomed. Opt. Express 5, 4131–4143 (2014).

6. R. L. Maurice and M. Bertrand, "Speckle-motion artifact under tissue shearing," IEEE Trans. Ultrason. Ferroelectr. Freq. Control 46, 584–594 (1999).

7. H. Spahr, D. Hillmann, C. Pfäffle, and G. Hüttmann, "Defocus-induced motion artifact in optical coherence tomography," Opt. Lett. (submitted 2018).

8. W. Trasischker, R. M. Werkmeister, S. Zotter, B. Baumann, T. Torzicky, M. Pircher, and C. K. Hitzenberger, "In vitro and in vivo three-dimensional velocity vector measurement by three-beam spectral-domain Doppler optical coherence tomography," J. Biomed. Opt. 18, 116010 (2013).

9. R. Haindl, W. Trasischker, B. Baumann, M. Pircher, and C. K. Hitzenberger, "Three-beam Doppler optical coherence tomography using a facet prism telescope and MEMS mirror for improved transversal resolution," J. Mod. Opt. 62, 1781–1788 (2015).

10. R. Haindl, W. Trasischker, A. Wartak, B. Baumann, M. Pircher, and C. K. Hitzenberger, "Total retinal blood flow measurement by three beam Doppler optical coherence tomography," Biomed. Opt. Express 7, 287–301 (2016).

11. D. Hillmann, H. Spahr, C. Hain, H. Sudkamp, G. Franke, C. Pfäffle, C. Winter, and G. Hüttmann, "Aberration-free volumetric high-speed imaging of in vivo retina," Sci. Rep. 6, 35209 (2016).

12. J. W. Goodman, Introduction to Fourier Optics, 3rd ed. (Roberts & Company Publishers, 2005).

13. B. H. Park, M. C. Pierce, B. Cense, S.-H. Yun, M. Mujat, G. J. Tearney, B. Bouma, and J. de Boer, "Real-time fiber-based multi-functional spectral-domain optical coherence tomography at 1.3 μm," Opt. Express 13, 3931–3944 (2005).

14. S. Yazdanfar, C. Yang, M. V. Sarunic, and J. A. Izatt, "Frequency estimation precision in Doppler optical coherence tomography using the Cramer–Rao lower bound," Opt. Express 13, 410 (2005).

15. H. Spahr, D. Hillmann, C. Hain, C. Pfäffle, H. Sudkamp, G. Franke, and G. Hüttmann, "Imaging pulse wave propagation in human retinal vessels using full-field swept-source optical coherence tomography," Opt. Lett. 40, 4771–4774 (2015).

16. D. Hillmann, T. Bonin, C. Lührs, G. Franke, M. Hagen-Eggert, P. Koch, and G. Hüttmann, "Common approach for compensation of axial motion artifacts in swept-source OCT and dispersion in Fourier-domain OCT," Opt. Express 20, 6761–6776 (2012).

17. H. H. Müller, L. Ptaszynski, K. Schlott, C. Debbeler, M. Bever, S. Koinzer, R. Birngruber, R. Brinkmann, and G. Hüttmann, "Imaging thermal expansion and retinal tissue changes during photocoagulation by high speed OCT," Biomed. Opt. Express 3, 1025–1046 (2012).

18. S. H. Yun, G. Tearney, J. de Boer, and B. Bouma, "Motion artifacts in optical coherence tomography with frequency-domain ranging," Opt. Express 12, 2977–2998 (2004).

Supplementary Material (1)

Visualization 1: 3D motion vector field of ex-vivo retinal tissue during photocoagulation


Figures (7)

Fig. 1 Principle of computational-subaperture processing for a determination of lateral motion. (a) Lateral Fourier transform of phase-stable complex-valued OCT data showing a superposition of the physical aperture and four computational subapertures. (b) Relation between detection angle and position of the subaperture in the Fourier plane: The sample is illuminated by a plane wave k_in of wavenumber k propagating along the z-direction. The detection of scattered light is computationally limited to light that passed the aperture in a certain sub-region specified by its radius r, its eccentricity ϵ, and its azimuthal angle θ.

Fig. 2 Full-field swept-source OCT setup used for the phase-stable acquisition of volume datasets. The sample is positioned in a cuvette containing saline solution. It is illuminated with a collimated beam and then imaged onto the sensor of a high-speed camera, where it is superimposed with a reference wave. An Arduino Uno microcontroller board synchronizes the swept laser source, the high-speed camera, and a green laser diode used to heat a 200 μm sized spot of the sample.

Fig. 3 OCT images of the rotated cardboard sample (en-face representation) and rotation profile. The sample was intended to be rotated by 0.2° between the first (a) and second (b) exposure. The rotation profile (c) corresponds to a rotation by 0.22°.

Fig. 4 Three-dimensional motion of the rotating cardboard sample. The lateral x- and y-components (d,e) were calculated by determining the difference of phase changes in opposite subapertures (a,b). The z-component (f) is given by the full-aperture inter-volume phase change (c). The resulting Cartesian representation of the x-, y-, and z-components of the motion vector field was transformed into the spherical components, i.e., magnitude m (g), azimuthal angle φ (h), and polar angle ϑ. Due to the in-plane motion, the latter is very close to zero in the whole field of view and therefore not shown. The in-plane motion is additionally visualized by the corresponding vector field (i).

Fig. 5 Tomographic representation of the detected axial motion of retinal tissue before and during the irradiation, overlayed with an averaged OCT B-scan of the porcine retina.

Fig. 6 En-face representation of the detected axial and lateral motion of the nerve fiber layer before and during the irradiation. The irradiated area is marked by the dashed circle.

Fig. 7 3D motion vector field of retinal tissue during photocoagulation. See Visualization 1 for the temporal evolution of the vector field.

Equations (12)

$$\mathbf{k}_{\mathrm{out}}(\epsilon,\theta)=k\begin{pmatrix}\epsilon\,\mathrm{NA}\cos\theta\\ \epsilon\,\mathrm{NA}\sin\theta\\ 1-\tfrac{1}{2}\epsilon^{2}\mathrm{NA}^{2}\end{pmatrix}$$

$$\Delta\phi=n\,(\mathbf{k}_{\mathrm{out}}-\mathbf{k}_{\mathrm{in}})\cdot\Delta\mathbf{r}=nk\left[\begin{pmatrix}\epsilon\,\mathrm{NA}\cos\theta\\ \epsilon\,\mathrm{NA}\sin\theta\\ 1-\tfrac{1}{2}\epsilon^{2}\mathrm{NA}^{2}\end{pmatrix}\cdot\begin{pmatrix}\Delta x\\ \Delta y\\ \Delta z\end{pmatrix}-\begin{pmatrix}0\\ 0\\ -1\end{pmatrix}\cdot\begin{pmatrix}\Delta x\\ \Delta y\\ \Delta z\end{pmatrix}\right]=nk\left[\begin{pmatrix}\epsilon\,\mathrm{NA}\cos\theta\\ \epsilon\,\mathrm{NA}\sin\theta\\ 1-\tfrac{1}{2}\epsilon^{2}\mathrm{NA}^{2}\end{pmatrix}\cdot\begin{pmatrix}\Delta x\\ \Delta y\\ \Delta z\end{pmatrix}+\Delta z\right]$$

$$\Delta_{\theta,\theta+\pi}\Delta\phi=2nk\begin{pmatrix}\epsilon\,\mathrm{NA}\cos\theta\\ \epsilon\,\mathrm{NA}\sin\theta\\ 0\end{pmatrix}\cdot\begin{pmatrix}\Delta x\\ \Delta y\\ \Delta z\end{pmatrix}$$

$$\Delta_{0,\pi}\Delta\phi=2nk\,\epsilon\,\mathrm{NA}\,\Delta x\quad\Rightarrow\quad\Delta x=\frac{\Delta_{0,\pi}\Delta\phi}{2nk\,\epsilon\,\mathrm{NA}}$$

$$\Delta_{\pi/2,3\pi/2}\Delta\phi=2nk\,\epsilon\,\mathrm{NA}\,\Delta y\quad\Rightarrow\quad\Delta y=\frac{\Delta_{\pi/2,3\pi/2}\Delta\phi}{2nk\,\epsilon\,\mathrm{NA}}$$

$$\Delta z=\frac{\Delta\phi}{2nk}$$

$$\sigma_{\Delta_{0,\pi}\Delta\phi}=\sigma_{\Delta_{\pi/2,3\pi/2}\Delta\phi}=\frac{2}{\mathrm{SNR}}$$

$$\sigma_{\Delta x}=\frac{2}{\mathrm{SNR}}\,\frac{1}{2kn\,\epsilon\,\mathrm{NA}}$$

$$\sigma_{\Delta x}^{\mathrm{eff}}=\frac{2}{\mathrm{SNR}}\,\frac{1}{2kn\,\epsilon\,\mathrm{NA}}\,\frac{1}{\sqrt{N}}$$

$$\Delta x_{\max/\min}=\pm\frac{\pi}{2nk\,\epsilon\,\mathrm{NA}}$$

$$\mathbf{r}(t)=\begin{pmatrix}x(t)\\ y(t)\\ z(t)\end{pmatrix}=\begin{pmatrix}\rho\cos(\omega t+\phi_{0})\\ \rho\sin(\omega t+\phi_{0})\\ z_{0}\end{pmatrix}$$

$$\mathbf{v}(t)=\frac{\mathrm{d}\mathbf{r}(t)}{\mathrm{d}t}=\begin{pmatrix}-\rho\omega\sin(\omega t+\phi_{0})\\ \rho\omega\cos(\omega t+\phi_{0})\\ 0\end{pmatrix}=\begin{pmatrix}-\omega\,y(t)\\ \omega\,x(t)\\ 0\end{pmatrix}$$
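The subaperture relations for lateral motion can be illustrated with a short numerical sketch. It builds a synthetic band-limited speckle spectrum, applies a known sub-resolution lateral shift as a Fourier-domain phase ramp, and recovers the shift from the phase-change difference of two opposite computational subapertures. Grid size, pixel pitch, and subaperture geometry are arbitrary assumptions, not the parameters of the FF-SS-OCT system:

```python
import numpy as np

# A lateral shift dx multiplies the lateral spectrum by exp(-i qx dx).
# Two opposite subapertures centered at +/- q_c then see phase changes of
# approximately -/+ q_c*dx; their difference yields dx interferometrically.
N = 256
pix = 2e-6                                     # assumed pixel pitch [m]
q = 2 * np.pi * np.fft.fftfreq(N, d=pix)       # angular spatial frequencies
qx, qy = np.meshgrid(q, q, indexing="xy")

rng = np.random.default_rng(1)
spec = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
spec = spec * ((qx**2 + qy**2) < (0.6 * q.max())**2)   # physical aperture

dx_true = 30e-9                                # 30 nm lateral shift
spec_shifted = spec * np.exp(-1j * qx * dx_true)

q_c = 0.4 * q.max()                            # subaperture center offset
r_sub = 0.15 * q.max()                         # subaperture radius

def subaperture(s, cx):
    """Filter the spectrum to one circular subaperture and back-transform."""
    mask = ((qx - cx)**2 + qy**2) < r_sub**2
    return np.fft.ifft2(s * mask)

# phase change seen through the +q_c and -q_c subapertures
dphi_p = np.angle(np.sum(subaperture(spec_shifted, +q_c) *
                         np.conj(subaperture(spec, +q_c))))
dphi_m = np.angle(np.sum(subaperture(spec_shifted, -q_c) *
                         np.conj(subaperture(spec, -q_c))))
dx_est = (dphi_m - dphi_p) / (2 * q_c)
print(abs(dx_est - dx_true) < 1e-9)            # recovers the 30 nm shift
```

The 30 nm shift is recovered although the subaperture bandwidth corresponds to a lateral resolution of many micrometers, mirroring the point that the phase evaluation exceeds the actual imaging resolution by orders of magnitude.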