
Real-time corneal segmentation and 3D needle tracking in intrasurgical OCT


Abstract

Ophthalmic procedures demand precise surgical instrument control in depth, yet standard operating microscopes supply limited depth perception. Current commercial microscope-integrated optical coherence tomography partially meets this need with manually-positioned cross-sectional images that offer qualitative estimates of depth. In this work, we present methods for automatic quantitative depth measurement using real-time, two-surface corneal segmentation and needle tracking in OCT volumes. We then demonstrate these methods for guidance of ex vivo deep anterior lamellar keratoplasty (DALK) needle insertions. Surgeons using the output of these methods improved their ability to reach a target depth, and decreased their incidence of corneal perforations, both with statistical significance. We believe these methods could increase the success rate of DALK and thereby improve patient outcomes.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Optical coherence tomography (OCT) produces micrometer-scale tomographic images of both the anterior and posterior segments of the human eye [1]. Earlier studies showed the impact of OCT imaging during surgery, and for examination of patients under anesthesia, using handheld systems separate from the operating microscope [2, 3]. Recently, research groups and commercial entities have integrated OCT systems into surgical operating microscopes to provide direct visualization of ophthalmic surgery in real-time [4–16]. Early versions of microscope-integrated OCT (MIOCT) systems were restricted to real-time two-dimensional imaging [4, 6, 7, 9, 14–16], but research prototype systems can now acquire, process, and render three-dimensional OCT data in real-time [12, 17–23]. Surgeons are able to view the OCT images in their operating microscopes via heads-up displays (HUDs) [10, 11, 14–16, 24]. MIOCT has demonstrated its utility both for visualizing ophthalmic surgical procedures and for enhancing surgeon performance in ex vivo depth-based tasks [4, 12, 25–32].

Deep anterior lamellar keratoplasty (DALK) is a type of corneal transplant where the corneal stroma and epithelium are replaced, and the host Descemet’s membrane (DM) and endothelium remain. This procedure differs from penetrating keratoplasty (PKP) wherein the entire cornea, including the DM and endothelium, is replaced. DALK has a lower chance of graft rejection (the antigenic endothelium is not transplanted), increased predicted graft survival [33], and fewer post-operative complications [34], with no significant difference in visual acuity [33,34] compared to PKP.

One of the more common techniques for performing DALK is the “big bubble” technique [35]. In this method, the surgeon inserts a hypodermic needle into the corneal stroma and advances the needle, following the curvature of the cornea, to the apex. At the apex, the surgeon injects an air bubble, which ideally separates DM from the stroma. The surgeon can then easily resect the epithelium and stroma without harming the endothelium.

The main drawback of big bubble DALK is that positioning the needle and injecting the air bubble are both extremely difficult. Inserting the needle too superficially prevents the bubble from separating the proper deeper layers. Inserting the needle too deeply increases the risk of perforating DM, in which case the surgeon usually must revert to PKP. As such, failure rates for separating stroma from DM range from 44% to 54% [33, 36, 37].

In a prior study, MIOCT was used to monitor the penetration depth of the needle in ex vivo big bubble DALK trials. That study determined that the success of big bubble formation was highly dependent on the final needle penetration depth as a percentage of corneal thickness [38]. However, in that study needle depth was determined via manual segmentation of MIOCT data after the procedure. The purpose of this work is to automate needle depth estimation and provide real-time information about needle progress to the surgeon during the procedure. We present methods for segmenting two corneal surfaces and tracking a needle’s location in real-time for use in guiding big bubble DALK.

1.1. Related work

Many methods for both corneal and retinal segmentation exist in the literature [39–54], and some are capable of processing images in real-time (i.e. keeping up with a data acquisition rate of at least several frames per second) [49, 50]. The segmentation method presented in this work is most closely related to the graph theory approach used by LaRocca et al. [43], but differs in that we acquire images over a larger field of view and use a swept-source laser based MIOCT system. Furthermore, we integrate the segmentation code into our acquisition software, impose a real-time processing constraint, and address the problem of the needle and its OCT “shadow” corrupting the segmentation.

One benefit of intrasurgical OCT is that the surgeon can observe dynamic tissue-tool interactions. However, this is only possible if the tool is visible in the OCT image/volume. For two-dimensional OCT, this requires the surgeon or an operator to manually track the tool with the OCT B-scan location [55]. In three-dimensional OCT, the surgeon or operator needs to determine which B-scan(s) within the volume contain the tool. To address this problem, several automated tool tracking solutions have been proposed in the literature. El-Haddad et al. tracked the lateral position of the tool by placing fiducial markers on the base of the tool and using a stereo camera pair to track the tool tip [56]. Viehland et al. performed tool tracking in intrasurgical OCT with software using multiple volumetric renderings, but assumed little to no contact between the tissue and tool [57]. Zhou et al. developed two methods, a morphological approach and a 2D deep learning approach, to segment a needle above the cornea of pig eyes. Gessert et al. developed a 3D convolutional neural network to determine the 6D pose of a custom-designed marker, which could be attached to a tool [58]. A hardware-based approach, specifically designed for DALK, involved placing the OCT fiber in the needle to provide the surgeon with an M-mode scan, which conveyed depth information about the needle as an insertion was performed [59].

The needle tracking approach described here is capable of estimating the needle tip in three dimensions by fitting a 3D model to automatically detected needle voxels, which provides five degree of freedom tracking. Since our intended use case, big bubble DALK, requires the needle to puncture the cornea, our tracking approach must work even when the needle is physically altering the tissue and interfering with the OCT imaging light.

2. Methods

2.1. System description

The optical setup of our MIOCT system was described in detail in a previous publication [12]. Briefly, we used a 100 kHz swept frequency laser centered at 1060 nm. The optical signal detection chain included a balanced photoreceiver (Thorlabs; Newton, NJ) and a 1.8 GS/s digitizer (Alazar; Quebec, Canada). The system had a peak sensitivity of 102 dB and a −6 dB falloff distance of 4.9 mm. We used a raster scan acquisition pattern with volume dimensions of 5.47 mm × 12.0 mm × 8.0 mm, consisting of 2205 spectral samples/A-scan, 688 A-scans/B-scan (500 usable A-scans/B-scan), and 96 B-scans/volume. All computation was performed on a 64-bit Windows 10 machine with an Intel i7-6850K processor and an NVIDIA GeForce GTX Titan X graphics card. The segmentation and tracking algorithms ran on the CPU whereas OCT processing and display code ran on the GPU.

2.2. Requirements and assumptions

We designed the segmentation and tracking algorithms to minimize perceived image latency for the system operator, given the acquisition rate constraints of our system. In our setup, OCT volumes (each comprising 96 B-scans) were processed on the GPU in parallel and the partial volume display was updated in groups of 32 sequential B-scans, while the next group was being acquired. B-scan groups of 32 (acquired every 688 × 32 ÷ 10⁵ s ≈ 220 ms) were chosen to exploit the parallel processing capabilities of the GPU, while limiting the partial volume rendering latency (i.e., the maximum time between data acquisition and display) to less than 1/3 second (220 ms acquisition time plus approximately 50 ms processing/display time). This real-time constraint required processing 32 B-scans of raw interferograms into images, segmenting the images, and tracking the needle (once per volume) before the next group of 32 B-scans was acquired. This left 170 ms to segment B-scans and track the needle. Because of this aggressive time constraint, we segmented B-scans in parallel but were able to segment only every other B-scan in the volume (16 of the 32 B-scans in each group). Additionally, we assumed that the entire cornea would be visible and laterally centered (roughly) in the volume and that the needle would be hyper-reflective.
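
For concreteness, the timing budget above follows directly from the scan parameters. The short sketch below (Python) restates the arithmetic; the 50 ms processing/display allowance is the approximate figure quoted above, and the variable names are illustrative.

```python
# Timing budget for one group of 32 B-scans (values taken from the text above).
A_SCAN_RATE_HZ = 100e3       # 100 kHz swept-source A-scan rate
ASCANS_PER_BSCAN = 688
BSCANS_PER_GROUP = 32
PROCESS_DISPLAY_MS = 50      # approximate OCT processing/display time per group

group_acq_ms = ASCANS_PER_BSCAN * BSCANS_PER_GROUP / A_SCAN_RATE_HZ * 1e3
latency_ms = group_acq_ms + PROCESS_DISPLAY_MS              # worst-case rendering latency
segmentation_budget_ms = group_acq_ms - PROCESS_DISPLAY_MS  # time left for segmentation/tracking

print(f"group acquisition:   {group_acq_ms:.0f} ms")             # ~220 ms
print(f"rendering latency:   {latency_ms:.0f} ms")                # < 1/3 s
print(f"segmentation budget: {segmentation_budget_ms:.0f} ms")    # ~170 ms
```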

2.3. Algorithmic description

A flow chart of the entire segmentation and needle tracking process is shown in Fig. 1. The following subsections describe the methods used for segmentation and tracking in the order in which they were performed.

Fig. 1 Flow chart of segmentation and tracking to find needle penetration depth. The acquisition software acquired volumes of 96 B-scans in three groups of 32 B-scans. B-scan segmentation of each group occurred during the acquisition of the next group.

2.3.1. Real-time segmentation

We performed segmentation in parallel by processing B-scans independently on separate CPU hardware threads using the OpenMP parallel computing library [60]. This section describes the method used to segment each individual B-scan.

We began the segmentation process by low-pass filtering the B-scan to reduce noise and prevent aliasing. Our filter was a 3×11 Gaussian filter with a standard deviation of 1.0 in both x and y. We then downsampled the image by a factor of two in the lateral dimension and a factor of five in the axial dimension. These factors were the minimum amount of downsampling we determined to be required to segment 16 B-scans in under 170 ms while still leaving time to perform needle tracking. We downsampled the axial dimension more than the lateral dimension because our system’s axial resolution was finer than its lateral resolution. After blurring and downsampling, we subtracted the average A-scan from each A-scan in the image to remove horizontal line artifacts [43].
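
A minimal sketch of this preprocessing is shown below, assuming each B-scan is stored as a 2-D array with depth along rows and A-scans along columns; scipy's truncated Gaussian filter and the helper name are illustrative stand-ins for the original implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess_bscan(bscan: np.ndarray) -> np.ndarray:
    """Blur, downsample, and DC-subtract one B-scan (rows = depth, cols = A-scans)."""
    # Low-pass filter to reduce noise and prevent aliasing (sigma = 1.0 in x and y).
    blurred = gaussian_filter(bscan.astype(np.float32), sigma=1.0)
    # Downsample by 5x axially (rows) and 2x laterally (columns).
    small = blurred[::5, ::2]
    # Subtract the average A-scan from every A-scan to suppress horizontal line artifacts.
    return small - small.mean(axis=1, keepdims=True)
```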

Next, we obtained four gradient images, two vertical and two horizontal, by convolving the image with the 11-tap kernel [−1, −1, −1, −1, −1, 0, 1, 1, 1, 1, 1] and its negation, oriented along the axial direction for the vertical gradients and along the lateral direction for the horizontal gradients, and setting negative values to zero. We then normalized the gradient images between zero and one. These gradient images were used to detect black to white vertical (top-down) transitions, white to black vertical transitions, black to white horizontal (left-to-right) transitions, and white to black horizontal transitions.
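
The four gradient images can be generated from a single 11-tap kernel applied along each axis and negated for the opposite transition; a sketch follows. It uses correlation so the kernel reads exactly as written above, and the dictionary keys are illustrative names.

```python
import numpy as np
from scipy.ndimage import correlate1d

# 11-tap transition kernel from the text.
KERNEL = np.array([-1, -1, -1, -1, -1, 0, 1, 1, 1, 1, 1], dtype=np.float32)

def gradient_images(img: np.ndarray) -> dict:
    """Half-wave-rectified, normalized gradient images for the four transition types."""
    img = np.asarray(img, dtype=np.float32)
    grads = {
        "black_to_white_vertical":   correlate1d(img, KERNEL, axis=0),   # dark above, bright below
        "white_to_black_vertical":   correlate1d(img, -KERNEL, axis=0),
        "black_to_white_horizontal": correlate1d(img, KERNEL, axis=1),   # dark left, bright right
        "white_to_black_horizontal": correlate1d(img, -KERNEL, axis=1),
    }
    for name, g in grads.items():
        g = np.clip(g, 0.0, None)                 # set negative responses to zero
        rng = g.max() - g.min()
        grads[name] = (g - g.min()) / rng if rng > 0 else g
    return grads
```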

We constructed a graph from the gradient images by following the procedures outlined in previous publications [42–44, 52]. Pixels in the image represented nodes, or vertices, in the graph and weighted edges connected neighboring pixels. In our graph, each pixel was connected to the pixels above, above and right, right, below and right, and below it. A starting node was connected to all nodes in the image’s leftmost column, and an ending node was connected to all the nodes in the image’s rightmost column. Weights between starting/ending nodes and other nodes were set to the minimum possible weight. Equation (1) [42] was used to compute weights between neighboring nodes, where w_ij was the edge weight connecting nodes i and j, G_i was the gradient value at node i, G_j was the gradient value at node j, and d_ij was the physical distance between nodes i and j.

w_ij = d_ij · (2 − (G_i + G_j) + 10⁻⁵)        (1)

When searching for the epithelial surface, we used the black to white horizontal gradient image when node i was below node j, the black to white vertical gradient image when nodes i and j were horizontally adjacent, and the white to black horizontal gradient image when node i was above node j. Searching for the endothelial surface required that we change the gradients used to create the graph. As such, we used the white to black horizontal gradient image when node i was below node j, the white to black vertical gradient image when nodes i and j were horizontally adjacent, and the black to white horizontal gradient image when node i was above node j.

To constrain the search space, we found a rough estimate of the epithelial surface by using the black to white vertical gradient image and fitting a second order polynomial to the maximum gradient in each column using iteratively re-weighted least squares [61, 62]. Shrinking and expanding this estimate in the direction normal to the estimate by 0.4 and 0.8 times the mean center corneal thickness of 537 µm [63] provided us with a minimum and maximum height search area constraint for the epithelial surface (magenta lines in Fig. 2(B)). We removed nodes in the graph above/below the maximum/minimum height restrictions and used the minimum height estimate to constrain the starting and ending point of our graph search. The minimum and maximum height constraints ensured we segmented only the epithelial surface, while also decreasing the search duration. Because the constraints were only estimates, we removed edges between the starting/ending node and leftmost/rightmost column outside a 100 µm window around the intersection of the minimum height estimate with the leftmost and rightmost columns. We found the epithelial surface by searching for the shortest path from the starting node to the ending node using Dijkstra’s algorithm [64].
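
To illustrate the graph construction and search, a small sketch follows. It builds the 5-connected graph with the edge weight of Eq. (1), attaches the virtual start/end nodes to the leftmost/rightmost columns, and runs Dijkstra's algorithm via scipy; the height constraints and gradient selection described above are omitted, d_ij is taken in pixel units, and the nested Python loops stand in for the much faster parallel implementation used in the acquisition software.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra

def segment_surface(grad: np.ndarray) -> np.ndarray:
    """Shortest-path surface from a normalized gradient image (one surface, no constraints)."""
    rows, cols = grad.shape
    n = rows * cols
    start, end = n, n + 1                       # virtual start/end nodes
    W = lil_matrix((n + 2, n + 2))
    idx = lambda r, c: r * cols + c

    # 5-connectivity: above, above-right, right, below-right, below.
    neighbors = [(-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0)]
    for r in range(rows):
        for c in range(cols):
            for dr, dc in neighbors:
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    d = np.hypot(dr, dc)        # distance between nodes (pixel units)
                    # Eq. (1): low weight where the gradient is strong.
                    W[idx(r, c), idx(rr, cc)] = d * (2.0 - (grad[r, c] + grad[rr, cc]) + 1e-5)

    # Start/end nodes reach the leftmost/rightmost columns at the minimum possible weight.
    for r in range(rows):
        W[start, idx(r, 0)] = 1e-5
        W[idx(r, cols - 1), end] = 1e-5

    _, pred = dijkstra(W.tocsr(), directed=True, indices=start, return_predecessors=True)
    path, node = [], pred[end]
    while node != start and node >= 0:          # walk predecessors back to the start node
        path.append(node)
        node = pred[node]
    path = np.asarray(path[::-1], dtype=int)

    surface = np.full(cols, np.nan)
    surface[path % cols] = path // cols         # depth (row index) of the surface per column
    return surface
```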

Fig. 2 Illustration of corneal segmentation. (A) Original image obtained from human cadaver corneal sample. (B) Epithelial segmentation (orange) with epithelial constraints (magenta). (C) Epithelial and endothelial segmentation (orange) with endothelial constraints (magenta).

After segmenting the epithelial surface, we fit a smoothing spline [65] to the result. We offset the epithelial surface spline by 0.8 and 2.3 times the mean center corneal thickness [63] and removed vertices above/below these lines to establish a search area constraint for the endothelial surface. The search was allowed to start/finish within 100 µm of the intersection of the minimum height constraint with the leftmost/rightmost columns or the bottom row. We found the endothelial surface by searching for the shortest path from the starting node to the ending node using Dijkstra’s algorithm (Fig. 2(C)).

2.3.2. Needle tracking

Once the entire volume was acquired and segmented, we searched for the needle. Because we assumed the needle to be hyper-reflective, we reduced our search space to only include bright voxels within the volume. Thus, we created a maximum intensity projection (MIP) of a DC subtracted volume, in which each B-scan had its average A-scan subtracted. This DC subtraction helped to suppress bright horizontal line artifacts introduced by the needle.

We applied a threshold of 210 (the pixel values ranged from 0 to 255) to the MIP and recorded the depth of points in the MIP above the threshold. Then, we performed connected component (CC) labeling [66] on the thresholded depth map. We considered pixels connected if they were within 100 µm of each other laterally and within 50 µm in depth. Choosing this distance-based connectivity definition helped separate the needle from other bright points in the image, such as the corneal apex, and accommodated small gaps in the needle image.
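
A sketch of this distance-based connected-component labeling is shown below. The KD-tree pairing, the Euclidean lateral radius, and the parameter names are assumptions chosen for brevity rather than a description of the original implementation.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def label_bright_pixels(mip, depth_map, pixel_pitch_um, threshold=210,
                        lateral_tol_um=100.0, depth_tol_um=50.0):
    """Group thresholded MIP pixels that are close both laterally and in depth."""
    ys, xs = np.nonzero(mip >= threshold)
    lateral_um = np.column_stack([ys * pixel_pitch_um[0], xs * pixel_pitch_um[1]])
    depths_um = depth_map[ys, xs]

    # Candidate pairs within the lateral tolerance, then filtered by depth difference.
    pairs = cKDTree(lateral_um).query_pairs(r=lateral_tol_um, output_type="ndarray")
    close_in_depth = np.abs(depths_um[pairs[:, 0]] - depths_um[pairs[:, 1]]) <= depth_tol_um
    i, j = pairs[close_in_depth, 0], pairs[close_in_depth, 1]

    adjacency = coo_matrix((np.ones(len(i)), (i, j)), shape=(len(ys), len(ys)))
    _, labels = connected_components(adjacency, directed=False)
    return ys, xs, labels      # en face coordinates and component label per bright pixel
```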

We then filtered the CCs using their physical dimensions. The needle is much longer than it is wide, and for this reason we considered only CCs with an appropriate second principal component. Using the relationship between the variance and the width of a uniform distribution, width = (12σ_CC²)^(1/2), where σ_CC² was the variance of the current CC along its second principal component, we checked whether the width of the CC was within an acceptable range of the known needle width (410 µm for the 27-gauge needle we used). For our system and scan dimensions we assumed that any CC between 0.65 and 1.25 times the known needle width could be the needle. It was necessary to allow for a range of widths for two reasons: (1) the MIP was acquired from a rolling shutter and needle movement could increase or decrease the apparent width of the needle; and (2) thresholding could eliminate actual needle pixels and thereby reduce the width.
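
The width criterion reduces to a principal component analysis of each component's en face coordinates; a short sketch follows, with hypothetical helper names.

```python
import numpy as np

def component_width_um(points_um: np.ndarray) -> float:
    """Width of a connected component from the variance along its second principal axis.

    points_um: (N, 2) lateral coordinates of the component's pixels in micrometers.
    Treating the spread across the needle as uniform, width = sqrt(12 * variance).
    """
    cov = np.cov(points_um - points_um.mean(axis=0), rowvar=False)
    var_minor = np.sort(np.linalg.eigvalsh(cov))[0]      # variance along the minor axis
    return float(np.sqrt(12.0 * var_minor))

def could_be_needle(points_um, needle_width_um=410.0, lo=0.65, hi=1.25):
    """Width filter used to discard components that cannot be the needle."""
    return lo * needle_width_um <= component_width_um(points_um) <= hi * needle_width_um
```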

If any of the CCs matched the needle width criterion, we extrapolated the first principal component to the MIP borders to determine an estimate for the lateral position of the needle at the edge of the image (base) and the needle tip. The base was estimated as the point in the CC closest to a border and the tip was estimated as the point in the CC furthest from the base estimate. CCs with a base more than 300 µm away from a border were discarded, as it was highly unlikely that the needle’s base was not near the edge of the volume. Figure 3 shows an example of the process used to determine the lateral position of the base and tip of a needle in a volume. After determining the lateral position of the base and tip of the needle, we obtained depth estimates for the needle base and tip to seed a 3D model fit. To accomplish this, we looked at each pixel in the MIP identified as the needle, found the closest point on the line connecting base to tip, and recorded the depth and percent distance from the base. We fit a robust first-order polynomial mapping percent distance from the base to depth. The depths at the base and tip were calculated by evaluating this polynomial at 0% and 100%.
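
A simplified sketch of the base/tip seeding follows. An ordinary least-squares line fit stands in for the robust fit described above, and the base is taken at one end of the component along its first principal axis rather than by the image-border test, so the sketch conveys the structure of the step rather than its exact behavior.

```python
import numpy as np

def seed_base_and_tip(points_um: np.ndarray, depths_um: np.ndarray):
    """Seed the needle base and tip (x, y, depth) from the en face needle pixels."""
    center = points_um.mean(axis=0)
    # First principal component of the pixel coordinates gives the needle direction.
    _, _, vt = np.linalg.svd(points_um - center, full_matrices=False)
    axis = vt[0]

    # Project pixels onto the needle axis; take the two extremes as base and tip.
    t = (points_um - center) @ axis
    base_xy = center + axis * t.min()
    tip_xy = center + axis * t.max()

    # Linear fit of depth versus percent distance from the base, evaluated at 0% and 100%.
    frac = (t - t.min()) / (t.max() - t.min())
    slope, intercept = np.polyfit(frac, depths_um, deg=1)
    base = np.array([*base_xy, intercept])
    tip = np.array([*tip_xy, slope + intercept])
    return base, tip
```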

Fig. 3 Representation of the process used to determine an estimate of the needle base and tip. (A) DC-subtracted maximum intensity projection (MIP). (B) Thresholded depth map. (C) Six largest connected components from the depth map. Only the green connected component fit the width criterion. (D) Needle base estimate (green circle) and needle tip estimate (red circle) based on the intersection of the line formed by the first principal component with the borders of the image (blue line). Pixels identified as the needle are orange. Best viewed in color.

We fit a 3D model to the tool instead of using our tip estimate because the 3D model utilized all needle information available in the volume and therefore was more robust to noise. The additional step of fitting a 3D model did not require significant processing time (Section 3.2). We used needle voxels identified in the volume to fit the 3D model. The needle pixels found from the MIP provided us with the lateral location of potential needle voxels. At each needle pixel location, we searched the A-scan for voxels brighter than 180 (voxel values ranged from 0 to 255). However, this allowed any bright voxel in the A-scan to be added to the set of needle voxels, which interfered with our model fitting. To prevent erroneous voxels from being added, we used the polynomial computed in the previous step, which mapped percent distance from the base to depth. We required the distance in depth between any potential needle voxel and the calculated depth at that needle pixel location to be less than the known needle radius.

Once we identified all the needle voxels, we used the Iterative Closest Point (ICP) algorithm [67] to fit a 3D model of the needle. Our three-dimensional model was a hollow semicylinder with outer diameter equal to that of the needle, inner diameter equal to 3/4 the needle diameter, and length equal to the needle length. ICP determined the rotation and translation from needle model to needle voxels that minimized the sum of distances between all needle voxels and their corresponding closest point on the model. The closest point on the model to a needle voxel was computed using the procedure outlined by Barbier and Galin [68]. We used the needle base and tip location estimates computed from the depth map to seed ICP with an initial transform. The output of ICP was a transform that provided the needle’s yaw, pitch, and tip position. We did not find a meaningful roll angle for the needle due to its rotational symmetry. Although yaw and pitch were not used in our ex vivo study, yaw could be used to align the scan’s fast axis with the needle to provide a higher resolution cross section to the surgeon.
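
For readers unfamiliar with ICP, a compact sketch of the fitting step is given below. It point-samples a half-cylinder and uses a KD-tree nearest-neighbor lookup in place of the analytic closest-point computation of Barbier and Galin [68], omits the hollow inner wall and the base/tip seed transform, and is therefore a schematic of the approach rather than the implementation used here.

```python
import numpy as np
from scipy.spatial import cKDTree

def sample_half_cylinder(radius_um, length_um, n_axial=80, n_circ=25):
    """Point samples on the upper half of a cylinder surface (outer wall only)."""
    z = np.linspace(0.0, length_um, n_axial)
    theta = np.linspace(0.0, np.pi, n_circ)
    zz, tt = np.meshgrid(z, theta, indexing="ij")
    return np.column_stack([radius_um * np.cos(tt).ravel(),
                            radius_um * np.sin(tt).ravel(),
                            zz.ravel()])

def icp(model_pts, data_pts, n_iter=30):
    """Point-to-point ICP returning (R, t) that maps the model onto the data voxels."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(n_iter):
        moved = model_pts @ R.T + t
        # Pair each data voxel with its nearest point on the (sampled) model.
        _, nearest = cKDTree(moved).query(data_pts)
        paired = moved[nearest]
        # Kabsch/SVD solution for the best rigid update aligning paired -> data.
        mu_m, mu_d = paired.mean(axis=0), data_pts.mean(axis=0)
        H = (paired - mu_m).T @ (data_pts - mu_d)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T
        t_step = mu_d - R_step @ mu_m
        R, t = R_step @ R, R_step @ t + t_step
    return R, t
```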

2.3.3. Needle shadow segmentation correction

Correctly segmenting the two corneal surfaces required additional techniques for those B-scans also containing the needle. Our segmentation method used image gradients to delineate surfaces, but the presence of a hyper-reflective needle created stronger gradients than did the corneal surfaces. This caused the segmentation to include the needle when tracing out the corneal surfaces (Fig. 4(A)). Furthermore, the needle cast a shadow, obscuring anything below it. To address these problems, we corrected the segmentation of B-scans where the needle was present after locating the needle. For each segmented surface, we created a two-dimensional height map of the segmentation. Then, using the output of our needle tracking in the en face image, we marked pixels to correct by inflating the needle 1.0 mm along its principal axis and by 0.75 mm in the orthogonal direction. This inflation was necessary because our graph-based segmentation failed at points around the needle (white arrow Fig. 4(A)). Next, we performed a trial inpainting [69, 70] of marked pixels in the height map (green pixels Fig. 4(D)). Inpainting uses information from surrounding pixels to compute the value of pixels marked to be inpainted. If marked pixels in the trial inpainting did not change significantly, we concluded they contained valuable information that could be used to more accurately update the value of pixels which did change significantly. Therefore, only marked pixels that changed in height by more than 150 µm and 50 µm in the epithelial and endothelial surfaces, respectively, were inpainted in the final inpainting (green pixels Fig. 4(E)). From the final inpainted height map (Fig. 4(F)), we reconstructed the segmentation for each B-scan. This reconstruction preserved the connectivity and smoothness constraints imposed by the graph search for single B-scans, and inpainting enforced smoothness and connectivity constraints across B-scans. We found inpainting performed best when the needle was aligned with the scan’s fast axis. Because our tracking method determined needle yaw, our system was capable of adjusting the scan’s rotation to align the scan’s fast axis with the needle.
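
The two-pass correction can be sketched with OpenCV's Navier-Stokes inpainting [69, 70] as below; the 8-bit rescaling required by cv2.inpaint, the function names, and the single-call structure are implementation assumptions made for brevity.

```python
import numpy as np
import cv2

def correct_height_map(height_um, needle_mask, change_tol_um):
    """Two-pass inpainting of a surface height map under the inflated needle footprint.

    change_tol_um: 150 for the epithelial surface, 50 for the endothelial surface.
    """
    lo, hi = float(height_um.min()), float(height_um.max())
    to_u8 = lambda h: np.uint8(np.round(255.0 * (h - lo) / max(hi - lo, 1e-6)))
    to_um = lambda img: lo + img.astype(np.float32) * (hi - lo) / 255.0

    # Trial pass: inpaint every marked pixel.
    trial = to_um(cv2.inpaint(to_u8(height_um), np.uint8(needle_mask) * 255, 3, cv2.INPAINT_NS))

    # Final pass: re-inpaint only the marked pixels that changed significantly.
    changed = needle_mask & (np.abs(trial - height_um) > change_tol_um)
    final = to_um(cv2.inpaint(to_u8(height_um), np.uint8(changed) * 255, 3, cv2.INPAINT_NS))
    return np.where(changed, final, height_um)
```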

Fig. 4 Example needle shadow segmentation correction. (A) B-scan with uncorrected segmentation. Shadows from the needle interfere with the endothelial surface segmentation. Inflating the needle allows for the area by the white arrow to be corrected. (B) Corrected segmentation taken from height map in (F). (C) Height map of the endothelial surface of the original segmentation. Black arrow denotes corrupted segmentation caused by the needle. (D) Height map of the endothelial surface of the original segmentation with the inflated needle pixels marked in green and the location of B-scan (A) and (B) denoted by the blue line. (E) Height map of the endothelial surface of the original segmentation with pixels that changed after the trial inpainting marked in green. (F) Corrected height map after inpainting green pixels in (E). Black arrow denotes original location of corrupted segmentation.

2.3.4. Percent depth calculation

After correcting the segmentation, we computed the needle penetration depth in the cornea. We used the method from Pasricha et al. [38] because they showed this particular depth measurement was strongly indicative of bubble formation and computing it did not take a significant amount of time (Section 3.2). We created a segmented OCT cross section along the axis of the needle by linearly interpolating the segmented epithelial and endothelial surfaces. The segmented endothelial surface and tracked tool tip were refraction corrected using 2-D refraction correction [71] assuming a corneal refractive index of 1.376 [72]. Using this cross section along the needle axis, we determined the point on the epithelial surface with a normal vector whose line passed closest to the computed needle tip. Then, we located the point on the endothelial surface that was closest to the line formed by the epithelial surface point and the needle tip. The penetration depth was the distance between the epithelial surface point and the needle tip as a percentage of the total distance between epithelial and endothelial surface points. A sample refraction corrected cross section along the needle axis illustrating the penetration depth calculation is shown in Fig. 5.
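
A sketch of this percent-depth computation in the cross-sectional plane is shown below; it assumes the refraction correction has already been applied and that both surfaces are supplied as densely sampled 2-D curves along the needle axis.

```python
import numpy as np

def percent_depth(epi_xy, endo_xy, tip_xy):
    """Needle depth as a percentage of local corneal thickness (cross-sectional coordinates)."""
    # Epithelial surface normals from the local tangent direction.
    tangents = np.gradient(epi_xy, axis=0)
    normals = np.column_stack([-tangents[:, 1], tangents[:, 0]])
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)

    # Epithelial point whose normal line passes closest to the needle tip.
    to_tip = tip_xy - epi_xy
    dist_tip_to_normal = np.abs(to_tip[:, 0] * normals[:, 1] - to_tip[:, 1] * normals[:, 0])
    epi_pt = epi_xy[np.argmin(dist_tip_to_normal)]

    # Endothelial point closest to the line through the epithelial point and the tip.
    line_dir = (tip_xy - epi_pt) / np.linalg.norm(tip_xy - epi_pt)
    to_endo = endo_xy - epi_pt
    dist_endo_to_line = np.abs(to_endo[:, 0] * line_dir[1] - to_endo[:, 1] * line_dir[0])
    endo_pt = endo_xy[np.argmin(dist_endo_to_line)]

    # Percent depth: (epithelium -> tip) over (epithelium -> endothelium).
    return 100.0 * np.linalg.norm(tip_xy - epi_pt) / np.linalg.norm(endo_pt - epi_pt)
```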

Fig. 5 Refraction corrected cross section along the axis of the needle. Green dots denote the epithelial surface point, needle tip, and endothelial surface point used to compute the depth along the magenta line.

2.4. Experiments

We quantitatively validated our image segmentation, needle tracking, and overall system efficacy for DALK needle insertions in three separate experiments. Corneal surgery fellows performed the needle insertion step of DALK in human donor corneas, and we measured the efficacy of the system by recording their perforation rate, perforation-free final depth, and the variance of their perforation-free final depth with and without segmentation/tracking. The validation experiments for our segmentation and system efficacy used nine human cadaver corneas, provided by Miracles in Sight (Winston-Salem, NC). Donor corneas were mounted on a Barron artificial anterior chamber (Katena, Denville, NJ) and a syringe was used to pressurize the corneas with saline. The corneas were imaged under an MIOCT system, which included a stereoscopic microscope (Fig. 6). We used a 27-gauge needle in all experiments. Needle tracking validation was performed without the operating microscope. This study was approved by the Duke University Health System Institutional Review Board and was in accordance with Health Insurance Portability and Accountability Act regulations and the standards of the 1964 Declaration of Helsinki.

Fig. 6 Experimental setup for validation experiments. In the experiment where corneal fellows inserted needles into the cornea, a tracked cross section was displayed on the monitor next to the microscope.

2.4.1. Real-time segmentation

To test our segmentation algorithm, we imaged seven of the nine donor corneas and segmented the center 25 B-scans of the volume (175 B-scans total). We compared the results of our automatic segmentation to a single manual grader’s segmentation. Because our automatic segmentation downsampled the images, we upsampled and linearly interpolated our automatic segmentation before comparing it to the manual segmentation. We also corrected for manual grader bias [42] by randomly selecting one manually segmented B-scan from each of the seven corneas and adding the average bias to our automatic segmentation.

To test our segmentation correction via inpainting, we imaged the same seven corneas while a corneal surgery fellow positioned a 27-gauge needle in the volume above the cornea. We then compared the result of the corrected automatic segmentation to the manual segmentation with no needle in the cornea. To account for bulk cornea motion between the volume acquisitions with and without the needle, we registered B-scans [73] before comparing the segmentation.

We computed the vertical difference between manually and automatically segmented layers at each A-scan. Our error measurement was the mean absolute vertical difference between methods. We obtained the average error using the center 80% of A-scans in each B-scan to remove the influence of large errors in the clinically less relevant periphery.
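
The error metric itself is a short computation; the sketch below applies the 80% central window per B-scan, with hypothetical argument names.

```python
import numpy as np

def mean_abs_error_center80(auto_heights_um, manual_heights_um):
    """Mean absolute vertical difference over the central 80% of A-scans in one B-scan."""
    n = len(auto_heights_um)
    lo, hi = int(round(0.1 * n)), int(round(0.9 * n))
    return float(np.mean(np.abs(np.asarray(auto_heights_um[lo:hi]) -
                                np.asarray(manual_heights_um[lo:hi]))))
```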

2.4.2. Needle tracking

To evaluate needle tracking performance, we mounted a 27-gauge needle on a calibrated 3-axis micrometer stage and roughly aligned the axes of translation to those of the OCT volume using three rotation stages. By aligning the translation and volume axes, we were able to approximate the error along the insertion direction, the direction orthogonal to insertion, and in depth. The needle was inserted along the fast axis of our scan. We translated the needle by hand in a 64-point grid pattern inside the volume and recorded the output of our tracking. The pattern consisted of four points spaced over 7.5 mm along the direction of insertion, four points spaced over 6.0 mm along the direction perpendicular to insertion, and four points spaced over 3.0 mm in depth. To compute the tracking error, we took the pattern as ground truth and computed the difference in incremental displacement between successive pattern points and our tracked positions.

To test the accuracy of our yaw/pitch rotation estimates, we mounted the needle on a calibrated rotation stage and rotated the needle in increments of 5° over 360° (yaw) and 1° over ±10° (pitch). We used the rotation stage as ground truth and error was measured as the difference in angle between our tracking and the rotation stage.

2.4.3. Entire system

We evaluated the efficacy of our segmentation and tracking by having surgeons perform the needle insertion step of DALK in ex vivo human donor corneas. We compared the performance of corneal surgical fellows using a stereoscopic operating microscope alone (i.e., the clinical standard of care) to their performance when they used the microscope and the output of our tracking/segmentation. The surgeons were provided a tracked cross section along the axis of their needle (as in Fig. 5) labeled with the needle’s calculated percent depth in the cornea on a monitor next to the microscope (Fig. 6). A total of three corneal surgical fellows performed needle insertions. Each surgeon completed a total of 24 consecutive trials on three different corneas, eight per cornea. Prior to performing trials, we showed each surgeon example OCT cross sections of various needle penetration depths (Fig. 7). In 12 insertions, the surgeon viewed the procedure through the surgical microscope only, and in the other 12 insertions the surgeon viewed the procedure both through the microscope and on a monitor near the microscope, which displayed the output of our segmentation/tracking algorithm. Surgeons roughly aligned their needle with the scan’s fast axis to ensure a high-resolution cross section (~500 usable A-scans along the fast axis versus 96 B-scans along the slow axis). We used the first four insertions (two microscope-only and two microscope with segmentation/tracking) to familiarize the surgeons with our setup; these were not included in the statistical analysis of surgeon performance. In each trial, surgeons attempted to insert a needle into the donor cornea to 80%–90% depth and indicated when they would have injected the air bubble to end the trial. The order of all trials was randomized.

Fig. 7 Series of images depicting different needle penetration depths, as shown to all surgeons prior to performing the experiment. Needle percent depths are displayed at the bottom of each image.

We recorded volume and segmentation/tracking time series for each insertion. From the volume time series, we manually determined the final needle depth at the end of each insertion and whether the surgeon punctured through the endothelial layer at any point. We compared the number of punctures, the final percent depth of non-puncture insertions, and the variance of the final percent depth of non-puncture insertions between the two types of visualizations (microscope only and microscope with segmentation/tracking). We modeled the likelihood of a puncture for the two visualization methods using a generalized linear mixed model to account for effects from surgeons and corneas. We used Levene’s test to check if there was a significant difference in the variance of the final percent depth between the two visualizations. Because there was a significant difference in the variance between the two visualization methods, we modeled the estimated final percent depth surgeons would achieve with a Gaussian location-scale linear mixed model [74] using data from the non-puncture trials. We also calculated the absolute error between our estimate of the final needle depth and the manually determined final needle depth.
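
The variance comparison can be illustrated with scipy's implementation of Levene's test; the depth values below are hypothetical placeholders rather than data from the study, and the mixed-effects models mentioned above are outside the scope of this sketch.

```python
import numpy as np
from scipy.stats import levene

# Hypothetical final percent-depth values for the two visualization conditions.
microscope_only = np.array([55.0, 71.2, 48.9, 80.3, 62.5, 44.8, 69.0])
with_tracking   = np.array([78.1, 82.4, 75.9, 84.0, 79.5, 81.2, 77.3])

# Levene's test for a difference in variance between the two visualization methods.
stat, p_value = levene(microscope_only, with_tracking)
print(f"Levene W = {stat:.2f}, p = {p_value:.3f}")
```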

3. Results

3.1. Real-time segmentation

The mean absolute error ± standard deviation between automatic and manual segmentation for the epithelial surface with no needle was 17 µm ± 13 µm, and the error for the endothelial surface was 25 µm ± 23 µm. The error between the corrected automatic segmentation with a needle and the manual segmentation for the epithelial surface was 24 µm ± 26 µm, and the error for the endothelial surface was 30 µm ± 32 µm. A qualitative comparison of manual versus automatic segmentation with and without a needle present is shown in Fig. 8. Segmentation error statistics are shown in Table 1. The mean time required to segment 16 B-scans during all trials was 79.6 ms ± 6.5 ms.

Fig. 8 Comparison of manual and automatic segmentation for a B-scan with and without a needle. (A) Original B-scan, with no needle. (B) Segmented B-scan. Green denotes the manual segmentation and purple denotes the automatic segmentation. Where the green is not visible, the two methods segmented the same point. (C) Original B-scan with a needle. (D) Uncorrected automatic segmentation. (E) Corrected automatic segmentation (purple) and manual segmentation (green). Best viewed in color.

Table 1. Mean Absolute A-scan Segmentation Error

3.2. Needle tracking

The approximate RMS tracking errors between our algorithm and the calibrated micrometer stage along the insertion direction, orthogonal to the insertion direction, and in depth were 15 µm, 16 µm, and 7 µm, respectively. The overall RMS position tracking error in 3D was 12 µm. The RMS errors for yaw and pitch were 0.300° and 0.099°, respectively. Needle tracking RMS errors are shown in Table 2. The mean time required for the software to find the needle and correct the segmentation was 16.8 ms ± 6.4 ms. The mean total time needed to segment the last group of B-scans in the volume, track the needle, and correct the segmentation was 96.8 ms ± 8.9 ms, which was below the 170 ms real-time deadline.

Table 2. Needle Tracking Position and Rotation Error

3.3. Entire system

Surgeons punctured through the endothelium in 2 of 30 trials using segmentation/tracking and in 15 of 30 trials using only the microscope. This reduction in puncture rate with segmentation/tracking was statistically significant (p < 0.001). A scatter plot of the final percent depth for trials where the surgeon did not puncture the endothelium is shown in Fig. 9(A), and a plot of the performance of our automatic needle depth calculation compared against manual segmentation is shown in Fig. 9(B). The standard deviation of the percent depth for segmentation/tracking trials was 6.68% (N = 28), and the standard deviation of the percent depth for microscope-only trials was 17.16% (N = 15). The reduced variance in insertions with segmentation/tracking was statistically significant (p = 0.009). Our model estimated that surgeons would achieve a mean percent depth of 79.3% when viewing the output of our segmentation/tracking and 62.2% when given only the microscope. This increase in estimated final needle depth with segmentation/tracking was statistically significant (p < 0.001). The mean absolute error between our automatic needle percent depth calculation and the manual calculation was 6.83% (49 µm) ± 4.45% (34 µm) over 70 trials. In two trials, our method did not identify the needle in the final volume.

Fig. 9 (A) Plot of the final needle depth expressed as a percent of corneal thickness for all trials in which the surgeon did not puncture the endothelium. A blue X indicates the mean of the group and error bars denote one standard deviation. (B) Plot illustrating performance of the automatic needle percent depth calculation compared to the manual calculation.

4. Discussion

This work demonstrated how automatic real-time quantitative metrics obtained from OCT can drastically improve surgeon performance in an ex vivo setting when compared to using only the stereo microscope, the current clinical standard. We were able to segment images despite tissue deformation, shadowing, and artifacts introduced by the needle. Our purely computational approach to needle tracking (as opposed to a hardware-based approach [56]) allowed us to track the needle without having to perform calibration between the OCT and needle coordinate frames. Although calibration is not computationally expensive, it must be performed any time either imaging system (OCT or tracking cameras) is moved. Our approach to tracking and segmentation also required no modifications to the needle, in contrast to prior work [10, 56, 58, 59]. We were able to visualize the needle and important boundaries by correcting the segmentation. However, the principal disadvantage of our approach was the tracking update rate, which was limited by our 100 kHz A-scan rate. The delay surgeons experienced between performing an action and seeing it displayed depended on the location of their needle in the volume. If their needle was in the first B-scans of the volume, the worst case scenario, the approximate delay was 688 × 96 ÷ 10⁵ s + 96.8 ms ≈ 758 ms. If their needle was in the last B-scans of the volume, the best case scenario, the approximate delay was 96.8 ms. For most insertions, the surgeon’s needle was near the middle of the volume and the approximate delay was 688 × 48 ÷ 10⁵ s + 96.8 ms ≈ 427 ms. This latency could explain why surgeons still punctured the endothelium twice even with OCT guidance. We and others have demonstrated faster OCT systems [19, 23, 75–77], which would reduce this latency.

The need for segmentation and tracking to run in real-time was a major constraint when designing these methods. Any additional delay in providing feedback to surgeons decreased the usefulness of the feedback because of the dynamic nature of procedures. To meet this demand we segmented B-scans in parallel, greatly reduced the search space when segmenting the epithelial and endothelial surfaces (Fig. 2), and downsampled the data in all three dimensions prior to segmentation. By downsampling the data, we took advantage of the fact that corneal boundaries are naturally smooth (having been fit by low order polynomials in prior work [43]), while still showing fully sampled and rendered images to the surgeon for real time guidance. It is possible to design significantly more accurate segmentation methods [78].

The artifacts, shadowing, and disruption of the segmentation introduced by the needle necessitated use of empirically chosen parameters. We note that although we relied on these parameters, they were validated on a data set completely independent from the one with which they were selected. Additionally, because the segmentation and tracking methods operate in real-time, these parameters could be easily adjusted during live imaging to achieve the desired result.

While our results from the ex vivo needle insertions were encouraging, we recognize there were some limitations in our experimental design. We did not directly compare the performance of surgeons against a stereoscopic operating microscope with manually tracked OCT. This additional experiment might have provided insight into the reason for surgeons’ improved performance. However, we chose not to perform this experiment for two reasons. First, while intrasurgical OCT systems have recently become commercially available [14–16] and are promising for this application, these systems are not yet in widespread use. As such, the majority of surgeons in practice still perform the DALK procedure without OCT guidance. We believe that a comparison of our method against the current clinical standard is most appropriate. Second, surgeon performance with manual needle tracking critically depends on the ability of the operator to accurately position the displayed B-scan at the needle tip. We would have had difficulty distinguishing whether performance differences between automatic and manual tracking were due to operator performance or manual B-scan tracking error.

Additionally, to decrease the number of donor corneas required to perform the experiment, we did not have the surgeon inject an air bubble. Although needle penetration depth is strongly correlated with success of big bubble formation [38], injecting air would have provided a more definitive result. Finally, the simulated procedure did not capture all the complexity of live human surgery where patient motion, additional tools, and diseased corneas present additional barriers to successful segmentation and tracking.

5. Conclusion

In this work, we developed methods for real-time segmentation and needle tracking of volumetric corneal OCT data for use in an intrasurgical setting. Tracking and segmentation were used to provide surgeons with live feedback of their needle penetration depth in ex vivo DALK needle insertions. With this feedback, surgeons perforated less frequently, achieved greater penetration depths, and were more consistent than when using only the surgical microscope. Automatically providing ophthalmic surgeons with the anatomically relevant location of their tool during surgery has the potential to increase the success rate of technically difficult procedures such as DALK.

Funding

National Institutes of Health (R01 EY023039); Duke Coulter Translational Partnership (2016–2018); NVIDIA Global Impact Award (2016); Unrestricted grant by Research to Prevent Blindness to the Duke Eye Center.

Acknowledgments

The authors would like to thank Miracles in Sight (Winston-Salem, NC) for the use of research donor corneal tissue and the Duke Statistical Consulting Center for assistance with our statistical analysis.

Disclosures

ANK: Leica Microsystems (P). JAI: Leica Microsystems (P, R), Carl Zeiss Meditec (P, R).

References and links

1. D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, “Optical coherence tomography,” Science 254, 1178 (1991). [CrossRef]   [PubMed]  

2. S. H. Chavala, S. Farsiu, R. Maldonado, D. K. Wallace, S. F. Freedman, and C. A. Toth, “Insights into advanced retinopathy of prematurity using handheld spectral domain optical coherence tomography imaging,” Ophthalmology 116, 2448–2456 (2009). [CrossRef]   [PubMed]  

3. P. N. Dayani, R. Maldonado, S. Farsiu, and C. A. Toth, “Intraoperative use of handheld spectral domain optical coherence tomography imaging in macular surgery,” Retina 29, 1457 (2009). [CrossRef]   [PubMed]  

4. G. Geerling, M. Müller, C. Winter, H. Hoerauf, S. Oelckers, H. Laqua, and R. Birngruber, “Intraoperative 2-dimensional optical coherence tomography as a new tool for anterior segment surgery,” Archives of Ophthalmology 123, 253–257 (2005). [CrossRef]   [PubMed]  

5. E. Lankenau, D. Klinger, C. Winter, A. Malik, H. H. Müller, S. Oelckers, H.-W. Pau, T. Just, and G. Hüttmann, “Combining optical coherence tomography (OCT) with an operating microscope,” in “Advances in Medical Engineering” (Springer, 2007), pp. 343–348. [CrossRef]  

6. Y. K. Tao, J. P. Ehlers, C. A. Toth, and J. A. Izatt, “Intraoperative spectral domain optical coherence tomography for vitreoretinal surgery,” Opt. Lett. 35, 3315–3317 (2010). [CrossRef]   [PubMed]  

7. S. Binder, C. I. Falkner-Radler, C. Hauger, H. Matz, and C. Glittenberg, “Feasibility of intrasurgical spectral-domain optical coherence tomography,” Retina 31, 1332–1336 (2011). [CrossRef]   [PubMed]  

8. J. P. Ehlers, Y. K. Tao, S. Farsiu, R. Maldonado, J. A. Izatt, and C. A. Toth, “Integration of a spectral domain optical coherence tomography system into a surgical microscope for intraoperative imaging,” Invest. Ophthalmol. Vis. Sci. 52, 3153–3159 (2011). [CrossRef]   [PubMed]  

9. J. P. Ehlers, Y. K. Tao, S. Farsiu, R. Maldonado, J. A. Izatt, and C. A. Toth, “Visualization of real-time intraoperative maneuvers with a microscope-mounted spectral domain optical coherence tomography system,” Retina 33, 232 (2013). [CrossRef]  

10. J. P. Ehlers, S. K. Srivastava, D. Feiler, A. I. Noonan, A. M. Rollins, and Y. K. Tao, “Integrative advances for OCT-guided ophthalmic surgery and intraoperative OCT: microscope integration, surgical instrumentation, and heads-up display surgeon feedback,” PloS one 9, e105224 (2014). [CrossRef]   [PubMed]  

11. Y. K. Tao, S. K. Srivastava, and J. P. Ehlers, “Microscope-integrated intraoperative OCT with electrically tunable focus and heads-up display for imaging of ophthalmic surgical maneuvers,” Biomed. Opt. Express 5, 1877–1885 (2014). [CrossRef]   [PubMed]  

12. O. Carrasco-Zevallos, B. Keller, C. Viehland, L. Shen, G. Waterman, B. Todorich, C. Shieh, P. Hahn, S. Farsiu, A. Kuo, C. A. Toth, and J. A. Izatt, “Live volumetric (4D) visualization and guidance of in vivo human ophthalmic surgery with intraoperative optical coherence tomography,” Sci. Rep. 6, 31689 (2016). [CrossRef]   [PubMed]  

13. O. M. Carrasco-Zevallos, C. Viehland, B. Keller, M. Draelos, A. N. Kuo, C. A. Toth, and J. A. Izatt, “Review of intraoperative optical coherence tomography: technology and applications,” Biomed. Opt. Express 8, 1607–1637 (2017). [CrossRef]   [PubMed]  

14. “ iOCT,” (2018). https://www.haag-streit.com/haag-streit-surgical/products/ophthalmology/ioct/.

15. “OPMI LUMERA 700 - surgical microscopes - retina - medical technology | zeiss united states,” (2018). https://www.zeiss.com/meditec/us/products/ophthalmology-optometry/retina/therapy/surgical-microscopes/opmi-lumera-700.html.

16. “EnFocus - product: Leica microsystems,” (2018). https://www.leica-microsystems.com/products/optical-coherence-tomography-oct/details/product/enfocus/.

17. K. K. Lee, A. Mariampillai, X. Joe, D. W. Cadotte, B. C. Wilson, B. A. Standish, and V. X. Yang, “Real-time speckle variance swept-source optical coherence tomography using a graphics processing unit,” Biomed. Opt. Express 3, 1557–1564 (2012). [CrossRef]   [PubMed]  

18. Y. Jian, K. Wong, and M. V. Sarunic, “Graphics processing unit accelerated optical coherence tomography processing at megahertz axial scan rate and high resolution video rate volumetric rendering,” J. Biomed. Opt. 18, 026002 (2013). [CrossRef]  

19. D. hak Choi, H. Hiro-Oka, K. Shimizu, and K. Ohbayashi, “Spectral domain optical coherence tomography of multi-MHz A-scan rates at 1310 nm range and real-time 4D-display up to 41 volumes/second,” Biomed. Opt. Express 3, 3067–3086 (2012). [CrossRef]  

20. K. Zhang and J. U. Kang, “Real-time 4D signal processing and visualization using graphics processing unit on a regular nonlinear-k Fourier-domain OCT system,” Opt. Express 18, 11772–11784 (2010). [CrossRef]   [PubMed]  

21. K. Zhang and J. U. Kang, “Real-time intraoperative 4D full-range FD-OCT based on the dual graphics processing units architecture for microsurgery guidance,” Biomed. Opt. Express 2, 764–770 (2011). [CrossRef]   [PubMed]  

22. J. U. Kang, Y. Huang, J. Cha, K. Zhang, Z. Ibrahim, W. A. Lee, G. Brandacher, and P. Gehlbach, “Real-time three-dimensional fourier-domain optical coherence tomography video image guided microsurgeries,” J. Biomed. Opt. 17, 081403 (2012). [CrossRef]   [PubMed]  

23. W. Wieser, W. Draxinger, T. Klein, S. Karpf, T. Pfeiffer, and R. Huber, “High definition live 3D-OCT in vivo: design and evaluation of a 4D OCT engine with 1 GVoxel/s,” Biomed. Opt. Express 5, 2963–2977 (2014). [CrossRef]   [PubMed]  

24. L. Shen, O. Carrasco-Zevallos, B. Keller, C. Viehland, G. Waterman, P. S. Hahn, A. N. Kuo, C. A. Toth, and J. A. Izatt, “Novel microscope-integrated stereoscopic heads-up display for intrasurgical optical coherence tomography,” Biomed. Opt. Express 7, 1711–1726 (2016). [CrossRef]  

25. P. Steven, C. Le Blanc, K. Velten, E. Lankenau, M. Krug, S. Oelckers, L. M. Heindl, U. Gehlsen, G. Hüttmann, and C. Cursiefen, “Optimizing descemet membrane endothelial keratoplasty using intraoperative optical coherence tomography,” JAMA Ophthalmology 131, 1135–1142 (2013). [CrossRef]   [PubMed]  

26. P. Steven, C. Le Blanc, E. Lankenau, M. Krug, S. Oelckers, L. M. Heindl, U. Gehlsen, G. Huettmann, and C. Cursiefen, “Optimising deep anterior lamellar keratoplasty (DALK) using intraoperative online optical coherence tomography (iOCT),” Br. J. Ophthalmol. 98, 900–904 (2014). [CrossRef]   [PubMed]  

27. C. I. Falkner-Radler, C. Glittenberg, M. Gabriel, and S. Binder, “Intrasurgical microscope-integrated spectral domain optical coherence tomography–assisted membrane peeling,” Retina 35, 2100–2106 (2015). [CrossRef]   [PubMed]  

28. M. Pfau, S. Michels, S. Binder, and M. D. Becker, “Clinical experience with the first commercially available intraoperative optical coherence tomography system,” Ophthalmic Surgery, Lasers and Imaging Retina 46, 1001–1008 (2015). [CrossRef]   [PubMed]  

29. A. Saad, E. Guilbert, A. Grise-Dulac, P. Sabatier, and D. Gatinel, “Intraoperative OCT-assisted DMEK: 14 consecutive cases,” Cornea 34, 802–807 (2015). [CrossRef]   [PubMed]  

30. X. Li, L. Wei, X. Dong, P. Huang, C. Zhang, Y. He, G. Shi, and Y. Zhang, “Microscope-integrated optical coherence tomography for image-aided positioning of glaucoma surgery,” J. Biomed. Opt. 20, 076001 (2015). [CrossRef]  

31. J. P. Ehlers, J. Goshe, W. J. Dupps, P. K. Kaiser, R. P. Singh, R. Gans, J. Eisengart, and S. K. Srivastava, “Determination of feasibility and utility of microscope-integrated optical coherence tomography during ophthalmic surgery: the DISCOVER study RESCAN results,” JAMA Ophthalmology 133, 1124–1132 (2015). [CrossRef]   [PubMed]  

32. S. Siebelmann, P. Steven, D. Hos, G. Hüttmann, E. Lankenau, B. Bachmann, and C. Cursiefen, “Advantages of microscope-integrated intraoperative online optical coherence tomography: usage in boston keratoprosthesis type I surgery,” J. Biomed. Opt. 21, 016005 (2016). [CrossRef]  

33. V. M. Borderie, O. Sandali, J. Bullet, T. Gaujoux, O. Touzeau, and L. Laroche, “Long-term results of deep anterior lamellar versus penetrating keratoplasty,” Ophthalmology 119, 249–255 (2012). [CrossRef]  

34. D. C. Han, J. S. Mehta, Y. M. Por, H. M. Htoon, and D. T. Tan, “Comparison of outcomes of lamellar keratoplasty and penetrating keratoplasty in keratoconus,” Am. J. Ophthalmol. 148, 744–751 (2009). [CrossRef]   [PubMed]  

35. M. Anwar and K. D. Teichmann, “Big-bubble technique to bare Descemet’s membrane in anterior lamellar keratoplasty,” Journal of Cataract & Refractive Surgery 28, 398–403 (2002). [CrossRef]  

36. D. Smadja, J. Colin, R. R. Krueger, G. R. Mello, A. Gallois, B. Mortemousque, and D. Touboul, “Outcomes of deep anterior lamellar keratoplasty for keratoconus: learning curve and advantages of the big bubble technique,” Cornea 31, 859–863 (2012). [CrossRef]   [PubMed]  

37. U. K. Bhatt, U. Fares, I. Rahman, D. G. Said, S. V. Maharajan, and H. S. Dua, “Outcomes of deep anterior lamellar keratoplasty following successful and failed ‘big bubble’,” Br. J. Ophthalmol. 96, 564–569 (2012). [CrossRef]  

38. N. D. Pasricha, C. Shieh, O. M. Carrasco-Zevallos, B. Keller, D. Cunefare, J. S. Mehta, S. Farsiu, J. A. Izatt, C. A. Toth, and A. N. Kuo, “Needle depth and big-bubble success in deep anterior lamellar keratoplasty: An ex vivo microscope-integrated OCT study,” Cornea 35, 1471–1477 (2016). [CrossRef]   [PubMed]  

39. Y. Li, R. Shekhar, and D. Huang, “Segmentation of 830- and 1310-nm LASIK corneal optical coherence tomography images,” Proc. SPIE 4684, 167–179 (2002). [CrossRef]  

40. J. Eichel, A. Mishra, P. Fieguth, D. Clausi, and K. Bizheva, “A novel algorithm for extraction of the layers of the cornea,” in Canadian Conference of Computer and Robot Vision, (IEEE, 2009), pp. 313–320.

41. N. Hutchings, T. L. Simpson, C. Hyun, A. A. Moayed, S. Hariri, L. Sorbara, and K. Bizheva, “Swelling of the human cornea revealed by high-speed, ultrahigh-resolution optical coherence tomography,” Invest. Ophthalmol. Vis. Sci. 51, 4579 (2010). [CrossRef]   [PubMed]  

42. S. J. Chiu, X. T. Li, P. Nicholas, C. A. Toth, J. A. Izatt, and S. Farsiu, “Automatic segmentation of seven retinal layers in SDOCT images congruent with expert manual segmentation,” Opt. Express 18, 19413–19428 (2010). [CrossRef]   [PubMed]  

Figures (9)

Fig. 1 Flow chart of segmentation and tracking to find needle penetration depth. The acquisition software acquired volumes of 96 B-scans in three groups of 32 B-scans. B-scan segmentation of each group occurred during the acquisition of the next group.
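The overlapped schedule in Fig. 1 (segmenting one group of B-scans while the next group is acquired) can be sketched as a simple producer/consumer pipeline. The Python sketch below is illustrative only: acquire_group() and segment_group() are hypothetical placeholders, not the authors' acquisition or segmentation code.

```python
# Minimal sketch of the overlapped acquisition/segmentation schedule in Fig. 1:
# while group n+1 is being acquired, group n is segmented on a worker thread.
# acquire_group() and segment_group() are hypothetical stand-ins.
from concurrent.futures import ThreadPoolExecutor

GROUPS_PER_VOLUME = 3    # 3 groups of 32 B-scans = 96 B-scans per volume
BSCANS_PER_GROUP = 32

def acquire_group(index):
    """Placeholder for acquiring one group of 32 B-scans from the OCT engine."""

def segment_group(bscans):
    """Placeholder for segmenting the corneal surfaces in one group of B-scans."""

def process_volume():
    results = []
    with ThreadPoolExecutor(max_workers=1) as segmenter:
        pending = None
        for g in range(GROUPS_PER_VOLUME):
            bscans = acquire_group(g)           # acquire group g
            if pending is not None:             # collect segmentation of group g-1,
                results.append(pending.result())  # which ran during this acquisition
            pending = segmenter.submit(segment_group, bscans)
        results.append(pending.result())        # finish the last group
    return results
```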
Fig. 2 Illustration of corneal segmentation. (A) Original image obtained from human cadaver corneal sample. (B) Epithelial segmentation (orange) with epithelial constraints (magenta). (C) Epithelial and endothelial segmentation (orange) with endothelial constraints (magenta).
Fig. 3 Overview of the process used to estimate the needle base and tip. (A) DC-subtracted maximum intensity projection (MIP). (B) Thresholded depth map. (C) Six largest connected components of the depth map; only the green component met the width criterion. (D) Needle base estimate (green circle) and needle tip estimate (red circle), taken from the intersections of the line defined by the first principal component (blue line) with the image borders. Pixels identified as the needle are shown in orange. Best viewed in color.
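The steps illustrated in panels (B)-(D) of Fig. 3 (thresholding, connected-component filtering, and a principal-component fit to the needle pixels) could be prototyped along the following lines. The threshold, the width criterion, and all function names are illustrative assumptions rather than the authors' parameters.

```python
# Minimal sketch of the needle-localization steps of Fig. 3, using OpenCV/NumPy.
import cv2
import numpy as np

def estimate_needle_axis(depth_map, thresh, min_width, max_width):
    # (B) Threshold the depth map to isolate candidate needle pixels.
    mask = (depth_map > thresh).astype(np.uint8)

    # (C) Keep connected components whose width fits the expected needle width.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    candidates = [i for i in range(1, n)
                  if min_width <= stats[i, cv2.CC_STAT_WIDTH] <= max_width]
    if not candidates:
        return None
    best = max(candidates, key=lambda i: stats[i, cv2.CC_STAT_AREA])
    ys, xs = np.nonzero(labels == best)
    pts = np.column_stack([xs, ys]).astype(np.float32)   # (x, y) needle pixels

    # (D) First principal component of the needle pixels gives the needle axis.
    mean = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - mean, full_matrices=False)
    axis = vt[0]                                          # unit vector along needle

    # Intersect the axis with the image borders to estimate base and tip.
    h, w = depth_map.shape
    hits = []
    for bound, comp in ((0, 0), (w - 1, 0), (0, 1), (h - 1, 1)):
        if abs(axis[comp]) > 1e-6:
            t = (bound - mean[comp]) / axis[comp]
            p = mean + t * axis
            if -0.5 <= p[0] <= w - 0.5 and -0.5 <= p[1] <= h - 0.5:
                hits.append((t, p))
    hits.sort(key=lambda tp: tp[0])
    tip, base = hits[0][1], hits[-1][1]   # which end is the tip would be resolved
    return base, tip                      # elsewhere (e.g., from the MIP)
```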
Fig. 4 Example correction of needle-shadow segmentation errors. (A) B-scan with uncorrected segmentation; shadows from the needle corrupt the endothelial surface segmentation. Inflating the needle mask allows the region at the white arrow to be corrected. (B) Corrected segmentation taken from the height map in (F). (C) Height map of the endothelial surface from the original segmentation; the black arrow denotes segmentation corrupted by the needle. (D) Same height map with the inflated needle pixels marked in green and the location of B-scans (A) and (B) denoted by the blue line. (E) Same height map with the pixels that changed after the trial inpainting marked in green. (F) Corrected height map after inpainting the green pixels in (E); the black arrow denotes the original location of the corrupted segmentation.
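A rough approximation of the correction shown in Fig. 4 is to dilate ("inflate") the needle mask over the endothelial height map and inpaint the masked pixels. The sketch below uses OpenCV's Navier-Stokes inpainting as an assumed stand-in for the authors' exact trial-inpainting procedure; the dilation and inpainting radii are arbitrary illustrative values.

```python
# Minimal sketch of the shadow-correction idea from Fig. 4: height-map pixels
# under the inflated needle are treated as missing and refilled by inpainting.
import cv2
import numpy as np

def correct_shadowed_heights(height_map, needle_mask, dilate_px=5):
    """height_map: float array of endothelial heights; needle_mask: binary mask."""
    # "Inflate" the needle mask so pixels adjacent to the shadow are also refilled.
    k = 2 * dilate_px + 1
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (k, k))
    inflated = cv2.dilate(needle_mask.astype(np.uint8), kernel)

    # cv2.inpaint works on 8-bit images, so scale the height map into [0, 255].
    lo, hi = float(height_map.min()), float(height_map.max())
    scaled = np.round(255.0 * (height_map - lo) / max(hi - lo, 1e-6)).astype(np.uint8)

    filled = cv2.inpaint(scaled, inflated, 5, cv2.INPAINT_NS)

    # Map back to physical units and replace only the masked pixels.
    corrected = height_map.copy()
    corrected[inflated > 0] = filled[inflated > 0] * (hi - lo) / 255.0 + lo
    return corrected
```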
Fig. 5 Refraction corrected cross section along the axis of the needle. Green dots denote the epithelial surface point, needle tip, and endothelial surface point used to compute the depth along the magenta line.
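Given the three refraction-corrected points in Fig. 5, a plausible definition of the reported needle percent depth is the tip-to-epithelium distance divided by the local corneal thickness. The sketch below assumes exactly that definition; it is not taken from the authors' code.

```python
# Minimal sketch of a percent-depth computation from the three points in Fig. 5.
import numpy as np

def percent_depth(epithelium_pt, needle_tip, endothelium_pt):
    """All points are 3D coordinates in physical units (e.g., millimeters)."""
    thickness = np.linalg.norm(np.asarray(endothelium_pt) - np.asarray(epithelium_pt))
    tip_depth = np.linalg.norm(np.asarray(needle_tip) - np.asarray(epithelium_pt))
    return 100.0 * tip_depth / thickness

# Example: a tip 0.42 mm below the epithelium in a 0.60 mm thick cornea is at 70%.
print(percent_depth((0, 0, 0.0), (0, 0, 0.42), (0, 0, 0.60)))   # -> 70.0
```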
Fig. 6 Experimental setup for validation experiments. In the experiment where corneal fellows inserted needles into the cornea, a tracked cross section was displayed on the monitor next to the microscope.
Fig. 7 Series of images depicting different needle penetration depths, as shown to all surgeons prior to performing the experiment. Needle percent depths are displayed at the bottom of each image.
Fig. 8 Comparison of manual and automatic segmentation for a B-scan with and without a needle. (A) Original B-scan, with no needle. (B) Segmented B-scan. Green denotes the manual segmentation and purple denotes the automatic segmentation. Where the green is not visible, the two methods segmented the same point. (C) Original B-scan with a needle. (D) Uncorrected automatic segmentation. (E) Corrected automatic segmentation (purple) and manual segmentation (green). Best viewed in color.
Fig. 9 (A) Plot of the final needle depth expressed as a percent of corneal thickness for all trials in which the surgeon did not puncture the endothelium. A blue X indicates the mean of the group and error bars denote one standard deviation. (B) Plot illustrating performance of the automatic needle percent depth calculation compared to the manual calculation.

Tables (2)


Table 1 Mean Absolute A-scan Segmentation Error


Table 2 Needle Tracking Position and Rotation Error

Equations (1)

$$ w_{ij} = d_{ij}\left(2 - (G_i + G_j) + 10^{-5}\right) $$
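Assuming G_i denotes a gradient-based image term normalized to [0, 1] at node i, and d_ij the Euclidean distance between neighboring nodes i and j, the edge weight above could be evaluated as in the following sketch before running a shortest-path (e.g., Dijkstra) search along a layer boundary. These interpretations are inferred from the form of the equation and are not the authors' exact definitions.

```python
# Minimal sketch of evaluating the graph edge weight w_ij for two neighboring
# pixels. G is assumed to be a vertical intensity gradient normalized to [0, 1].
import numpy as np

def edge_weight(G, i, j):
    """i, j are (row, col) indices of two neighboring graph nodes."""
    d_ij = np.hypot(i[0] - j[0], i[1] - j[1])
    return d_ij * (2.0 - (G[i] + G[j]) + 1e-5)

def normalized_gradient(bscan):
    """Normalize the vertical gradient so strong boundaries give weights near zero."""
    g = np.gradient(bscan.astype(np.float32), axis=0)
    return (g - g.min()) / max(g.max() - g.min(), 1e-6)
```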