
3D co-registration algorithm for catheter-based optical coherence tomography

Open Access

Abstract

Applications of catheter-based optical coherence tomography (OCT), originally developed for cardiovascular imaging, have expanded to other organ systems. However, currently available algorithms to co-register 3D OCT data to a second imaging modality were developed for cardiovascular applications and are therefore tailored to small tubular tissue structures. These algorithms often cannot be applied outside the cardiovascular system, e.g. when an OCT probe is introduced into the kidney, lungs, or wrist. Here, we develop a generic co-registration algorithm with potentially numerous applications. This algorithm only requires that the OCT probe is visible on the second imaging modality and that a single OCT image can be matched to the second imaging modality based on shared image features. We investigate the accuracy, and thereby the limitations, of our co-registration algorithm as an important step towards implementing the algorithm in clinical practice.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Optical coherence tomography (OCT) is an imaging technique that has the potential to detect changes in tissue morphology in vivo, with a resolution of 10-15 μm and an imaging depth of 1-2 millimeters. The integration of OCT probes into catheters [1,2] facilitates minimally invasive imaging of internal organ systems. In most OCT catheter probes, light is deflected sideways at the tip. Rotation of these probes produces a cross-sectional image, and simultaneous retraction results in an image series over a few centimeters. Catheter-based OCT probes were initially developed for the cardiovascular field. However, since OCT has proven useful for applications in other fields such as oncology [3], applications have expanded in recent years to other internal organs, often by introducing OCT catheter probes through working channels of endoscopes or biopsy needles [4-6]. For example, catheter-based OCT has been investigated as a tool to detect cancer in urology [7-14], in gastroenterology to image Barrett’s esophagus [15-17], and in pulmonology to image airways. However, algorithms to co-register OCT to other imaging modalities were developed with cardiovascular applications in mind and are, regrettably, not widely applicable outside the cardiovascular system.

Co-registration of OCT to other imaging modalities is important for a number of reasons. First, co-registered datasets from different imaging modalities can provide the physician with complementary information. Secondly, co-registering OCT to an imaging modality with a large field of view enables physicians to return to areas that were previously imaged with OCT for follow-up. Thirdly, it enables validation studies where OCT is compared to current gold standards.

Since existing co-registration algorithms were developed for the cardiovascular system, they are tailored to small tubular structures such as blood vessels. Most existing co-registration algorithms determine the location of OCT images with respect to the second imaging modality based on the centerline of the tubular tissue structure (Fig. 1) [18-22]. First, the blood vessel lumen is reconstructed from, e.g., bi-plane angiograms, and the centerline of this lumen is determined (Fig. 1(A)). Next, the contour of the blood vessel is detected in the OCT image and, from this contour, the center of the blood vessel in the OCT image is determined. Finally, the OCT contour center is placed on the lumen centerline based on the pullback speed of the OCT probe.


Fig. 1. Lumen centerline approach to determine the location of OCT images of a tubular tissue structure. (A) The tissue lumen is reconstructed from the second imaging modality and its centerline is determined. (B) The tissue contour is detected in each OCT image and its center is determined. (C) This contour center is placed along the centerline obtained from the second imaging modality.


However, in many cases outside the cardiovascular system, it is not possible to reconstruct a centerline of a tissue lumen, most often because there is no tubular structure, e.g. when an OCT probe is introduced through a needle into tissue such as the prostate [7], kidney [8], or wrist [23]. It is also not possible to use the lumen centerline approach if the center of the tissue lumen cannot be determined within the OCT image, for example when a tissue border is not visible over the entire circumference of an OCT image, as in airways with side branches.

To circumvent the issue of the absence of a recognizable lumen, one can segment the OCT probe from the second imaging modality and assume that the probe pullback trajectory will follow the centerline of the segmented probe at the start of the pullback [24]. This assumption is based on the fact that during OCT image acquisition only the probe within the surrounding sheath is pulled back, while the sheath itself remains in its position. Since the probe-centerline approach only requires the OCT probe to be visible on the second imaging modality, it is widely applicable, including in non-tubular structures. Therefore, we will use this approach in our co-registration algorithm to determine the location of OCT images.

Furthermore, in our method, we will include the accurate determination of the rotational orientation of OCT images. As a result of curvature in a pullback trajectory, the orientation of OCT images along the trajectory changes; we will account for this relative rotation of OCT images. After determining the relative rotation between OCT images, the absolute orientation is still ambiguous. We will establish the absolute orientation of the OCT images by determining a single rotation angle, applied to all OCT images, that matches one (reference) OCT image to the second imaging modality.

We investigate the accuracy and thereby the limitations of our co-registration algorithm as an important step towards implementing the algorithm in clinical practice. In this study, we perform co-registration of OCT to CT scans, since the high resolution of OCT could provide valuable additional information to CT scans for many clinical applications in which CT scans are routinely used. Here, we will explore an application in the wrist to image cartilage and in the airway to visualize airway wall layers, features that are not visible on CT images. It should be noted that our algorithm can be used to co-register OCT images to any other imaging modality, as long as the OCT catheter is visible.

Our co-registration algorithm only requires that (1) the OCT probe is visible on the second imaging modality and (2) there is a single OCT image in which features are visible that can be matched to the second imaging modality. Since these requirements are often easily met in practice, our algorithm greatly expands the possibilities of co-registered OCT datasets outside the cardiovascular system.

2. Materials and methods

2.1. Co-registration algorithm

The co-registration algorithm consists of five steps (Fig. 2). The first step is the segmentation of the OCT probe on images from a second imaging modality acquired immediately before the OCT pullback. In this study, we used a 3D CT scan, on which the OCT probe appears hyperdense (white).


Fig. 2. The co-registration algorithm. (Step 1) Probe segmentation from CT images, probe indicated by a red circle. The green square shows an enlarged portion of the CT scan with the OCT probe. (Step 2) Determining the location of OCT images along the pullback trajectory, assuming the pullback trajectory follows the probe centerline. (Step 3) Accounting for image rotation due to curvature in the pullback trajectory using the sequential triangulation algorithm. (Step 4) Determining an additional rotation angle θadd that results in the best match between a single OCT image and its corresponding slice through the CT dataset. (Step 5) Rotating all OCT images around the pullback trajectory by θadd to obtain the final co-registered dataset.


The second step is the determination of the location of OCT images along the pullback trajectory using the frame rate and pullback speed of the OCT system, assuming the pullback trajectory of the probe equals the segmented probe-centerline at the start of the pullback. This assumption is based on the fact that during OCT image acquisition only the probe within the sheath is pulled back, while the sheath itself remains in position.
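
The mapping from frame index to position on the segmented centerline is an arc-length calculation. As an illustration only (the authors implemented this step in MATLAB; the sketch below is a NumPy stand-in with hypothetical names), the centerline is assumed to be an ordered list of 3D points starting at the position of the first OCT image:

```python
import numpy as np

def frame_positions(centerline, pullback_speed=10.0, frame_rate=100.0, n_frames=540):
    """Place OCT frame centers along a segmented probe centerline.

    centerline : (N, 3) array of ordered points (mm), starting at the
                 position of the first OCT image.
    Returns an (n_frames, 3) array of frame-center coordinates.
    """
    # cumulative arc length along the centerline
    seg = np.linalg.norm(np.diff(centerline, axis=0), axis=1)
    arc = np.concatenate(([0.0], np.cumsum(seg)))

    # arc-length position of each OCT frame (frame spacing = speed / frame rate)
    spacing = pullback_speed / frame_rate          # 0.1 mm per frame for this system
    s_frames = np.arange(n_frames) * spacing

    # linear interpolation of x, y, z versus arc length
    return np.column_stack([np.interp(s_frames, arc, centerline[:, k]) for k in range(3)])
```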

The third step accounts for relative rotation between OCT images as a result of curvature in the pullback trajectory (Fig. 3), using the sequential triangulation algorithm [25]. This algorithm is a discrete approximation of the Frenet-Serret formulas, which describe the motion of a particle along a three-dimensional curve. For our co-registration algorithm, the Frenet-Serret formulas describe the movement of the center of the OCT image (the particle) along the pullback trajectory (the curve) and yield the corresponding orientation of the OCT image frame. The sequential triangulation algorithm determines the rotation of the so-called ‘Frenet frame’ along the probe trajectory: the vector T lies along the probe trajectory, and the vectors N and B correspond to the vertical and horizontal axes of the OCT image, respectively.
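
The exact formulation of the sequential triangulation algorithm is given in [25]; the sketch below only illustrates the underlying idea of propagating the image frame (T, N, B) along a discrete trajectory so that only curvature-induced rotation is introduced. It uses a parallel-transport style update (rotating the previous normal by the rotation that maps one tangent onto the next) rather than the exact scheme of [25], and all names are illustrative:

```python
import numpy as np

def propagate_frames(points):
    """Propagate the image coordinate frame (T, N, B) along a discrete
    3D pullback trajectory given as an (M, 3) array of positions (mm).
    Returns T, N, B of shape (M, 3): N and B are the vertical and
    horizontal image axes of each OCT frame."""
    # unit tangents along the trajectory
    d = np.gradient(points, axis=0)
    T = d / np.linalg.norm(d, axis=1, keepdims=True)

    # initial normal: any unit vector perpendicular to the first tangent
    ref = np.array([0.0, 0.0, 1.0])
    if abs(np.dot(ref, T[0])) > 0.9:
        ref = np.array([0.0, 1.0, 0.0])
    n0 = ref - np.dot(ref, T[0]) * T[0]
    N = [n0 / np.linalg.norm(n0)]

    # rotate the previous normal by the rotation that maps T[i-1] onto T[i]
    for i in range(1, len(points)):
        axis = np.cross(T[i - 1], T[i])
        s = np.linalg.norm(axis)
        if s < 1e-12:                      # straight segment: no extra rotation
            N.append(N[-1])
            continue
        axis /= s
        angle = np.arctan2(s, np.dot(T[i - 1], T[i]))
        n = N[-1]
        # Rodrigues' rotation formula
        n_rot = (n * np.cos(angle) + np.cross(axis, n) * np.sin(angle)
                 + axis * np.dot(axis, n) * (1 - np.cos(angle)))
        N.append(n_rot / np.linalg.norm(n_rot))

    N = np.array(N)
    B = np.cross(T, N)                     # horizontal image axis
    return T, N, B
```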


Fig. 3. (A) In catheter OCT probes, an image is formed by a 360-degree rotation of the probe within the surrounding sheath (indicated by the blue circle), and a stack of images is created by performing a simultaneous pullback of the probe. We can define a coordinate frame of the OCT image using three vectors: a green vector N that lies along the vertical axis of the OCT image, a blue vector B along the horizontal axis, and a red vector T perpendicular to the OCT image in the pullback direction. (B) In a straight pullback trajectory, all the OCT images (light gray circles) have the same orientation, as visualized by the coordinate frames that all have the same orientation. (C) In a curved pullback trajectory, the orientation of OCT images changes throughout the pullback: they are rotated around the pullback trajectory (dark gray line). We account for this relative rotation of OCT images by determining the orientation of the coordinate frames using the sequential triangulation algorithm.


After the determination of the relative rotation between OCT images, the absolute orientation of OCT images is still ambiguous. Step 4 is to establish the absolute orientation of OCT images by determining an additional single rotation angle around the probe trajectory for all OCT images. This is done by visual inspection of a single OCT image with a distinguishing feature and its corresponding slice through the CT dataset (after re-slicing the CT dataset so that the plane of the CT image matches the plane of this OCT image). The angle between the feature in the OCT and CT image, θadd, is determined. The fifth and final step is rotating all OCT images around the probe centerline over this single angle to obtain the final co-registration.

To rotate and translate the OCT images we used transformation matrices. Each total transformation matrix (M) was obtained by multiplying three transformation matrices: M = P·R·S, where S is the rotation matrix corresponding to the single rotation angle (Fig. 2, Step 4), R is the rotation matrix corresponding to the relative rotation (Fig. 2, Step 3), and P is the translation matrix to place the probe in the OCT image at its position along the pullback trajectory (Fig. 2, Step 2).
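
As an illustration of this composition, a minimal sketch using 4x4 homogeneous matrices could look as follows (NumPy stand-in for the MATLAB implementation; the convention that the image x- and y-axes map onto B and N, with the out-of-plane axis along T, follows Fig. 3, and all names are illustrative):

```python
import numpy as np

def total_transform(position, T, N, B, theta_add):
    """Compose the total transform M = P @ R @ S for one OCT frame.

    position  : (3,) frame center on the pullback trajectory (mm), Step 2.
    T, N, B   : unit axes of the propagated image frame, Step 3.
    theta_add : single additional rotation angle (rad), Step 4.
    All matrices are 4x4 homogeneous transforms acting on image
    coordinates (x = horizontal, y = vertical, z = out of plane).
    """
    c, s = np.cos(theta_add), np.sin(theta_add)

    # S: rotate the image about its out-of-plane axis, i.e. around the
    # pullback trajectory, by the single angle theta_add
    S = np.array([[c, -s, 0, 0],
                  [s,  c, 0, 0],
                  [0,  0, 1, 0],
                  [0,  0, 0, 1]], float)

    # R: map the image axes onto the propagated frame (x -> B, y -> N, z -> T)
    R = np.eye(4)
    R[:3, :3] = np.column_stack([B, N, T])

    # P: translate the image center to its position along the trajectory
    P = np.eye(4)
    P[:3, 3] = position

    return P @ R @ S
```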

2.2. Software

We implemented the co-registration algorithm using MATLAB (MathWorks, USA) and Amira (FEI Visualization Sciences Group, USA). We used Amira to segment the probe from CT images (Fig. 2, Step 1) and visualize the co-registered images (Fig. 2, Steps 4 and 5). After manual identification of a point in the CT dataset that is part of the OCT probe, Amira automatically performs the segmentation of the entire OCT probe.

MATLAB was used to determine the location of OCT images along the probe centerline (Fig. 2, Step 2), perform the sequential triangulation algorithm (Fig. 2, Step 3), and produce the transformation matrices. For visualization purposes, we used volume rendering of the OCT images in Amira, which can only be done for groups of images. We grouped five subsequent OCT images and used the transformation matrix corresponding to the third (middle) image of each group.

2.3. Instrumentation

High-resolution CT images were acquired using the Brilliance CT scanner (Philips Healthcare, the Netherlands), with a 120 kV tube voltage, a pixel size of 0.33 mm, and a slice thickness of 0.55 mm for the lung and phantoms and 0.67 mm for the wrist. OCT images were acquired using the Ilumien Optis System (St. Jude Medical/Abbott, USA) in combination with C7 Dragonfly probes. This system has an axial resolution of 15 μm and a lateral resolution of 20 μm. C7 Dragonfly probes have a marker, visible on the CT scan, located 2 mm proximal to the position where the first OCT image is recorded. The locations of subsequent images were calculated using the pullback speed (10 mm/s) and frame rate (100 fps) of the system. We recorded 540 images over a trajectory of 54 mm. The depth axis was corrected for the assumed refractive index of our samples of 1.33.
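
For these settings, the frame spacing and depth correction follow directly from the numbers above; a short sketch of this bookkeeping (values as reported in the text, variable names illustrative):

```python
pullback_speed = 10.0          # mm/s
frame_rate = 100.0             # frames/s
n_frames = 540

frame_spacing = pullback_speed / frame_rate     # 0.1 mm between OCT images
pullback_length = n_frames * frame_spacing      # 54 mm total trajectory

marker_offset = 2.0            # mm, marker sits 2 mm proximal to the first image
n_medium = 1.33                # assumed refractive index of the samples

def optical_to_geometric_depth(optical_path_mm):
    """Correct the OCT depth axis for the refractive index of the sample."""
    return optical_path_mm / n_medium
```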

2.4. Testing the accuracy of the algorithm

We designed and imaged phantoms for which we determined the accuracy of the location and orientation of OCT images with respect to CT. A wide range of geometries can be encountered outside the cardiovascular system, which can broadly be divided into straight and curved geometries. Therefore, we designed three silicone-tubing phantoms: one straight and two curved, with bending radii of 2 and 1 cm (Fig. 7). These phantoms had an inner diameter of 3 mm and were mounted on a cork surface.

We determined the accuracy of the co-registration as a whole. To provide a better understanding of the origins of possible inaccuracies, we also separately investigated errors in the estimated pullback trajectory and rotational errors caused by torsion in the probe, which results from friction between the OCT probe and the sheath during the pullback. We tested the accuracy of the estimated pullback trajectory because, even though this approach has been used previously [24], its validity has not been tested extensively. Torsion in the probe causes variations in the rotation speed, resulting in non-uniform rotation distortion (NURD) [26] within OCT images. NURD introduces errors in the rotational orientation of the co-registered dataset, even though these errors are not caused by the co-registration algorithm itself: NURD is already present in the original OCT dataset before the co-registration algorithm is applied. Therefore, NURD has to be subtracted from the rotational errors in the co-registered datasets to determine the errors of the co-registration algorithm itself.

2.4.1. Co-registration of CT and OCT images of phantoms

The phantoms were filled with water to obtain a refractive index of 1.33, similar to tissue. The accuracy of the co-registration algorithm was evaluated using two parameters: the angle between the cork layer in OCT and CT cross-sections, and the distance between the centers of the tubing cross-section in both imaging modalities (Fig. 4). In Amira, we displayed the co-registered dataset of OCT and CT images. Next, we resliced the CT dataset with Amira to visualize a CT slice in the same 2D plane as the OCT image of interest. Subsequently, we identified the cork layer and the center of the tubing cross-section in both images and calculated the angle between the cork layers and the distance between the centers of the tubing cross-sections in both imaging modalities. These parameters were calculated for every fifth OCT image in the reconstruction.
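
Both accuracy measures reduce to elementary geometry in the shared image plane. A minimal sketch under that assumption (NumPy stand-in for the MATLAB/Amira workflow, names illustrative):

```python
import numpy as np

def cork_angle(direction_oct, direction_ct):
    """Angle (degrees) between the cork-layer line in the OCT image and the
    corresponding line in the re-sliced CT image, both given as 2D direction
    vectors in the shared image plane."""
    u = np.asarray(direction_oct, float); u /= np.linalg.norm(u)
    v = np.asarray(direction_ct, float);  v /= np.linalg.norm(v)
    # take the acute angle, since a line has no preferred direction
    return float(np.degrees(np.arccos(np.clip(abs(np.dot(u, v)), -1.0, 1.0))))

def center_distance(center_oct, center_ct):
    """Distance (mm) between the tubing cross-section centers in OCT and CT."""
    return float(np.linalg.norm(np.asarray(center_oct) - np.asarray(center_ct)))
```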


Fig. 4. Determination of the accuracy of the co-registration algorithm for the phantoms. For each fifth OCT image, we visualized a slice through the CT dataset in the same imaging plane. (A) We determined the angle between the cork layer in OCT (blue line) and CT cross-sections (yellow line). (B) We also determined the distance between the centers of the tubing cross-section in both imaging modalities. The yellow circle indicates the outline of the phantom in CT, the blue line in OCT. For each outline, the center was determined and the distance between those centers was calculated.


2.4.2. Estimated pullback trajectory

We assessed deviations in distance between the assumed and true pullback trajectory in the phantoms. We used three OCT catheter probes that had been used earlier that day in airway measurements and performed five pullbacks per probe. We filmed the OCT pullbacks with two cameras, positioned above and alongside the phantom (DCC1545M CMOS cameras, Thorlabs, USA, with 25 mm focal length lenses: 23FM25L, Tamron, Japan; C22525KP, Pentax, Japan). This enabled calculating the position of the OCT probe in 3D, since the probe could be tracked in both the horizontal and the vertical plane. We placed filters to prevent saturation of the cameras by the OCT guide laser (FB550-40 and FES0650, Thorlabs, USA) and used a 1951 USAF Resolution Test Target (Thorlabs, USA) to calibrate the scale of the videos. The cameras recorded at 14.54 frames per second; thus, the probe moved approximately 0.7 mm between frames. In each video frame, we determined the coordinates of the tip of the OCT probe and the distance to the closest point on the assumed pullback trajectory (the centerline of the probe visible in the first frame of the video, before the pullback). Since the OCT pullback and video were not triggered, frame numbers in the video did not correspond to exactly the same location in the phantom between different pullbacks. Therefore, we divided each phantom into five segments and calculated the median distances over five pullbacks within each segment. Since the side camera could not capture the whole pullback, the 3D distance could not be calculated for all segments (Fig. 5).
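
A sketch of how the per-frame deviation could be computed from the tracked tip coordinates in one camera view, and how the two orthogonal views could be combined into a 3D deviation. Combining the views as the root of the sum of squares is our reading of the method, not a detail stated in the text, and the names are illustrative:

```python
import numpy as np

def deviation_from_trajectory(tip_xy, centerline_xy):
    """In-plane distance (mm) from a tracked probe-tip position to the
    closest point on the assumed pullback trajectory, for one camera view."""
    d = np.linalg.norm(np.asarray(centerline_xy, float) - np.asarray(tip_xy, float), axis=1)
    return float(d.min())

def combined_3d_deviation(horizontal_dev, vertical_dev):
    """Combine the top-camera (horizontal) and side-camera (vertical)
    deviations of the same segment into a single 3D deviation."""
    return float(np.hypot(horizontal_dev, vertical_dev))
```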


Fig. 5. Determining the accuracy of the estimated pullback trajectory. A pullback was filmed with two cameras. The red segment indicates the part of the pullback trajectory captured by the camera. This figure depicts an example for one of the three phantoms. (A) The side camera could only capture the curved segment. (B) The top camera could capture the entire pullback. (C) The 3D distance was calculated by combining the measurements from both cameras for the curved segment.


2.4.3. Nonuniform rotation distortions

Deviations between the orientations of OCT images in the phantom and the corresponding CT cross-sections result from two separate errors: (1) errors in the assumed probe pullback trajectory, which translate into errors in the relative rotation between frames, and (2) NURD. The error in the estimated pullback trajectory is an error of the co-registration algorithm, whereas NURD is the result of friction between the probe and the surrounding sheath, which causes variations in the probe rotation speed. Therefore, NURD has to be subtracted from the rotational errors determined for the entire co-registration (Section 2.4.1) to obtain the rotational errors caused by the co-registration algorithm only. To that end, we determined NURD for the OCT probes used in this study.

We placed the OCT probes in a straight line on a flat reflecting surface. For a straight pullback trajectory, no relative rotation occurs between OCT images as a result of curvature in the trajectory, and therefore any rotation is only the result of NURD. Within each OCT image, we determined the angle of the reflecting surface directly below the probe with respect to the horizontal axis of the OCT image.

We quantified NURD as the difference between the angle of the surface in a single OCT image and the average angle over the entire pullback (assuming this average orientation corresponds to zero NURD). We performed measurements with three probes that had been used earlier that day in lung samples. We performed 10 pullbacks per probe and determined the orientation every 20 images. These results provide an estimate of the minimum amount of NURD in datasets obtained with the OCT probes used in this study. When the OCT probe is in a curved shape within the tissue, the friction between the probe and the sheath is expected to increase and thus the amount of NURD will be higher.
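
Quantifying NURD in this way amounts to taking the deviation of each measured surface angle from the pullback average; a minimal sketch with illustrative names:

```python
import numpy as np

def nurd_per_image(surface_angles_deg):
    """Quantify NURD in a straight pullback over a flat reflecting surface.

    surface_angles_deg : angles of the surface below the probe with respect
    to the horizontal image axis, measured every k-th OCT image (degrees).
    Returns the per-image deviation from the pullback-average angle and the
    median absolute deviation (the summary value reported per probe).
    """
    angles = np.asarray(surface_angles_deg, float)
    deviation = angles - angles.mean()     # mean orientation taken as zero NURD
    return deviation, float(np.median(np.abs(deviation)))
```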

2.5. Feasibility

To examine the feasibility of applying the co-registration algorithm outside the cardiovascular system, we performed ex vivo measurements in the trapeziometacarpal joint of a post-mortem human wrist and in an airway of a lobectomy specimen, two cases where other available co-registration algorithms would not have been applicable.

The wrist measurement data were previously reported by Cernohorsky et al. [23]. The ethical committee of the Amsterdam University Medical Centers waived the need for evaluation in this cadaver study. The OCT probe was inserted into the trapeziometacarpal joint on the dorsal side of a fresh-frozen post-mortem human wrist, through an 18-gauge MicroLance IV cannula. Note that, contrary to our present study, in the study of Cernohorsky et al. the 3D OCT co-registration was performed manually based on landmarks.

Airway measurements were done on a lobectomy specimen, which was obtained from the pathology department of our institute within three hours after surgical resection. Excessive mucus and blood were removed by rinsing the tissue with saline, and the airways were instilled with phosphate-buffered saline. These measurements were approved by our institute's internal review board (METC 2014_361, NL51605.018.14). The OCT measurement was previously reported by d’Hooghe et al. [27], who used the OCT images to determine airway wall layer thickness.

For the wrist measurements, we determined the accuracy of the OCT image orientation by examining the positioning of the trapezium in both imaging modalities. The accuracy of the location of the OCT images cannot be determined since any minor lateral displacement of OCT images within the joint cavity would not lead to visible errors. For the airway measurements, the accuracy of the location and orientation of the OCT images with respect to the CT images could not be determined since the resolution of the acquired CT images was insufficient to allow a comparison.

3. Results

3.1. Testing the accuracy of the algorithm with phantoms

3.1.1. Co-registration of CT and OCT images of phantoms

The CT and OCT reconstructions of the straight phantom (Fig. 6(A)) and the phantom with the 1 cm bending radius (Fig. 6(C)) matched well over their full length. For the phantom with the 2 cm bending radius (Fig. 6(B)), the majority of the OCT reconstruction was displaced vertically with respect to the CT reconstruction (indicated by a blue arrow). The median difference between the location of the center of the phantom in OCT and CT cross-sections (Fig. 6, middle panels) was 0.21 mm (±0.11 mm std; max. 0.55 mm) for the straight phantom; 0.45 mm (±0.28 mm std; max. 0.95 mm) for the phantom with 2 cm bend radius; and 0.19 mm (±0.12 mm std; max. 0.51 mm) for the phantom with 1 cm bend radius.


Fig. 6. Co-registration of CT reconstruction (light grey) and OCT images (red-brown) in the (A) straight, (B) 2 cm radius bend, and (C) 1 cm radius bend phantoms. In the OCT images, the cork on which the silicone tubing was mounted was visible (indicated by yellow arrows). For the phantom with the 2 cm bending radius, it can be seen that the majority of the OCT reconstruction is displaced vertically with respect to the CT reconstruction (indicated by a blue arrow). The white arrow indicates the start and direction of the OCT pullback. The middle panels show the distance (mm) between the center of the phantom tubing cross-section in OCT and CT cross-sections. The lower panels show the angle (°) between the cork layer in OCT and CT cross-sections.


Each phantom had a segment where the rotational error fluctuated around 0 degrees (Fig. 6, lower panels). One OCT image in these segments had been used in the co-registration algorithm to determine the single rotational angle that resulted in the best visual match between OCT and CT images. For phantoms (A) and (B), the rotational difference increased or decreased steadily over the pullback. For phantom (C), however, the first group of OCT images had a large rotational error of 37 degrees, which steeply decreased during the first half of the pullback before becoming more stable for about 200 OCT images and then increasing again. The median absolute rotational errors were 9.9, 4.0, and 6.5 degrees for phantoms (A), (B), and (C), respectively.

3.1.2. Estimated pullback trajectory

The average difference between the true and assumed pullback trajectory (Fig. 7) was smallest in the straight phantom (0.10-0.20 mm), followed by the phantom with a 1 cm bend radius (0.18-0.66 mm), and the phantom with a 2 cm bend radius (0.41-1.12 mm). In the curved phantoms, the difference between the true and assumed pullback trajectory increased considerably in the curved segments.


Fig. 7. Mean distance in mm (+/- std) between true and assumed pullback trajectory for 3 different phantoms: straight and with bending radii of 2 and 1 cm, respectively. Arrows indicate the start and direction of the pullback. The side camera and top camera measured vertical and horizontal displacement, which combined allowed for the calculation of the mean 3D distance. Since the side camera could not capture the whole pullback for the curved phantoms, the 3D distance could not be calculated for all segments.


3.1.3. Nonuniform rotation distortions

In a typical OCT pullback, the reflecting surface below the probe was found at irregular rotational angles between images (Fig. 8(A)), indicating variations in the rotation speed of the probe tip and demonstrating the presence of NURD. The median absolute rotation within a pullback (Fig. 8(B)) was 2.3° for all probes combined (probe 1: 3.0°; probe 2: 2.3°; probe 3: 2.0°).


Fig. 8. (A) Example pullback with NURD. (B) Rotation between images within a straight OCT pullback for three probes.


3.1.4. Summary of accuracy testing

All the results of the accuracy testing for the three phantoms are summarized in Table 1. The first column gives the mean distance between the center of the phantom tubing cross-section in OCT and CT images, which is related to the mean errors in the assumed pullback trajectory in the second column. The third column gives the mean rotational difference between the cork layer in OCT and CT cross-sections, part of which is the result of NURD (fourth column).


Table 1. Summary of errors

3.2. Feasibility

By co-registering the CT and OCT images from the measurements in the wrist using the proposed algorithm, we obtained a 3D reconstruction in which the location of the OCT images was visualized (Fig. 9(A)). In Amira, we could rotate this 3D reconstruction and zoom in on cross-sections (Fig. 9(B)). In the OCT image (Fig. 9(B)), the cartilage between the trapezium and the first metacarpal bone was visible. The cartilage was not visible on the CT cross-section (Fig. 9(C)) due to its relatively low resolution. The median rotation between the trapezium as visible in the OCT and CT images was 3.4 degrees.


Fig. 9. (A) Overview of co-registered 3D CT and OCT images in the joint cavity of the trapeziometacarpal joint; (B) Cross-section of the co-registered dataset; (C) The corresponding CT image. Trap: trapezium, MC1: first metacarpal bone.


By co-registering CT and OCT images from measurements in the airway using the proposed algorithm, the location of the OCT images could be visualized within the lower right lobe (Fig. 10(A)). In Amira, we could rotate this 3D reconstruction and zoom in on cross-sections (Fig. 10(B)). In the OCT images, airway wall layers were visible: mucosa (light blue arrows), submucosa (dark blue arrows), and cartilage (green arrows). These layers were not visible in the corresponding CT slice (Fig. 10(C)). Since these measurements were performed ex vivo, the airways were not fully inflated, which resulted in a lower quality CT scan. However, since the pixel size of the CT scan was only 0.33 mm, and the mucosa and submucosa had a thickness of approximately 100 μm in the OCT images, these airway wall layers would also not have been visible on a CT scan acquired in vivo.


Fig. 10. (A) Overview of co-registered CT and OCT images of an ex vivo airway. (B) Cross-section of the co-registered dataset, where airway wall layers are visible on the OCT image, including mucosa (light blue arrows), submucosa (dark blue arrows), and cartilage (green arrows). (C) The image from CT, which does not show these layers.


4. Discussion

Currently available co-registration algorithms were developed for the cardiovascular system and are generally not suitable for applications in other organ systems since they require the reconstruction of a tissue lumen centerline. Here, we developed a widely applicable algorithm. An additional strength of our study is that we extensively tested the accuracy of our algorithm. Furthermore, we demonstrated the feasibility of the co-registration algorithm for two cases where other available co-registration algorithms would not have been applicable, in a wrist and an airway sample. In both cases, a tissue lumen centerline could not have been extracted. Our algorithm only requires that the probe is visible on the second imaging modality and that at least one image in the OCT dataset is available with a tissue feature that can be matched to the second imaging modality. Therefore, our algorithm can now be applied in situations that were previously out of reach, such as when an OCT probe is introduced into tissue through a needle (e.g. prostate, kidney, or wrist) or when OCT is introduced into a tissue lumen with side branches (e.g. airways). We demonstrated our algorithm for the co-registration with CT, but the reconstruction algorithm can also be used to co-register OCT to other imaging modalities as long as the OCT probe is visible. Furthermore, even though our applications were performed on ex vivo tissue samples, our algorithm can also be applied to in vivo measurements.

We investigated the accuracy of our co-registration algorithm using phantoms. Even though our co-registration algorithm was specifically designed for applications without a lumen, phantoms with a lumen were appropriate for determining its accuracy. The fact that our phantoms had a closed lumen did not influence the accuracy of the co-registration algorithm: in our phantoms, the catheter and the sheath touched only the outer bend of the phantom, so the inner bend did not influence the pullback trajectory and, therefore, did not influence the co-registration accuracy. At the same time, using a phantom with a lumen enabled a precise comparison between the co-registered OCT and CT images, which is why phantoms with a lumen were used.

We measured median errors in the orientation of OCT images of 4 to 10° in the phantoms and 3° in the wrist. We assume NURD accounts for at least 2° in the phantom measurements in this particular study, since more NURD is expected in curved segments. The rotational errors attributable to our co-registration algorithm are therefore on average less than 8°, which is a small error considering the sub-millimeter scale on which the algorithm operates. The rotational errors found here will therefore most likely still allow results on which clinical decisions can be based.

Since NURD is the result of friction between the probe and the sheath, NURD is expected to be higher in the curved phantoms, especially at the start of the pullback. This could explain why the rotational errors are highest at the start of the pullback for the phantom with the 1 cm bend radius and decrease during the pullback. NURD can be reduced by using probes where the micromotor is located at the probe tip [28,29] or by pre-processing the data using a NURD correction algorithm [30].

In the phantoms, we measured median errors in the location of OCT images ranging from 0.19 to 0.45 mm. To further understand the origins of errors we separately investigated the assumption that the pullback trajectory is equal to the probe centerline at the start of the pullback. From our results, it can be concluded that the accuracy of the assumed pullback trajectory depends on the shape of the tissue surrounding the OCT sheath. This relationship can be explained by considering both the preferred pullback trajectory of the probe and the freedom of the sheath to move inside the tissue. During a pullback, the probe inside the sheath prefers to follow the trajectory corresponding to the minimum bending energy but is constrained by the sheath that is not pulled back. The difference between the true and assumed pullback trajectory was smallest in the straight phantom, followed by the curved phantom with a bending radius of 1 cm, and the curved phantom with a bending radius of 2 cm. When the probe is in a straight line (whether in a larger lumen or surrounded by tissue), even if the sheath has the freedom to move, the difference between the preferred and assumed pullback trajectory is small. Hence, it can be assumed that the difference between the true and assumed pullback trajectory will be even smaller for a straight OCT probe position in which the sheath is wedged between tissue structures and has, therefore, less freedom of movement. Such a situation is similar to the wrist measurement. For tissues where the OCT probe is in a curved position, a smaller bending radius increases the difference between the preferred and assumed pullback trajectory. However, a smaller bending radius also exerts more force on the sheath and thereby constrains its movement, which may account for our finding that the difference between the true and assumed pullback trajectory was smaller in the 1 cm bending radius phantom compared to the 2 cm bending radius phantom. In the airway measurements, the probe was in a slightly curved position, and thus the phantom with the larger bending radius is most representative of the airway measurements.

Thus, both in general and when using the proposed co-registration algorithm, the shape of the tissue surrounding the probe has to be taken into account when estimating the accuracy of the co-registration, keeping in mind that a relatively straight probe trajectory in which the probe has relatively little freedom of movement will generally yield higher co-registration accuracy.

Another possible approach to determine the pullback trajectory of the OCT probe would be to track the probe using the CT scanner. However, even for the fastest clinical CT scanners, the probe tip will have moved 0.6-1.3 mm between CT images. This shift will likely result in errors similar to those of our approach at best, and in larger errors for CT scanners with longer acquisition times or for scanning protocols with larger voxel sizes.

Our algorithm currently requires user intervention at two steps: the identification of a single point within the CT scan that corresponds to the OCT probe (Step 1) and the visual inspection of one OCT image to determine an additional rotation angle (Step 4). All other steps of the algorithm are fully automated. For future implementations of our algorithm in clinical applications, Step 1 can be fully automated, and in Step 4 the user intervention can be minimized to only selecting an OCT image. Cross-correlation can then be used to match that OCT image to the corresponding CT image.
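
As a hedged illustration of how Step 4 could be automated, the sketch below rotates the selected OCT image over candidate angles and keeps the angle with the highest normalized cross-correlation against the re-sliced CT image. It assumes both images have been resampled to the same pixel grid, uses SciPy rather than the MATLAB/Amira tools of the study, and is not part of the published algorithm:

```python
import numpy as np
from scipy.ndimage import rotate

def best_rotation_angle(oct_img, ct_slice, step_deg=1.0):
    """Estimate theta_add by exhaustive search over in-plane rotations,
    scoring each candidate with normalized cross-correlation (NCC)."""
    def ncc(a, b):
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return float((a * b).mean())

    angles = np.arange(0.0, 360.0, step_deg)
    scores = [ncc(rotate(oct_img, ang, reshape=False), ct_slice) for ang in angles]
    return float(angles[int(np.argmax(scores))])
```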

5. Conclusion

In this paper, we developed and validated an OCT co-registration algorithm that only requires that the OCT probe is visible on the second imaging modality and that there is one OCT image with distinguishing features that can be matched to the second imaging modality. The first step of the algorithm is the segmentation of the OCT probe. Next, based on the assumption that the probe pullback trajectory will follow the centerline of the segmented probe at the start of the pullback, the location of OCT images is determined. Finally, the rotational orientation of OCT images is based on mutual image features in only a single OCT image. The developed algorithm does not rely on the presence of tubular structures and is, therefore, widely applicable for numerous clinical applications.

Funding

Nederlandse Organisatie voor Wetenschappelijk Onderzoek (12707).

Disclosures

The authors declare no conflicts of interest.

References

1. G. J. Tearney, S. A. Boppart, B. E. Bouma, M. E. Brezinski, N. J. Weissman, J. F. Southern, and J. G. Fujimoto, “Scanning single-mode fiber optic catheter-endoscope for optical coherence tomography,” Opt. Lett. 21(7), 543–545 (1996). [CrossRef]  

2. M. J. Gora, M. J. Suter, G. J. Tearney, and X. Li, “Endoscopic optical coherence tomography: technologies and clinical applications [Invited],” Biomed. Opt. Express 8(5), 2405–2444 (2017). [CrossRef]  

3. L. van Manen, J. Dijkstra, C. Boccara, E. Benoit, A. L. Vahrmeijer, M. J. Gora, and J. S. D. Mieog, “The clinical usefulness of optical coherence tomography during cancer interventions,” J. Cancer Res. Clin. Oncol. 144(10), 1967–1990 (2018). [CrossRef]  

4. L. P. Hariri, M. Mino-Kenudson, M. B. Applegate, E. J. Mark, G. J. Tearney, M. Lanuti, C. L. Channick, A. Chee, and M. J. Suter, “Toward the guidance of transbronchial biopsy: Identifying pulmonary nodules with optical coherence tomography,” Chest 144(4), 1261–1268 (2013). [CrossRef]  

5. B. C. Quirk, R. A. McLaughlin, A. Curatolo, R. W. Kirk, P. B. Noble, and D. D. Sampson, “In situ imaging of lung alveoli with an optical coherence tomography needle probe,” J. Biomed. Opt. 16(3), 036009 (2011). [CrossRef]  

6. W.-C. Kuo, J. Kim, N. D. Shemonski, E. J. Chaney, D. R. Spillman, and S. A. Boppart, “Real-time three-dimensional optical coherence tomography image-guided core-needle biopsy system,” Biomed. Opt. Express 3(6), 1149–1161 (2012). [CrossRef]  

7. B. G. Muller, D. M. de Bruin, M. J. Brandt, W. van den Bos, S. van Huystee, D. J. Faber, D. Savci, P. J. Zondervan, T. M. de Reijke, M. P. Laguna-Pes, T. G. van Leeuwen, and J. J. M. C. H. de la Rosette, “Prostate cancer diagnosis by optical coherence tomography: First results from a needle based optical platform for tissue sampling,” J. Biophotonics 9(5), 490–498 (2016). [CrossRef]  

8. P. G. K. Wagstaff, A. Ingels, D. M. De Bruin, M. Buijs, P. J. Zondervan, C. D. Savci Heijink, O. M. Van Delden, D. J. Faber, T. G. Van Leeuwen, R. J. A. Van Moorselaar, J. J. M. C. H. De La Rosette, and M. P. Laguna Pes, “Percutaneous needle based optical coherence tomography for the differentiation of renal masses: A pilot cohort,” J. Urol. 195(5), 1578–1585 (2016). [CrossRef]  

9. M. T. J. Bus, B. G. Muller, D. M. de Bruin, D. J. Faber, G. M. Kamphuis, T. G. van Leeuwen, T. M. de Reijke, and J. J. M. C. H. de la Rosette, “Volumetric in Vivo Visualization of Upper Urinary Tract Tumors Using Optical Coherence Tomography: A Pilot Study,” J. Urol. 190(1), 1–2 (2013). [CrossRef]  

10. U. L. Mueller-Lisse, O. A. Meissner, G. Babaryka, M. Bauer, R. Eibel, C. G. Stief, M. F. Reiser, and U. G. Mueller-Lisse, “Catheter-based intraluminal optical coherence tomography (OCT) of the ureter: Ex-vivo correlation with histology in porcine specimens,” Eur. Radiol. 16(10), 2259–2264 (2006). [CrossRef]  

11. A. Swaan, C. K. Mannaerts, B. G. Muller, R. A. Kollenburg, M. Lucas, C. D. Savci-Heijink, T. G. Leeuwen, T. M. Reijke, and D. M. Bruin, “The First In Vivo Needle-Based Optical Coherence Tomography in Human Prostate: A Safety and Feasibility Study,” Lasers Surg. Med. 51(5), 390–398 (2019). [CrossRef]  

12. B. G. Muller, D. M. de Bruin, W. van den Bos, M. J. Brandt, J. F. Velu, M. T. J. Bus, D. J. Faber, D. Savci, P. J. Zondervan, T. M. de Reijke, P. L. Pes, J. de la Rosette, and T. G. van Leeuwen, “Prostate cancer diagnosis: the feasibility of needle-based optical coherence tomography,” J. Med. Imag. 2(3), 037501 (2015). [CrossRef]  

13. M. T. J. Bus, P. Cernohorsky, D. M. de Bruin, S. L. Meijer, G. J. Streekstra, D. J. Faber, G. M. Kamphuis, P. J. Zondervan, M. van Herk, M. P. Laguna Pes, M. J. Grundeken, M. J. Brandt, T. M. de Reijke, J. J. M. C. H. de la Rosette, and T. G. van Leeuwen, “Ex-vivo study in nephroureterectomy specimens defining the role of 3-D upper urinary tract visualization using optical coherence tomography and endoluminal ultrasound,” J. Med. Imag. 5(01), 1 (2018). [CrossRef]  

14. J. E. Freund, D. J. Faber, M. T. Bus, T. G. van Leeuwen, and D. M. de Bruin, “Grading upper tract urothelial carcinoma with the attenuation coefficient of in-vivo optical coherence tomography,” Lasers Surg. Med. 8(1), 1–6 (2019). [CrossRef]  

15. B. Bouma and G. Tearney, “High-resolution imaging of the human esophagus and stomach in vivo using optical coherence tomography,” Gastrointest. Endosc. 51(4), 467–474 (2000). [CrossRef]  

16. J. A. Evans, J. M. Poneros, B. E. Bouma, J. Bressner, E. F. Halpern, M. Shishkov, G. Y. Lauwers, M. Mino-Kenudson, N. S. Nishioka, and G. J. Tearney, “Optical coherence tomography to identify intramucosal carcinoma and high-grade dysplasia in Barrett’s esophagus,” Clin. Gastroenterol. Hepatol. 4(1), 38–43 (2006). [CrossRef]  

17. G. Zuccaro, N. Gladkova, J. Vargo, F. Feldchtein, E. Zagaynova, D. Conwell, G. Falk, J. Goldblum, J. Dumot, J. Ponsky, G. Gelikonov, B. Davros, E. Donchenko, and J. Richter, “Optical coherence tomography of the esophagus and proximal stomach in health and disease,” Am. J. Gastroenterol. 96(9), 2633–2639 (2001). [CrossRef]  

18. D. Prabhu, E. Mehanna, M. Gargesha, D. Wen, E. Brandt, N. S. van Ditzhuijzen, D. Chamie, H. Yamamoto, Y. Fujino, A. Farmazilian, J. Patel, M. Costa, H. G. Bezerra, and D. L. Wilson, “3D registration of intravascular optical coherence tomography and cryo-image volumes for microscopic-resolution validation,” J. Med. Imag. 3(02), 1 (2016). [CrossRef]  

19. L. M. Ellwein, H. Otake, T. J. Gundert, B. K. Koo, T. Shinke, Y. Honda, J. Shite, and J. F. LaDisa, “Optical Coherence Tomography for Patient-specific 3D Artery Reconstruction and Evaluation of Wall Shear Stress in a Left Circumflex Coronary Artery,” Cardiovasc. Eng. Technol. 2(3), 212–227 (2011). [CrossRef]  

20. C. V. Bourantas, M. I. Papafaklis, L. Athanasiou, F. G. Kalatzis, K. K. Naka, P. K. Siogkas, S. Takahashi, S. Saito, D. I. Fotiadis, C. L. Feldman, P. H. Stone, and L. K. Michalis, “A new methodology for accurate 3-dimensional coronary artery reconstruction using routine intravascular ultrasound and angiographic data: Implications for widespread assessment of endothelial shear stress in humans,” EuroIntervention 9(5), 582–593 (2013). [CrossRef]  

21. S. Migliori, C. Chiastra, M. Bologna, E. Montin, G. Dubini, C. Aurigemma, R. Fedele, F. Burzotta, L. Mainardi, and F. Migliavacca, “A framework for computational fluid dynamic analyses of patient-specific stented coronary arteries from optical coherence tomography images,” Med. Eng. Phys. 47, 105–116 (2017). [CrossRef]  

22. M. I. Papafaklis, C. V. Bourantas, T. Yonetsu, R. Vergallo, A. Kotsia, S. Nakatani, L. S. Lakkas, L. S. Athanasiou, K. K. Naka, D. I. Fotiadis, C. L. Feldman, P. H. Stone, P. W. Serruys, I.-K. Jang, and L. K. Michalis, “Anatomically correct three-dimensional coronary artery reconstruction using frequency domain optical coherence tomographic and angiographic data: head-to-head comparison with intravascular ultrasound for endothelial shear stress assessment in humans,” EuroIntervention 11(4), 407–415 (2015). [CrossRef]  

23. P. Cernohorsky, S. D. Strackee, G. J. Streekstra, J. P. van den Wijngaard, J. A. E. Spaan, M. Siebes, T. G. van Leeuwen, and D. M. de Bruin, “Computed Tomography–Mediated Registration of Trapeziometacarpal Articular Cartilage Using Intraarticular Optical Coherence Tomography and Cryomicrotome Imaging: A Cadaver Study,” Cartilage (2019).

24. S. Carlier, R. Didday, T. Slots, P. Kayaert, J. Sonck, M. El-Mourad, N. Preumont, D. Schoors, and G. Van Camp, “A new method for real-time co-registration of 3D coronary angiography and intravascular ultrasound or optical coherence tomography,” Cardiovasc. Revasc. Med. 15(4), 226–232 (2014). [CrossRef]  

25. A. Wahle, P. M. Prause, S. C. DeJong, and M. Sonka, “Geometrically correct 3-D reconstruction of intravascular ultrasound images by fusion with biplane angiography–methods and validation,” IEEE Trans. Med. Imaging 18(8), 686–699 (1999). [CrossRef]  

26. Y. Kawase, Y. Suzuki, F. Ikeno, R. Yoneyama, K. Hoshino, H. Q. Ly, G. T. Lau, M. Hayase, A. C. Yeung, R. J. Hajjar, and I. K. Jang, “Comparison of nonuniform rotational distortion between mechanical IVUS and OCT using a phantom model,” Ultrasound Med. Biol. 33(1), 67–73 (2007). [CrossRef]  

27. J. N. S. D’Hooghe, A. W. M. Goorsenberg, D. M. De Bruin, J. J. T. H. Roelofs, J. T. Annema, and P. I. Bonta, “Optical coherence tomography for identification and quantification of human airway wall layers,” PLoS One 12, e0184145 (2017). [CrossRef]  

28. P. H. Tran, D. S. Mukai, M. Brenner, and Z. Chen, “In vivo endoscopic optical coherence tomography by use of a rotational microelectromechanical system probe,” Opt. Lett. 29(11), 1236 (2004). [CrossRef]  

29. P. R. Herz, Y. Chen, A. D. Aguirre, K. Schneider, P. Hsiung, J. G. Fujimoto, K. Madden, J. Schmitt, J. Goodnow, and C. Petersen, “Micromotor endoscope catheter for in vivo, ultrahigh-resolution optical coherence tomography,” Opt. Lett. 29(19), 2261–2263 (2004). [CrossRef]  

30. G. van Soest, J. G. Bosch, and A. F. W. van der Steen, “Azimuthal registration of image sequences affected by nonuniform rotation distortion,” IEEE Trans. Inf. Technol. Biomed. 12(3), 348–355 (2008). [CrossRef]  
