
Hybrid registration of retinal fluorescein angiography and optical coherence tomography images of patients with diabetic retinopathy

Open Access

Abstract

Diabetic retinopathy (DR) is a common ophthalmic disease among diabetic patients, and it is essential to diagnose DR in its early stages. Various imaging systems have been proposed to detect and visualize retinal diseases. Fluorescein angiography (FA) is widely used as the gold standard technique to evaluate the clinical manifestations of DR. Optical coherence tomography (OCT) imaging is another technique that provides 3D information of the retinal structure. The FA and OCT images are captured in different phases and fields of view, and the fusion of these modalities is of interest to clinicians. This paper proposes a hybrid registration framework based on the extraction and refinement of segmented major blood vessels of retinal images. The newly extracted features significantly improve the success rate of global registration in the complex blood vessel network of retinal images. Afterward, intensity-based and deformable transformations are utilized to further compensate for the motion between the FA and OCT images. Experimental results on 26 images of patients at various stages of DR indicate that this algorithm yields promising registration and fusion results for clinical routine.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Diabetic retinopathy (DR) is damage to the retinal microvasculature caused by diabetes. Untreated early-stage DR leads to the accumulation of fluid in the macula, known as diabetic macular edema (DME). DR and DME are leading causes of vision loss or blindness in the working-age population [1]. Early retinal capillary changes manifest as small detectable abnormalities such as microaneurysms. Retinal capillaries consist of various cells that are ensheathed by a membrane of Müller cells, and vessel leakage in diabetic retinas results from the malfunction of these Müller cells [2].

Retinal abnormalities and the accumulation of intraretinal fluid are captured using fluorescein angiography (FA) imaging, introduced in 1961 by Novotny and Alvis [3]. It is an invasive approach in which fluorescein dye is injected intravenously and imaged as it circulates through the retinal arteries and veins. FA imaging includes two phases: early frames and late frames. Focal DME, associated with discrete point leakage corresponding to microaneurysms, appears in the early frames of FA imaging. In contrast, diffuse DME with an unknown source of leakage is manifested in the late frames.

Nowadays, the Optical Coherence Tomography (OCT) imaging technique is widely used in ophthalmic imaging as it provides in-depth information of the eye [4,5]. Scanning Laser Ophthalmoscopy (SLO) is another imaging technique that scans a specific region of the retina. In [6], a simultaneous imaging system is introduced that acquires SLO and OCT in vivo; as a result, the OCT-SLO image is aligned with the OCT-Bscans. Additionally, Optical Coherence Tomography Angiography (OCTA) imaging has recently become popular in ophthalmic imaging, as it provides functional and structural information in tandem. OCTA is a non-invasive approach that provides the blood vessel structure (a 2D en-face image) and depth information (the OCT image) in a matter of seconds. However, OCTA suffers from a limited Field of View (FOV) and an inability to visualize leakage [7]. Despite the advantages of OCT imaging, such as being non-invasive and providing micron-scale ($\mu m$) depth information, FA remains the main source of diagnosis of abnormal fluid in diabetic retinal edema [8]. The FA technique provides a higher-resolution view of retinal vessels and of the location of leakage sources compared to other imaging systems. Both methods provide different, valuable information on the retinal structure.

To automatically detect DME or perform progression analysis in OCT-Bscan images, the fusion of information from both imaging modalities (FA and OCT-Bscans) is of interest to clinicians. In this paper, we propose a registration framework to register the FA images to the OCT-SLO images introduced by the simultaneous imaging technique of [6]. Registering the FA image to the OCT-SLO image thus gives access to the depth information of retinal images. Throughout the paper, the SLO image refers to the OCT-SLO image, and OCT-Bscans denote the cross-sectional information of OCT images.

Image registration is a challenging task [9,10]. The state-of-the-art image registration methods for retinal image alignment can be divided into three categories: feature-based registration, intensity-based registration, and hybrid methods [11].

In feature-based registration, a set of prominent feature points is extracted for registration [12]. The extracted features are then mapped to the target image under different transformation types such as rigid, affine, or deformable transformations. Typical features include vessel structures, the optic disc, and branching and bifurcation points. The feature-based registration of retina images typically involves vessel structure segmentation. In [13–15], the vessel network is segmented and designated as a matching metric for registration. In [16–19], bifurcation points are selected as a prominent feature of the retinal vascular network. The extraction of vascular networks using appropriate features such as bifurcation points is the main challenge in the complex vascular structure of retinal images. Arikan et al. [20] used a deep learning approach for vessel segmentation and the detection of vessel bifurcation points; an affine transformation is then applied in point-based and intensity-based registration, and the method is evaluated on images with equal FOVs. In [21], the OCT Fundus Image (OFI) is registered to the Color Fundus Photograph (CFP) in different FOVs. The proposed algorithm takes blood vessel ridges as the feature, and the Iterative Closest Point (ICP) algorithm together with intensity-based registration is used to register the OFI to the CFP.

Although vascular segmentation of retinal images is widely used in feature-based registration techniques, some researchers report that the optic disc is a common feature that provides acceptable results in low-resolution images [22,23]. Other groups investigated Speeded-Up Robust Features (SURF) [24] and the Scale-Invariant Feature Transform (SIFT) [19,25] as descriptors for the registration of retinal images. These descriptors serve as distinctive local features that do not depend on the vascular network. In [26], the SURF descriptor is integrated with partial intensity invariant feature descriptors to find corresponding points in the target image.

Intensity-based image registration transforms the source image to match the target image based on a pixel similarity metric through an optimization process. As in feature-based registration, the transformation can be rigid, affine, or deformable. Myronenko et al. [27] introduced a new similarity metric driven by minimizing the residual complexity between two images. In [28], rotation, translation, and scaling parameters are constrained in a Mutual Information (MI) registration, which significantly increases the success rate in retinal image registration. In [29], an enhanced MI was proposed to compensate for motion in retinal images, which is based on the maximization of Principal Component Analysis (PCA).

Combining features enhances registration accuracy [30], and some researchers have combined feature-based and intensity-based registration of retinal images. In [31], a hybrid registration framework is proposed that combines bifurcation point matching with an MI similarity measurement. In [32], landmarks extracted from the vascular network are combined with a second-order polynomial intensity-based registration. Chen et al. [33] proposed the Partial Intensity Invariant Feature Descriptor (PIIFD) as a similarity metric for low-quality retinal images. In [34], a two-step hybrid registration method is proposed: the images are first globally registered using descriptor matching on the mean phase, and then locally registered using a deformable registration algorithm.

To the best of our knowledge, the state of the art in multi-modal retinal image registration mainly focuses on images captured with the same FOV, such as [20,35,36]. In contrast, fewer studies have addressed multi-modal images captured with different FOVs, such as [21] and [37]. In the next section, a hybrid registration framework is introduced to register the SLO and FA images of DME patients with different FOVs.

2. Materials and methods

The images of retinal vascular networks display various orientations, thicknesses, and intensities. In this section, we describe our solution based on the major blood vessels of the retinal network instead of a fully connected vascular network.

2.1 Data acquisition

Twenty-six images of volunteers diagnosed with diabetic retinopathy were selected for this study. An expert ophthalmologist was responsible for capturing the images using the FA and OCT modalities at the Didavaran ophthalmology clinic. FA images were gray-scale, $768\times 768$ pixels, captured with a Heidelberg Spectralis HRA+OCT device (Heidelberg Engineering, Heidelberg, Germany). The FOVs of the early- and late-phase frames are $30{}^{\circ }$ and $55{}^{\circ }$, respectively; the pixel size is $12.5$ $\mu$m for the early images and $25$ $\mu$m for the late images. In addition, each OCT dataset included the SLO image and thirty-one OCT-Bscans. The specification of the SLO images was similar to that of the early FA images. All OCT images were captured using the eye-tracking based follow-up function (EBF) [38] to automatically find the desired location in subsequent examinations.

2.2 Pre-processing

We first enhanced the images to obtain a more accurate segmentation of the vascular tree. The first step in our framework is to reduce image noise with an average filter, which we implement as a $3\times 3$ convolution of the images. Afterward, we enhanced the images using a Contrast Limited Adaptive Histogram Equalization (CLAHE) filter. Next, the enhanced images were segmented with the global image threshold of Otsu’s method [39]. It is worth pointing out that we computed the complement of the SLO images before segmentation and registration.
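As a minimal sketch of this pre-processing chain (assuming OpenCV; only the $3\times 3$ kernel size is stated in the text, so the CLAHE parameters and file handling below are illustrative):

```python
import cv2

def preprocess(path, is_slo=False):
    """Sketch of the pre-processing chain: denoise, enhance, binarize."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if is_slo:
        img = cv2.bitwise_not(img)  # complement the SLO image first
    img = cv2.blur(img, (3, 3))     # 3x3 average filter for noise reduction
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))  # illustrative values
    img = clahe.apply(img)
    # Otsu's global threshold produces the binary vessel image
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary
```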

2.3 Thickness map

Our objective is to cluster the major blood vessels that appear as a vascular arch in retinal images. Narrow vessels and capillaries branch off the major vessels and have smaller thickness. Accordingly, our binary images can be defined as a graph $G = (\boldsymbol {V},\boldsymbol {E})$ with an 8-connected neighborhood, where $\boldsymbol {V}= \{v_i\}$ represents the vertices and $\boldsymbol {E} = \{e_{i,j}\}$ denotes the directed weight of the edge between $v_i$ and $v_j$. From the graph $G$, all connected pixels (8-connectivity) result in a $k$-label image:

$$\Pi = \Pi_0 \cup \Pi_1 \cup \cdots \cup \Pi_k; \quad \forall i \neq j,\ \Pi_{i} \cap \Pi_{j} = \emptyset$$
where $\Pi _0$ represents the background (zero-valued pixels in the binary image) and $\Pi _1,\ldots ,\Pi _k$ denote the foreground ($k$ connected ridges). In graph $G$, we define a pixel $v_i$ as a perimeter pixel if $v_i$ belongs to the foreground and at least one of its neighbors $v_j$ belongs to the background. The set $P$ therefore collects the vessel borders.
$$P = \left\{ v_{i} \in \Pi_{1},\ldots,\Pi_{k} \;\middle|\; \exists~j\colon v_{j} \in \Pi_{0} \right\}$$
The thickness map $\mathcal{T}$ is obtained from the Euclidean distance of vessel vertices in the binary image (pixels $(\mathbf{X},\mathbf{Y})$) to the nearest perimeter vertices defined in $P$ (pixels $(\mathbf{X_p},\mathbf{Y_p})$). The Euclidean distance $\sqrt{(\mathbf{X}-\mathbf{X_p})^{2} + (\mathbf{Y}-\mathbf{Y_p})^{2}}$ thus measures, for each vessel vertex, the distance to its nearest border vertex.

In our work, we approximated the thickness of vessels by extracting the morphological skeleton from the primary binary images and sampling the thickness map $\mathcal {T}$ on it. Fig. 1 shows a particular retinal case in different FOVs and its thickness map $\mathcal {T}$; the largest thickness value is about 9 pixels in the early FA image and approximately 6 pixels in the late image.
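A compact way to realize this step, assuming SciPy and scikit-image (the function name is ours):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import skeletonize

def thickness_map(binary):
    """Distance of every vessel pixel to the nearest border pixel (the set P),
    sampled on the morphological skeleton to approximate local vessel thickness."""
    vessels = binary > 0
    dist = distance_transform_edt(vessels)  # Euclidean distance to the background
    skel = skeletonize(vessels)
    return np.where(skel, dist, 0.0)        # thickness defined on centerline pixels
```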

Fig. 1. The thickness map demonstration of retinal images. The left column shows FA images in FOVs $30^{\circ }$ and $55^{\circ }$; the right column shows the corresponding thickness maps.

2.4 Vessel clustering

The thickness map $\mathcal {T}$ from the previous step represents all vessels of the retinal image, including micro, minor, and major vessels. To cluster these vessels, we utilized the Fuzzy C-Means (FCM) clustering method, which allows each data point to belong partially to more than one cluster. FCM is carried out through an iterative process that minimizes the following objective function [40]:

$$J_m (U,\nu) = \sum_{k=1}^{N}\sum_{i=1}^{c} (u_{ik})^{m} ||y_k - \nu_i||^{2}$$
where $Y = \{y_1,\ldots , y_N\}$ is the clustering data; $c$ is the number of clusters in $Y$ (equal to 3 in this paper, since we cluster vessels into micro, minor, and major vessels); $m$ is the weighting exponent, chosen experimentally ($m=2$ in our study); $U$ is a fuzzy $c$-partition of $Y$; $\nu = \{\nu _1,\ldots , \nu _c\}$ is the vector of centers, with $\nu _i = \{\nu _{i1},\ldots \nu _{in}\}$ the center of cluster $i$; and $u_{ik}$ is the degree of membership of $y_k$ in cluster $i$.
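A minimal fuzzy c-means sketch for the one-dimensional thickness values, implementing the objective above (our own implementation; a library FCM routine would serve equally well):

```python
import numpy as np

def fcm(y, c=3, m=2.0, iters=100, tol=1e-5, seed=0):
    """Minimal fuzzy c-means on 1-D data y (skeleton thickness values)."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, y.size))
    u /= u.sum(axis=0)                        # memberships of each point sum to 1
    for _ in range(iters):
        um = u ** m
        v = (um @ y) / um.sum(axis=1)         # weighted cluster centers
        d = np.abs(y[None, :] - v[:, None]) + 1e-12
        u_new = d ** (-2.0 / (m - 1.0))
        u_new /= u_new.sum(axis=0)            # standard FCM membership update
        if np.abs(u_new - u).max() < tol:
            return v, u_new
        u = u_new
    return v, u

# Hard labels: u.argmax(axis=0); sorting the centers v ranks the clusters
# as micro, minor, and major vessels.
```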

2.5 Producing connected ridge

Clustered ridges are classified into three categories: micro, minor, and major vessels. Major vessels are chosen as the prominent feature, since they are common to the early and late images. As time passes in FA imaging, the injected fluorescein changes the apparent thickness of vessels. We therefore suggest refining the classification to maximize the segmentation accuracy. Inspired by the algorithm proposed in [18], we rectify disconnected classified ridges by connecting them to the vascular network. Returning to graph $G$, we connect the nodes defined in Eq. (1). Assume the thick vessels are labeled as $\pi = \pi _1 \cup \cdots \cup \pi _n$, where $\pi \subset \Pi$. The weighted graph is then defined based on the distance of a vessel pixel $v_i$ to the closest vessel border pixel $v_j$:

$$e_{ij} = \begin{cases} 0 & v_i \in \pi_1 \cup \cdots \cup \pi_n \\ g(v_i) & v_i \in (\Pi_1 \cup \cdots \cup \Pi_k) - \pi \end{cases}$$
Setting $e_{ij} = 0$ when $v_i \in \pi _1 \cup \cdots \cup \pi _n$ ensures that all vertices of the already-accepted major vessels are included in the shortest path. The weights of the remaining vertices $v_i \in (\Pi _1 \cup \cdots \cup \Pi _k) - \pi$ penalize a misplaced bridge, defined by the distance of nodes $d(v_i,v_j)$ and a data term. The term $g(v_{i})$ favors the thickest path between disconnected nodes and is defined as follows:
$$g(v_{i}) = 1 - \frac{D(v_{i})}{max(\mathcal{T})}$$
where $D(v_{i})$ denotes the distance of pixel $i$ to the closest vessel border and $max(\mathcal {T})$ represents the maximum value in the thickness map $\mathcal {T}$. We are interested in finding the shortest path between disconnected ridges using Dijkstra’s algorithm [41]. The value of $g(v_{i})$ is normalized to the range $[0, 1]$, so the lowest-weight path $e_{ij}$ is the shortest path through the thickest segmented vessels.

To reduce the search space, we only kept the vertices corresponding to the segmented end-points. The biggest connected ridge ($BCC$) is taken as the starting ridge, with end-points $points\_BCC$. Afterwards, the distances to the remaining ridges are compared to find the shortest path between two ridges. The shortest paths from the biggest ridge to the other ridges are recorded, and the ridge at minimum distance is appended to the biggest ridge. In images with FOV $55^{\circ }$, the vessel structure is assumed to have no disconnection; due to possible noise or vessel leakage, a single ridge is discarded if there is no path from it to the biggest ridge. The procedure is summarized in Algorithm 1, where $list\_r$ represents the set of clustered ridges. Finally, the fully connected component label ($BCC$) represents the retinal vascular arch in our image.

[Algorithm 1: the ridge-connection procedure]
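Since Algorithm 1 appears only as an image in the published version, the sketch below reconstructs its core step under our reading of the text: a Dijkstra search with the weights of Eqs. (4)-(5), run from the end-points of the biggest connected ridge toward the nearest other ridge. Function and variable names are ours, and for simplicity the search runs over the whole grid rather than only the foreground vertices.

```python
import heapq
import numpy as np

def shortest_bridge(thick_map, major_mask, end_points, target_points):
    """Dijkstra from the BCC end-points toward another ridge. Node costs follow
    Eqs. (4)-(5): zero inside accepted major vessels, g(v) = 1 - D(v)/max(T) elsewhere."""
    h, w = thick_map.shape
    g_cost = 1.0 - thick_map / thick_map.max()   # thick pixels cost nearly zero
    dist = np.full((h, w), np.inf)
    prev, heap = {}, []
    for p in end_points:                         # seeds: end-points of the biggest ridge
        dist[p] = 0.0
        heapq.heappush(heap, (0.0, p))
    targets = set(map(tuple, target_points))
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) in targets:                    # reached another ridge: trace the bridge
            path = [(r, c)]
            while path[-1] in prev:
                path.append(prev[path[-1]])
            return d, path[::-1]
        if d > dist[r, c]:
            continue
        for dr in (-1, 0, 1):                    # 8-connected neighborhood
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (dr or dc) and 0 <= rr < h and 0 <= cc < w:
                    step = 0.0 if major_mask[rr, cc] else g_cost[rr, cc]
                    if d + step < dist[rr, cc]:
                        dist[rr, cc] = d + step
                        prev[(rr, cc)] = (r, c)
                        heapq.heappush(heap, (d + step, (rr, cc)))
    return np.inf, []                            # no bridge found: the ridge is discarded
```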

A thicker path $e_{ij}$ between two disconnected ridges gives a smaller weight (close to zero), while the weight of a narrow path is closer to 1. In our algorithm, there are four possible conditions for the bridge between two disconnected ridges: 1) a thick and short path, 2) a thick and long path, 3) a narrow and short path, and 4) a narrow and long path. Our algorithm selects condition 1 and discards condition 4, while thickness and path length compete between conditions 2 and 3: a very long path with larger thickness may be discarded in favor of a very short but narrow path. Figure 2 shows disconnected ridges that could be connected through micro-vessels (blue) or minor vessels (red); the red bridges are selected to connect the black ridges.

Fig. 2. The minor vessels (red) are the true candidates to connect the major vessels (black). Blue vessels represent micro-vessels.

Figure 3 shows vessel clustering in FOVs $30^{\circ }$ and $55^{\circ }$. Red indicates major blood vessels and blue indicates minor and micro vessels; the bottom-row images illustrate the connected major blood vessels. As can be seen in the late image, all ridges are connected together and form a single connected network. A ridge is discarded if it has no connectivity to other ridges; in contrast, single ridges are kept in the SLO and early images.

Fig. 3. The early images and clustered vessels (left column) compared to the clustered late image (right column). The top row shows major vessels (red) and minor vessels (blue) before vessel refinement; the bottom row shows the refined major vessels (red).

2.6 Image registration

The registration of scanning laser ophthalmoscope and fluorescein angiography retinal images is a challenging task due to the complex structure of the vessel networks in different FOVs and modalities. In general, image registration is the estimation of a transformation $T$ that maps the source image $A$ to the target image $B$, so that the target image $B$ is approximately equal to the transformed source image $T(A)$, i.e., ${B} \approx {T}({A})$. In other words, every point $\mathbf {p}$ of image $A$ is transformed under $T$ to a new location $\mathbf {p}' = {T}(\mathbf {p})$. This transformation is represented by a displacement field $\boldsymbol {\mu }$, i.e., ${T}(\mathbf {p}) = \mathbf {p} + \boldsymbol {\mu }(\mathbf {p})$, and image registration aims to estimate $\boldsymbol {\mu }$:

$${B}(\mathbf{p}) \approx {A}( \mathbf{p} + \boldsymbol{\mu}(\mathbf{p}))$$
In our framework, shown in Fig. 4, we register the SLO image (source) to the early image (target) by transformation $T_{1}$; the early image is then registered to the late image by transformation $T_{2}$. The overall transformation $T$ is the composition $T(p) = T_{2}( T_{1}(p))$, which yields $\boldsymbol {\mu } =\boldsymbol {\mu }_{1} + \boldsymbol {\mu }_{2}$. The registration aims to find, for every pixel of the SLO image, its corresponding point in the late image; the inverse transformation $T^{-1}$, derived from the inverse displacement field $\boldsymbol {\mu }^{-1}$, maps the late image back to the SLO image. In the rest of this section, the transformation details and registration parameters are presented in three subsections.
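As a sketch of how the two displacement fields chain together (assuming dense fields stored in (row, col) order; the exact composition samples $\boldsymbol{\mu}_2$ at the points that $T_1$ maps to, which reduces to the simple sum $\boldsymbol{\mu}_1 + \boldsymbol{\mu}_2$ for small displacements):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def compose_fields(mu1, mu2):
    """Compose dense displacement fields of shape (2, H, W):
    T(p) = T2(T1(p))  =>  mu(p) = mu1(p) + mu2(p + mu1(p))."""
    _, h, w = mu1.shape
    rows, cols = np.mgrid[0:h, 0:w].astype(float)
    warped_r, warped_c = rows + mu1[0], cols + mu1[1]
    # sample mu2 at the locations that T1 maps each pixel to
    mu2_at_t1 = np.stack([
        map_coordinates(mu2[0], [warped_r, warped_c], order=1, mode='nearest'),
        map_coordinates(mu2[1], [warped_r, warped_c], order=1, mode='nearest'),
    ])
    return mu1 + mu2_at_t1
```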

Fig. 4. The registration framework. The SLO image is transformed to the early image (transformation $T_{1}$) and the early image is transformed to the late image (transformation $T_{2}$); the composition of $T_{1}$ and $T_{2}$ produces the transformation $T$. The inverse transformation $T^{-1}$ maps the late image back to the SLO image.

2.6.1 Global transformation

Global, or feature-based, registration is the part of the proposed framework that aligns the source image to the target image using corresponding points. In other words, feature-based registration focuses on a sparse set of image locations (corresponding points) to find a suitable mapping. In our study, the border of the major blood vessels is chosen as the common feature between images captured with different FOVs. We used the Coherent Point Drift (CPD) [27] algorithm to assign the correspondence between two sets of major blood vessels. CPD is a probabilistic approach that fits the centroids of one point cloud to another by maximizing the likelihood, while limiting the movement of the Gaussian Mixture Model (GMM) components to preserve the structure of the point cloud.

For the first registration step (transformation $T_{1}$), an affine transformation is selected to register the SLO major vessels to the corresponding points in the early image. For this transformation, we constrained the CPD registration in its rotation and scaling factors: the rotation angle was limited to $\pm 5^{\circ }$, and the scaling factor was restricted between $0.95$ and $1.05$. The transformation $T_{2}$ corresponds to the registration of the early image to the late image and includes two steps: i) a rigid (linear) transformation with a rotation angle limited to $\pm 5^{\circ }$ and a scaling factor of $0.65$, and ii) an affine transformation with a rotation angle limited to $\pm 5^{\circ }$ and a scaling factor restricted between $0.50$ and $0.80$. The rigid transformation of the first step thus initializes the geometry for the affine transformation. Figure 5 depicts the point clouds corresponding to the borders of the major blood vessels before and after transformation: Figs. 5(a) and 5(b) show the SLO and early point clouds before and after the global registration, while Figs. 5(c) and 5(d) show the corresponding points of the major vessels of the early and late images before and after global registration.
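CPD itself estimates the correspondences probabilistically; purely as an illustration of the rotation and scale limits, the sketch below fits a similarity transform between already-matched 2-D point sets and clamps the estimated angle and scale to the stated bounds (a simplification of constrained optimization; names are ours).

```python
import numpy as np

def constrained_similarity(src, dst, max_rot_deg=5.0, s_min=0.95, s_max=1.05):
    """Fit dst ~ s * R @ src + t for matched N x 2 point sets, then clamp the
    rotation angle and scale to the bounds used in the global registration step."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    u, sv, vt = np.linalg.svd(dst_c.T @ src_c)     # orthogonal Procrustes
    if np.linalg.det(u @ vt) < 0:                  # avoid reflections
        u[:, -1] *= -1
    r = u @ vt
    theta = np.arctan2(r[1, 0], r[0, 0])
    theta = np.clip(theta, -np.deg2rad(max_rot_deg), np.deg2rad(max_rot_deg))
    scale = np.clip(sv.sum() / (src_c ** 2).sum(), s_min, s_max)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    t = dst.mean(0) - scale * rot @ src.mean(0)
    return scale, rot, t                           # maps p -> scale * rot @ p + t
```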

Fig. 5. CPD registration of the vessel point clouds. (a) and (b) depict the major vessels of the SLO (green) and early (blue) images before and after registration. (c) and (d) show the early (blue) and late (red) corresponding points before and after global registration.

2.6.2 Intensity-based registration

Intensity-based registration evaluates the similarity of the pixels of the source and target images to find the best match between them. The globally transformed images are registered a second time using intensity-based registration with an affine transformation. Intensity-based image registration is broadly composed of four main components: a cost function, an interpolator, an optimizer, and a transformation. The optimization process minimizes a cost function based on a similarity metric $S$ over the target image. The similarity metrics considered in this study are:

1) Mutual Information: the Mutual Information (MI) metric is popular in multi-modal retinal image registration, where the cost function compares the correlation or MI between images. Since the SLO and early images are captured with different modalities, MI was chosen for the $T_{1}$ transformation. Let $\mathcal {P_{B,A'}(X,Y)}$ be the joint probability density function of the target image $B$ and the transformed source image $A'$ over two random pixel variables $X$ and $Y$, and let $\mathcal {P_{B}(X)}$ and $\mathcal {P_{A'}(Y)}$ be the marginal densities obtained by marginalizing over $B$ and $A'$, respectively. Mutual information is then:

$$I(X,Y) = \sum_{x\in X}\sum_{y \in Y} P_{B,A'}(x,y) \log\frac{P_{B,A'}(x,y)}{P_{B}(x)\,P_{A'}(y)}$$
2) Sum of Squared Differences: the Sum of Squared Differences (SSD) metric measures the intensity difference between the target image $B$ and the transformed source image $A'$.
$$S_{SSD}{(B,A')} = \frac{1}{N} \sqrt{ \sum_{i=1}^{N}\left[B(i)-A'(i)\right]^{2} }$$
where $S$ measures the intensity difference of corresponding pixels between $B$ and $A'$; if the two images are perfectly aligned, the SSD is zero. We utilized the SSD similarity metric for the registration of the early image to the late image.
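Both metrics are straightforward to evaluate from image arrays; a sketch assuming NumPy, with the MI estimated from a joint histogram (the bin count is illustrative):

```python
import numpy as np

def mutual_information(b, a, bins=64):
    """MI of Eq. (7), estimated from the joint histogram of B and A'."""
    joint, _, _ = np.histogram2d(b.ravel(), a.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)            # marginal over B
    py = pxy.sum(axis=0, keepdims=True)            # marginal over A'
    nz = pxy > 0                                   # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def ssd(b, a):
    """Eq. (8): root of the summed squared differences, scaled by 1/N."""
    diff = b.astype(float) - a.astype(float)
    return np.sqrt((diff ** 2).sum()) / diff.size
```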

In our study, the transformations $T_{1}$ and $T_{2}$ were refined through a gradient descent optimization process that minimizes the cost function (MI for $T_{1}$ and SSD for $T_{2}$) between the transformed image and the target image.

2.6.3 Deformable image registration

The global and intensity-based registration methods compensate for motion through rigid and non-rigid transformations; however, due to the fluorescein injection, the thickness of blood vessels may change locally. To compensate for the remaining deformable changes, Free Form Deformation (FFD) [42] registration is performed to cover the residual misregistration. This transformation is based on a uniform cubic B-spline deformation, with a grid spacing of 3 pixels for all images. The MI similarity metric is chosen for the deformable transformation $T_{1}$, since the SLO and early images are acquired with different modalities; the early and late images are captured with the same modality, so the SSD metric is utilized for the deformable transformation $T_{2}$.
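The authors implemented their pipeline in Matlab and C++; as an alternative, the same kind of B-spline FFD step can be sketched with SimpleITK (the mesh size, learning rate, and iteration count below are illustrative, not the paper's settings):

```python
import SimpleITK as sitk

def ffd_register(fixed_path, moving_path, mesh_size=(8, 8), use_mi=True):
    """Cubic B-spline free-form deformation with an MI or mean-squares metric."""
    fixed = sitk.ReadImage(fixed_path, sitk.sitkFloat32)
    moving = sitk.ReadImage(moving_path, sitk.sitkFloat32)
    tx = sitk.BSplineTransformInitializer(fixed, list(mesh_size))
    reg = sitk.ImageRegistrationMethod()
    if use_mi:                                  # T1: SLO -> early (multi-modal)
        reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    else:                                       # T2: early -> late (mono-modal)
        reg.SetMetricAsMeanSquares()
    reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetInitialTransform(tx, inPlace=True)
    final_tx = reg.Execute(fixed, moving)
    return sitk.Resample(moving, fixed, final_tx, sitk.sitkLinear, 0.0)
```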

Since there is no gold standard for evaluating image registration techniques, measuring the accuracy of registration algorithms is challenging; however, some measures are more reliable than others. The Target Registration Error (TRE) is one of the popular approaches to measuring registration error [43–45]. To calculate the TRE, trained clinicians manually selected approximately 30 landmarks on vascular branch points in the SLO image and the corresponding registered FA image (Fig. 7).
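Given the manually picked landmark pairs, the TRE reduces to the mean Euclidean distance, converted to microns with the pixel size from Section 2.1 (a sketch; the function name is ours):

```python
import numpy as np

def tre_microns(landmarks_ref, landmarks_reg, pixel_size_um=12.5):
    """Mean and std of the target registration error over landmark pairs, in microns."""
    d = np.linalg.norm(np.asarray(landmarks_ref) - np.asarray(landmarks_reg), axis=1)
    return d.mean() * pixel_size_um, d.std() * pixel_size_um
```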

It is worth mentioning that our experiments were run on a PC with a Windows 10 64-bit operating system, 4 GB of RAM, and an Intel Core i5 (3rd Gen) 3317U processor (1.7 GHz, max turbo 2.6 GHz). We implemented our method partially in Matlab and C++; the average running time is about 195 s per image.

3. Results

The transformation $T = T_{2}(T_{1})$ represents the overall transformation of the SLO image to the late image, and the inverse transformation $T^{-1}$ registers the late image to the SLO image. Figure 6 depicts the transformation results of the SLO image to the early image (Fig. 6(a)), early to late (Fig. 6(b)), SLO to late (Fig. 6(c)), and late to SLO (Fig. 6(d)), corresponding to the $T_{1}$, $T_{2}$, $T$, and $T^{-1}$ transformations, respectively.

Fig. 6. The transformed image at each step: (a) the SLO image transformed to the early image ($T_{1}$), (b) the early image transformed to the late image ($T_{2}$), (c) the SLO image transformed to the late image ($T$), and (d) the inverse transformation of $T$, which aligns the late image to the SLO image ($T^{-1}$).

Fig. 7. Landmark selection from the retinal vascular network of a sample late image.

In Table 1, our hybrid registration is compared to an intensity-based (affine) registration algorithm [46] on the 26 cases. The hybrid registration algorithm consists of a sequence of global CPD affine registration, intensity-based registration, and the deformable approach; the final results are therefore shown in the last column of Table 1. The table lists the TRE misregistration values of the $T^{-1}$ transformation for our method and for intensity-based registration. The intensity-based registration is restricted in its rotation and scaling parameters in the same way as our proposed method, and for both methods the MI metric is selected for the multi-modal transformation $T_{1}$ and the SSD metric for the mono-modal registration $T_{2}$. The results of our proposed method are shown in three columns: the global affine transformation (CPD), the intensity-based registration, and the deformable transformation. The TRE values in Table 1 express the misregistration in microns ($\mu m$), and the results were evaluated by medical expert staff; the averages and standard deviations of the TRE values are given at the bottom of the table. From this table, we conclude that the average TRE of our hybrid framework decreases when the intensity-based registration and the deformable transformation are applied after the CPD transformation. Moreover, the intensity-based approach successfully registers 15 out of 26 images (success rate $58\%$), while our proposed hybrid registration method fails in only 3 images (success rate $89\%$).

Table 1. The target registration error ($\mu m$) of vascular structure points. The gray column depicts intensity-based affine registration, compared to our method in three steps: global, intensity-based, and deformable transformations. In the third column, the TRE for the intensity-based method was computed on data first registered with the global approach; in the last column, the TRE for the deformable method was computed on data first registered with the global method and then with the intensity-based technique.

Table 2 compares the success rates of correctly aligned images, separately for registration with equal FOVs and with different FOVs. Our proposed method achieves $92\%$ success in registration with equal FOVs, compared to an $89\%$ success rate for images with different FOVs.

Table 2. The success rate of different registration methods on multi-modal datasets.

The difference between the reference image and the registered image can approximately visualize the accuracy of the registration framework. The reference image should be normalized, or the two images must share the same modality, for the intensity difference to be meaningful. In our study, we first inverted the SLO image to obtain a bright vessel structure, and then matched the histogram of the SLO image to that of the early image. Figures 8(a) and 8(b) illustrate the SLO and normalized SLO images, respectively, while Fig. 8(c) shows the final registration result and Fig. 8(d) the difference between the normalized SLO and the final registered image.
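A sketch of this visualization step, assuming scikit-image for the histogram matching (function name ours):

```python
import numpy as np
from skimage.exposure import match_histograms

def difference_view(slo, early, registered_late):
    """Invert the SLO so vessels appear bright, match its histogram to the early
    image, then take the absolute difference with the registered late image."""
    slo_inv = 255 - slo                              # bright vessel structure
    slo_norm = match_histograms(slo_inv, early)      # normalize intensity statistics
    return np.abs(slo_norm.astype(float) - registered_late.astype(float))
```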

Fig. 8. Visualization of the registration accuracy by image differences: (a) the SLO image, (b) the inverted, normalized SLO image, (c) the late image registered to the SLO image, and (d) the difference image between (b) and (c).

More cases are visualized in Fig. 9. Each row contains five images: the first four are the SLO, early, late, and registration results ($T^{-1}$), respectively, and the fifth is a checkerboard image showing the registered FA image and the original SLO image simultaneously.

Fig. 9. Illustration of the $T^{-1}$ transformation results for five cases. The first three columns show the SLO, early, and late FA images, respectively, while the fourth column shows the inverse-transformed image. The last column shows the checkerboard images between the registered late FA and original SLO images.

Finally, we demonstrate the application of our registration framework by analyzing microaneurysm and leakage abnormalities in the involved areas of the registered OCT-Bscan and FA images. In Fig. 10(a), two microaneurysms are marked by red circles, and Fig. 10(b) depicts the depth image of those areas in the corresponding OCT-Bscan. Figures 10(c) and 10(d) illustrate a leakage spot in the FA image and the corresponding OCT-Bscan, respectively.

Fig. 10. The application of the FA image registered to the SLO image in a DR case with microaneurysm abnormalities: (a) two microaneurysms shown in red, (b) the OCT-Bscan corresponding to the green line in (a) with the involved microaneurysm areas (red strips), (c) the selected leakage region in an FA image (red rectangle), and (d) the OCT-Bscan corresponding to the green line in (c) with the involved leakage region (red strips).

To facilitate comparison by other groups, we have made available a dataset including FA images in FOVs $30^{\circ }$ and $55^{\circ }$ with the corresponding OCT images, together with our registration results, at https://misp.mui.ac.ir/en/golkar.

4. Discussion

In our algorithm, we produce a thickness map $\mathcal {T}$ from a retinal binary image. In practice, diabetic retinopathy may change the vascular structure; in particular, leakage from blood vessels significantly alters the thickness map $\mathcal {T}$. To overcome this problem, and based on our database, we defined heuristic threshold values of $9$ pixels for the SLO and early images and $6$ pixels for the late images; all values beyond the threshold are suppressed within a radius around the detected pixel. These heuristic values are taken from the average thickness of the major vessels in the thickness map (9 pixels in SLO images and 6 pixels in late images). In addition, the rotation and scaling factors are restricted to obtain a more accurate alignment. Rotation is limited to $\pm 5^{\circ }$, since the patient's head is fixed during image capture and there is little rotation between retinal images. The scaling factor is limited to $\pm 0.05$ for equal FOVs and $\pm 0.15$ for different FOVs: in the registration of SLO and early images the scaling factor is almost equal to 1, so we restrict it between $0.95$ and $1.05$ ($1.0 \pm 0.05$); in the registration of images with different FOVs, the structures in the late images appear at roughly 65 percent of the scale of the early images, so the scaling factor is restricted between $0.50$ and $0.80$ ($0.65 \pm 0.15$).

This study proposed an algorithm to register Fluorescein Angiography late images with FOV $55^{\circ }$ to Scanning Laser Ophthalmoscope images with FOV $30^{\circ }$. To this end, we first register the SLO image to the FA early image ($T_{1}$), and then the early image to the late image ($T_{2}$). The composition of $T_{1}$ and $T_{2}$ provides the displacement field $\mu$ that registers the SLO image to the late image, and the inverse transformation ($T^{-1}$) aligns the late image to the SLO image. In our experiments, we found that the global transformation and intensity-based registration are more likely to succeed on the FA images when the SLO image is registered to the early image, which is then registered to the late image; we therefore used the inverse transformation to increase the accuracy of our results.

Feature-based methods align two images based on the partial intersection of the images, so they register the source image to the target image globally. In intensity-based registration methods, the optimization is local, around each pixel. Applied separately, these registration strategies fail on our dataset; in contrast, our hybrid method combines the advantages of both strategies to find the best match between FA and OCT images with different FOVs.

The success rate of our method in the registration of different FOVs is higher than that of the other methods in Table 2, except Ref. [37]. We note that Ref. [37] utilizes the location of the optic disc for global registration; this is not applicable to our macular image dataset, since the optic disc is absent from the SLO and early images. In another study [21], a quadratic deformable registration algorithm (with a $77\%$ success rate) was used to align OCT fundus images and color fundus images. Finally, we compared our results with our recently published method [47] for the registration of images with similar and different FOVs: that method showed higher accuracy in registration with equal FOVs (a $97\%$ success rate) but a lower success rate than ours with different FOVs ($62\%$).

The methodological difference from our previous studies is that in [47], a feature-based method was used for the rigid registration step, based on a Gaussian model of the curved surface of the retina, and the registration was then improved locally using a diffusion model; Table 2 compares our results with those of that method. In [15], a simple multi-step correlation-based algorithm was used for rigid registration, followed by multi-resolution local registration around the microaneurysm areas in the OCT B-scans; that method was unsuccessful on the late images of our database, in which microaneurysms are not sharply visible. In addition, the datasets used in our previous works do not include late images, or only partially include images with FOV $55^{\circ }$. In the present study, we emphasized the registration of images with different FOVs, and all of our cases include late images. Registering both early and late images is important, since some abnormalities are visible in the early images and others only in the late phase.

The current paper proposed a hybrid registration framework based on the extraction and refinement of segmented major blood vessels of retinal images. The extracted features significantly improved the success rate of global registration in the complex blood vessel network of retinal images, and intensity-based and deformable transformations were then utilized to further compensate for the motion between the FA and OCT images. We would like to stress that our registration problem is much more complex than the previously mentioned works, and popular tools such as GIMP and ImageJ cannot be used to align and register the images of our database. The main reasons are the absence of the optic disc, the lack of a color modality, the intensity changes between the early and late capture phases, and, most importantly, the different FOVs ($55^{\circ }$ versus $30^{\circ }$) of the images in the proposed database.

To further discuss the advantages of this method in clinical practice, an ophthalmologist helped us compare OCT-Bscans and registered FA images. The main advantage of FA-OCT registration is simultaneous access to the depth information of the retina and the 2D functional en-face information. The comparison showed that although FA is the gold standard imaging method for the detection of retinal abnormalities, it has limitations. For example, the source of a leakage may not be visible in FA images, while the leakage source, due to vascular bleeding, microaneurysms, or vascular leakage, is detectable in OCT-Bscans; ophthalmologists can therefore identify the source of abnormalities and prescribe better treatment. In addition, some retinal abnormalities that are not visible in the FA image due to timing limitations (such as microaneurysms in the early and late images) can be recognized in OCT-Bscans, so the exact location of a microaneurysm can be found even when it is not visible in the FA images. It is noteworthy that the number of OCT-Bscans is very limited, so we suggest utilizing OCT-Bscans as a complementary imaging method for DR patients.

The approach presented in this study assumes that the macula and the vascular arch are captured in both the SLO and FA images; the vascular arch in both images is then selected as the prominent feature that the global transformation uses to register the late image to the SLO image. For cases #12, #15, and #16, only half of the vascular arch is captured in the SLO and early images. In those cases, although the SLO image is transformed successfully to the early image ($T_{1}$), the registration of the early image to the late image fails ($T_{2}$). We note that the proposed algorithm may fail if only the upper or lower part of the vascular arch is captured in the FOV $30^{\circ }$ images.

5. Conclusion

Fluorescein Angiography is recognized as the gold standard technique to evaluate retinal diseases, and the registration of Fluorescein Angiography images with other image modalities can provide valuable complementary information for analyzing retinal diseases such as diabetic retinopathy. In this paper, we presented a new method to register Fluorescein Angiography late images to Scanning Laser Ophthalmoscope images with different FOVs. We extracted and refined segmented major blood vessels of the retina in both modalities; these features, which represent the major blood vessels of the retinal arch, significantly improve the success rate of global registration in the complex blood vessel network of images with different FOVs. Experimental results on twenty-six retinal diabetic retinopathy images indicate that our method yields promising results for the registration and fusion of these images. By comparing our results with the OCT-Bscans, we suggest utilizing both modalities for diabetic retinopathy patients; with the aid of our proposed registration algorithm, the fused information guides ophthalmologists in resolving uncertain points that are not clear in the FA images. Future studies include experiments on larger datasets; moreover, the results can be extended with machine learning methods to classify retinal abnormalities in OCT-Bscans.

Funding

Iran's National Elites Foundation; Isfahan University of Medical Sciences (198089).

Acknowledgments

The authors would like to thank Prof. Sina Farsiu from Duke Eye Center for his valuable ideas and comments in the initiation and development of this study.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. H. R. Taylor and J. E. Keeffe, “World blindness: a 21st century perspective,” Br. J. Ophthalmol. 85(3), 261–266 (2001). [CrossRef]  

2. A. Bringmann, T. Pannicke, J. Grosche, M. Francke, P. Wiedemann, S. N. Skatchkov, N. N. Osborne, and A. Reichenbach, “Müller cells in the healthy and diseased retina,” Prog. Retinal Eye Res. 25(4), 397–424 (2006). [CrossRef]  

3. H. R. Novotny and D. L. Alvis, “A method of photographing fluorescence in circulating blood in the human retina,” Circulation 24(1), 82–86 (1961). [CrossRef]  

4. N. Tanno, T. Ichikawa, and A. Saeki, “Lightwave reflection measurement,” Jpn. patent 2010042 (1990).

5. A. Fercher, “Ophthalmic interferometry,” in Optics in Medicine, Biology and Environmental Research First International Conference on Optics Within Life Sciences (OWLS I), Garmisch-Partenkirchen, Germany, (1990), pp. 12–16.

6. M. Pircher, R. Zawadzki, J. Evans, J. S. Werner, and C. Hitzenberger, “Simultaneous imaging of human cone mosaic with adaptive optics enhanced scanning laser ophthalmoscopy and high-speed transversal scanning optical coherence tomography,” Opt. Lett. 33(1), 22–24 (2008). [CrossRef]  

7. T. E. De Carlo, A. Romano, N. K. Waheed, and J. S. Duker, “A review of optical coherence tomography angiography (OCTA),” Int. J. Retina Vitreous 1(1), 5 (2015). [CrossRef]

8. N. Feucht, M. Maier, C. P. Lohmann, and L. Reznicek, “Oct angiography findings in acute central serous chorioretinopathy,” Ophthalmic Surgery, Lasers and Imaging Retin. 47(4), 322–327 (2016). [CrossRef]  

9. V. A. Zimmer, M. Á. G. Ballester, and G. Piella, “Multimodal image registration using laplacian commutators,” Inf. Fusion 49, 130–145 (2019). [CrossRef]  

10. Y. Li, Z. He, H. Zhu, W. Zhang, and Y. Wu, “Jointly registering and fusing images from multiple sensors,” Inf. Fusion 27, 85–94 (2016). [CrossRef]  

11. S. K. Saha, D. Xiao, A. Bhuiyan, T. Y. Wong, and Y. Kanagasingam, “Color fundus image registration techniques and applications for automated analysis of diabetic retinopathy progression: A review,” Biomed. Signal Process. Control. 47, 288–302 (2019). [CrossRef]  

12. X. Yuan, J. Zhang, and B. P. Buckles, “Evolution strategies based image registration via feature matching,” Inf. Fusion 5(4), 269–282 (2004). [CrossRef]  

13. B. Fang, W. Hsu, and M.-L. Lee, “Techniques for temporal registration of retinal images,” in 2004 International Conference on Image Processing, 2004. ICIP’04., vol. 2 (IEEE, 2004), pp. 1089–1092.

14. X. Guo, W. Hsu, M. L. Lee, and T. Y. Wong, “A tree matching approach for the temporal registration of retinal images,” in 2006 18th IEEE International Conference on Tools with Artificial Intelligence (ICTAI’06), (IEEE, 2006), pp. 632–642.

15. Z. G. Kamasi, M. Mokhtari, and H. Rabbani, “Non-rigid registration of fluorescein angiography and optical coherence tomography via scanning laser ophthalmoscope imaging,” in Engineering in Medicine and Biology Society (EMBC), 2017 39th Annual International Conference of the IEEE (IEEE, 2017), pp. 4415–4418.

16. A. Can, C. V. Stewart, B. Roysam, and H. L. Tanenbaum, “A feature-based, robust, hierarchical algorithm for registering pairs of images of the curved human retina,” IEEE Trans. Pattern Anal. Mach. Intell. 24(3), 347–364 (2002). [CrossRef]

17. M. Fernandes, Y. Gavet, and J.-C. Pinoli, “A feature-based dense local registration of pairs of retinal images,” in VISAPP 2009: 4th International Conference on Computer VISion Theory and APplications, vol. 1 (INSTICC-Institut for Systems and Technologies of Information Control and …, 2009), p. 265.

18. L. Chen, X. Huang, and J. Tian, “Retinal image registration using topological vascular tree segmentation and bifurcation structures,” Biomed. Signal Process. Control. 16, 22–31 (2015). [CrossRef]  

19. J. A. Lee, J. Cheng, G. Xu, E. P. Ong, B. H. Lee, D. W. K. Wong, and J. Liu, “Registration of color and oct fundus images using low-dimensional step pattern analysis,” in International Conference on Medical Image Computing and Computer-Assisted Intervention, (Springer, 2015), pp. 214–221.

20. M. Arikan, A. Sadeghipour, B. Gerendas, R. Told, and U. Schmidt-Erfurth, “Deep learning based multi-modal registration for retinal imaging,” in Interpretability of Machine Intelligence in Medical Image Computing and Multimodal Learning for Clinical Decision Support (Springer, 2019), pp. 75–82.

21. Y. Li, G. Gregori, R. W. Knighton, B. J. Lujan, and P. J. Rosenfeld, “Registration of OCT fundus images with color fundus photographs based on blood vessel ridges,” Opt. Express 19(1), 7–16 (2011). [CrossRef]

22. L. Andreou and A. Achim, “Temporal registration for low-quality retinal images of the murine eye,” in 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology (IEEE, 2010), pp. 6272–6275.

23. J. V. Di Xiao, J. Lock, S. Frost, M.-L. Tay-Kearney, and Y. Kanagasingam, “Retinal image registration and comparison for clinical decision support,” Australas. Med. J. 5(9), 507–512 (2012). [CrossRef]

24. H. Rabbani, M. J. Allingham, P. S. Mettu, S. W. Cousins, and S. Farsiu, “Fully automatic segmentation of fluorescein leakage in subjects with diabetic macular edema,” Investig. Ophthalmol. Vis. Sci. 56(3), 1482–1492 (2015). [CrossRef]

25. Z. Ghassabi, J. Shanbehzadeh, and A. Mohammadzadeh, “A structure-based region detector for high-resolution retinal fundus image registration,” Biomed. Signal Process. Control 23, 52–61 (2016). [CrossRef]

26. H. Tang, A. Pan, Y. Yang, K. Yang, Y. Luo, S. Zhang, and S. H. Ong, “Retinal image registration based on robust non-rigid point matching method,” J. Med. Imaging Heal. Informatics 8(2), 240–249 (2018). [CrossRef]  

27. A. Myronenko and X. Song, “Intensity-based image registration by minimizing residual complexity,” IEEE Transactions on Med. Imaging 29(11), 1882–1891 (2010). [CrossRef]  

28. Y.-M. Zhu, “Mutual information-based registration of temporal and stereo retinal images using constrained optimization,” Comput. Methods Programs Biomedicine 86(3), 210–215 (2007). [CrossRef]  

29. P. S. Reel, L. S. Dooley, K. P. Wong, and A. Börner, “Enhanced retinal image registration accuracy using expectation maximisation and variable bin-sized mutual information,” in 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (IEEE, 2014), pp. 6632–6636.

30. X. Peng, M. Ding, C. Zhou, and Q. Ma, “A practical two-step image registration method for two-dimensional images,” Inf. Fusion 5(4), 283–298 (2004). [CrossRef]  

31. T. Chanwimaluang, G. Fan, and S. R. Fransen, “Hybrid retinal image registration,” IEEE Trans. Inf. Technol. Biomed. 10(1), 129–142 (2006). [CrossRef]  

32. R. Kolar, V. Harabis, and J. Odstrcilik, “Hybrid retinal image registration using phase correlation,” The Imaging Sci. J. 61(4), 369–384 (2013). [CrossRef]  

33. J. Chen, J. Tian, N. Lee, J. Zheng, R. T. Smith, and A. F. Laine, “A partial intensity invariant feature descriptor for multimodal retinal image registration,” IEEE Trans. Biomed. Eng. 57(7), 1707–1718 (2010). [CrossRef]  

34. Z. Li, F. Huang, J. Zhang, B. Dashtbozorg, S. Abbasi-Sureshjani, Y. Sun, X. Long, Q. Yu, B. ter Haar Romeny, and T. Tan, “Multi-modal and multi-vendor retina image registration,” Biomed. Opt. Express 9(2), 410–422 (2018). [CrossRef]  

35. P. S. Reel, L. S. Dooley, K. C. P. Wong, and A. Börner, “Multimodal retinal image registration using a fast principal component analysis hybrid-based similarity measure,” in 2013 IEEE International Conference on Image Processing, (IEEE, 2013), pp. 1428–1432.

36. M. Mokhtari, H. Rabbani, A. Mehri-Dehnavi, R. Kafieh, M.-R. Akhlaghi, M. Pourazizi, and L. Fang, “Local comparison of cup to disc ratio in right and left eyes based on fusion of color fundus images and oct b-scans,” Inf. Fusion 51, 30–41 (2019). [CrossRef]  

37. M. S. Miri, M. D. Abràmoff, Y. H. Kwon, and M. K. Garvin, “Multimodal registration of SD-OCT volumes and fundus photographs using histograms of oriented gradients,” Biomed. Opt. Express 7(12), 5252–5267 (2016). [CrossRef]

38. L. J. Balk and A. Petzold, “Influence of the eye-tracking–based follow-up function in retinal nerve fiber layer thickness using fourier-domain optical coherence tomography,” Investig. Ophthalmology & Visual Science 54(4), 3045 (2013). [CrossRef]  

39. N. Otsu, “A threshold selection method from gray-level histograms,” IEEE Transactions on Syst. Man, Cybern. 9(1), 62–66 (1979). [CrossRef]  

40. J. C. Bezdek, R. Ehrlich, and W. Full, “FCM: The fuzzy c-means clustering algorithm,” Comput. Geosci. 10(2-3), 191–203 (1984). [CrossRef]

41. E. W. Dijkstra, “A note on two problems in connexion with graphs,” Numer. Math. 1(1), 269–271 (1959). [CrossRef]

42. D. Rueckert, L. I. Sonoda, C. Hayes, D. L. Hill, M. O. Leach, and D. J. Hawkes, “Nonrigid registration using free-form deformations: application to breast mr images,” IEEE Transactions on Med. Imaging 18(8), 712–721 (1999). [CrossRef]  

43. K. K. Brock, “Results of a multi-institution deformable registration accuracy study (midras),” Int. J. Radiat. Oncol. Biol. Phys. 76(2), 583–596 (2010). [CrossRef]  

44. F. Ernst, R. Dürichen, A. Schlaefer, and A. Schweikard, “Evaluating and comparing algorithms for respiratory motion prediction,” Phys. Medicine & Biol. 58(11), 3911–3929 (2013). [CrossRef]  

45. K. Murphy, J. P. Pluim, E. M. van Rikxoort, P. A. de Jong, B. de Hoop, H. A. Gietema, O. Mets, M. de Bruijne, P. Lo, M. Prokop, and B. van Ginneken, “Toward automatic regional analysis of pulmonary function using inspiration and expiration thoracic ct,” Med. Phys. 39(3), 1650–1662 (2012). [CrossRef]  

46. M. Modat, G. R. Ridgway, Z. A. Taylor, M. Lehmann, J. Barnes, D. J. Hawkes, N. C. Fox, and S. Ourselin, “Fast free-form deformation using graphics processing units,” Comput. Methods and Programs in Biomedicine 98(3), 278–284 (2010). [CrossRef]  

47. R. Almasi, A. Vafaei, Z. Ghasemi, M. R. Ommani, A. R. Dehghani, and H. Rabbani, “Registration of fluorescein angiography and optical coherence tomography images of curved retina via scanning laser ophthalmoscopy photographs,” Biomed. Opt. Express 11(7), 3455–3476 (2020). [CrossRef]  
