
Corneal endothelium assessment in specular microscopy images with Fuchs’ dystrophy via deep regression of signed distance maps

Open Access

Abstract

Specular microscopy assessment of the human corneal endothelium (CE) in Fuchs’ dystrophy is challenging due to the presence of dark image regions called guttae. This paper proposes a UNet-based segmentation approach that requires minimal post-processing and achieves reliable CE morphometric assessment and guttae identification across all degrees of Fuchs’ dystrophy. We cast the segmentation problem as a regression task of the cell and gutta signed distance maps instead of a pixel-level classification task as typically done with UNets. Compared to the conventional UNet classification approach, the distance-map regression approach converges faster in clinically relevant parameters. It also produces morphometric parameters that agree with the manually-segmented ground-truth data, namely an average cell density difference of -41.9 cells/mm² (95% confidence interval (CI) [-306.2, 222.5]) and an average mean cell area difference of 14.8 µm² (95% CI [-41.9, 71.5]). These results suggest a promising alternative for CE assessment.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

The human corneal endothelium (CE) is responsible for maintaining corneal transparency and its proper hydration, both critical for good vision. It can be imaged in vivo with specular microscopy, and the resulting images can be analyzed to obtain clinical information by quantifying cell morphometric parameters, like cell density [1]. However, this quantification requires the accurate detection of cell contours, which is especially challenging in the presence of corneal endotheliopathies, such as Fuchs’ dystrophy [1,2]. Commercially available software does not give satisfactory results [3]. As an alternative, in vivo confocal microscopy has been used to image the CE in Fuchs’ dystrophy with remarkable results. However, it has not become a routine imaging device because it is contact-based and more technically challenging [4]. Moreover, although anterior segment optical coherence tomography has been proposed for Fuchs’ dystrophy grading [5], it does not provide CE morphometric parameters, which specular microscopy does.

Specular microscopy is based on illuminating the cornea with a narrow beam of light (Fig. 1(a)) and capturing the specular reflection from the posterior corneal surface [6]. Most of the incident light is either transmitted into the eye’s anterior chamber (Fig. 1(b)) or reflected at the anterior (epithelial) surface of the cornea. Only the tiny fraction (about 0.22% [7]) reflected by the posterior corneal surface is useful for acquiring the endothelial image (Fig. 1(c)). The usable CE image area is a trade-off between the width of the beam and the corneal thickness. In the acquired image, normal endothelial cells appear gray, forming a regular tessellation (Fig. 1(d)). Irregularities in the surface (Fig. 1(e)) reflect rays away from the corresponding specular direction and, consequently, appear as dark regions (Fig. 1(f)).


Fig. 1. Specular microscopy imaging of the corneal endothelium (CE). (a) Optical principle. A light source projects light onto the corneal surface from which only a fraction is reflected by the endothelium and collected by the microscope. (b) The anterior segment including the cornea and the anterior chamber. (c) The corneal layers including the endothelium. (d) Typical CE image. The cells are uniformly distributed along the entire CE tessellation. (e) Example of a corneal gutta showing the outgrowth produced due to abnormal CE cells. (f) CE image with Fuchs’ dystrophy with a large gutta in the center.


Computer vision techniques are often used to carry out the CE cell segmentation task [8]. However, developing a fully automated method to assess CE health is currently a challenge in ophthalmology [9]. Scarpa and Ruggeri [10,11] proposed an automated method for cell segmentation using a genetic algorithm that combines information about the typical regularity of endothelial cell shape with the intensity of the image pixels. Watershed algorithms and morphological operations applied to thresholded images have frequently been used to perform cell segmentation [3,12–14]. In contrast, other methods estimate cell density without needing segmentation by using spatial frequency analysis [3] or two-dimensional discrete Fourier transforms [15]. However, these methods are not immediately applicable to CE images with Fuchs’ dystrophy. At the same time, other authors have studied CE morphometry in both normal and dystrophic corneas. For instance, Giasson et al. [1] developed a contour detection algorithm based on morphological image transformations to quantify cells and guttae. However, the method required significant manual interaction.

Recent methods based on deep learning have achieved considerable improvements. More precisely, deep convolutional neural networks (CNN) are used for their capacity for feature extraction [16]. Daniel et al. [17] assessed the performance of the UNet (a CNN architecture for biomedical image segmentation [18]) in CE segmentation. They used a large dataset containing CE images of different quality and clinical conditions. However, their analysis focused on cell characterization despite the presence of guttae. Other authors have followed a similar approach to dealing with the cell segmentation problem [8,9,19]. Nevertheless, simultaneous automated characterization of CE cells and guttae remains a difficult problem, and guttae parametrization provides an opportunity for improving CE assessment.

Other methods based on neural networks have explored different codification strategies to improve performance. Vigueras-Guillén et al. [20,21] used a fully convolutional architecture based on the UNet model and a sliding-window CNN to assess the CE image for detecting cell edges. They also implemented a densely connected UNet architecture to find the region of interest (ROI) on specular microscopy images, where individual cells are easily recognizable [22]. They developed a CNN-based regression to estimate biomarkers in specular microscopy images, which combines the previously mentioned methods [23]. Vigueras-Guillén et al. [24] also proposed an attention mechanism called feedback non-local attention to infer cell edges in CE images with guttae and improve accuracy. However, this method required many manually segmented images, more than one deep learning model, and complex post-processing heuristics.

Nonetheless, existing automated software often fails in the presence of severe corneal endothelial dysfunction, such as Fuchs’ dystrophy, one of the most common corneal diseases [2]. The global prevalence rate of Fuchs’ dystrophy is around 7% [25]. However, in populations above 50 years of age, the rate increases to 9%, and the disease is 2.2 times more likely in women than in men. Fuchs’ dystrophy is related to the accumulation of collagen, secreted by abnormal endothelial cells, onto Descemet’s membrane, as shown in Fig. 1(c). It produces outgrowths that protrude into the anterior chamber, also called guttae [26–28]. In specular microscopy, they appear as dark regions without identifiable cells, as shown in Fig. 2(a), where cells are altered and apparently non-functional [29–32]. These dark regions arise from the depth difference between the reflection plane and the protrusion (Fig. 1(e)). Accurately characterizing the CE stage with new parameters that include guttae should make progression follow-up more precise.


Fig. 2. (a) A CE image with Fuchs’ dystrophy. (b) The segmentation performed by the specular microscope software. The green color indicates regions labeled as cells. The software misclassifies abnormal regions (guttae) as large cells, as shown with red arrows. (c) The manually annotated ground-truth reference. The violet color represents the guttae. The microscope image and the segmentation were split into $96\times 96$ pixels image patches to generate the training and validation data sets. The estimated parameters are shown at the bottom left corner, indicating a significant difference.


In this work, we propose a deep learning-based method to reliably carry out the segmentation task in specular microscopy images in the presence of cornea guttata. We use a CNN architecture based on the UNet model to map the input image to a signed distance map, from which we obtain the cell and guttae segmentation. Our network demonstrates rapid convergence and robustness in terms of clinically relevant CE morphometric measures. We evaluate the main CE morphometric parameters necessary to estimate its health status and compare them with manual references. Moreover, we compare the results with the evaluation performed by the CellCount microscope software from Topcon (where the cell size-dependent parameters are usually overestimated, as shown in Fig. 2). Our results show an improvement over conventional UNet-based methods and the Topcon software used routinely in the clinical setting.

2. Materials and methods

We cast the problem of corneal endothelium (CE) health evaluation as a supervised regression [33]. We used a CNN based on the UNet architecture to predict a signed distance map from a given CE image. A high-level description of the proposed codification strategy is shown in Fig. 3. The input CE images were acquired with a specular endothelial microscope (SP-3000P, Topcon Co., Japan; magnification 150$\times$, and image size of 0.25 $\times$ 0.5 mm). Each image was processed by the Topcon CellCount microscope software to perform an initial segmentation, which can be used to create ground truths. This initial segmentation had errors in various corneal regions, particularly in corneas with Fuchs’ dystrophy. Therefore, the initial segmentations were manually curated using our custom-built data annotation software, as shown in Fig. 4. A distance transform was applied to the curated segmentations to generate the signed distance maps with which we trained the network. We used thresholding and the watershed transform to post-process the model output and calculate the main morphometric parameters. A detailed description of the proposed method is given below. The study protocol was approved by the ethics committee of the Universidad Tecnológica de Bolívar, Colombia, and the requirement for informed consent was waived because of the retrospective study design. The study adhered to the tenets of the Declaration of Helsinki.


Fig. 3. Data annotation stage. A trained physician manually annotates the segmentation using our custom-built software to produce the masks for cells and guttae. Then, a distance transform is applied on the two masks. Finally, negative values are assigned to the guttae distance map, resulting in a signed distance map with positive values for cells (green) and negative values for guttae (violet).



Fig. 4. GUI of the data annotation software developed for this research. Panel 1 shows basic information about the loaded file. Panel 2 shows the CE image overlaid with the current segmentation. Panels 3 and 4 display the calculated CE parameters, and panel 5 contains the editing tools.


2.1 Data collection

We used a set of 90 in vivo specular microscopy images of the CE acquired from 66 patients, comprising both healthy (42) and dystrophic (48) corneas, using a Topcon SP-3000P specular microscope. The set of images was divided into three data sets: training (57), validation (10), and testing (23). Acquisition was performed in automatic mode with the CellCount software included in the microscope, which provides an initial segmentation of the CE in a selected ROI. This software misclassifies guttae as cells, as shown in Fig. 2(b). The segmentation can be modified with the editing tools in the microscope software, for instance, to draw or remove cells. However, removing erroneous detections may lead to over- or under-estimation of morphometric cell parameters. The microscope exports a two-channel TIFF file of $640 \times 480$ pixels that contains the acquired CE image and the corresponding initial segmentation.
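For illustration, such a two-channel TIFF could be read into separate image and segmentation arrays as in the minimal sketch below, assuming the tifffile library; the file name and channel order are assumptions and should be verified against the actual export.

```python
import numpy as np
import tifffile

# Hypothetical file name; the channel order below is an assumption and should
# be checked against the actual SP-3000P / CellCount export.
pages = tifffile.imread("ce_example.tif")

ce_image = np.asarray(pages[0], dtype=np.float32)   # acquired CE image
initial_segmentation = np.asarray(pages[1])         # CellCount initial segmentation

print(ce_image.shape, initial_segmentation.shape)   # expected: (480, 640) each
```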

2.2 Data annotation

Since this work proposes a supervised deep learning-based approach, a curated dataset of CE segmentations is needed for training. Therefore, we developed custom software to manually segment cells in CE images [34] and generate ground truths. The software was built using the Python-based Tkinter library [35] to create the graphical user interface (GUI) shown in Fig. 4.

The image on the GUI’s right side is composed of the current segmentation overlaid on top of the CE image. The software allows a trained ophthalmologist to modify the initial segmentation made by the CellCount Topcon software, i.e., split or merge regions and classify these regions as cells or guttae. However, this segmentation is imported only to assist the ophthalmologist and reduce the time and effort spent on the annotation task. It can also be discarded to start the annotation from scratch. These tools are based on morphological operations applied to the binary image that contains the segmentation. After finishing the corrections, the software produces two masks corresponding to cell bodies and guttae/dystrophic regions, respectively, as shown in Fig. 3. From these images, all parameters are calculated: cell/guttae density, number of regions, minimum, maximum, and average area, and percentage of area occupied by each class in the segmented region. Finally, the result is saved as a three-page TIFF file containing the corrected segmentation. Six physicians defined the segmentation criteria, and three of them worked on independent subsets of 30 CE images to create the ground truths.

2.3 Distance maps

The main problem in microscopy image cell segmentation is that close or overlapping cells tend to be segmented as a single object. Moreover, since dysfunctional regions or guttae often produce false positives in conventional cell segmentation software, we need to establish a reliable approach to discern between the two features while separating individual cells effectively. Therefore, we cast the supervised learning problem as a regression of an input image to a signed distance map.

Typically, UNet-based models are trained under a supervised classification framework in which the network has to directly output a mask of different labels corresponding to the classes of objects in the image. This mask is computed via the softmax output of the neural network. However, many post-processing stages with heuristics often need to be carried out to obtain the desired segmentation [20,24]. To avoid these post-processing stages we propose training our UNet with signed distance maps, which have several advantages, mainly: i) accurate segmentation of touching objects like cells; ii) robustness to unreliable ground-truth segmentations, which are often problematic due to poorly-defined cell or guttae boundaries; iii) continuity and smoothness constraints implied in estimating distance maps facilitate the detection of guttae, especially when they cover a significant area of the image [36]; iv) reduced learning complexity and rapid convergence using a relatively small dataset. The network is intended to predict Euclidean distances between the center and edge of cells/guttae rather than a pixel-wise classification of the input image, which is prone to incorrect classification due to non-uniform illumination, poor contrast, and limited cell visibility, and requires a large number of images to develop reliable mapping capabilities [24].

The signed distance maps are created as follows: given a grayscale CE image as input, the goal is to train a CNN model to predict a signed distance map where positive values indicate cell bodies and negative values guttae. For preparing the target signed distance maps, the reference segmentations (cell and guttae masks) are passed through a distance transform that assigns to each pixel the value of the Euclidean distance to the closest background pixel [37]. This procedure produces two distance maps, as depicted in Fig. 3 where the green map corresponds to cells and the violet map to guttae. The final signed distance map $\mathcal {D}_I$ is given by

$$\mathcal{D}_I = \mathcal{D}_c - \mathcal{D}_g,$$
where $\mathcal {D}_c$ is the calculated distance map from the cell mask, and $\mathcal {D}_g$ is the distance map corresponding to guttae regions. It encodes cells as positive values and guttae as negative values in a single scalar field image. The resulting signed distance map will feature larger values for larger regions [37], which, for this work, may be useful due to the significant differences between region sizes.
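A minimal sketch of this construction with SciPy is shown below, assuming the cell and guttae masks are binary NumPy arrays of the same size as the CE image.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_map(cell_mask: np.ndarray, guttae_mask: np.ndarray) -> np.ndarray:
    """Build D_I = D_c - D_g from binary cell and guttae masks.

    distance_transform_edt assigns to each foreground pixel its Euclidean
    distance to the closest background pixel, so cell interiors take positive
    values and guttae interiors negative values in the combined map.
    """
    d_cells = distance_transform_edt(cell_mask.astype(bool))
    d_guttae = distance_transform_edt(guttae_mask.astype(bool))
    return d_cells - d_guttae
```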

2.4 CNN architecture and implementation

To develop a robust algorithm for the CE image segmentation task, we aimed to use a deep learning-based method that would deliver accurate cell parameters and guttae information. For this purpose, we devised a 5-layer UNet architecture to predict signed distance maps. As shown in Fig. 5, the model consists of two stages: encoding and decoding. The encoding stage (left side) follows a standard CNN design: it consists of convolutional blocks, each followed by a $2 \times 2$ max-pooling operation with a stride of 2 for downsampling. Each convolutional block contains two sequences of $3 \times 3$ convolution layers with padding, Instance Normalization [38], and leaky rectified linear unit (Leaky ReLU) activation [39,40] with negative slope coefficient $\alpha = 0.1$. At each downsampling step the number of feature channels is doubled. The bottom-most layers are ResNet blocks, i.e., the input of the block is added to the output [41]. The use of these blocks is motivated by the need to tackle the vanishing gradient problem and to improve the latent space representation [42].


Fig. 5. Schematic of the CNN architecture. Each encoding path layer is made up of two sequences of $3 \times 3$ convolution blocks. Each block consists of Instance Normalization and Leaky ReLU activation with negative slope coefficient $\alpha = 0.1$. It is followed by a $2 \times 2$ max-pooling layer with a stride of 2 for downsampling. The decoding path consists of deconvolution layers (kernel size of $2 \times 2$ and stride of $2$) followed by two convolution blocks. Skip connections allow transferring data between layers of the same level in the downsampling/upsampling paths. The last layer is computed by a $1 \times 1$ convolution with linear activation to perform the output distance map. We trained the CNN using the MAE loss function and Adam optimizer.


The decoding stage (the expansive path on the right side) is similar, except that max-pooling operations are replaced by transposed convolution (deconvolution) layers with a kernel size of $2 \times 2$ and a stride of 2, and the number of feature channels is halved. Each deconvolution is followed by a concatenation with the corresponding feature map from the encoding path, to recover spatial information lost due to the downsampling operations, and by a convolution block as in the encoding stage. At the final layer, a $1 \times 1$ convolution with linear activation maps each 16-component feature vector to the desired output. This network was implemented using the Python-based Keras library with a TensorFlow backend, and the Python-based DeepTrack library [43,44].
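The following Keras sketch illustrates such an architecture. It is not the authors' DeepTrack-based implementation: the base of 16 channels, the encoder depth, the 1x1 projection in the ResNet shortcut, and the use of GroupNormalization with one group per channel (equivalent to instance normalization) are assumptions consistent with the description above.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    """Two 3x3 convolutions, each with per-channel normalization and Leaky ReLU."""
    for _ in range(2):
        x = layers.Conv2D(filters, 3, padding="same")(x)
        # GroupNormalization with groups == channels behaves as instance normalization.
        x = layers.GroupNormalization(groups=filters)(x)
        x = layers.LeakyReLU(alpha=0.1)(x)
    return x

def resnet_block(x, filters):
    """Bottleneck block whose input (projected to match channels) is added to its output."""
    shortcut = layers.Conv2D(filters, 1, padding="same")(x)
    y = conv_block(x, filters)
    return layers.Add()([shortcut, y])

def build_unet_dm(input_shape=(96, 96, 1), base_filters=16, depth=4):
    inputs = layers.Input(shape=input_shape)
    x, skips = inputs, []
    # Encoding path: conv block + 2x2 max pooling, doubling the channels at each level.
    for level in range(depth):
        x = conv_block(x, base_filters * 2 ** level)
        skips.append(x)
        x = layers.MaxPooling2D(pool_size=2, strides=2)(x)
    # Bottom-most ResNet block.
    x = resnet_block(x, base_filters * 2 ** depth)
    # Decoding path: 2x2 transposed convolution, skip concatenation, conv block.
    for level in reversed(range(depth)):
        x = layers.Conv2DTranspose(base_filters * 2 ** level, 2, strides=2, padding="same")(x)
        x = layers.Concatenate()([x, skips[level]])
        x = conv_block(x, base_filters * 2 ** level)
    # Final 1x1 convolution with linear activation producing the signed distance map.
    outputs = layers.Conv2D(1, 1, activation="linear")(x)
    return Model(inputs, outputs, name="unet_dm")

model = build_unet_dm()
model.summary()
```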

Before training, each patch is z-scored and normalized between -1 and 1 with the $\tanh (.)$ function. Then, during the training process, simple data augmentation operations, such as horizontal, vertical, and diagonal flips and random rotations, are used to increase the number of samples from 148 to 2048, combined with the mean absolute error (MAE) loss function and the Adam optimizer with a learning rate of 0.001 [45]. Also, a continuous generator creates new images throughout training, balancing the speed gained from reusing images against the generalization achieved by yielding new training data. We trained the model on a Tesla T4 GPU with 12 GB of RAM. Training takes, on average, 33 minutes. The model code is available in a GitHub repository specified in the code availability statement.
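A hedged sketch of the patch normalization and training setup is shown below; it uses simple NumPy flips and rotations rather than the DeepTrack continuous generator, and it assumes the `model` object from the previous architecture sketch.

```python
import numpy as np
import tensorflow as tf

def normalize_patch(patch: np.ndarray) -> np.ndarray:
    """Z-score the patch, then squash it to (-1, 1) with tanh."""
    z = (patch - patch.mean()) / (patch.std() + 1e-8)
    return np.tanh(z)

def augment(image: np.ndarray, target: np.ndarray):
    """Random flips and 90-degree rotations applied jointly to image and target."""
    if np.random.rand() < 0.5:
        image, target = np.fliplr(image), np.fliplr(target)
    if np.random.rand() < 0.5:
        image, target = np.flipud(image), np.flipud(target)
    k = np.random.randint(4)
    return np.rot90(image, k), np.rot90(target, k)

# MAE loss and Adam with learning rate 0.001, as described above.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3), loss="mae")
# model.fit(training_generator, validation_data=validation_generator, epochs=100)
```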

2.5 Post-processing stage

The output distance map predicted by the CNN is then binarized to separate guttae from cells as

$$ \mathcal{T}_g(i,j) = \begin{cases} 1, & \text{if}\ \mathcal{D}_{\mathcal{I}}(i,j) < 0 \\ 0, & \text{otherwise}. \end{cases}$$

However, if we use the condition $\mathcal {D}_{\mathcal {I}}(i,j) > 0$ we may get some cells touching each other. We found that a threshold slightly higher than 0 for cells would separate each cell from the surrounding ones for easier counting and post-processing. Therefore, we set this threshold empirically to 0.2, as

$$ \mathcal{T}_c(i,j) = \begin{cases} 1, & \text{if}\ \mathcal{D}_{\mathcal{I}}(i,j) > 0.2 \\ 0, & \text{otherwise} \end{cases} \enspace$$
to identify cells. Finally, we calculate $\mathcal {T}(\mathcal {D}) = \mathcal {T}_c \cup \mathcal {T}_g$ to perform the watershed transformation to ensure that boundaries between cells and guttae are well defined. The result after this process is the specular microscopy segmentation, which is separated into cells and guttae regions, avoiding complex post-processing operations.
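The post-processing can be sketched with scikit-image as follows; the marker construction and the use of the negated distance-map magnitude as the watershed landscape are assumptions about implementation details not fully specified in the text.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def segment_from_distance_map(d_map: np.ndarray, cell_threshold: float = 0.2):
    """Binarize the predicted signed distance map and split touching regions."""
    guttae = d_map < 0                # T_g
    cells = d_map > cell_threshold    # T_c, threshold set empirically to 0.2

    # Label each thresholded region and keep cell and guttae label ids distinct.
    cell_markers, n_cells = ndi.label(cells)
    guttae_markers, _ = ndi.label(guttae)
    guttae_markers[guttae_markers > 0] += n_cells
    markers = cell_markers + guttae_markers

    # Flood the negated magnitude of the distance map so each marker grows until
    # it meets a neighboring region, giving well-defined boundaries.
    labels = watershed(-np.abs(d_map), markers=markers)

    # Labels <= n_cells correspond to cells, labels > n_cells to guttae.
    return labels, n_cells
```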

2.6 Morphometric parameters

Here, we describe the most relevant morphometric parameters used to assess the CE state in the clinical setting, which we use to evaluate the method’s performance [50]. Cells touching the image borders are automatically excluded from the parameter calculation because they may be only partially segmented. The cell density (CD), given by

$$ \text{CD} = \frac{\#\,\text{of cells}}{\text{total segmented area}} \enspace,$$
indicates the number of cells per unit area, measured in cells/mm$^2$. Notably, this parameter is often incorrectly estimated in the presence of cornea guttata, because the effective area must include the area occupied by guttae or dystrophic regions, while the numerator must count only healthy cells, discounting any guttae misclassified as cells. The mean cell area (MCA), given by
$$ \text{MCA} = \frac{\text{total cell area}}{\#\,\text{of cells}} \enspace,$$
is measured in $\mu \mathrm{m}^2$ and is also prone to over-estimation in cornea guttata. Hexagonality (HEX%), also called pleomorphism, is the percentage of hexagonal cells (cells with six neighbors), calculated as
$$ \text{HEX}\% = \frac{\#\,\text{of hexagonal cells}}{\#\,\text{of cells}} \times 100\% \enspace.$$

The coefficient of variation of cell area (CV%), also called polymegethism, is calculated as

$$ \text{CV}\% = \frac{\text{std}(\text{cells area})}{\text{mean}(\text{cells area})} \times 100\% \enspace,$$
where std(.) and mean(.) are the standard deviation and the mean of the cell areas, respectively.

Finally, we report a new parameter to quantify the percentage of the segmented area affected by guttae. We call it the Guttae Area Ratio (GAR%), calculated as

$$ \text{GAR}\% = \frac{\text{total guttae area}}{\text{total segmented area}} \times 100\% \enspace.$$

We believe this parameter provides a complementary CE assessment tool for the clinician [49]. In Table 1, we briefly describe the clinical relevancy of these parameters and how they are typically affected in CE images with Fuchs’ dystrophy.
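For illustration, the size-based parameters can be computed from the binary cell and guttae masks of a segmented ROI as sketched below; the pixel-to-µm² conversion factor is a placeholder, the segmented area is approximated as the union of cell and guttae pixels, and HEX% is omitted because it additionally requires counting each cell’s neighbors.

```python
import numpy as np
from skimage.measure import label, regionprops
from skimage.segmentation import clear_border

def morphometric_parameters(cell_mask, guttae_mask, um2_per_pixel):
    """Compute CD, MCA, CV%, and GAR% from binary masks of a segmented ROI."""
    # Exclude cells touching the ROI border, since they may be partially segmented.
    cell_labels = label(clear_border(cell_mask))
    areas = np.array([r.area for r in regionprops(cell_labels)]) * um2_per_pixel

    guttae_area = guttae_mask.sum() * um2_per_pixel
    segmented_area = cell_mask.sum() * um2_per_pixel + guttae_area  # approximation

    cd = len(areas) / (segmented_area * 1e-6)      # cells/mm^2 (1 mm^2 = 1e6 um^2)
    mca = areas.mean()                             # mean cell area, um^2
    cv = 100.0 * areas.std() / areas.mean()        # polymegethism, %
    gar = 100.0 * guttae_area / segmented_area     # guttae area ratio, %
    return {"CD": cd, "MCA": mca, "CV%": cv, "GAR%": gar}
```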


Table 1. Summary of the main morphometric parameters used to assess the corneal endothelium health.

3. Results

We used a set of 23 images to evaluate the performance of four methods: the Topcon microscope CellCount software, the custom-built software (reference segmentation), the CNN-architecture shown in Fig. 5 trained using masks (UNet-mask) [49] and the proposed method trained using signed distance maps (UNet-dm). First, we show a comparison between the results obtained by the UNet-mask and the UNet-dm. Second, we discuss a qualitative comparison between several representative images of four grades of endothelial Fuchs dystrophy (grade 0 for a cornea without guttae, grade 1 for mild grade cornea guttata, grade 2 for moderate level, and grade 3 for severe cases of cornea guttata). The agreement was determined using the morphometric parameters described above and Bland-Altman plots.

3.1 Classification versus regression UNets

To show the advantages of the distance map approach, we trained the UNet under two scenarios: a classification approach (UNet-mask, similar to [49]) and a regression approach (UNet-dm). Since the UNet-mask is a multi-class classification architecture, we made several modifications to the proposed method (Fig. 5): in the convolution layers, Instance Normalization was removed, and ReLU replaced the Leaky ReLU activation. We also changed the loss function from MAE to weighted categorical cross-entropy. Finally, a softmax activation in the last layer computes the probability distribution over the three labels, i.e., cells, guttae, and intercellular space. The segmented image was post-processed with the watershed algorithm to separate cell boundaries.
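A hedged sketch of the changed output head and loss for this classification variant is shown below; the class weights are placeholders, and the rest of the network follows the architecture sketch above with instance normalization removed and ReLU activations.

```python
import tensorflow as tf
from tensorflow.keras import layers

def weighted_categorical_crossentropy(class_weights):
    """Categorical cross-entropy weighted per class (cells, guttae, intercellular space)."""
    w = tf.constant(class_weights, dtype=tf.float32)

    def loss(y_true, y_pred):
        y_pred = tf.clip_by_value(y_pred, 1e-7, 1.0)
        return -tf.reduce_sum(y_true * tf.math.log(y_pred) * w, axis=-1)

    return loss

# Final layer of the UNet-mask variant: softmax over the three labels.
# outputs = layers.Conv2D(3, 1, activation="softmax")(x)
# model.compile(optimizer="adam",
#               loss=weighted_categorical_crossentropy([1.0, 2.0, 2.0]))  # placeholder weights
```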

The average accuracy was 83.06% for the UNet-mask vs 83.79% for the UNet-dm, which shows that both networks identify cells and guttae sufficiently well [51]. In Fig. 6, we show three examples of results obtained with UNet-mask and UNet-dm. Overall, both segmentations are similar in terms of the segmented areas. However, the UNet-mask results show problematic segmentations like merged cells. While there are ways to deal with these problems in post-processing, directly avoiding them through a distance-map codification is a substantial improvement. The results from the UNet-dm are much more similar to the ground-truth segmentations, with well-defined cell boundaries.


Fig. 6. Comparison between UNet-mask and UNet-dm in one example of (a-d) non-guttae cornea, (e-h) grade 1 and (i-l) grade 3 of cornea guttata. Each result is composed of the specular microscopy image, the final segmentations with the UNet-mask and UNet-dm, and the reference segmentation (ground-truth). The estimated parameters are shown at the bottom right corner.


Moreover, the morphometric parameters obtained from the UNet-dm are closer to the reference values than those obtained with the UNet-mask. For instance, in the second and third rows of Fig. 6, the UNet-mask underestimates the CD and HEX values.

Tracking the MAE of the morphometric parameters on the testing set every ten training epochs reveals a significant difference between the two versions of the UNet model. Figure 7 shows that the UNet-mask approach does not achieve the same performance as the UNet-dm even after 100 epochs. In sharp contrast, the UNet-dm method quickly converges to optimal performance with much lower MAEs for all parameters, but especially for the CV shown in Fig. 7(d), in which the large error is due to many incorrectly segmented small cells.


Fig. 7. MAE calculated after each 10 training epochs of (a) Mean Cell Area, (b) Coefficient of Variation in Cell Area, (c) Cell Density and (d) Hexagonality. The graphs show the fast convergence and lower errors of the relevant morphometric parameters calculated using the UNet-dm compared to those calculated with the UNet-mask.


3.2 Performance evaluation 1: qualitative analysis

Figure 8 shows several representative results of four grades of endothelial dysfunction, demonstrating the robustness of the proposed method in different scenarios. The first column shows the specular microscopy images acquired with the Topcon SP-3000P specular microscope. The original segmentation performed by the microscope CellCount software is shown in the second column. Column three contains the respective manually corrected segmentation. Finally, the automatic segmentations performed by the proposed method are shown in the fourth column. At the top-right corner of each segmentation, two of the main morphometric parameters are displayed: cell density (in cells/mm$^2$) and mean cell area (in $\mu \mathrm {m}^2$), in addition to the guttae area ratio (in $\%$). It is noteworthy that the analyzed area is not the entire image but a ROI defined by the bounding box of the manually segmented area, shown as an orange box in each example of Fig. 8.


Fig. 8. Qualitative analysis. From top to bottom: images from four different stages of cornea guttata. From left to right: the CE image, the Topcon software segmentation, the manual reference segmentation, and the predicted segmentation using the proposed method. Red arrows indicate examples of misclassified regions and blue arrows examples of inaccurately segmented regions. The orange bounding boxes indicate the analyzed ROI in each example.


The specular microscopy image of grade-0 cornea guttata (no guttae) in Fig. 8(a) has higher quality and contrast than the other examples in the other rows. Here, the cell boundaries are easily detected. Therefore, the three segmentation results have similar performance. However, each method segmented slightly different areas, which produced minor discrepancies between the estimated morphometric parameters. For instance, the red arrow in Fig. 8(b) points to a large cell not contained in the Topcon software segmentation. However, it was included with the annotation software in the reference and accurately segmented by the proposed UNet-dm model.

The second row of Fig. 8 shows a grade-1 cornea guttata image. In the Topcon segmentation, the red arrow indicates a small gutta, which the software erroneously classified as a cell. The reference segmentation includes it correctly as a gutta, as shown in Fig. 8(g). However, even the reference segmentation may have errors from the manual annotation process. The blue arrows in the reference segmentation indicate two merged cells and an inaccurately segmented cell. The proposed method was able to correct these two issues (Fig. 8(h)). However, the lack of uniform illumination in the periphery may lead to incorrect classification, such as the small dark region detected as a gutta indicated by the red arrow. The blue arrow points to a region that contains two merged cells due to the low contrast.

Grades 2 and 3 of cornea guttata are associated with large guttae, like those pointed out with red arrows in the third and fourth rows of Fig. 8. The Topcon software erroneously detected them as large cells, which produces an overestimated MCA along with any other morphometric parameter that depends on cell area. In the reference segmentation of the severe case (Fig. 8(o)), there is a medium-size gutta, indicated with the red arrow, that was not well classified. Nevertheless, the proposed UNet-dm method identified it correctly, along with the larger guttae throughout the image and the cells.

3.3 Performance evaluation 2: quantitative results

We take the manually corrected segmentation as the ground truth against which we compare the results of the proposed method. To assess the agreement between our proposed method and the reference, we used Bland-Altman plots (second column of Fig. 9). More specifically, we evaluated the mean difference $\bar {d}$ between the two methods and determined the confidence interval CI = [$\bar {d}$ - 1.96 SD, $\bar {d}$ + 1.96 SD], within which we expect 95% of the differences to lie. As is typical in a Bland-Altman plot, we take the mean of the two measurements as the best estimate of the true value. The Bland-Altman plot displays the difference between the two methods against their mean. We carried out the same analysis to evaluate the agreement between the Topcon microscope software and the reference (first column of Fig. 9).
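A minimal NumPy sketch of the agreement statistics used in these plots (mean difference and the 1.96 SD limits) is given below.

```python
import numpy as np

def bland_altman_stats(method_a: np.ndarray, method_b: np.ndarray):
    """Mean difference and 95% limits of agreement between two sets of measurements."""
    diff = method_a - method_b
    mean = (method_a + method_b) / 2.0     # best estimate of the true value
    d_bar = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)   # half-width of the 95% interval
    return mean, diff, d_bar, (d_bar - half_width, d_bar + half_width)
```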


Fig. 9. Bland-Altman plots of the main morphometric parameters of CE images. The first column shows the comparison between the original segmentation performed by the microscope and the manual corrected segmentation, i.e. the ground-truth reference, and the second column shows the comparison between the CNN-based proposed method and the ground-truth reference.


Regarding the CD, the Bland-Altman plot of the Topcon segmentation and the reference shows a mean difference $\bar {d}$ = 312.4 cells/mm$^2$, which means the original segmentation heavily overestimates this parameter. This overestimation is due to dystrophic regions misclassified as cells, as shown with red arrows in the second column of Fig. 8. The 95% CI ranges from -302.4 to 927.3 cells/mm$^2$, which is considerably wide for a parameter crucial for assessing the CE. The bias between the proposed method and the reference is not zero but acceptable: on average, the proposed method measures 41.9 cells/mm$^2$ fewer than the manual reference, with a 95% CI of [-306.2, 222.5] cells/mm$^2$. These results indicate good agreement between the proposed segmentation method and the manual reference, with a relatively narrow CI, especially over an extensive CD range.

The analysis with the Bland-Altman plot of MCA between the Topcon software and the manual reference indicates an over-estimation by the Topcon software, with a mean difference $\bar {d}$ = 54.3 $\mu m^2$ and a 95% CI in the range of [-191.7, 300.3] $\mu m^2$. This disagreement is due to large pathological regions classified as cells, like those pointed out with red arrows in Fig. 8(j,n). Comparing the proposed method and the reference yields a mean difference $\bar {d}$ = 14.8 $\mu m^2$, which is close to zero, and 1.96SD = $\pm$ 56.6 $\mu m^2$, indicating a good agreement between the two methods.

As the HEX% is not a cell size-dependent parameter, we expect it to be less affected by cornea guttata. The CIs of the two HEX% Bland-Altman plots are quite similar. In comparison to the manual reference, the Topcon software produces 1.96SD = $\pm$ 11.2% with $\bar {d}$ = 1.5%, while the proposed method 1.96SD = $\pm$ 10.4% and $\bar {d}$ = 0.8%. They both confirm a good agreement with the ground truth.

The CV% parameter yields a notable difference between the Bland-Altman plots. The mean difference between the Topcon segmentation and the reference was $\bar {d}$ = 29.9%, with 95% CI ranging from -60.6% to 120.4%. This plot seems to follow an upward trend; i.e., the error increases as the measured value is higher. On average, the Topcon segmentation tends to overestimate this parameter. Conversely, there is a good agreement level between the proposed method and the reference with a mean difference $\bar {d}$ = 2.7% and a 95% CI between -10.2% and 15.6%.

Finally, in dysfunctional regions, the microscope software often segments guttae as large cells or multiple small cells. As a result, the Topcon software provides no information about these dystrophic regions. For the mild level of cornea guttata, the mean difference of GAR between the estimated values from the proposed method and the reference is $0.08 \pm 0.32$ %. For the moderate level, the mean difference is $3.57 \pm 3.83$ %, and for severe cases, it is $7.80 \pm 8.19$ %. In all cases, these results indicate that the proposed method can reliably assess the percentage of area covered by guttae.

4. Conclusions

Automated corneal endothelium assessment in cornea guttata is challenging due to various factors, including limited cell visibility, nonuniform illumination, and guttae appearance. Moreover, obtaining accurately defined segmentation boundaries from manual references for training is complicated. For this reason, we proposed a fast-converging UNet-based regression method trained to produce a signed distance map that requires minimal post-processing to calculate corneal endothelium morphometric parameters, reducing the complexity of the training process. Our results show that this method works sufficiently well, even with a relatively small dataset, and has the potential to improve the way specular microscopy images of the corneal endothelium are analyzed. Future work involves further validation on a larger dataset, exploring the impact of preprocessing strategies, like illumination correction, and the use of these methods in the clinical setting. The newly proposed guttae area ratio parameter may prove helpful for classifying Fuchs’ dystrophy into different stages.

Funding

Ministerio de Ciencia, Tecnología e Innovación (Minciencias) (124489786239, 763-2021); Universidad Tecnológica de Bolívar (CI2021P02); Agencia Estatal de Investigación (AEI/10.13039/501100011033, PID2020-114582RB-I00).

Acknowledgments

This work has been partly funded by Ministerio de Ciencia, Tecnología e Innovación, Colombia, Project 124489786239 (Contract 763-2021), Universidad Tecnológica de Bolívar (UTB) Project CI2021P02, and Agencia Estatal de Investigación del Gobierno de España (PID2020-114582RB-I00/ AEI / 10.13039/501100011033). J. Sierra thanks UTB for a post-graduate scholarship.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Code for the networks and their weights are available at [52].

References

1. C. J. Giasson, A. Graham, J.-F. Blouin, L. Solomon, J. Gresset, M. Melillo, and K. A. Polse, “Morphometry of cells and guttae in subjects with normal or guttate endothelium with a contour detection algorithm,” Eye Contact Lens 31(4), 158–165 (2005). [CrossRef]  

2. J. S. Sierra, J. Pineda, E. Viteri, D. Rueda, B. Tibaduiza, R. D. Berrospi, A. Tello, V. Galvis, G. Volpe, M. S. Millán, L. A. Romero, and A. G. Marrugo, “Automated corneal endothelium image segmentation in the presence of cornea guttata via convolutional neural networks,” Proc. SPIE 11511, 115110H (2020). [CrossRef]  

3. B. Selig, K. A. Vermeer, B. Rieger, T. Hillenaar, and C. L. L. Hendriks, “Fully automatic evaluation of the corneal endothelium from in vivo confocal microscopy,” BMC Med. Imaging 15(1), 13–15 (2015). [CrossRef]  

4. S. O. Tone and U. Jurkunas, “Imaging the corneal endothelium in Fuchs corneal endothelial dystrophy,” Semin. Ophthalmol. 34(4), 340–346 (2019). [CrossRef]  

5. Y. Yasukura, Y. Oie, R. Kawasaki, N. Maeda, V. Jhanji, and K. Nishida, “New severity grading system for fuchs endothelial corneal dystrophy using anterior segment optical coherence tomography,” Acta Ophthalmol. 99(6), e914–e921 (2021). [CrossRef]  

6. R. A. Laing, M. M. Sandstrom, and H. M. Leibowitz, “Clinical specular microscopy: I. optical principles,” Arch. Ophthalmol. 97(9), 1714–1719 (1979). [CrossRef]  

7. M. Srinivasan, “Chapter-22 specular microscopy,” in Modern Ophthalmology, (Jaypee Brothers Medical Publishers (P) Ltd., 2005), pp. 147–153.

8. K. Nurzynska, “Deep learning as a tool for automatic segmentation of corneal endothelium images,” Symmetry 10(3), 60 (2018). [CrossRef]  

9. A. Fabijańska, “Segmentation of corneal endothelium images using a u-net-based convolutional neural network,” Artif. Intelligence In Medicine 88, 1–13 (2018). [CrossRef]  

10. F. Scarpa and A. Ruggeri, “Automated morphometric description of human corneal endothelium from in-vivo specular and confocal microscopy,” in 2016 38th Annual Int. Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), (2016), pp. 1296–1299.

11. F. Scarpa and A. Ruggeri, “Development of a reliable automated algorithm for the morphometric analysis of human corneal endothelium,” Cornea 35(9), 1222–1228 (2016). [CrossRef]  

12. F. J. Sanchez-Marin, “Automatic segmentation of contours of corneal cells,” Comput. Biol. Med. 29(4), 243–258 (1999). [CrossRef]  

13. S. Al-Fahdawi, R. Qahwaji, A. S. Al-Waisy, S. Ipson, M. Ferdousi, R. A. Malik, and A. Brahma, “A fully automated cell segmentation and morphometric parameter system for quantifying corneal endothelial cell morphology,” Comput. Methods Programs Biomed. 160, 11–23 (2018). [CrossRef]  

14. A. Piorkowski, K. Nurzynska, J. Gronkowska-Serafin, B. Selig, C. Boldak, and D. Reska, “Influence of applied corneal endothelium image segmentation techniques on the clinical parameters,” Comput. Med. Imaging Graph. 55, 13–27 (2017). [CrossRef]  

15. A. Ruggeri, E. Grisan, and J. Jaroszewski, “A new system for the automatic estimation of endothelial cell density in donor corneas,” Br. J. Ophthalmol. 89(3), 306–311 (2005). [CrossRef]  

16. T. Vicar, J. Chmelik, R. Jakubicek, L. Chmelikova, J. Gumulec, J. Balvan, I. Provaznik, and R. Kolar, “Self-supervised pretraining for transferable quantitative phase image cell segmentation,” Biomed. Opt. Express 12(10), 6514–6528 (2021). [CrossRef]  

17. M. C. Daniel, L. Atzrodt, F. Bucher, K. Wacker, S. Böhringer, T. Reinhard, and D. Böhringer, “Automated segmentation of the corneal endothelium in a large set of ‘real-world’ specular microscopy images using the u-net architecture,” Sci. Rep. 9(1), 4752–4757 (2019). [CrossRef]  

18. O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in Int. Conference on Medical image computing and computer-assisted intervention, (2015), pp. 234–241.

19. N. Joseph, C. Kolluru, B. A. Benetz, H. J. Menegay, J. H. Lass, and D. L. Wilson, “Quantitative and qualitative evaluation of deep learning automatic segmentations of corneal endothelial cell images of reduced image quality obtained following cornea transplant,” J. Med. Imaging 7(01), 014503 (2020). [CrossRef]  

20. J. P. Vigueras-Guillén, B. Sari, S. F. Goes, H. G. Lemij, J. van Rooij, K. A. Vermeer, and L. J. van Vliet, “Fully convolutional architecture vs sliding-window CNN for corneal endothelium cell segmentation,” BMC Biomed. Eng. 1, 4 (2019). [CrossRef]  

21. J. P. Vigueras-Guillén, J. van Rooij, A. Engel, H. G. Lemij, L. J. van Vliet, and K. A. Vermeer, “Deep learning for assessing the corneal endothelium from specular microscopy images up to 1 year after ultrathin-DSAEK surgery,” Trans. Vis. Sci. Technol. 9(2), 49 (2020). [CrossRef]  

22. J. P. Vigueras-Guillén, H. G. Lemij, J. Van Rooij, K. A. Vermeer, and L. J. van Vliet, “Automatic detection of the region of interest in corneal endothelium images using dense convolutional neural networks,” Proc. SPIE 10949, 1094931 (2019). [CrossRef]  

23. J. P. Vigueras-Guillén, J. van Rooij, H. G. Lemij, K. A. Vermeer, and L. J. van Vliet, “Convolutional neural network-based regression for biomarker estimation in corneal endothelium microscopy images,” in 2019 41st Annual Int. Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), (2019), pp. 876–881.

24. J. P. Vigueras-Guillén, J. van Rooij, B. T. van Dooren, H. G. Lemij, E. Islamaj, L. J. van Vliet, and K. A. Vermeer, “Denseunets with feedback non-local attention for the segmentation of specular microscopy images of the corneal endothelium with guttae,” Sci. Rep. 12(1), 14035 (2022). [CrossRef]  

25. F. Aiello, G. Gallo Afflitto, F. Ceccarelli, M. Cesareo, and C. Nucci, “Global prevalence of fuchs endothelial corneal dystrophy (fecd) in adult population: A systematic review and meta-analysis,” J. Ophthalmol. 2022, 1–7 (2022). [CrossRef]  

26. S. Feizi, “Corneal endothelial cell dysfunction: etiologies and management,” Therapeutic Advances in Ophthalmology 10, 251584141881580 (2018). [CrossRef]  

27. A. O. Eghrari, S. A. Riazuddin, and J. D. Gottsch, “Fuchs corneal dystrophy,” Prog. Molecular Biol. Trans. Sci. 134, 79–97 (2015).

28. R. A. Laing, H. M. Leibowitz, S. S. Oak, R. Chang, A. R. Berrospi, and J. Theodore, “Endothelial mosaic in Fuchs’ dystrophy: A qualitative evaluation with the specular microscope,” Arch. Ophthalmol. 99(1), 80–83 (1981). [CrossRef]  

29. A. G. Chiou, S. C. Kaufman, R. W. Beuerman, T. Ohta, H. Soliman, and H. E. Kaufman, “Confocal microscopy in cornea guttata and fuchs’ endothelial dystrophy,” Br. J. Ophthalmol. 83(2), 185–189 (1999). [CrossRef]  

30. M. J. Hogan, I. Wood, and M. Fine, “Fuchs’ endothelial dystrophy of the cornea: 29th sanford gifford memorial lecture,” Am. J. Ophthalmol. 78(3), 363–383 (1974). [CrossRef]  

31. T. Iwamoto and A. G. Devoe, “Electron microscopic studies on fuchs’ combined dystrophy: I. posterior portion of the cornea,” Invest. Ophthalmol. Vis. Sci. 10, 9–28 (1971).

32. S. O. Tone, V. Kocaba, M. Böhm, A. Wylegala, T. L. White, and U. V. Jurkunas, “Fuchs endothelial corneal dystrophy: The vicious cycle of fuchs pathogenesis,” Prog. Retinal Eye Res. 80, 100863 (2021). [CrossRef]  

33. S. He, K. T. Minn, L. Solnica-Krezel, M. A. Anastasio, and H. Li, “Deeply-supervised density regression for automatic cell counting in microscopy images,” Med. Image Anal. 68, 101892 (2021). [CrossRef]  

34. J. S. Sierra, J. Pineda, E. Viteri, A. Tello, M. S. Millán, V. Galvis, L. A. Romero, and A. G. Marrugo, “Generating density maps for convolutional neural network-based cell counting in specular microscopy images,” J. Phys.: Conf. Ser. 1547(1), 012019 (2020). [CrossRef]  

35. F. Lundh, “An introduction to tkinter,” URL: www.pythonware.com/library/tkinter/introduction/index.htm (1999).

36. J. Grauer, F. Schmidt, J. Pineda, B. Midtvedt, H. Löwen, G. Volpe, and B. Liebchen, “Active droploids,” Nat. Commun. 12(1), 6005–6008 (2021). [CrossRef]  

37. P. Naylor, M. Laé, F. Reyal, and T. Walter, “Segmentation of nuclei in histopathology images by deep regression of the distance map,” IEEE Trans. Med. Imaging 38(2), 448–459 (2019). [CrossRef]  

38. D. Ulyanov, A. Vedaldi, and V. Lempitsky, “Instance normalization: The missing ingredient for fast stylization,” arXiv, arXiv:1607.08022 (2016). [CrossRef]  

39. A. L. Maas, A. Y. Hannun, and A. Y. Ng, “Rectifier nonlinearities improve neural network acoustic models,” in International conference on machine learning, vol. 30 (2013).

40. B. Xu, N. Wang, T. Chen, and M. Li, “Empirical evaluation of rectified activations in convolutional network,” arXiv, arXiv:1505.00853 (2015). [CrossRef]  

41. K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, (2016), pp. 770–778.

42. S. Helgadottir, B. Midtvedt, J. Pineda, A. Sabirsh, C. B. Adiels, S. Romeo, D. Midtvedt, and G. Volpe, “Extracting quantitative biological information from bright-field cell images using deep learning,” Biophysics Rev. 2(3), 031401 (2021). [CrossRef]  

43. B. Midtvedt, S. Helgadottir, A. Argun, J. Pineda, D. Midtvedt, and G. Volpe, “Quantitative digital microscopy with deep learning,” Appl. Phys. Rev. 8(1), 011310 (2021). [CrossRef]  

44. S. Helgadottir, A. Argun, and G. Volpe, “Digital video microscopy enhanced by deep learning,” Optica 6(4), 506–513 (2019). [CrossRef]  

45. D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv, arXiv:1412.6980 (2014). [CrossRef]  

46. A. M. Roszkowska, P. Colosi, P. D’Angelo, and G. Ferreri, “Age-related modifications of the corneal endothelium in adults,” Int. Ophthalmol. 25(3), 163–166 (2004). [CrossRef]  

47. J. E. Valdez-García, G. Ortiz-Morales, N. Morales-Mancillas, J. L. Domene-Hickman, J. Hernández-Camarena, D. Loya-García, J. Zavala, and A. Rodriguez-García, “Age-related changes of the corneal endothelium in the hispanic elderly population,” The Open Ophthalmol. J. 16(1), e2204140 (2022). [CrossRef]  

48. A. A. Kudva, A. S. Lasrado, S. Hegde, R. Kadri, P. Devika, and A. Shetty, “Corneal endothelial cell changes in diabetics versus age group matched nondiabetics after manual small incision cataract surgery,” Indian J. Ophthalmol. 68(1), 72 (2020). [CrossRef]  

49. P. S. Shilpashree, K. V. Suresh, R. R. Sudhir, and S. P. Srinivas, “Automated image segmentation of the corneal endothelium in patients with Fuchs dystrophy,” Trans. Vis. Sci. Technol. 10(13), 27 (2021). [CrossRef]  

50. I. Ahmed, D. L. Carni, E. Balestrieri, and F. Lamonaca, “Comparison of u-net backbones for morphometric measurements of white blood cell,” in 2022 IEEE International Symposium on Medical Measurements and Applications (MeMeA), (2022), pp. 1–6.

51. V. Yeghiazaryan and I. D. Voiculescu, “Family of boundary overlap metrics for the evaluation of medical image segmentation,” J. Med. Imaging 5(1), 015006 (2018). [CrossRef]  

52. J. Sierra, J. Pineda, G. Volpe, L. A. Romero, and A. G. Marrugo, “Code for corneal endothelium assessment in specular microscopy images with Fuchs’ dystrophy via deep regression of signed distance maps,” GitHub (2022), Available at https://doi.org/10.5281/zenodo.7378507.

