The effective, non-invasive diagnosis of skin cancer is an active research topic, since biopsy is a costly and time-consuming surgical procedure. As skin relief is an important biophysical feature that can be difficult to perceive with the naked eye and by touch, we developed a novel 3D imaging scanner based on fringe projection to obtain morphological parameters of skin lesions related to perimeter, area and volume with micrometric precision. We measured 608 samples and found significant morphological differences between melanomas and nevi (p < 0.001). The capacity of the 3D scanner to distinguish these lesions was supported by a supervised machine learning algorithm, which achieved 80.0% sensitivity and 76.7% specificity.
© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
The incidence of skin cancer increases every year, so it is crucial to diagnose and treat it effectively. Risk factors include excessive exposure to sunlight, fair skin, family history and age. The World Health Organization estimates that 60,000 people die every year from skin cancer: 48,000 from melanoma, its most aggressive form, and 12,000 from other types [1].
Skin cancer is usually diagnosed through visual inspection followed by dermoscopy. To facilitate dermoscopic diagnosis and highlight the warning signs of the most common type of melanoma, the ABCDE criteria were proposed: A is for asymmetry, B for border irregularity, C for color, D for diameter and E for evolution. However, these methods result in a significant number of false positives. Currently, the gold standard for diagnosis is histopathology, which requires the surgical excision of the tumor (biopsy), is time consuming and uncomfortable for patients, and contributes to the high direct annual costs of diagnosing and treating skin cancer [3]. Given the large number of affected people, much research has focused on reducing unnecessary biopsies. Some authors have tried to detect skin cancer quantitatively and non-invasively through optical devices: confocal microscopy [4], optical coherence tomography [5] and multispectral imaging [6,7] are among the most commonly used imaging modalities. Three-dimensional (3D) technology can also be used to retrieve topographic information on cutaneous lesions, producing a height map of the lesion's surface from which parameters of area, volume and texture can be calculated. 3D techniques can be mechanical or optical, but they are mainly classified into two categories: ex-vivo techniques, which measure skin indirectly from a replica, and in-vivo techniques. Ex-vivo techniques are reliable and provide high resolution, but because they are based on point-wise scanning they are too slow to be performed in clinical environments. In-vivo techniques are faster, but the great majority do not retrieve height maps per se, only a visual distribution of texture, and consequently cannot quantify roughness in accordance with International Organization for Standardization (ISO) standards [9].
The only imaging modality whose acquisition is fast enough for in-vivo applications while retrieving quantitative information on surface heights is fringe projection, which is based on the triangulation measuring principle combined with light intensity modulation using sinusoidal functions. The use of a whole fringe pattern covering the skin sample instead of a single laser point was made possible by the advent of high-resolution digital cameras, cost-effective real-time frame grabbers, and powerful image processing. By avoiding point-wise scanning, acquisition time dropped to a few seconds. Consequently, fringe projection can now be applied to ex-vivo [10] and in-vivo studies [11–13]. According to the literature, current in-vivo methods to analyze skin topography that are suitable for clinical measurements can be divided into three groups: videoscopy, capacitance mapping, and fringe projection. The first two produce only a visual distribution of texture, whereas fringe projection retrieves a height map that can contribute to the accurate detection of non-melanoma skin cancer [11] and to the assessment of 3D changes in a patient's body during radiotherapy [12], among other applications.
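The sinusoidal intensity modulation at the heart of fringe projection can be illustrated with a short sketch. This is not the authors' implementation; the fringe period and the number of phase steps are assumed values chosen for illustration:

```python
import numpy as np

def fringe_patterns(width, height, period_px, n_steps=4):
    """Generate n_steps phase-shifted sinusoidal fringe patterns.

    Each pattern is I_n(x, y) = 0.5 + 0.5*cos(2*pi*x/period + 2*pi*n/n_steps),
    i.e. the same vertical fringes shifted horizontally by a fraction
    of the period, as projected sequentially onto the skin.
    """
    x = np.arange(width)
    patterns = []
    for n in range(n_steps):
        phase_shift = 2 * np.pi * n / n_steps
        row = 0.5 + 0.5 * np.cos(2 * np.pi * x / period_px + phase_shift)
        patterns.append(np.tile(row, (height, 1)))
    return patterns
```

Projecting the shifted copies in sequence is what lets a phase-shifting algorithm recover the phase at every pixel simultaneously, instead of scanning a single laser point.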
In this study, a new handheld 3D scanner based on this technology was developed for the morphological analysis of cutaneous lesions, with the aim of improving skin cancer diagnosis. In Section 2, we describe the 3D scanner, the clinical measurements and the data processing. Section 3 presents the statistically significant parameters and the classification of lesions by means of supervised machine learning. Section 4 compares this methodology with others used in the past. Finally, Section 5 summarizes the most remarkable findings.
2. Materials and methods
2.1 Experimental setup and clinical measurements
The 3D handheld prototype is based on stereovision and structured light projection (Fig. 1). The device has a compact design (220 × 240 × 120 mm) and includes two DMK 24U445 monochrome CCD cameras (The Imaging Source Europe GmbH) attached to two lenses (TECHSPEC Compact Fixed Focal Length lens, Edmund Optics Ltd) placed in a standard stereo geometry, at a 100 mm distance from each other and oriented at an angle of 42°.
The cameras have a working distance of 110 mm, a fixed focal length of 25 mm and a 19 × 14 mm field of view (FOV). Between the cameras, a Pek3 picoprojector (MicroVision, Inc.) images the fringe pattern onto the skin; this pattern is shifted horizontally by a small distance during acquisition. Once this procedure has finished, the skin is illuminated with a uniform white field and an image is captured with a C615 color camera (Logitech International S.A.). The recorded fringe images are then processed with a conventional phase-shifting algorithm to obtain the wrapped phase maps. Afterwards, these are unwrapped with the well-known Goldstein algorithm [14] without using patterns of different periods. Finally, the corresponding phases between both cameras are identified and the height map is reconstructed by means of the triangulation principle. Additionally, the RGB information obtained with the color camera is superimposed on the geometric coordinates of the object points (X, Y, Z). The calibration of the three cameras followed the technique proposed by Tsai [15]. The 3D scanner developed in this study offers a spatial resolution of 15 μm (XY dimension) and an acquisition time of less than 4.5 s per whole FOV.
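The wrapped-phase step can be sketched compactly. Assuming a standard four-step algorithm with π/2 shifts (the text does not specify the number of steps used), the phase at each pixel follows from an arctangent of intensity differences:

```python
import numpy as np

def wrapped_phase(i0, i1, i2, i3):
    """Wrapped phase from four fringe images with pi/2 phase shifts.

    With I_n = A + B*cos(phi + n*pi/2):
        I3 - I1 = 2B*sin(phi)   and   I0 - I2 = 2B*cos(phi),
    so phi = atan2(I3 - I1, I0 - I2), wrapped to (-pi, pi].
    The background A and modulation B cancel out pixel-wise.
    """
    return np.arctan2(i3 - i1, i0 - i2)
```

The result still contains 2π jumps, which is why an unwrapping step such as the Goldstein algorithm is needed before triangulation.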
Clinical measurements were carried out at the Hospital Clínic i Provincial de Barcelona (Spain) and the Università degli Studi di Modena e Reggio Emilia (Italy). A total of 654 skin lesions from Caucasian individuals were assessed. Patients remained still to avoid motion artifacts; the skin area was cleaned and hair was carefully cut rather than shaved to avoid irritation. All patients provided written informed consent before any examination and ethical committee approval was obtained. The study complied with the tenets of the 1975 Declaration of Helsinki (Tokyo revision, 2004). The lesions were diagnosed by dermatologists using a commercial dermoscope and the VivaScope 1500 confocal microscope from MAVIG GmbH. When malignancy was suspected, a histological analysis was carried out. From the total set of lesions, 608 (93%) could be measured with the 3D prototype; in the remaining 46 (7%), the use of the 3D scanner was impractical for the body area involved. Out of the 608 lesions measured, 194 (32%) could be analyzed. The remaining 414 (68%) were rejected because of artifacts caused by micro-movements, lesions outside the system's FOV, and inadequately removed hair. Of the lesions properly acquired, 81 (42%) were benign nevi; 60 (31%) melanomas; 18 (9%) basal cell carcinomas; 18 (9%) non-nevi benign lesions such as angiomas, dermatofibromas and actinic keratoses; 11 (6%) seborrheic keratoses; and 6 (3%) squamous cell carcinomas.
2.2 3D image processing
Point-wise height maps of each lesion were obtained through the reconstruction of the fringe images. Next, the 3D reconstructed data were imported into the MountainsMap software (Fig. 2), and artifacts were removed in a preliminary step. A first operator levelled the scanned surface to remove the general slope caused by small tilts. A second operator zoomed in on the area containing the lesion and the surrounding healthy skin to select the region of interest (ROI). Finally, a Fast Fourier Transform (FFT) filter excluded the frequencies corresponding to movement artifacts. Lesions were segmented manually from the healthy skin on the color image to obtain their perimeter, and the selected contour was transferred to the 3D image to extract the remaining parameters. The area and volume were then calculated with an operator that accounts for both peaks (convex curvatures) and holes (concave curvatures). The area and volume divided by the perimeter (Ap in mm and Vp in mm2) were also computed to determine which lesions, despite presenting a smaller perimeter, had a greater surface or volume. To compute additional parameters, the 3D surface was converted into a series of 2D profiles (horizontal and vertical axial cuts). A set of 200 horizontal profiles and their mean profile were obtained, from which roughness and texture parameters established by ISO 25178 [9] were calculated, such as the profile maximum height, skewness, arithmetic mean deviation, number of peaks per centimeter and amplitude of the peaks.
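The core of this processing chain can be sketched in a few lines. This is an illustration, not the MountainsMap operators themselves; the least-squares plane fit stands in for the leveling operator, and the pixel-boundary perimeter is a deliberate simplification of the manual segmentation:

```python
import numpy as np

def level_surface(z):
    """Remove the overall tilt by subtracting a least-squares plane fit."""
    h, w = z.shape
    yy, xx = np.mgrid[0:h, 0:w]
    A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(A, z.ravel(), rcond=None)
    return z - (A @ coeffs).reshape(h, w)

def lesion_metrics(z, mask, px_mm):
    """Area, volume and perimeter-normalized values for a segmented lesion.

    z:     levelled height map (mm); mask: boolean lesion segmentation;
    px_mm: pixel size in mm. Volume counts material above and below the
    reference plane (peaks and holes). The perimeter is a crude count of
    boundary pixels, used here only for illustration.
    """
    area = mask.sum() * px_mm ** 2               # projected area, mm^2
    volume = np.abs(z[mask]).sum() * px_mm ** 2  # mm^3
    padded = np.pad(mask, 1)
    # interior pixels: all four 4-neighbours also belong to the mask
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = (mask & ~interior).sum() * px_mm  # mm
    return area, volume, area / perimeter, volume / perimeter
```

The last two returned values correspond to the Ap (mm) and Vp (mm2) parameters defined above.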
2.3 Statistical analysis and classification algorithm
The parameters described were compared among types of lesions using SPSS Statistics V23.0 (IBM Corp.). Comparisons were considered statistically significant for p-values under 0.05. Non-nevi benign lesions, seborrheic keratoses and squamous cell carcinomas were excluded from the statistical analysis due to the low number of samples. The Kolmogorov-Smirnov test was used to evaluate the normality of all variables. The ANOVA and Kruskal-Wallis (KW) tests were used to compare the data among groups of skin lesions for parametric and non-parametric variables, respectively, and the t-test and Mann-Whitney U (MWU) tests to compare outcome measures between pairs of groups for normally and non-normally distributed data.
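The same test battery can be reproduced outside SPSS, e.g. with scipy. The sketch below uses hypothetical lognormal volume data (not the study's measurements) purely to show how each test is called:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# hypothetical lesion volumes (mm^3) for two groups of 60 lesions each
nevi = rng.lognormal(mean=-1.0, sigma=0.6, size=60)
melanomas = rng.lognormal(mean=-0.2, sigma=0.6, size=60)

# Kolmogorov-Smirnov test against a fitted normal: low p suggests
# a non-normal distribution, pointing to non-parametric tests
_, p_norm = stats.kstest(nevi, "norm", args=(nevi.mean(), nevi.std()))

# Kruskal-Wallis across groups, Mann-Whitney U between a pair of groups
_, p_kw = stats.kruskal(nevi, melanomas)
_, p_mwu = stats.mannwhitneyu(nevi, melanomas)
print(f"KS p={p_norm:.3f}  KW p={p_kw:.2e}  MWU p={p_mwu:.2e}")
```

With only two groups, KW and MWU address the same question; in the study KW screens all lesion types at once while MWU resolves the pairwise comparisons.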
To classify the lesions, the capacity of the parameters to train a good classification model was evaluated. For this purpose, the Matlab R2015a Classification Learner application was used to perform supervised machine learning, supplying the statistically significant 3D morphological parameters and the known classification of the lesions. Since only 18 basal cell carcinomas were measured, they were excluded from this process to avoid inaccurate results from the statistical analysis and machine learning algorithms. The Classification Learner uses the data to train different models; accordingly, an automated training was performed to search for the best classification model, including discriminant analysis, decision trees, support vector machines and nearest neighbors. The different validation options of the fitted models were analyzed beforehand [16]. The first scheme was 'Cross Validation', in which the data is divided into k folds; for each fold, a model is trained on the out-of-fold observations and tested on the in-fold observations, and the algorithm then averages the test error over all folds. Here, k was set to two and the samples in the two folds were randomly selected by the algorithm. The second scheme was 'Holdout Validation', which divides the data into two folds containing random observations of each lesion and uses one fold for training and the other for validation; the percentages of data used for training and validation were both set to 50%. Additionally, to test an algorithm that does not divide the data into training and validation sets, the 'No Validation' scheme, in which all data is used first for training and afterwards for validation, was also tested.
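The two data-splitting validation schemes can be sketched with scikit-learn in place of the Matlab Classification Learner. All feature values below are hypothetical placeholders for the five significant morphological parameters (A, V, P, Ap, Vp):

```python
import numpy as np
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
# hypothetical feature vectors [A, V, P, Ap, Vp] for 60 nevi and 60 melanomas
X_nevi = rng.normal([8.0, 1.0, 12.0, 0.70, 0.10], 2.0, size=(60, 5))
X_mm = rng.normal([14.0, 3.0, 18.0, 0.80, 0.17], 2.0, size=(60, 5))
X = np.vstack([X_nevi, X_mm])
y = np.array([0] * 60 + [1] * 60)  # 0 = nevus, 1 = melanoma

clf = KNeighborsClassifier(n_neighbors=10, weights="distance")

# 'Cross Validation' scheme with k = 2 folds: each fold serves once
# for validation while the model trains on the other fold
cv_acc = cross_val_score(clf, X, y, cv=2).mean()

# 'Holdout Validation' scheme: a single random 50%/50% split
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.5, stratify=y, random_state=0)
holdout_acc = clf.fit(X_tr, y_tr).score(X_te, y_te)
```

The 'No Validation' scheme would correspond to `clf.fit(X, y).score(X, y)`, which explains why it reports 100% accuracy: the model is evaluated on its own training data.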
For nevi (N), melanomas (MM) and basal cell carcinomas (BCC), all analyzed parameters followed a non-normal distribution (p < 0.05). Of all parameters computed, the KW test reported significant differences among the types of lesions for area, A; volume, V; perimeter, P; and normalized area, Ap, and volume, Vp (p < 0.001). In contrast, the profile and amplitude parameters calculated following ISO 25178 did not show significant differences (p > 0.05) and are therefore not reported. Table 1 shows the mean (± standard deviation) and the minimum and maximum values (in parentheses) for each type of lesion and globally (Total). In addition, the MWU test assessed whether significant differences existed between pairs of lesion types. As shown in Table 2, all parameters differed significantly between MM and N (p < 0.001). In contrast, differences between MM and BCC and between N and BCC were not significant, except for the area when comparing BCC with MM (p = 0.034), indicating the complexity of discriminating BCC from MM and N.
Regarding the machine learning based classification, the best results were obtained with decision trees (e.g., Bagged Trees) and nearest neighbors (e.g., Weighted k-Nearest Neighbor, KNN). The final data set contained 60 observations of nevi and 60 of melanomas; the nevi were randomly chosen from the 81 available to prevent the class imbalance from returning higher specificities than sensitivities. Table 3 lists the different validation schemes and the classification models that provided the lowest error rate. The predictive accuracy of each classification model is shown in terms of sensitivity and specificity, i.e., the percentage of melanomas and the percentage of nevi correctly identified. The sensitivity and specificity corresponding to the 'No Validation' scheme were 100%, with an associated error rate of 0% for all trials. The repeatability of the results was also assessed by creating different random sets of nevi. The 'Cross Validation' scheme applied to the different sets of nevi led to maximum variations of 5.00% for sensitivity and specificity and 0.05% for the error rate. For 'Holdout Validation', the variations in sensitivity, specificity and error rate were 10.00%, 13.40% and 11.65%, respectively.
High accuracy (80.00% sensitivity and 76.70% specificity) was obtained with the 'Holdout Validation' scheme and a 'Weighted KNN' classification model. This model categorizes query points based on their distance to the k nearest neighbors in the training data set; specifically, k was set to ten neighbors with distance weighting, providing medium distinction between classes. On the other hand, the 'No Validation' scheme returned 100% sensitivity and specificity since the data set was classified against itself. In conclusion, 'Holdout Validation' achieved the best results, but 'Cross Validation' was considered the most reliable approach, since it uses all data for both training and validation. Other methods such as multispectral imaging systems, which acquire images at different wavelengths, have also been used to detect skin cancer. Comparing the 3D results with those of a previous study in which a multispectral imaging system operating from 414 nm to 995 nm was applied to skin cancer [6], our results show significantly increased specificity at the expense of only slightly lower sensitivity. The authors obtained a sensitivity of 87.2% and a specificity of 54.5% for a 50%-50% validation scheme with a customized classification algorithm in a set of 502 lesions (nevi, melanomas and basal cell carcinomas). Another study evaluated skin lesions by combining this system with one using wavelengths from 995 nm to 1613 nm [7], obtaining 85.7% sensitivity and 76.9% specificity for a no-validation algorithm when analyzing 14 melanomas and 39 nevi. We should underscore that the classification algorithms and sample sets were not the same across studies.
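The sensitivity and specificity figures compared throughout this section follow directly from the confusion matrix of a binary classifier; a minimal helper, with nevi coded as 0 and melanomas as 1, makes the definitions explicit:

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity: fraction of melanomas correctly flagged (TP / (TP + FN)).
    Specificity: fraction of nevi correctly cleared (TN / (TN + FP)).
    Labels: 0 = nevus, 1 = melanoma.
    """
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)
```

In this screening context a false negative (a missed melanoma) is clinically costlier than a false positive, which is why sensitivity and specificity are reported separately rather than folded into a single accuracy number.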
This is the first report showing that non-invasive 3D technology based on fringe projection could assist practitioners in reaching a final diagnosis and contribute to a significant reduction in the number of false positives thanks to its enhanced specificity. Further work will focus on reducing the system's sensitivity to patient micro-movements, since the main cause for discarding 3D images was the inability to adequately filter all image artifacts.
Spanish Ministry of Economy and Competitiveness (DPI2014-56850-R, DPI2017-89414-R); European Union (DIAGNOPTICS “Diagnosis of skin cancer using optics,” ICT PSP seventh call for proposals 2013).
The authors declare that there are no conflicts of interest related to this article.
1. World Health Organization, “Health effects of UV radiation,” https://www.who.int/uv/health/uv_health2/en/index1.html. Accessed 15 May 2019.
3. G. P. Guy Jr., D. U. Ekwueme, F. K. Tangka, and L. C. Richardson, “Melanoma Treatment Costs: A Systematic Review of the Literature, 1990-2011,” Am. J. Prev. Med. 43(5), 537–545 (2012). [CrossRef] [PubMed]
4. S. Nori, F. Rius-Díaz, J. Cuevas, M. Goldgeier, P. Jaen, A. Torres, and S. González, “Sensitivity and specificity of reflectance-mode confocal microscopy for in vivo diagnosis of basal cell carcinoma: a multicenter study,” J. Am. Acad. Dermatol. 51(6), 923–930 (2004). [CrossRef] [PubMed]
5. M. Mogensen, L. Thrane, T. M. Joergensen, P. E. Andersen, and G. B. Jemec, “Optical Coherence Tomography for Imaging of Skin and Skin Diseases,” Semin. Cutan. Med. Surg. 28(3), 196–202 (2009). [CrossRef] [PubMed]
6. X. Delpueyo, M. Vilaseca, S. Royo, M. Ares, L. Rey-Barroso, F. Sanabria, S. Puig, G. Pellacani, F. Noguero, G. Solomita, and T. Bosch, “Multispectral imaging system based on light-emitting diodes for the detection of melanomas and basal cell carcinomas: a pilot study,” J. Biomed. Opt. 22(6), 065006 (2017). [CrossRef] [PubMed]
7. L. Rey-Barroso, F. J. Burgos-Fernández, X. Delpueyo, M. Ares, S. Royo, J. Malvehy, S. Puig, and M. Vilaseca, “Visible and Extended Near-Infrared Multispectral Imaging for Skin Cancer Diagnosis,” Sensors (Basel) 18(5), 1441 (2018). [CrossRef] [PubMed]
9. International Organization for Standardization, “ISO 25178 Geometric Product Specifications (GPS) – Surface texture: areal,” (2012).
10. C. Hof and H. Hopermann, “Comparison of replica- and in vivo-measurement of the microtopography of human skin,” SOFW J. 126, 40–46 (2000).
11. D. S. Gorpas, K. Politopoulos, E. Alexandratou, and D. Yova, “A binocular machine vision system for non-melanoma skin cancer 3D reconstruction,” Proc. SPIE 6081, 60810D (2006). [CrossRef]
12. C. J. Moore, D. Burton, O. Skydan, P. Sharrock, and M. Lalor, “3D Body Surface Measurement and Display in Radiotherapy Part I: Technology of Structured Light Surface Sensing,” in Proceedings of IEEE International Conference on Medical Information Visualisation - BioMedical Visualisation (IEEE, 2006), pp. 97–102. [CrossRef]
13. M. Ares, S. Royo, M. Vilaseca, J. A. Herrera-Ramírez, X. Delpueyo, and F. Sanàbria, “Handheld 3D scanning system for in-vivo imaging of skin cancer,” in Proceedings of the 5th International Conference on 3D Body Scanning Technologies, N. D’Apuzzo, ed. (Hometrica Consulting, 2014), pp. 231–236. [CrossRef]
14. D. C. Ghiglia and M. D. Pritt, Two-dimensional phase unwrapping: theory, algorithms, and software (John Wiley & Sons Inc., 1998).
15. R. Tsai, “A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses,” IEEE Trans. Robot. Autom. 3(4), 323–344 (1987). [CrossRef]
16. S. Y. Kung, Kernel Methods and Machine Learning (Cambridge University Press, 2014), App. A.