In this paper we present a novel method for the segmentation of thin corrugated layers in high-resolution optical coherence tomography (OCT) images. First, we make an initial segmentation, for example with graph-based segmentation, which for highly corrugated interfaces leads to many segmentation errors. Second, we resegment the initial outcome based on the OCT attenuation coefficient image with our matching layer attenuation coefficient segmentation (MLAS) algorithm. This algorithm repositions the initial segmentation such that it finds the point where the attenuation coefficient is close to the mean centerline attenuation. The algorithm does not use any sample-specific prior knowledge in the attenuation-coefficient-based segmentation step. For simulated and measured data of strongly corrugated samples, such as varnish layers on paintings and furniture, the MLAS algorithm performs much better than conventional segmentation. Finally, we show 3D segmentation of an entire 190 mm³ OCT volume. Our technique can aid in the rapid characterization of layer stratigraphy and deepen our understanding of layer condition.
© 2017 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
Nondestructive imaging of thin layers is of great importance for applications in industry, medicine, and art conservation. Optical coherence tomography (OCT) has proven to be a valuable technique to image thin layers in a variety of application areas. OCT has been used for quality control of protective coatings in the automotive and pharmaceutical industries. OCT is also applicable to high-accuracy quality control in the manufacturing of thin layered devices such as ICs and LCDs. In clinical practice, OCT is regularly used for the delineation of thin retinal layers, which is of critical importance in the diagnosis of eye diseases such as macular degeneration and glaucoma [5, 6]. In cultural heritage science, inspection of thin semitransparent layers with OCT adds to our knowledge of manufacturing processes, enables the analysis of possible alterations and/or degradation, and improves art conservation; it has been reported for various cultural heritage items in a wide range of studies [7–12].
Paintings, for instance, are layered objects with a typical structural design. Their optical behavior, and as a result their visual and aesthetic appearance, is determined by the stratigraphy and spectroscopic properties of the various layers. Varnishes and coatings are of major importance for the appearance of many cultural heritage objects, such as paintings and furniture, either as a protective layer or to create more elaborate visual effects such as highlighting colors or sfumato. Unfortunately, paintings change over time, as both the structure of the layers and their chemical composition evolve away from their initial state. As a result, the appearance of the artwork can be severely altered. A well-known example of this deterioration is the darkening discoloration that occurs when unstable varnishes age. Unwanted metal soaps can also form in paint layers due to chemical reactions between the pigments and oil binders. These (sub)surface processes, in combination with mechanical abrasion, lead to a high degree of corrugation of the varnish layers. In addition, manufacturing processes (e.g., brush strokes on paintings) and the materials themselves (grain in wood) are a source of corrugated layers. Hence, it is imperative to study the quality of varnish and paint layers and their degradation process in order to preserve and restore artworks, if the intended representation of the artist is to be retained.
OCT performs exceptionally well in non-contact imaging of thin (semi-transparent) varnish and pigment layers. In addition to providing information about the general stratigraphy of turbid objects, OCT can be used to study subsurface degradation processes. Varnish layers and protective coatings reported in case studies are typically 10 µm or thicker [16, 17]. In practice, heavily damaged layers can be even thinner. Typical depth resolutions for commercial OCT systems are currently around 6 µm (in air). Even higher resolutions have been reported in the literature on ultra-high-resolution (UHR) OCT. However, these setups are often confined to laboratory spaces due to their bulkiness, complexity, and fragility. Although UHR-OCT can produce sharp images of the layer thickness and stratigraphy, quantification of these properties is typically left to the image interpreter. Precise and automated segmentation of images obtained by UHR-OCT is therefore crucial if OCT is to become a regular tool for the quantitative inspection of varnish layers during conservation and restoration.
Segmentation of layered structures in OCT is widely studied, since manual segmentation of images is cumbersome and very time consuming. For the clinical diagnosis of retinal diseases various segmentation algorithms have been developed, such as locally adaptable level sets, voxel classification methods, and graph-based segmentation (GBS). Although these algorithms work well for retinal segmentation, they do not necessarily perform well for highly corrugated interfaces and thin (varnish) layers such as those present in cultural heritage objects. For example, GBS does not perform optimally for OCT retinal images of strongly corrugated pathological structures if it is used with a single regularization parameter. Although these algorithms can be adjusted using additional a priori information, the required careful tuning makes the algorithm applicable only to a particular type of interface. Here we present a novel segmentation method that is specifically well suited for the segmentation of corrugated interfaces and thin layers. The first step of our algorithm is an initial coarse segmentation with a conventional segmentation algorithm. We use GBS, which is a popular methodology for image segmentation in medical imaging [22–24], due to its uncomplicated implementation. The initial segmentation serves as an initialization for our second step, the matching layer attenuation coefficient segmentation (MLAS). This segmentation method relies on the property that the value of the OCT attenuation coefficient (AC), as determined by the method of Vermeer et al., is constant within a layer. Based on this additional information the image is resegmented. We demonstrate that MLAS is highly accurate for segmenting corrugated interfaces and thin layers.
2. Materials and methods
2.1. OCT set-up
The experiments are performed using a Thorlabs Ganymede-II-HR spectral-domain OCT system operated with home-built software. The system has a spectrum centered at 900 nm, spanning a bandwidth of 195 nm. The specified bandwidth-limited axial resolution is 3.0 µm (in air). We developed algorithms for removing the non-linear dispersion (dechirp) and the residual dispersion (from the Thorlabs LSM04-BB imaging lens) from measurements of the OCT signal at multiple depth locations using the method described by Hillmann et al. The normalized dechirp k is described by k(p) = p + [1 − ((p − N/2)/(N/2))²]·h, with p the pixel index, N = 2048 the total number of pixels on the detector, and h = 182.6 a parameter defining the magnitude of the chirp. The residual dispersion Δϕ is described by the relation Δϕ(p) = 1.5·10⁻⁹ p³ − 3.2·10⁻⁵ p² + 0.19 p. Both non-linearities are corrected using spline interpolation of the measured interference spectrum. Subsequently, the OCT intensity is calculated by taking the squared magnitude of the inverse Fourier transform of the linearized spectrum. With our calibration we obtained an axial resolution of 3.4±0.2 µm (at 0.3 mm depth) that slowly increases to 5.7±0.6 µm (at 1.5 mm depth). The maximum imaging depth is 1.99±0.01 mm, corresponding to 1.94±0.01 µm/pixel. The spectrometer roll-off is measured and fitted to the theoretical relation, resulting in a ratio of spectral resolution Δk over sampling interval δk of ω = 0.69±0.05. The lateral resolution is determined with a knife-edge method in the focal plane and resulted in a Gaussian beam waist of w0 = 7±3 µm. The signal-to-noise ratio, determined from an OCT measurement of a single mirror reflector at 0.3 mm depth, is 92.4±0.5 dB. The attenuation coefficient image is obtained by correcting the OCT intensity signal for roll-off, followed by a depth integration. In order to increase the SNR, every cross-sectional image in this work is an average of 10 B-scans.
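For concreteness, the spectral linearization described above can be sketched as follows. This is an illustrative Python reimplementation, not the software used in this work; the function names and the use of SciPy's cubic-spline interpolation are our own, while N, h, and the dispersion polynomial are taken from the text.

```python
import numpy as np
from scipy.interpolate import CubicSpline

N = 2048          # detector pixels (from the text)
h = 182.6         # chirp magnitude (from the calibration in the text)

def dechirp_indices(p):
    """Normalized dechirp k(p) = p + [1 - ((p - N/2)/(N/2))^2] * h."""
    return p + (1.0 - ((p - N / 2) / (N / 2)) ** 2) * h

def rest_dispersion(p):
    """Residual dispersion phase (rad) from the fitted polynomial."""
    return 1.5e-9 * p**3 - 3.2e-5 * p**2 + 0.19 * p

def linearize_spectrum(spectrum):
    """Resample a measured spectrum onto a linear-in-k grid and
    remove the residual dispersion as a phase factor."""
    p = np.arange(N, dtype=float)
    k = dechirp_indices(p)                 # nonuniform wavenumber positions
    spline = CubicSpline(k, spectrum)      # spline interpolation, as in the text
    uniform = spline(np.linspace(k[0], k[-1], N))
    return uniform * np.exp(-1j * rest_dispersion(p))

def a_line(spectrum):
    """OCT intensity: squared magnitude of the inverse Fourier transform."""
    return np.abs(np.fft.ifft(linearize_spectrum(spectrum))) ** 2
```

Note that k(p) with these parameters is strictly increasing over the detector, so the spline interpolation is well defined.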
For 3D OCT imaging a single B-scan is acquired for every lateral position.
2.2. OCT simulations
The OCT signal is simulated with the assumption of one-dimensional light propagation in the single-scattering approximation. In brief, a turbid medium is simulated by propagating the sample field for every wavenumber inside the random medium using a light propagation model based on a discrete medium with Mie scattering particles. The reflection at every scattering event is determined by the numerical aperture of the sample arm lens. The transmitted field is determined by the unscattered field. Multiple layers are simulated with different optical properties. The simulations for image segmentation are performed for particles with radius r = 1.0 µm and refractive indices npart1 = 1.30, npart2 = 1.40, npart3 = 1.50. The refractive indices of the media in the simulation are nair = 1.00 and nmedium = 1.33. The size of the images is 512 by 512 pixels. The spectrum is described by a Gaussian shape with a center wavelength of 900 nm and a full width at half maximum of 50 nm (7.1 µm axial resolution). All simulations are implemented in MATLAB 2016b and run on a Dell Optiplex with a quad-core CPU (Q8400, 2.66 GHz) and 8 GB of RAM.
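The quoted 7.1 µm axial resolution follows from the standard coherence-length expression for a Gaussian spectrum, l_c = (2 ln 2/π)·λ₀²/Δλ; a minimal check in Python (the formula is the only assumption):

```python
import math

def axial_resolution_um(center_wavelength_nm, fwhm_bandwidth_nm):
    """Coherence length (axial resolution in air) of a Gaussian spectrum:
    l_c = (2 ln 2 / pi) * lambda0^2 / delta_lambda."""
    lc_nm = (2 * math.log(2) / math.pi) * center_wavelength_nm**2 / fwhm_bandwidth_nm
    return lc_nm / 1000.0

# simulation spectrum from the text: 900 nm center, 50 nm FWHM
print(round(axial_resolution_um(900.0, 50.0), 1))  # 7.1
```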
2.3. Image segmentation algorithm
First, we apply GBS to OCT attenuation coefficient images. Second, we add a resegmentation step, which we named matching layer attenuation coefficient segmentation (MLAS), to improve the initial segmentation accuracy. Figure 1 provides a schematic overview of the entire algorithm used in this work.
2.3.1. Graph based image segmentation
The initial segmentation with GBS is applied to the OCT attenuation coefficient image. Although this gives good results for planar samples, we observed that it performs poorly for the segmentation of strongly corrugated interfaces and thin layers. Flattening of images, as commonly performed in retinal image segmentation, is not applicable for our purposes: the objects in this work exhibit no common structures that can be uniformly filtered. Other initial segmentation algorithms can be used as well; however, its speed and ease of implementation make GBS our first choice. The initial GBS is followed by matching layer attenuation coefficient segmentation (MLAS), which reduces the errors made by the initial GBS. The GBS method consists of connecting nodes (pixels) in an image so as to obtain the minimum-weight path between the start and end node in an adjacency matrix. The shortest path is determined by Dijkstra's algorithm. First, the derivative in the axial direction of the OCT attenuation coefficient image M[i, z] is calculated. Second, graph weights, which determine the segmentation results, are calculated as a function of intensity differences between nodes, according to Wab = 2 − (ga + gb) + 1·10⁻⁵. The parameters ga and gb are the normalized vertical gradients belonging to node a and node b, respectively. In our analysis we ignore the horizontal/lateral gradient, since intensity transitions in OCT images are primarily in the axial/depth direction. Third, Dijkstra's algorithm is applied to find the interface with the lowest sum of weights of the traversed nodes in the adjacency matrix. This step is iterated for every interface in the image. During segmentation of multiple interfaces the search region of the algorithm is limited by the outer interfaces of the sample. This reduces erroneous interface detection outside the physical dimensions of the imaged object.
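A minimal sketch of this graph construction and shortest-path search is given below (illustrative Python, not the MATLAB implementation used in this work; the restriction of graph edges to steps into the next column and the free entry along the first column are simplifying assumptions):

```python
import heapq
import numpy as np

def graph_based_segmentation(ac_image):
    """Sketch of graph-based layer segmentation with Dijkstra's algorithm.
    Edge weights follow w_ab = 2 - (g_a + g_b) + 1e-5, with g the
    normalized axial (vertical) gradient of the AC image."""
    grad = np.gradient(ac_image.astype(float), axis=0)        # axial derivative
    g = (grad - grad.min()) / (grad.max() - grad.min() + 1e-12)
    rows, cols = g.shape
    dist = np.full((rows, cols), np.inf)
    prev = {}
    pq = []
    for r in range(rows):                                     # free entry on the left
        dist[r, 0] = 0.0
        heapq.heappush(pq, (0.0, r, 0))
    while pq:
        d, r, c = heapq.heappop(pq)
        if d > dist[r, c] or c == cols - 1:                   # stale entry / end column
            continue
        for dr in (-1, 0, 1):                                 # step to the next column
            rr = r + dr
            if 0 <= rr < rows:
                w = 2.0 - (g[r, c] + g[rr, c + 1]) + 1e-5
                if d + w < dist[rr, c + 1]:
                    dist[rr, c + 1] = d + w
                    prev[(rr, c + 1)] = (r, c)
                    heapq.heappush(pq, (d + w, rr, c + 1))
    # backtrack from the cheapest node in the last column
    r = int(np.argmin(dist[:, -1]))
    path, node = [r], (r, cols - 1)
    while node in prev:
        node = prev[node]
        path.append(node[0])
    return np.array(path[::-1])                               # one depth index per column
```

On a synthetic AC image with a single sharp transition, the returned boundary tracks the rows of maximum axial gradient.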
2.3.2. Matching layer attenuation coefficient segmentation
After application of GBS a second segmentation step is performed. The essence of our method is a refinement of an initial segmentation that is assumed to segment the layers correctly over large distances but fails at local strong corrugations. The MLAS method is based on the assumption that each segmented layer has a uniform attenuation coefficient. For the first interface, MLAS is initiated with the first two interfaces determined by GBS. As shown in Fig. 2(a), these interfaces may be (partly) erroneously segmented. First, the centerline ΔBj,j+1 between consecutive GBS-segmented interfaces is determined according to Eq. (1); the centerline of the first layer is shown in Fig. 2(b) and the second centerline ΔB2,3 is shown in Fig. 2(e). When GBS generates a perfect segmentation, ΔBj,j+1 lies exactly in the center of the segmented layer; if GBS does not segment the layer perfectly, the centerline is at a different position or can even lie outside of the layer. Subsequently, the average attenuation coefficient µj over the centerline of the layer is determined according to Eq. (2); this average is close to the actual value of the layer. The sample standard deviation σj of the centerline is determined for every layer (Fig. 2(b, f)). For every lateral coordinate the renewed interface position is moved down until the matching condition of Eq. (3) is satisfied (Fig. 2(c, g)). We repeat this process iteratively for every interface in the image; when no matching value is found, the resegmented interface position remains the GBS result. This is very unlikely, since σj will increase when the centerline contains many values that deviate from µj. The application of Eq. (1) for the last interface requires the definition of an additional virtual interface beneath the last segmented GBS interface. In the end the initial segmentation determined by GBS is replaced by the improved result of the MLAS step (Fig. 2(a, h)).
The application of GBS and MLAS only requires the number of interfaces to be defined a priori.
Two key parameters define the accuracy of the MLAS algorithm: the magnitude of the upward shift and the fraction of the standard deviation used in the matching condition, set by α and β. The upward shift can be a fixed number of pixels for every interface, such as 10 pixels, but we implement a more elegant solution and define a coefficient cj in an expression that includes the respective layer thickness at coordinate i, so that sj[i] = cj(Bj[i] − Bj−1[i]). Here sj[i], the upward shift in pixels of boundary Bj[i], is a function of the layer number and the lateral coordinate; sj[i] is an integer and cj is a value between 0 and 1. Note that interface Bj−1 is the result of the previous MLAS iteration for j ≥ 2, while Bj is the GBS result. In our method we take α = 0.2 and β = 0.6; α and β are fixed parameters that have been chosen for optimal performance. We introduce a slight bias favoring segmentation at a higher µj, since µj in the AC image tends to be overestimated at the interface position.
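The MLAS refinement of a single interface can be sketched as follows. This is illustrative Python, not the authors' implementation; since Eqs. (1)–(3) are not reproduced here, the exact form of the matching condition (a tolerance of β·σ around µ + α·σ) and the use of the layer below the interface for the shift are our assumptions.

```python
import numpy as np

def mlas_refine(ac, b_top, b_bot, c=0.5, alpha=0.2, beta=0.6):
    """Sketch of MLAS resegmentation of one interface.
    ac    : attenuation-coefficient image (depth x lateral)
    b_top : GBS position of the interface to refine (one index per column)
    b_bot : GBS position of the next deeper interface
    The centerline between the two boundaries gives the layer's mean AC
    (mu) and spread (sigma); each column is shifted up by a fraction c of
    the local layer thickness and then moved down to the first pixel whose
    AC is within beta*sigma of mu, biased by alpha*sigma toward higher AC."""
    rows, cols = ac.shape
    center = ((b_top + b_bot) // 2).astype(int)
    vals = ac[center, np.arange(cols)]
    mu, sigma = vals.mean(), vals.std(ddof=1)
    refined = b_top.copy()
    for i in range(cols):
        shift = int(c * (b_bot[i] - b_top[i]))     # upward shift s_j[i]
        z = max(b_top[i] - shift, 0)
        while z < rows and abs(ac[z, i] - (mu + alpha * sigma)) > beta * sigma:
            z += 1                                  # move down until matched
        if z < rows:                                # keep GBS result otherwise
            refined[i] = z
    return refined
```

On a synthetic two-layer AC image, columns where the initial boundary was placed too deep are pulled back toward the true transition.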
3. Results
3.1. Simulated OCT image segmentation
The implementation of the AC determination in OCT is tested using a simulation of a corrugated multi-layered sample, as described in Section 2.2. The OCT image and AC image are shown in Fig. 3(a, b), respectively. The OCT and AC signals are shown for a single A-line in Fig. 3(c) and 3(d), respectively. Good agreement is observed between the attenuation coefficient in the AC signal and the input of the simulation that is based on Mie scattering.
Next, we demonstrate the performance of the GBS and MLAS algorithms in the segmentation of the layers in the OCT simulations of the same sample. Figure 4 presents an overview of the segmentation results. The OCT AC image contains four interfaces; each of the first three interfaces is followed by a layer with an increasing attenuation coefficient: µ1 = 1.2 mm⁻¹, µ2 = 6.9 mm⁻¹, and µ3 = 14.3 mm⁻¹.
Figure 4(a) shows that the first interface (air-layer 1) is erroneously segmented by GBS. This is the result of the algorithm's tendency to cut corners when the height of the layer structure varies strongly over the lateral coordinate. The subsequently applied MLAS segmentation corrects the errors in the initial GBS and correctly segments the entire interface, see Fig. 4(a). The improved segmentation depicted in Fig. 4(a) is similar to the illustrated resegmentation in the top row of Fig. 2. Figure 4(b) shows that, in addition to the horizontal segmentation alignment bias introduced by the lowest-sum-of-weights requirement of GBS, the GBS result for an interface of a thin layer sometimes crosses over to the adjacent interface (interface 2 to interface 1). This error is also corrected by the application of our MLAS algorithm.
Beyond this particular example we observed that, in general, our segmentation algorithm performs identically to GBS for planar samples, but performs much better when strong depth variations are present in the interface. Note that this is achieved without adding any a priori information about the expected layer structure to our model. Hence, our model is generic and applicable to a wide variety of interfaces.
A quantitative comparison of the performance of GBS and MLAS is shown in Fig. 5. We performed OCT segmentation simulations of the top layer of a three-layered corrugated structure, similar to the sample shown in Fig. 4. While varying the thickness of the layer, we segmented the upper interface with GBS only and with MLAS. The mean absolute error (MAE), i.e., the mean of the absolute difference between the ground-truth input and the algorithmic segmentation, increases with decreasing layer thickness for both methods. However, for layer thicknesses smaller than 35 pixels (i.e., about 13 coherence lengths), and in particular for very thin corrugated samples, MLAS clearly outperforms GBS as it has a much lower MAE. Visual inspection of segmented images shows that GBS has problems following corrugated interfaces, as shown in Fig. 4(a), and occasionally erroneously jumps from the top interface to the second interface, as shown in Fig. 4(b). Furthermore, MLAS resulted in a significantly lower mean absolute deviation (MAD), which indicates that the spread in the segmentation is also much smaller than for the GBS method. In the next section we show that our algorithm also performs well for the segmentation of thin layers with corrugated interfaces in OCT images of real objects.
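The two error measures can be computed as below (illustrative Python; the definition of MAD as the mean absolute deviation of the per-column error around its own mean is our assumption):

```python
import numpy as np

def segmentation_errors(ground_truth, segmented):
    """MAE and MAD between a ground-truth boundary and an algorithmic
    segmentation, both given as one depth index per lateral position."""
    err = np.abs(np.asarray(segmented, float) - np.asarray(ground_truth, float))
    mae = err.mean()                        # mean absolute error (pixels)
    mad = np.abs(err - err.mean()).mean()   # spread of the error around its mean
    return mae, mad
```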
3.2. Experimental results
We applied our segmentation algorithm to different cultural heritage samples such as varnished furniture pieces and paintings.
3.2.1. Imaging and segmentation of a corrugated wooden substrate
As a first example of our algorithm we segment a highly corrugated uncoated wooden substrate. Figure 6 shows a comparison between the results obtained with GBS and with our MLAS algorithm. The conventional GBS method is not capable of accurately segmenting the interface at locations where substantial indentations are present.
This is further quantified by calculating the MAE and MAD of the GBS and MLAS results with respect to a ground-truth manual segmentation in three regions of interest. Table 1 provides an overview of the MAE and MAD for the two methods.
The quantitative comparison confirms what is also clearly visible in Fig. 6. The interface in Box 1 is relatively flat (uncorrugated) and GBS performs nearly as well as MLAS. Box 2 contains an indentation and, while the GBS segmentation result is still acceptable, it does not follow the interface into the indentation as well as MLAS does. Finally, for the segmentation of Box 3 one can easily deduce from Fig. 6 and Table 1 that segmentation with GBS results in severe errors (GBS cannot follow the detailed surface topology), while MLAS provides a nearly perfect segmentation.
3.2.2. Imaging and segmentation of thin corrugated varnish layers on wood and paint
A second example is the segmentation of OCT images of thin varnish layers on wood and paint. The segmentation of these layers is important to assess the quality of varnish deposition and the removal of the correct layer during restoration. Figure 7 shows multiple interfaces segmented with MLAS. Figure 7(a) shows the MLAS segmentation of a single varnish layer on a 19th-century mahogany writing desk. MLAS delivers better segmentation results than GBS (GBS segmentation not shown): the MAE and MAD values (in pixels) for the top interface are (0.1, 0.1) for MLAS and (1.0, 0.7) for GBS; for the bottom interface the respective values are (0.3, 0.6) and (2.3, 2.1). The sample in Fig. 7(b), from a 17th-century historical painting, consists of two varnish layers on a corrugated pigment layer. MLAS performs better than GBS in this case as well: the MAE and MAD values for MLAS and GBS, respectively, are (1.0, 1.7) and (2.9, 2.1) for the top interface, (0.6, 1.1) and (3.2, 1.3) for the second interface, and (1.6, 2.8) and (3.5, 3.8) for the bottom interface. Both images in Fig. 7 are well segmented with our MLAS algorithm, whereas segmentation of the same structures with GBS resulted in significant errors for all interfaces (segmentation not shown).
3.3. 3D segmentation of thin varnish layers
Similar to GBS, our algorithm is designed to work on single OCT B-scans (2D images). Since OCT is regularly used for 3D structural imaging, fast 3D segmentation capability is highly desirable. We implemented our algorithm for 3D segmentation of a varnish layer based on consecutive B-scans, as shown in Fig. 8. The sample is a damaged varnish layer on wood, shown in the camera image of Fig. 8(a). The OCT scan consists of 300×300×1024 pixels, covering a volume of 10×10×1.9 mm³ (the sampling distance between B-scans in the C-scan is 33.3 µm). On a single computer core, the two interfaces in the 300 B-scans are segmented with GBS followed by MLAS in circa 1050 seconds (including loading and saving of data). The MLAS step takes about as much time as the GBS itself and therefore adds a factor of 2 to the total computation time. However, since the segmentation is entirely based on individual B-scans, as is the case for GBS, the parallel use of multiple computer cores improves the segmentation speed. In our implementation, a single B-scan segmentation of two interfaces takes 3.5 seconds, and the entire dataset is segmented on 4 cores in only 430 seconds, a factor of ≈2.5 improvement over the single-core speed. Figure 8(a) demonstrates the difficulty of estimating the condition of a varnish layer based on a regular microscopy image: apart from the rectangular indentation, the varnish seems relatively intact. A segmentation of a single OCT cross section with MLAS is shown in Fig. 8(b). Both interfaces are highly corrugated, but well segmented. In particular, it can be seen that inside the rectangle and in the upper left corner the varnish is strongly damaged (i.e., very thin or absent), something that is not visible in the microscopy image. Figure 8(c) is a contour plot of the varnish thickness as calculated from the difference between the two interfaces and the refractive index of the shellac varnish (estimated at n = 1.51).
The segmentation leads to a detailed spatial understanding of the state and damage of the varnish layer. From the data we estimate an average layer thickness of 101.5 µm with a standard deviation of 44.9 µm. This corroborates the impression from Fig. 8 that the surface is not flat at all, but heavily corrugated due to damage inflicted after varnish application. Using our segmentation, the top varnish layer can be virtually removed to show the underlying wood substrate, as shown in Fig. 8(d).
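The thickness map of Fig. 8(c) follows from the segmented interfaces by a simple conversion (illustrative Python; the 1.94 µm/pixel axial pitch and n = 1.51 are the values quoted in the text):

```python
import numpy as np

def varnish_thickness_um(top, bottom, um_per_pixel=1.94, n_varnish=1.51):
    """Physical layer thickness from two segmented interfaces.
    The axial pixel pitch (optical path length in air) is divided by the
    layer's refractive index to convert optical to physical thickness."""
    return (np.asarray(bottom) - np.asarray(top)) * um_per_pixel / n_varnish
```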
4. Discussion
We developed a novel OCT image segmentation algorithm suited for the segmentation of corrugated interfaces and thin layers. We applied our algorithm to segment interfaces and thin layers in high-resolution OCT images of cultural heritage objects. The results of our work demonstrate that we can accurately segment the interfaces of corrugated thin layers.
Our segmentation is based on the OCT AC image as determined with the model of Vermeer et al. This model is known to be biased in the quantification of the attenuation coefficient: it assumes that the phase function is equal for all layers (which is generally not true) and leads to an overestimation of the AC at large depths. In our work, however, we do not use the AC as a quantitative measure of the sample properties, but instead use its qualitative property, i.e., the assumption that the AC is constant over a layer, to segment the image. Moreover, the segmented layers are typically at the top of the sample and hence not strongly affected by the AC bias.
The experimental data and the simulated images show that for layers with an (approximately) homogeneous composition and different optical properties we obtain unique AC values (as can be observed in Fig. 3). In case consecutive layers have a nearly identical AC, erroneous segmentation can occur. This error is made only when the interface between those layers lies within the vertical shift region and the difference between their ACs is within the σj tolerance. In practice, the combination of layers with similar attenuation coefficients in close proximity is unlikely to occur. Although we use the AC image for the GBS segmentation, we observed that GBS could also be performed with similar results on the conventional OCT reflection image. The MLAS step, however, can only be performed on the AC image, as this method is based on the assumption that every pixel within a particular layer j has a constant µj value, which is not the case for the OCT signal. Hence, for the MLAS method to work optimally, the depth dependence of the OCT signal, due to roll-off and confocal gating, needs to be accurately compensated. In this way a layer with a homogeneous composition results in a layer with constant AC and little system-induced variation in depth. Besides variations in AC caused by the OCT system, variations in AC from noise and speckle can lead to erroneous segmentation results for both GBS and MLAS. GBS is quite prone to speckle and noise fluctuations, as it is based on the derivative of the reflectivity image. Since the MLAS algorithm is based on the average µj obtained over all the lateral coordinates of a layer, it reduces the effect of local noise and speckle within a single B-scan, leading to a better segmentation of the final interfaces. The robustness of the MLAS algorithm to outliers can be further improved by using the median instead of the mean and percentiles rather than standard deviations in Eq. (2) and Eq. (3), albeit at the cost of increased processing time. The amount of upward shift can be set per individual interface and significantly influences the MLAS results. Although the use of an arbitrary shift is possible, iterating MLAS over multiple layers allows any overlap of interfaces to be prevented by limiting the maximal upward shift to the position of the previously segmented MLAS interface. This is highly desirable, since crossing of interfaces is a particularly undesired effect in MLAS. In this work we have consistently used cj = 0.5. As a rule, cj depends on a possible systematic error of the GBS segmentation: if GBS determines the interfaces too deep in the image it is recommended to take cj > 0.5; in the reverse case, cj < 0.5 provides better results.
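The robust variant suggested above amounts to swapping the centerline statistics (illustrative Python; the choice of a half-interquartile-range spread estimate is our own):

```python
import numpy as np

def centerline_stats(vals, robust=False):
    """Layer statistics along the centerline. With robust=True, the mean and
    sample standard deviation are replaced by the median and a
    percentile-based spread, which are less sensitive to outliers."""
    vals = np.asarray(vals, float)
    if robust:
        mu = np.median(vals)
        spread = 0.5 * (np.percentile(vals, 75) - np.percentile(vals, 25))
    else:
        mu = vals.mean()
        spread = vals.std(ddof=1)
    return mu, spread
```

A single outlier on the centerline then no longer inflates µj or the matching tolerance.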
Segmentation of corrugated layers in OCT images is problematic. Firstly, we cannot flatten the images due to the unpredictable nature of local corrugation. Secondly, the use of smoothing would lead to additional segmentation errors for GBS. GBS can be improved by applying prior knowledge about the layer structure (registration) and by adjusting cost functions or kernels. However, significant improvement of the performance of GBS on corrugated layers always entails the use of prior information, a situation that is to be avoided for the purpose of general applicability of the segmentation algorithm. Segmentation using level sets, for example, has proven successful for segmenting large numbers of interfaces in the retina, but requires anatomical knowledge of the biological structures to do so (e.g., a predefined order of layers). A limitation of our method is that it only works for corrugations with a maximum steepness of 90 degrees. For larger angles, multiple interface positions are present at a single lateral index, as is the case for spherically shaped cavities below the surface; our method will then select the first interface as the segmented interface position. In this work we show segmentation of up to two layers with MLAS. Further study of the algorithm is required to verify its capability of segmenting more than three interfaces.
Many paintings and pieces of furniture are varnished; the varnish protects the surface of these materials, modifies their appearance, and secures fragile inlays. Our method can help to rapidly determine the condition of the varnish and to inspect its degradation over time during non-invasive OCT inspection. Moreover, degraded varnish layers are in many cases removed during restoration. Here, too, our method can be used to delineate varnish layers and provide feedback on the correct removal of particular varnish layers (e.g., the newest ones), preventing damage to the underlying paint and pigment layers that need to be preserved.
5. Conclusion
In this work we present a novel MLAS segmentation algorithm to segment high-resolution OCT images of thin corrugated layers. Our method is generally applicable, as it does not require prior information about the composition of the sample other than the number of layers present. The method works for corrugated and thin layers. We demonstrate that our model is well suited for studying thin varnish layers in 3D on works of art.
Funding
Netherlands Organization for Scientific Research (NWO) (323-54-005).
Acknowledgments
The authors thank Paul van Duin, Katrien Keune, Annelies van Loon, and Petria Noble of the Rijksmuseum in Amsterdam for useful discussions and for providing experimental samples, E. P. van der Linden for characterization of the OCT set-up, and L. J. van Vliet for useful discussions.
References and links
1. Y. Dong, S. Lawman, Y. Zheng, D. Williams, J. Zhang, and Y.-C. Shen, “Nondestructive analysis of automotive paints with spectral domain optical coherence tomography,” Appl. Opt. 55, 3695–3700 (2016). [CrossRef] [PubMed]
2. Y. Dong, H. Lin, V. Abolghasemi, L. Gan, J. A. Zeitler, and Y.-C. Shen, “Investigating intra-tablet coating uniformity with spectral-domain optical coherence tomography,” J. Pharm. Sci. 106, 546–553 (2017). [CrossRef]
3. G. Fresquet and J.-P. Piel, “Optical characterization and defect inspection for 3D stacked IC technology,” International Symposium on Microelectronics 2014, 000630 (2014). [CrossRef]
4. S.-H. Kim, J.-H. Kim, and S.-W. Kang, “Nondestructive defect inspection for LCDs using optical coherence tomography,” Displays 32, 325–329 (2011). [CrossRef]
5. M. R. Hee, C. R. Baumal, C. A. Puliafito, J. S. Duker, E. Reichel, J. R. Wilkins, J. G. Coker, J. S. Schuman, E. A. Swanson, and J. G. Fujimoto, “Optical coherence tomography of age-related macular degeneration and choroidal neovascularization,” Ophthalmology 103, 1260–1270 (1996). [CrossRef] [PubMed]
6. T. R. G. Babu, S. S. Devi, and R. Venkatesh, “Automatic detection of glaucoma using optical coherence tomography image,” J. Appl. Sci. 12, 2128–2138 (2012). [CrossRef]
7. M.-L. Yang, C.-W. Lu, I.-J. Hsu, and C. C. Yang, “The use of optical coherence tomography for monitoring the subsurface morphologies of archaic jades,” Archaeometry 46, 171–182 (2004). [CrossRef]
8. P. Targowski, B. Rouba, M. Góra, L. Tyminska-Widmer, J. Marczak, and A. Kowalczyk, “Optical coherence tomography in art diagnostics and restoration,” Appl. Phys. A 92, 1–9 (2008). [CrossRef]
9. M. Lenz, C. Mazzon, C. Dillmann, N. C. Gerhardt, H. Welp, M. Prange, and M. R. Hofmann, “Spectral domain optical coherence tomography for non-destructive testing of protection coatings on metal substrates,” Appl. Sci. 7, 364 (2017). [CrossRef]
10. J. Striova, B. Salvadori, R. Fontana, A. Sansonetti, M. Barucci, E. Pampaloni, E. Marconi, L. Pezzati, and M. P. Colombini, “Optical and spectroscopic tools for evaluating Er:YAG laser removal of shellac varnish,” Stud. Conserv. 60, S91–S96 (2015). [CrossRef]
12. M. Góra, P. Targowski, A. Rycyk, and J. Marczak, “Varnish ablation control by optical coherence tomography,” Laser Chem. 2006, 10647 (2007).
13. J. K. Delaney, E. R. de la Rie, M. Elias, L.-P. Sung, and K. M. Morales, “The role of varnishes in modifying light reflection from rough surfaces - a study of changes in light scattering caused by variations in varnish topography and development of a drying model,” Stud. Conserv. 53, 170–186 (2008). [CrossRef]
14. E. R. de la Rie, “The influence of varnishes on the appearance of paintings,” Stud. Conserv. 32, 1–13 (1987).
15. J. J. Hermans, K. Keune, A. van Loon, and P. D. Iedema, “An infrared spectroscopic study of the nature of zinc carboxylates in oil paintings,” J. Anal. At. Spectrom. 30, 1600–1608 (2015). [CrossRef]
16. D. Ciofini, M. Oujja, M. V. Canamares, S. Siano, and M. Castillejo, “Spectroscopic assessment of the UV laser removal of varnishes from painted surfaces,” Microchem. J. 124, 792–803 (2015). [CrossRef]
17. P. Pouli, I.-A. Paun, G. Bounos, S. Georgiou, and C. Fotakis, “The potential of UV femtosecond laser ablation for varnish removal in the restoration of painted works of art,” Appl. Surf. Sci. 254, 6875–6879 (2008). [CrossRef]
19. J. Novosel, G. Thepass, H. G. Lemij, J. F. de Boer, K. A. Vermeer, and L. J. van Vliet, “Loosely coupled level sets for simultaneous 3D retinal layer segmentation in optical coherence tomography,” Med. Image Anal. 26, 146–158 (2015). [CrossRef] [PubMed]
20. F. van der Lijn, T. den Heijer, M. M. Breteler, and W. J. Niessen, “Hippocampus segmentation in MR images using atlas registration, voxel classification, and graph cuts,” NeuroImage 43, 708–720 (2008). [CrossRef] [PubMed]
21. S. J. Chiu, X. T. Li, P. Nicholas, C. A. Toth, J. A. Izatt, and S. Farsiu, “Automatic segmentation of seven retinal layers in SDOCT images congruent with expert manual segmentation,” Opt. Express 18, 19413–19428 (2010). [CrossRef] [PubMed]
22. L. Zhang, H. Kong, S. Liu, T. Wang, S. Chen, and M. Sonka, “Graph-based segmentation of abnormal nuclei in cervical cytology,” Comput. Med. Imaging Graphics 56, 38–48 (2017). [CrossRef]
24. D. Kaba, Y. Wang, C. Wang, X. Liu, H. Zhu, A. G. Salazar-Gonzalez, and Y. Li, “Retina layer segmentation using kernel graph cuts and continuous max-flow,” Opt. Express 23, 7366–7384 (2015). [CrossRef] [PubMed]
25. K. A. Vermeer, J. Mo, J. J. A. Weda, H. G. Lemij, and J. F. de Boer, “Depth-resolved model-based reconstruction of attenuation coefficients in optical coherence tomography,” Biomed. Opt. Express 5, 322–337 (2014). [CrossRef] [PubMed]
26. D. Hillmann, Holoscopy (Springer, 2014).
27. N. A. Nassif, B. Cense, B. H. Park, M. C. Pierce, S. H. Yun, B. E. Bouma, G. J. Tearney, T. C. Chen, and J. F. de Boer, “In vivo high-resolution video-rate spectral-domain optical coherence tomography of the human retina and optic nerve,” Opt. Express 12, 367–376 (2004). [CrossRef] [PubMed]
28. J. Kalkman, “Fourier-domain optical coherence tomography signal analysis and numerical modeling,” Int. J. Opt. 2017, 9586067 (2017).
29. E. Dijkstra, “A note on two problems in connexion with graphs,” Numer. Math. 1, 269–271 (1959). [CrossRef]
30. J. Shi and J. Malik, “Normalized cuts and image segmentation,” IEEE Trans. Pattern Anal. Mach. Intell. 22, 888–905 (2000). [CrossRef]