
Denoising and 4D visualization of OCT images

Open Access

Abstract

We are using Optical Coherence Tomography (OCT) to image structure and function of the developing embryonic heart in avian models. Fast OCT imaging produces very large 3D (2D + time) and 4D (3D volumes + time) data sets, which greatly challenge one's ability to visualize results. Noise in OCT images poses additional challenges. We created an algorithm with a quick, data set specific optimization for reduction of both shot and speckle noise and applied it to 3D visualization and image segmentation in OCT. When compared to baseline algorithms (median, Wiener, orthogonal wavelet, basic non-orthogonal wavelet), a panel of experts judged the new algorithm to give much improved volume renderings in terms of both noise reduction and 3D visualization. Specifically, the algorithm provided a better visualization of the myocardial and endocardial surfaces, and the interaction of the embryonic heart tube with surrounding tissue. Quantitative evaluation using an image quality figure of merit also indicated superiority of the new algorithm. Noise reduction aided semi-automatic 2D image segmentation, as quantitatively evaluated using a contour distance measure with respect to an expert segmented contour. In conclusion, the noise reduction algorithm should be quite useful for visualization and quantitative measurements (e.g., heart volume, stroke volume, contraction velocity, etc.) in OCT embryo images. With its semi-automatic, data set specific optimization, we believe that the algorithm can be applied to OCT images from other applications.

©2008 Optical Society of America

1. Introduction

We are using Optical Coherence Tomography (OCT) to image structure and function of the developing embryonic heart in avian models. OCT allows one to non-invasively image living hearts with microscopic resolution, and to visually and quantitatively analyze development. Due to the diminutive size and rapid movements of the early embryonic heart, OCT imaging provides a unique ability to study anatomy and function. We believe that OCT has the requisite spatial and temporal resolution and is hence an important tool to facilitate understanding of the underlying mechanisms responsible for normal/abnormal heart development [1]. However, noise present in OCT imaging systems [2–11] limits our ability to interpret, visualize and analyze image data which is crucial to the understanding of early cardiac development. The purpose of our study is to address this limitation by creating an algorithm for noise reduction in OCT images and evaluating its performance both visually and quantitatively through volumetric visualization and image segmentation. The novelty of our noise reduction technique lies in its ability to optimally reduce noise based on the characteristics of a particular image data set.

Due to its deleterious effects on coherent imaging systems such as ultrasound and OCT, there has been significant effort to characterize and reduce noise [2–21]. The two most common noise sources are shot noise, which is additive in nature and can be adequately described by the Additive White Gaussian Noise (AWGN) process, and speckle noise, which is multiplicative in nature and harder to eliminate due to its signal dependency. In fact, speckle carries useful information about the underlying tissue structure [11]. OCT is very similar to ultrasound and a brief review of shot and speckle noise reduction in ultrasound is in order. Shot noise reduction is applied both during acquisition [17] and post-acquisition using simple image processing techniques [22–24] such as averaging filters, median filters and Gaussian low-pass filters. However, many of these filtering techniques tend to remove useful features from images. One of the most effective techniques for shot noise removal is the phase preserving non-orthogonal wavelet (NW) filtering technique proposed by Kovesi [16]. As for speckle noise removal from ultrasound images, spatial domain techniques have been employed including the one proposed by Xie et al. [19], who applied a salient boundary enhancement technique with a speckle suppression term, and Dutt and Greenleaf [13], who employed a local statistical model to quantify the extent of speckle formation and subsequently used an unsharp masking filter to suppress speckle. As for transform domain techniques, wavelet-based speckle suppression has been reported [12,14,20]. More recently, Fan et al. [25] combined pyramid decomposition of images with anisotropic diffusion filtering to reduce speckle in ultrasound images of phantoms and liver. For OCT images, shot noise has been reduced using post-acquisition image processing techniques [22–24] such as averaging filters, median filters and Gaussian filters. There are also reports on speckle reduction techniques in OCT including physical techniques [2,4,5,7,8], those applied prior to image formation [6], and post-acquisition image processing techniques such as the hybrid median filter (HMF), Wiener filter, ELEE filter, symmetric nearest neighbor (SNN) filter, Kuwahara filter, adaptive Wiener filter, rotating kernel transformation (RKT), anisotropic diffusion filtering, and orthogonal and non-orthogonal wavelet filters [3,7,10]. Ozcan et al. [7] have compared the relative performances of the ELEE filter, two wavelet transform based filters, the HMF, SNN, a Kuwahara filter, and the adaptive Wiener filter, and have argued that post-acquisition digital image processing is advantageous because it does not require the additional acquisition of compounding angles required by the physical technique for speckle reduction. Puvanathasan and Bizheva [9] have used a fuzzy thresholding algorithm in the wavelet domain for speckle reduction in OCT images of a human finger tip, and have compared their technique with the Wiener and Lee filters.

In this paper, we create an algorithm to reduce both shot and speckle noise through digital image processing. The Kovesi NW filtering technique, originally applied to video surveillance data, can greatly reduce shot noise. However, manual optimization of parameters can be a daunting and unsatisfying task. Hence, we will investigate methods for automatically optimizing the wavelet filter bank for OCT. We call our technique Optimized Non-orthogonal Wavelet (ONW) denoising. To reduce speckle, we use an enhanced version of the Laplacian Pyramid Nonlinear Diffusion (LPND) technique used by Fan et al. on ultrasound images of the liver and carotid artery [25]. Since speckle size depends on imaging parameters such as the characteristics of the light source, the spot size, and the sampling rate, we have investigated adaptive optimization of LPND parameters, and call the method Adaptive LPND (ALPND).

We have identified three approaches for evaluation of noise reduction. First, there are quantitative measures on individual images such as edge preservation (β) [15], the structural similarity measure (SSIM) [26], and the contrast-to-noise ratio (CNR) [27], as reported in a recent work by Fan et al. [25]. We will create a weighted sum of these measures and use this scalar image quality criterion to optimize ALPND. This measure will also be used to evaluate other noise reduction algorithms. Second, as described by Frangakis et al. [28], one can evaluate the effect of noise reduction on 3D image visualization. We will investigate how noise reduction affects both isosurface-based surface rendering and direct volume rendering [29]. Gradients provide enhanced volume visualization of internal surfaces and tissue boundaries [30–32], and we are particularly interested in the role of noise reduction in improving visualization through accurate estimation of gradients in data. Third, one can determine the effect of noise reduction on segmentation [33]. We used a simple tolerance based seeded region growing algorithm available within the visualization package Amira [34] and a more sophisticated semiautomatic image contour segmentation tool called LiveWire [35,36] to both qualitatively and quantitatively evaluate the effect of noise reduction.

The rest of the paper is organized as follows. In Section 2, we briefly describe baseline denoising algorithms such as median filtering, Wiener filtering, and orthogonal wavelet (OW) filtering, followed by our proposed ONW-ALPND denoising algorithm. In Section 3, we discuss methods for evaluating the performance of image denoising algorithms. Section 4 presents results of our proposed denoising algorithm along with quantitative/qualitative comparisons to baseline algorithms. This is followed by a discussion in Section 5.

2. Image denoising algorithms

2.1 Baseline methods: median, Wiener and orthogonal wavelet (OW) filtering

We briefly review some “baseline” noise reduction filters, which will be compared to our new noise reduction filter. The median filter has been used as a baseline comparison method for filtering additive, impulse and speckle noise in OCT images [3,7,10]. In median filtering, noise is suppressed by replacing each pixel with the median computed from neighboring pixels in a window. We used the Matlab Image Processing Toolbox implementation [37] of the median filter, medfilt2. Like the median filter, the Wiener filter is a popular baseline comparison method for shot and speckle noise removal both in ultrasound and OCT images [7,21,25]. The Wiener filter is a linear filter mostly suited for images degraded by additive noise. Wiener filtering assumes that the signal and noise processes are second-order stationary (in the random process sense). The basic Wiener filter is a bandpass filter that minimizes (in a least-squares sense) the difference between the true signal and its estimate. It is defined in terms of the following spectra: (i) the Fourier transform of the point spread function (PSF) of the system, (ii) the power spectrum of the signal process, and (iii) the power spectrum of the noise process. We used the Matlab Image Processing Toolbox implementation [37] of the Wiener filter, wiener2, which is an adaptive version of the Wiener filter that uses statistics estimated from a local neighborhood of each pixel [38]. We also used the standard wavelet denoising function wdencmp from the Matlab Wavelet Toolbox for noise reduction. Wavelets are a powerful tool in image analysis for pattern detection, signal recovery, image compression and noise reduction [39]. Orthogonal wavelets (OW) are a specific class of wavelets where the basis functions are orthogonal, i.e., any pair of basis functions in the set has zero correlation. Image denoising using the OW filter consists of (i) decomposition of spatial image data into wavelet coefficients using an appropriate family of wavelet basis functions, (ii) identification of a suitable threshold for the wavelet coefficients followed by a thresholding operation, and (iii) reconstruction of the spatial domain image from the thresholded wavelet coefficients [12,14,20].


Fig. 1. Kovesi NW filtering for shot noise reduction. The image is convolved with a filter bank after Fourier transformation. The result is transformed back to the spatial domain by an inverse Fourier transformation. A noise threshold is identified in each scale and the filtered image is produced by reconstruction.


Following some ad hoc optimizations, we determined a fixed set of algorithm parameters that were used in our baseline denoising schemes. For medfilt2, we used a square 13 × 13 filter kernel. In wiener2, we again set the filter size to be 13 × 13. In the case of the OW filter using the wdencmp function of Matlab, we used the symlet-7 (sym7) wavelet with decomposition performed up to level 3. In order to make a fair comparison, these values were chosen to match the optimal settings for the proposed ONW and ALPND denoising schemes.
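For concreteness, the MATLAB sketch below applies the three baseline filters with the parameter values quoted above. The input file name and the use of ddencmp to supply a default wavelet threshold are our assumptions, since the text does not state how the OW threshold was chosen.

```matlab
% Baseline denoising sketch (Image Processing and Wavelet Toolboxes) using the
% fixed parameters quoted above: 13 x 13 kernels, sym7 wavelet, level-3 decomposition.
I = im2double(imread('oct_frame.png'));   % hypothetical file name for one OCT frame

Imed = medfilt2(I, [13 13]);              % median filter, 13 x 13 window
Iwie = wiener2(I, [13 13]);               % adaptive Wiener filter, 13 x 13 window

% Orthogonal wavelet (OW) denoising with sym7 up to level 3; the threshold is
% taken from ddencmp's defaults here (an assumption on our part).
[thr, sorh, keepapp] = ddencmp('den', 'wv', I);
Iow = wdencmp('gbl', I, 'sym7', 3, thr, sorh, keepapp);
```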

2.2 Optimized non-orthogonal wavelet (ONW) denoising

To reduce shot noise, we created the ONW algorithm. It builds upon the basic Kovesi NW filter [16], which was originally applied to synthetic and video surveillance images. Kovesi argued that denoising should not corrupt the phase information in the image, and used a non-orthogonal wavelet filter bank followed by a thresholding of the magnitudes of the wavelet coefficients, leaving the phase unchanged. In our modification, we include an image data set specific method for designing the optimal wavelet filter bank for OCT images. A parameter optimization scheme is applied once to a given data set, a set of parameters for designing an optimal filter bank is derived, and all the images in the data set are processed using this optimal filter bank.

The basic Kovesi NW filter is illustrated in Fig. 1. It uses a non-orthogonal wavelet filter bank with a non-zero correlation between any two filters in the bank [16]. Filters are created to detect features at different frequency subbands (called scales) and different orientations. Assuming the original noise to be an Additive White Gaussian Noise (AWGN) process, the amplitude of the transformed noise follows a Rayleigh distribution [16], whose probability density function (PDF) is characterized by a single parameter. Due to this property, a noise threshold can be determined for the lowest scale from the transformed coefficient values and a simple scaling operation can be applied to derive noise thresholds for higher scales. The noise threshold at each scale is then subtracted from the corresponding filter response and the resulting constituent images are used to reconstruct the denoised image.
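To make the thresholding step concrete, the MATLAB sketch below shows the phase-preserving operation for a single complex filter response at the smallest scale; construction of the full log-Gabor filter bank is omitted, and the Rayleigh-based threshold estimate follows our reading of [16], so the sketch is illustrative only.

```matlab
% Phase-preserving thresholding of one complex wavelet response 'resp'
% (e.g. the smallest-scale filter output obtained by multiplying the image
% FFT with a log-Gabor transfer function and inverse transforming).
amp   = abs(resp);                        % amplitude of the response
phase = angle(resp);                      % phase is left untouched

% Rayleigh noise model: estimate the scale parameter from the median amplitude,
% then place the threshold k standard deviations above the mean noise amplitude.
k       = 2;                                        % rejection factor (parameter (iii))
sigma_n = median(amp(:)) / sqrt(2*log(2));          % Rayleigh scale estimate
T       = sigma_n*sqrt(pi/2) + k*sigma_n*sqrt((4 - pi)/2);

ampDen  = max(amp - T, 0);                % soft-threshold the amplitude only
respDen = ampDen .* exp(1i*phase);        % recombine with the original phase
```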


Fig. 2. Proposed optimization scheme for shot noise reduction based on the Kovesi NW filter. The resulting filter is called the Optimized Non-orthogonal Wavelet (ONW) filter. In ONW, a user-defined ROI is matched against a co-located ROI in the smallest scale reconstruction to compute an image dissimilarity measure. The parameters that generate the wavelet filter bank are varied through empirically determined ranges to minimize this dissimilarity measure.


In our modified filter, we propose an optimization scheme for the parameters that generate the wavelet filter bank as illustrated in the schematic flow diagram in Fig. 2. The parameters controlling the filters in the filter bank are (i) number of scales (s), (ii) number of orientations (o), (iii) number of standard deviations around the noise threshold to reject as noise (k), (iv) multiplying factor between scales (p), (v) wavelength of the smallest scale filter (λ), (vi) ratio of the standard deviation of the Gaussian describing the filter transfer function in the frequency domain to the filter center frequency (Rg), and (vii) ratio of the angular interval between filter orientations to the standard deviation of the angular Gaussian function used to design filters (Ra). We have seen by experimentation that, although parameters (i) through (iv) can be set independently of image data, parameters (v) through (vii), namely λ, Rg and Ra, are image dependent. However, Kovesi’s technique [16] does not adapt these parameters based on noise characteristics of the images. We propose a parameter optimization step where empirically determined maximum and minimum values are used for λ, Rg and Ra and each parameter is varied in this range [see Eq. (1) below]. For each setting, the wavelet filter response of the smallest scale filter across all orientations, denoted by hss, is computed. We note that the smallest scale filter mostly responds to noise, thereby making it a useful approximation to the “noise pattern” in the image. Next, a user-selected noisy region-of-interest (ROI) from the original image I (denoted by Ω) is matched to the co-located ROI in hss using an image distance measure function D based on local histograms [40]. Finally, the optimal parameter settings (λ’, Rg’, Ra’) are determined as those values that result in the minimum image distance between the two ROIs. Mathematically, this can be represented as:

$$(\lambda', R_g', R_a') = \mathop{\arg\min}_{\substack{(x,y)\in\Omega,\ \lambda_{\min}\le\lambda\le\lambda_{\max},\\ R_{g,\min}\le R_g\le R_{g,\max},\ R_{a,\min}\le R_a\le R_{a,\max}}} D\left[h_{ss}(x,y;\lambda,R_g,R_a),\, I(x,y)\right] \tag{1}$$

where λmin, λmax, Rg,min, Rg,max, Ra,min, and Ra,max are the empirically determined minimum and maximum values of λ, Rg and Ra. To compute this image distance D, the two ROIs are divided into tiled rectangular blocks. A local image histogram with a predefined number of bins is computed for each tile in both ROIs. Then, a sum-of-absolute-difference (SAD) distance metric is computed between the histograms of the two ROIs [40]. We note that ONW is not only suited for shot noise reduction, but could also be used for speckle reduction. However, we have found that in the presence of speckle, shot noise reduction alone is not sufficient, and more sophisticated speckle reduction techniques need to be employed.
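As an illustration, a minimal MATLAB sketch of the local-histogram distance D used in Eq. (1) is given below. The tile size, the number of bins, and the assumption that intensities lie in [0, 1] are ours, and partial tiles at the ROI border are simply ignored.

```matlab
% Local-histogram SAD distance between two equal-sized ROIs (our reading of [40]):
% both ROIs are tiled into blocks, a histogram is computed per tile, and the sum
% of absolute differences between corresponding histograms is accumulated.
function d = roiHistDistance(roiA, roiB)
    tile = 16; nbins = 32;                   % assumed tile size and bin count
    edges = linspace(0, 1, nbins + 1);       % assumes intensities in [0, 1]
    d = 0;
    for r = 1:tile:(size(roiA,1) - tile + 1)
        for c = 1:tile:(size(roiA,2) - tile + 1)
            blkA = roiA(r:r+tile-1, c:c+tile-1);
            blkB = roiB(r:r+tile-1, c:c+tile-1);
            hA = histcounts(blkA(:), edges);
            hB = histcounts(blkB(:), edges);
            d = d + sum(abs(hA - hB));       % SAD between local histograms
        end
    end
end
```

The minimization of Eq. (1) then amounts to recomputing hss for each candidate (λ, Rg, Ra) triple in the empirically chosen ranges and keeping the triple that gives the smallest distance to the user-selected ROI.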


Fig. 3. Laplacian Pyramid Nonlinear Diffusion (LPND) technique for speckle reduction of Fan et al. The image is decomposed into constituent images spanning different frequency bands (referred to as layers). A nonlinear diffusion step is applied in each layer to reduce speckle and the output image is reconstructed from the speckle-reduced images in each layer.


2.3 Adaptive Laplacian pyramid nonlinear diffusion (ALPND) denoising

We developed the Adaptive Laplacian Pyramid Nonlinear Diffusion (ALPND) technique building upon the basic LPND filter of Fan et al. [25], which was originally used on ultrasound images of the liver and the carotid artery. Specifically, we use an optimization scheme for determining parameters of the nonlinear diffusion step that are specific to the speckle characteristics of a given OCT image data set. In other words, the optimization is done once for a given data set by choosing a representative image (visually) that best describes the speckle characteristics. A detailed discussion of pyramid decomposition of images can be found in the medical imaging textbook by Suetens [41]. As illustrated in the flow diagram of Fig. 3, the basic LPND technique proposed by Fan et al. [25] for speckle reduction in ultrasound images essentially consists of a nonlinear (anisotropic) diffusion technique [42] applied to the frequency subbands of the images obtained by Laplacian pyramid image decomposition. As a result of this decomposition, the high frequency speckle noise occupies the lower pyramid layers; its effect in higher layers is negligible. Parameters controlling the amount of smoothing due to anisotropic diffusion are computed separately for each decomposition layer. The smoothing itself is directional in nature and depends on the local gradient: a high gradient means less smoothing, while a low gradient implies heavier smoothing. As a result, speckle is reduced without affecting image features and edges.
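The MATLAB sketch below illustrates the basic LPND structure under simplifying assumptions (a fixed number of layers, a single diffusion threshold reused in every layer, circular boundary handling, and a fixed number of Perona-Malik iterations); the adaptive, per-layer parameter selection of the actual method is not shown.

```matlab
% Minimal LPND-style sketch (after Fan et al. [25]): build a Laplacian pyramid,
% run a few Perona-Malik diffusion iterations in each layer, and reconstruct.
function out = lpndSketch(I, nLayers, td, nIter)
    % --- analysis: Gaussian/Laplacian pyramid ---
    G = cell(1, nLayers + 1);  L = cell(1, nLayers);
    G{1} = I;
    for k = 1:nLayers
        G{k+1} = impyramid(G{k}, 'reduce');
        up     = imresize(impyramid(G{k+1}, 'expand'), size(G{k}));
        L{k}   = G{k} - up;                           % band-pass (Laplacian) layer
    end
    % --- per-layer anisotropic diffusion ---
    for k = 1:nLayers
        L{k} = peronaMalik(L{k}, td, nIter);
    end
    G{nLayers+1} = peronaMalik(G{nLayers+1}, td, nIter);   % low-pass residual
    % --- synthesis ---
    out = G{nLayers+1};
    for k = nLayers:-1:1
        out = imresize(impyramid(out, 'expand'), size(L{k})) + L{k};
    end
end

function J = peronaMalik(J, td, nIter)
    % Classic four-neighbor Perona-Malik diffusion with an exponential
    % edge-stopping function; circular boundaries via circshift.
    for it = 1:nIter
        gN = circshift(J, [-1 0]) - J;  gS = circshift(J, [1 0]) - J;
        gE = circshift(J, [0 -1]) - J;  gW = circshift(J, [0 1]) - J;
        c  = @(g) exp(-(g/td).^2);                    % edge-stopping function
        J  = J + 0.2*(c(gN).*gN + c(gS).*gS + c(gE).*gE + c(gW).*gW);
    end
end
```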

We developed a technique for optimally determining the diffusion threshold td and filter kernel size N used for the nonlinear diffusion process, as illustrated in the flow diagram of Fig. 4. This optimization step is necessary to ensure that the speckle filtering is adaptive to the characteristics of the data set and to the imaging setup. Our technique computes a combined figure of merit μ for evaluating visual quality of the denoised images produced by LPND. Specifically, the following image quality metrics were computed [25]: the structural similarity (SSIM) measure, the edge preservation parameter (β), and the contrast-to-noise ratio (CNR). These three measures are robust estimators of signal preservation in an image, which makes them suitable for our filter evaluation. SSIM is a measure of overall processing quality that compares the original and processed images based on statistics of co-located sub-regions. β is an edge preservation metric that involves computation of high-pass filtered (edge-enhanced) images from the two images being compared using a Laplacian operator. CNR is the relationship of signal intensity differences between two regions, scaled to image noise. Improving CNR increases the perception of distinct differences between two clinical areas of interest.

A combined figure of merit μ was derived from the above three measures (CNR, SSIM and β) using a weighted linear function, as shown by Eq. (2) below, where we set the weights ω1 = ω2 = ω3 = 1/3. Since each individual measure does not completely capture all aspects of image quality accurately (e.g., CNR increases with more smoothing and may erroneously rate a highly blurred image higher than a less blurred one), we found it advantageous to combine these parameters linearly using user-selected weights (which in our case were set equal). This lets us tune out extreme effects caused by any one individual parameter.

$$\mu = \omega_1\,\mathrm{CNR} + \omega_2\,\beta + \omega_3\,\mathrm{SSIM} \tag{2}$$

A region-of-interest (foreground) ΩFG and a background region ΩBG are needed to evaluate μ and were manually selected by a user. μ is therefore a function of td, N, ΩFG, ΩBG, ω1, ω2, and ω3. However, for a fixed user choice of foreground and background regions and the weights ωi, it is sufficient to denote μ as μ(td, N). The parameter values td and N were iterated through a set of values from a predetermined range (determined empirically), and the optimal parameter settings (td’, N’) were derived by maximizing μ. Mathematically, this is written as:

$$(t_d', N') = \mathop{\arg\max}_{t_{d,\min}\le t_d\le t_{d,\max},\ N_{\min}\le N\le N_{\max}} \mu(t_d, N) \tag{3}$$
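For reference, one possible MATLAB implementation of the combined figure of merit is sketched below. The precise CNR and β formulas used in [25] may differ from the common definitions used here, and the ssim function requires a relatively recent Image Processing Toolbox, so the sketch is illustrative only.

```matlab
% Illustrative computation of mu = (CNR + beta + SSIM)/3 for a denoised image
% 'den', given the original image 'orig' and logical masks for the user-selected
% foreground (e.g. myocardial wall) and background (e.g. cardiac jelly) regions.
function mu = figureOfMerit(orig, den, fgMask, bgMask)
    w = [1 1 1] / 3;                                   % equal weights, as in the text
    % contrast-to-noise ratio (one common definition)
    cnr = abs(mean(den(fgMask)) - mean(den(bgMask))) / ...
          sqrt(var(den(fgMask)) + var(den(bgMask)));
    % edge preservation beta: correlation of Laplacian-filtered images
    hp = fspecial('laplacian');
    dO = imfilter(orig, hp, 'replicate');  dO = dO - mean(dO(:));
    dD = imfilter(den,  hp, 'replicate');  dD = dD - mean(dD(:));
    beta = sum(dO(:) .* dD(:)) / sqrt(sum(dO(:).^2) * sum(dD(:).^2));
    % structural similarity between denoised and original images
    s = ssim(den, orig);
    mu = w(1)*cnr + w(2)*beta + w(3)*s;
end
```

The search of Eq. (3) then reduces to evaluating this figure of merit on the LPND output for each (td, N) pair on the empirically chosen grid and keeping the maximizing pair.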

2.4 Combined filter for shot and speckle noise reduction

We created a noise reduction method using both the ONW and ALPND algorithms in a serial fashion to reduce shot and speckle noise, respectively. We call this combined technique the ONW-ALPND denoising algorithm. To apply this method, a single representative 2D image was chosen from a data set, and ONW and ALPND parameters were optimized. Following optimization, the ONW-ALPND denoising algorithm was run on the entire (2D+time) or (3D+time) data set.
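Conceptually, applying the combined filter to a whole data set reduces to the loop below; onwDenoise and alpndDenoise are hypothetical wrappers (not functions from this paper or any toolbox) standing in for the two steps of Sections 2.2 and 2.3.

```matlab
% Serial ONW -> ALPND application to every frame of a (2D + time) data set,
% after both parameter sets have been optimized once on a representative frame.
% onwDenoise/alpndDenoise are hypothetical stand-ins for the two filtering steps.
for t = 1:numel(frames)                       % 'frames' is a cell array of 2-D images
    shotReduced = onwDenoise(frames{t}, onwParams);         % shot noise reduction first
    frames{t}   = alpndDenoise(shotReduced, alpndParams);   % then speckle reduction
end
```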


Fig. 4. The parameter optimization scheme used in adaptive LPND (ALPND) technique. The filter kernel size N used to compute gradients for nonlinear diffusion and the diffusion threshold td are iterated through a set of values in an empirically determined range and a quantitative figure of merit μ is evaluated at each step. The optimal values for td and N are determined by maximizing μ.


3. Methods for evaluating image quality

We have developed three methods to evaluate the performance of our noise reduction algorithms. First, we quantitatively evaluate image quality from the 2D OCT images before and after denoising. Second, we use volume and surface rendering of the original and denoised volumetric data sets as a means for evaluating the noise reduction performance. Third, we apply a tolerance based seeded region growing algorithm for image segmentation available within Amira [34], a popular 3D image analysis and visualization package, and a semi-automatic contour segmentation technique called LiveWire [35,36] to both noisy and denoised image data. Our evaluation was performed using the following data sets: (i) a (2D + time) data set consisting of about 500 images (captured at 195 fps) from a complete cardiac cycle of a day 2 quail embryo, (ii) a (3D + time) data set consisting of 20 volumes (3 cardiac cycles) of the day 2 quail embryonic heart captured at a rate of 10 volumes/sec, (iii) a single volume 3D data set consisting of 131 2D image slices of a stage 14 quail embryo corresponding to one phase (beginning of diastolic filling) of the cardiac cycle, and (iv) a human colonic crypt data set consisting of 600 2D image slices. The single volume in (iii) was obtained by an image processing technique called retrospective gating (the topic of retrospective gating is reserved for a future publication).

3.1 Evaluating 2D image quality

A quantitative method was used for evaluating image quality [3,25] of the resultant images produced through filtering. A human expert was shown (2D + time) OCT images of the quail embryonic heart. She first picked a region on the myocardial wall as the foreground and a nearby surrounding region consisting of cardiac jelly as the background. Following this, a combined figure of merit μ, introduced in Section 2.3 [Equation (2)], was evaluated for the 2D images.

3.2 Volumetric visualization

We next describe the methods used to visualize our (2D + time), 3D, and (3D + time) data sets before and after applying the noise reduction schemes. For the (2D + time) data, the ONW-ALPND denoising algorithm was applied to each 2D frame, following which all the original (noisy) and denoised frames across time were assembled into a single movie file [37]. As for the (3D + time) data, surface renderings of volumetric data from each time point were generated using the isosurface algorithm, which generates a surface passing through the volumetric data corresponding to a chosen gray value called the iso-value, using an algorithm similar to marching cubes [29]. Although surface rendering is a useful tool for data visualization, surfaces are normally rendered opaque, eliminating the ability to visualize internal structures such as interfaces between tissue types and boundaries. A 3D volume rendering of our data sets was implemented to enable visualization of internal structures by using suitable voxel opacities for the data points within the volume. To compute the voxel opacities, we built a 2D opacity transfer function (OTF) that assigns an opacity value based on both the original data value and its gradient. In other words, a 2D OTF maps each (data, gradient) pair to an opacity value. To optimize these volume renderings, we first rendered a volume using the “default” OTF provided by the software. Then, sub-regions within the volume were explored and the shape of the OTF was changed using a set of control points until an optimal rendering was obtained (as evaluated visually by the user). The beating heart of the quail embryo before and after the noise reduction step has been visualized as a time series of 3D volume renderings. To generate the surface and volume renderings discussed above, we implemented software extensions to the 3D visualization packages Amira [34] and Drishti [43] by way of scripts and plug-ins.
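As an illustration of the underlying idea (not of the exact, hand-tuned transfer functions used here), the MATLAB sketch below maps each voxel's (value, gradient magnitude) pair to an opacity; the particular ramps and cut-off values are arbitrary assumptions.

```matlab
% Illustrative 2-D opacity transfer function: opacity depends on both voxel
% intensity and gradient magnitude, so that boundary voxels are emphasized.
function alpha = opacityFromValueAndGradient(vol)
    V = double(vol);                               % 3-D volume, assumed 0..255
    [gx, gy, gz] = gradient(V);                    % central-difference gradients
    gmag = sqrt(gx.^2 + gy.^2 + gz.^2);
    gmag = gmag / (max(gmag(:)) + eps);            % normalize gradient magnitude
    valWeight  = min(max((V - 40)/60, 0), 1);      % suppress dark background voxels
    gradWeight = sqrt(gmag);                       % boost high-gradient (boundary) voxels
    alpha = valWeight .* gradWeight;               % per-voxel opacity in [0, 1]
end
```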

3.3 Semi-automatic segmentation

First, we used a tolerance based seeded region growing algorithm for segmentation. Many image analysis and visualization packages such as Amira [34] implement this basic algorithm. Here, the user clicks on a region in the image called a seed and subsequently manipulates a pair of sliders which define a tolerance (in terms of intensity) around the seed value. The algorithm returns a region within the 2D image which satisfies the tolerance set by the user. Second, we employed a semi-automatic 2D image contour segmentation algorithm called LiveWire [35,36] to determine the heart contour from images in our (2D + time) and single volume 3D data sets. In LiveWire, the user performs mouse-clicks to choose points on a boundary in a digital image. Following each mouse-click, a contour edge is automatically drawn between the current and previous points and the LiveWire boundary starts wrapping around the object of interest. LiveWire is based on the least cost path algorithm of Dijkstra [44] for detecting an optimal boundary in a digital image. More specifically, the boundary detection problem is formulated as an optimal path search in a weighted graph. Graph searching provides mathematically piece-wise optimal boundaries while greatly reducing sensitivity to local noise or other intervening structures [35,36]. We slightly modified the algorithmic implementation of LiveWire so that the user first clicks on a small number of approximately evenly spaced points on the contour and then instructs the software to complete the contour. The segmented heart contours (i.e., from the noisy and denoised data) are compared to the “ground truth” segmentations obtained from human expert users. We first made a subjective (visual) comparison for a few test cases. In addition to this visual comparison, we have developed a quantitative method for computing the distance between two contours. Briefly, the two contours are sampled by selecting an equal number of uniformly spaced pixels (points) on each of them. The x and y coordinates are then noted for each set of points. Following this, Euclidean distances are computed between the two sets of corresponding points [45], and the standard deviation of these distances is used to quantify differences in contour shape and in the location of the sampled points. A smaller value implies higher similarity and vice versa.
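A minimal MATLAB sketch of this contour distance is given below; it assumes the two contours are supplied as m-by-2 point lists and that picking uniformly spaced indices is an adequate resampling (the actual implementation may resample by arc length instead).

```matlab
% Contour distance measure described above: both contours are resampled to n
% corresponding points and the standard deviation of the point-to-point
% Euclidean distances is reported (smaller = more similar).
function d = contourDistance(c1, c2, n)
    p1 = resampleContour(c1, n);
    p2 = resampleContour(c2, n);
    dists = sqrt(sum((p1 - p2).^2, 2));     % Euclidean distance per point pair
    d = std(dists);
end

function p = resampleContour(c, n)
    idx = round(linspace(1, size(c,1), n)); % n approximately uniformly spaced samples
    p = c(idx, :);
end
```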

4. Results

We performed experiments using the data sets described at the beginning of Section 3. We determined optimal parameter settings for both the ONW and the ALPND algorithms using a representative image from our (2D + time) data set. We ran the optimization process using 2D images from different phases of the cardiac cycle and found little variation in the optimal values determined in each case. The optimization process was more dependent on the imaging set up and less on which image from a data set was chosen. For our experiments, we chose one image close to the beginning of diastole for both ONW and ALPND optimization. This image has been shown as the input image in Fig. 4. In the case of the ONW algorithm, the optimal parameter settings were determined to be λ’ = 3, Rg’ = 0.35, and Ra’ = 0.25, as opposed to the default (suboptimal) settings for the basic Kovesi NW filter, which were λ = 2, Rg = 0.55, and Ra = 1.0. In the case of the ALPND algorithm, the optimal value for the square filter kernel size N was 13 and that for the diffusion threshold td was determined to be 0.0001, compared with the default (suboptimal) settings of N = 7 and td = 0.005 suggested previously by Fan et al. [25].


Fig. 5. ONW technique applied to OCT images of quail embryo (a-c) from (2D + time) data set and colon crypt pattern (d-f). (a) Original image of day 2 quail embryo, (b) Denoised image using the basic Kovesi NW filter, (c) Denoised image using ONW, (d) Original image of colon crypt pattern, (e) Denoised image using the basic Kovesi NW filter, and (f) Denoised image using ONW.


4.1 ONW denoising

In Fig. 5, we compare the basic Kovesi NW filter with the ONW filter using a sample 2D image from the (2D + time) data set [Fig. 5(a)]. Although the Kovesi NW filter efficiently reduced noise, we see that in the resulting image [Fig. 5(b)] there is a loss of information with depth. For instance, the image information close to the bottom part of the inflow tract (see bottom right portion of the tubular heart) seems to be lost in Fig. 5(b). The proposed ONW technique reduced the shot noise but preserved image information with depth [Fig. 5(c)]. Figures 5(d)–5(f) show the results of applying ONW to OCT images of the human colonic crypt data set. As before, a depth-dependent loss of information due to the suboptimal choice of filter bank parameters is very apparent [Fig. 5(e)]. For instance, in Fig. 5(e), the muscularis mucosae (thin layer of smooth muscle) at the bottom of the mucosa and the lower parts of the crypts of Lieberkühn are clearly lost. The ONW technique again preserved information with depth [Fig. 5(f)].

4.2 ALPND denoising

We applied the LPND method to the image already processed using ONW [Fig. 6(a), 6(b)]. Suboptimal parameter settings produced a heavily blurred image [Fig. 6(b)]. The ALPND method produced a speckle-reduced image with minimal blurring [Fig. 6(c)].


Fig. 6. ALPND technique applied to quail embryo OCT images (a) ONW filtered image from Fig. 5(c), (b) Result of basic LPND filtering of Fan et al. applied to the image in (a), (c) Result of proposed ALPND technique applied to the image in (a).


4.3 Visual and quantitative comparison with other filtering techniques

In Figs. 7(a)–7(f), we visually compared the ONW-ALPND denoising algorithm with the baseline denoising algorithms and the basic Kovesi NW filter [16] using a panel of experts in OCT technology and embryonic development. More specifically, our expert panel consisted of (i) an expert in cardiac developmental biology (Dr. Michiko Watanabe, Associate Professor of Pediatrics, Case Western Reserve University), (ii) two students from the Case School of Medicine majoring in Anatomy, and (iii) two experts in OCT imaging (Michael Jenkins and Andrew Rollins, authors). Our experts were shown images from different cardiac cycles for training and evaluation. Using the same input image as in Fig. 5(a) [repeated as Fig. 7(a)], we applied the baseline algorithms, the basic Kovesi NW filter and finally our ONW-ALPND denoising algorithm. Experts indicated that the denoised image obtained using the proposed method [Fig. 7(f)] was visually better than those produced by the other methods [Figs. 7(b)–7(e)]. Figure 8 shows a movie comparing the image frames from the noisy and the denoised data sets after applying the proposed algorithm. Again, experts indicated that with denoising, blood flow can be more clearly visualized. Also, structures such as the cardiac jelly and the endocardial wall can be seen better after denoising. We also performed a quantitative comparison with the baseline algorithms. We chose 50 consecutive (in time) images from our (2D + time) data set. In order to make a fair comparison, the optimal filter kernel size determined by the ALPND algorithm was used to set the kernel size for the baseline algorithms. We obtained the combined figure of merit (μ) in Eq. (2) for the filtered images [Fig. 7(g)] produced by each of the above-mentioned techniques, from which it is evident that our proposed denoising algorithm performs better than the baseline algorithms.


Fig. 7. Visual and quantitative comparison of filtered images obtained by applying various noise reduction techniques to the image in (a). (b) Median filter, (c) Wiener filter, (d) the OW filter using the wdencmp function from the MATLAB Wavelet Toolbox, (e) the basic Kovesi NW filter, and (f) the proposed ONW-ALPND technique. (g) Quantitative comparison of the figure of merit of the ONW-ALPND filter with the median, Wiener, OW and basic Kovesi NW filters.


4.4 Surface and volume visualization

Surface and volume renderings from the original and denoised data were evaluated by experts in image processing and embryonic development. First, the experts were shown a volume rendering from one representative volume in the (3D + time) data set produced by each of the baseline denoising algorithms and the ONW-ALPND denoising algorithm. Anecdotally, the experts indicated that the ONW-ALPND denoising enabled the best visualization of internal structures without loss of useful details. Next, we asked them to compare the volume rendering produced by the ONW-ALPND denoising algorithm [Fig. 9(b)] with that obtained from the original data [Fig. 9(a)]. They concluded that denoising enabled better visualization of tissue boundaries and the tubular structure of the heart. Following this, experts were shown surface renderings from the original [Fig. 9(c)] and ONW-ALPND denoised data [Fig. 9(d)]. They concluded that the heart surface was smoother and that the adjoining surfaces were more clearly visible after denoising. Finally, they were shown a movie made from a time series of volume renderings consisting of six phases of the cardiac cycle, corresponding to both the original and the denoised data [Fig. 9(e)]. They concluded that the dynamics of the beating heart in 3D, e.g. the interaction between the heart and adjoining structures, could be more clearly visualized.


Fig. 8. (3.19 MB) A (2D + time) movie of original (noisy) and ONW-ALPND denoised data from the (2D + time) data set of the stage 13 quail embryo. [Media 1]


4.5. Semi-automatic segmentation

4.5.1 Tolerance based seeded region growing

Figures 9(f) and (h) show a visual comparison of noisy versus ONW-ALPND denoised data respectively using an en face 2D image from the single volume 3D data set. It is clear that anatomically important features such as outpocketings (sometimes called tethers) from the endocardial wall into the surrounding cardiac jelly are more distinctly visible in the denoised image in Fig. 9(h) (red arrows), suggesting that it should be easier to segment after denoising. We have verified that the tolerance based seeded region growing algorithm (section 3.3) performed better on denoised data because it clearly segmented out the lower portion of the cardiac jelly, as shown by the red region in Fig. 9(i). This can be observed by comparing it with the corresponding red region in Fig. 9(g), which shows the same algorithm applied on noisy data.

4.5.2 LiveWire segmentation

We compared LiveWire 2D segmentation results obtained by applying the proposed denoising technique with those obtained from noisy data and the OW filter denoising technique [Fig. 10(a)]. The data set consisted of 90 2D images from the (2D + time) data set corresponding to one complete cardiac cycle of the quail embryo. We computed the contour distance measure (section 3.3) between human expert traced contours and those obtained using LiveWire on (i) noisy data [denoted by D(I1,I2) in Fig. 10], (ii) denoised data using proposed technique [denoted by D(I1,I3)], and (iii) denoised data using OW filter [denoted by D(I1,I4)]. A scatter plot of D(I1,I2) versus D(I1,I3) is shown in Fig. 10 (b). It can be easily seen that a larger number of points (64%) lie below the dotted line [corresponding to D(I1,I2) = D(I1,I3)] than above it, as indicated by solid red squares. This indicated that LiveWire segmentation of the myocardial wall after noise reduction more closely matched an expert traced contour than LiveWire segmentation from noisy data.

Computing accurate gradients from data is crucial for the success of 2D image segmentation algorithms. Some existing approaches for noise reduction are not tuned to the specific noise characteristics of an image data set. Therefore, the resulting images may be insufficiently or excessively filtered, posing challenges to gradient estimation and thereby affecting the performance of segmentation. Using the same (2D + time) data set, we compared the performance of the OW filter when used as a preprocessing (denoising) step for segmentation with that of our proposed ONW-ALPND technique [Fig. 10(c)], using human expert tracings as the baseline. We observed that, in 60% of the cases (solid red squares), LiveWire contours produced by the proposed method were closer to human expert tracings than the LiveWire contours produced by OW filtering, suggesting the superiority of the ONW-ALPND method.

We next performed the same experiments on the single volume 3D data set consisting of 2D slices obtained at different spatial positions (outer myocardial wall was visible in 131 slices). These images were from a stage 14 quail heart volume at a single phase in the cardiac cycle. This data set was obtained using an image processing technique called retrospective gating (the topic of retrospective gating is reserved for a future publication). For this data set, Figs. 9(f)–(i) have already demonstrated visually that both image quality and segmentation performance are better in the case of ONW-ALPND denoised data. We further provide quantitative evaluation results in Fig. 11. Figure 11(a) shows a comparison of ONW-ALPND versus noisy data as a scatter plot in which about 60% of the points (solid red squares) lie below the dotted line, indicating that noise reduction aids segmentation. Similarly, Fig. 11(b) shows a comparison of ONW-ALPND versus the OW filter in which 65% of the points (solid red squares) indicate better conformity of the ONW-ALPND technique to human expert tracings.

The results from these data sets indicate a moderate percentage of conformity of contours obtained from the proposed algorithm to human tracings (in the range of 60-65%), when compared with those obtained from noisy data and the OW filtering method. This was probably due to (i) inaccuracies in human tracings owing to the changing shape and position of the myocardial wall contour across the 2D slices, and (ii) robustness of the LiveWire segmentation algorithm to noise.

4.5.3 Shot versus speckle noise reduction for LiveWire segmentation

We performed an experiment to evaluate the effect of shot noise reduction on segmentation. First, we applied ONW-ALPND to the noisy images from both the (2D + time) and the single volume 3D data sets. We then applied only the speckle reduction step (LPND) to the same images. LiveWire segmentation was performed in both cases and compared (as usual) with human expert tracings (Fig. 12). We observed that the segmentation performance improvement due to the addition of the shot noise reduction step was only marginal, as demonstrated by Fig. 12(a) for the (2D + time) data set, where only 48 of the 90 images (53%) indicated that adding the shot noise reduction step helped segmentation, and by Fig. 12(b) for the single volume 3D data set, where only 62 of the 131 images (47%) indicated an improved LiveWire segmentation as a result of the shot noise reduction step. In other words, these results suggest that speckle noise is the dominant noise source in “segmentable” regions.

5. Discussion

The combined ONW-ALPND filter aids data interpretation by reducing noise, facilitates biologically useful volume visualizations, and improves semi-automatic segmentation. The results shown in Figs. 5 through 12 strongly support our claim with regard to the usefulness of these filters in investigating early cardiac development in small animal models. We explored the use of each individual filter (ONW, ALPND) in isolation but concluded that the combined filter when applied serially in a specific order (ONW followed by ALPND) produced the best volume visualizations as evaluated subjectively by human experts. These experts were also provided the results from median, Wiener and OW filtering methods to evaluate and compare with the proposed filter. As an example, one expert evaluated the median filter as being inadequate in terms of noise reduction when shown the corresponding volume renderings.


Fig. 9. Comparison of volume renderings of one phase of the cardiac cycle from the (3D + time) data set before (a) and after (b) ONW-ALPND denoising. Volume renderings were produced using the Drishti visualization software. An isosurface for a gray level value of 60 from both noisy (c) and denoised (d) data. In (e), (301 KB) a movie is shown of the time series of original (left) and denoised (right) volumes corresponding to a complete heartbeat [Media 2]. Figures (f) – (i) show an en face 2D image slice from a different data set - the single volume 3D data set of quail embryo. The noisy image appears in (f) and the ONW-ALPND denoised image is shown in (h). It is clear that outpocketings from the endocardium (red arrows) are more clearly visible after ONW-ALPND denoising. Figures (g) and (i) show the result of the Amira tolerance based seeded region growing tool applied to (f) and (h) respectively for segmenting the cardiac jelly (red region).



Fig. 10. Quantitative comparison of LiveWire segmentation with human tracings using ONW-ALPND and OW filters on 90 images from the (2D + time) data set. (a) Method used for contour comparison. Scatter plots of contour distance measure (section 3.3) are shown in (b) comparing noisy data with ONW-ALPND denoised data where 64% of the images showed closer conformity of proposed technique to human tracings, and in (c) comparing OW filter denoised data with ONW-ALPND denoised data where 60% of the images showed closer conformity to human tracings.



Fig. 11. Quantitative comparison of LiveWire segmentation on the single volume 3D data set consisting of 131 2D image slices corresponding to different spatial positions within the volume. Scatter plots of contour distance measure (section 3.3) have been plotted. (a) Comparison of contours obtained from noisy data with those obtained from ONW-ALPND where 60% of images showed closer conformity of proposed technique to human tracings, (b) Comparison of ONW-ALPND with OW filter where 65% of images showed closer conformity of proposed technique to human tracings.


There are some limitations to the current implementation. First, both the ONW and ALPND techniques involve an optimization step, which is currently applied on one representative image to determine the optimal settings for the filtering process. Once the optimal settings are obtained, the denoising is performed using these predetermined settings for the entire data set. Depending on the choice of the “representative” image, ONW may cause a signal drop-off, especially deeper in the tissue, and ALPND may result in image blurring. One solution would be to use different optimized parameters on different blocks of images. Second, with regard to computational time, we first employed unoptimized MATLAB code, which took about 25 seconds to process a single 512 × 512 image from the (2D + time) data set on a 2.16 GHz Intel Core Duo laptop with 2 GB of RAM. Since this computation time can become prohibitively large for our largest data sets, we performed some code optimization. We have currently reduced the computation time to 6 seconds per image on the same computer configuration but believe that further optimizations are possible. Third, we perform denoising on 2D image data only, i.e., a 3D data set is processed in a serial fashion by applying the denoising to each 2D image in the stack. We plan to extend denoising to 3D as part of our future work. Also, we employ a serial, slice-by-slice, semi-automatic 2D segmentation technique to perform 3D segmentation. We plan to extend this to a fully automatic 3D implementation. Supervised (i.e., training-based) 3D shape modeling techniques such as Active Shape Models (ASM) and Active Appearance Models (AAM) [46–49] will be useful in this regard but will pose several new challenges. For instance, the complex morphological changes that organs undergo during development would make training a subjective and difficult task. Fourth, there are probably opportunities for improving the opacity and color mapping functions for volume visualization. A 2D opacity transfer function (OTF) leads to enhanced volume visualization, as shown by the renderings obtained using the Drishti software [43]. However, it is possible to obtain even better renderings by designing more complex OTFs that use a larger number of data-dependent variables (and hence a higher number of dimensions). For instance, in addition to the original data value and its gradient, a Laplacian (second derivative) of the data could be employed to derive OTFs. When combined with suitable scalar weighting values along with the original data value and its gradient, a Laplacian can help determine the exact location of boundaries, as suggested by Kniss et al. [32]. We are planning to build extensions to our volume visualization software so that it can support higher dimensional OTFs during volume rendering.


Fig. 12. Quantitative comparison of LiveWire segmentation to human tracings with and without shot noise reduction (ONW). Scatter plots of contour distance measure (section 3.3) have been plotted. (a) Results from the (2D + time) data set consisting of 90 images from one cardiac cycle of quail heart (where shot noise reduction helped in only 53% of the total images). (b) Results from single volume 3D data set consisting of 131 2D images from one time point of the cardiac cycle (where shot noise reduction helped in only 47% of the total images).


We have routinely encountered very large data sets while producing our volume and surface renderings. Occasionally, we have had to sub-sample our data sets due to computer memory limitations. A possible solution to this data management issue is multi-resolution volume rendering. Here, the user can work with a lower resolution data set (obtained by sub-sampling the high-resolution data) for exploratory purposes, and then load sub-regions of the data set at higher resolutions at will (Fig. 13). Some currently available visualization software packages have large-scale data visualization features. For example, Amira [34], a popular 3D image analysis and visualization tool, provides a Large Data Access (LDA) module which lets the user perform the multi-resolution volume rendering shown in Fig. 13.

OCT technology coupled with image processing steps discussed in this paper would constitute a useful tool for investigating the early embryonic heart. Developmental cardiac researchers currently lack an effective imaging tool to investigate the morphological dynamics of the early avian/murine embryonic heart. Recently, we demonstrated the ability of OCT to morphologically phenotype embryonic murine hearts [50]. The set of image denoising algorithms, volume and surface visualization techniques and semi-automatic segmentation techniques presented in this report could be easily adapted to images of murine hearts and would represent a first step towards a fully automated, non-destructive, high-throughput system to assess the phenotype of embryonic murine hearts. This would allow researchers to pinpoint critical time periods at a much faster rate. Also, we have recently demonstrated the ability to image the 3D avian embryonic heart while beating [1,51,52]. Again, an evolved image processing pipeline would assist us in understanding mechanisms that drive normal versus abnormal heart development in early stages.


Fig. 13. Multi-resolution volume interaction on a single volume of quail embryonic heart from the (3D + time) data set. From a low resolution volume rendering of the heart, a region of interest can be selected (shown by bounding box) for higher resolution viewing.


6. Conclusion

The combined ONW-ALPND filter efficiently reduced additive and speckle noise in OCT images. This algorithm compared very favorably to other popular denoising algorithms when evaluated both visually and quantitatively on 2D images from a (2D + time) data set of the quail embryonic heart. Also, surface and volume visualizations from denoised data were greatly improved compared to those from the original data, as evaluated visually with the help of human experts. Cardiac structure and function were much more easily visualized following denoising. Also, it was demonstrated using two different data sets [the (2D + time) and single volume 3D data sets] that segmentation of substructures such as the heart could be performed more efficiently using semi-automatic algorithms after applying our noise reduction techniques. We are currently developing techniques for optimizing opacity transfer functions (OTFs) to produce better visualizations of 3D data sets. We are also exploring supervised (training-based) 3D segmentation techniques.

Acknowledgments

This research is supported in part by the National Institutes of Health R01HL083048, the Ohio Wright Center of Innovation and Biomedical Research and Technology Transfer award “The Biomedical Structure, Functional and Molecular Imaging Enterprise,” and the Interdisciplinary Biomedical Imaging Training Program NIH T32EB007509. This investigation was conducted in a facility constructed with support from Research Facilities Improvement Program Grant Number C06 RR12463-01 from the National Center for Research Resources, National Institutes of Health. The authors also acknowledge James G. Fujimoto, Desmond Adler and Robert Huber for their technical contributions.

References and links

1. M. W. Jenkins, D. C. Adler, M. Gargesha, R. Huber, F. Rothenberg, J. Belding, M. Watanabe, D. L. Wilson, J. G. Fujimoto, and A. M. Rollins, “Ultrahigh-speed optical coherence tomography imaging and visualization of the embryonic avian heart using a buffered Fourier Domain Mode Locked laser,” Opt. Express 15, 6251–6267 (2007). [CrossRef]   [PubMed]  

2. M. Bashkansky and J. Reintjes, “Statistics and reduction of speckle in optical coherence tomography,” Opt. Lett. 25, 545–547 (2000). [CrossRef]  

3. M. E. Brezinski, Optical Coherence Tomography: Principles and Applications (Elsevier, 2006).

4. A. E. Desjardins, B. J. Vakoc, G. J. Tearney, and B. E. Bouma, “Speckle reduction in OCT using massively-parallel detection and frequency-domain ranging,” Opt. Express 14, 4736–4745 (2006). [CrossRef]   [PubMed]  

5. A. I. Kholodnykh, I. Y. Petrova, K. V. Larin, M. Motamedi, and R. O. Esenaliev, “Precision of measurement of tissue optical properties with optical coherence tomography,” Appl. Opt. 42, 3027–3037 (2003). [CrossRef]   [PubMed]  

6. D. L. Marks, T. S. Ralston, and S. A. Boppart, “Speckle reduction by I-divergence regularization in optical coherence tomography,” J. Opt. Soc. Am. A 22, 2366–2371 (2005). [CrossRef]  

7. A. Ozcan, A. Bilenca, A. E. Desjardins, B. E. Bouma, and G. J. Tearney, “Speckle reduction in optical coherence tomography images using digital filtering,” J. Opt. Soc. Am. A 24, 1901–1910 (2007). [CrossRef]  

8. M. Pircher, E. Gotzinger, R. Leitgeb, A. F. Fercher, and C. K. Hitzenberger, “Speckle reduction in optical coherence tomography by frequency compounding,” J. Biomed. Opt. 8, 565–569 (2003). [CrossRef]   [PubMed]  

9. P. Puvanathasan and K. Bizheva, “Speckle noise reduction algorithm for optical coherence tomography based on interval type II fuzzy set,” Opt. Express 15, 15747–15758 (2007). [CrossRef]   [PubMed]  

10. J. Rogowska and M. E. Brezinski, “Image processing techniques for noise removal, enhancement and segmentation of cartilage OCT images,” Phys. Med. Biol. 47, 641–655 (2002). [CrossRef]   [PubMed]  

11. J. M. Schmitt, S. H. Xiang, and K. M. Yung, “Speckle in optical coherence tomography,” J. Biomed. Opt. 4, 95–105 (1999). [CrossRef]  

12. A. Achim, A. Bezerianos, and P. Tsakalides, “Novel Bayesian multiscale method for speckle removal in medical ultrasound images,” IEEE Trans. Med. Imaging. 20, 772–783 (2001). [CrossRef]   [PubMed]  

13. V. Dutt and J. F. Greenleaf, “Adaptive speckle reduction filter for log-compressed B-scan images,” IEEE Trans. Med. Imaging. 15, 802–813 (1996). [CrossRef]   [PubMed]  

14. S. Gupta, R. C. Chauhan, and S. C. Sexana, “Wavelet-based statistical approach for speckle reduction in medical ultrasound images,” Medical & Biological Engineering & Computing 42, 189–192 (2004). [CrossRef]   [PubMed]  

15. X. H. Hao, S. K. Gao, and X. R. Gao, “A novel multiscale nonlinear thresholding method for ultrasonic speckle suppressing,” IEEE Trans. Med. Imaging. 18, 787–794 (1999). [CrossRef]   [PubMed]  

16. P. Kovesi, “Phase preserving denoising of images,” in Proceedings of the Australian Pattern Recognition Society Conference (DICTA’99), Perth, WA, 1999, pp. 212–217.

17. M. A. Kutay, A. P. Petropulu, and C. W. Piccoli, “On modeling biomedical ultrasound RF echoes using a power-law shot-noise model,” IEEE Trans. Ultrason. Ferroelectr. Freq. Control 48, 953–968 (2001). [CrossRef]   [PubMed]  

18. T. Loupas, W. N. Mcdicken, and P. L. Allan, “An Adaptive Weighted Median Filter for Speckle Suppression in Medical Ultrasonic Images,” IEEE. Trans. Circuits Syst. 36, 129–135 (1989). [CrossRef]  

19. J. Xie, Y. F. Jiang, H. T. Tsui, and P. A. Heng, “Boundary enhancement and speckle reduction for ultrasound images via salient structure extraction,” IEEE Trans. Biomed. Eng. 53, 2300–2309 (2006). [CrossRef]   [PubMed]  

20. Y. Yue, M. M. Croitoru, J. B. Zwischenberger, and J. W. Clark, “Nonlinear multiscale wavelet diffusion for speckle suppression and edge enhancement in ultrasound images,” IEEE Trans. Med. Imaging. 25, 297–311 (2006). [CrossRef]   [PubMed]  

21. D. F. Zha and T. S. Qiu, “A new algorithm for shot noise removal in medical ultrasound images based on alpha-stable model,” International Journal of Adaptive Control and Signal Processing 20, 251–263 (2006). [CrossRef]  

22. R. C. Gonzalez and R. E. Woods, Digital Image Processing (Prentice Hall, 2002).

23. W. K. Pratt, Digital Image Processing (John Wiley and Sons, Inc., 2001). [CrossRef]  

24. M. Sonka, V. Hlavac, and R. Boyle, Image Processing: Analysis and Machine Vision (Brooks and Cole Publishing, 1998).

25. F. Zhang, Y. M. Yoo, L. M. Koh, and Y. Kim, “Nonlinear diffusion in Laplacian pyramid domain for ultrasonic speckle reduction,” IEEE Trans. Med. Imaging 26, 200–211 (2007). [CrossRef]  

26. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: From error visibility to structural similarity,” IEEE Trans. Image Process. 13, 600–612 (2004). [CrossRef]   [PubMed]  

27. J. T. M. Verhoeven, J. M. Thijssen, and A. G. M. Theeuwes, “Improvement of Lesion Detection by Echographic Image-Processing - Signal-To-Noise-Ratio Imaging,” Ultrason. Imaging 13, 238–251 (1991). [CrossRef]   [PubMed]  

28. A. S. Frangakis and R. Hegerl, “Noise reduction in electron tomographic reconstructions using nonlinear anisotropic diffusion,” J. Struct. Biol. 135, 239–250 (2001). [CrossRef]   [PubMed]  

29. C. D. Hansen and C. R. Johnson, eds., The Visualization Handbook (Elsevier Academic Press, 2005).

30. B. Csebfalvi, L. Mroz, H. Hauser, A. Konig, and E. Groller, “Fast visualization of object contours by non-photorealistic volume rendering,” Computer Graphics Forum 20, C452 (2001). [CrossRef]  

31. D. S. Ebert, C. J. Morris, P. Rheingans, and T. S. Yoo, “Designing effective transfer functions for volume rendering from photographic volumes,” IEEE Transactions on Visualization and Computer Graphics 8, 183–197 (2002). [CrossRef]  

32. J. Kniss, G. Kindlmann, and C. Hansen, “Multidimensional transfer functions for interactive volume rendering,” IEEE Transactions on Visualization and Computer Graphics 8, 270–285 (2002). [CrossRef]  

33. Y. Sato, S. Nakajima, H. Atsumi, T. Koller, G. Gerig, S. Yoshida, and R. Kikinis, “3D multi-scale line filter for segmentation and visualization of curvilinear structures in medical images,” in CVRMed-MRCAS’97, Lecture Notes in Computer Science 1205, 213–222 (1997). [CrossRef]  

34. D. Stalling, H. Hege, and M. Zöckler, Amira - An Advanced 3D Visualization and Modeling System, http://amira.zib.de (2007).

35. H. Ghassan and H. Judith, “DTMRI Segmentation using DT-Snakes and DT-Livewire,” in 2006 IEEE International Symposium on Signal Processing and Information Technology, 513–518 (2006).

36. E. N. Mortensen and W. A. Barrett, “Interactive segmentation with intelligent scissors,” Graphical Models and Image Processing 60, 349–384 (1998). [CrossRef]  

37. M. Demirci, “Matlab Image-Processing Toolbox,” Computer 27, 106–107 (1994).

38. J. S. Lim, Two-Dimensional Signal and Image Processing (Prentice Hall, Englewood Cliffs, NJ 1990).

39. A. Chambolle, R. A. Devore, N. Y. Lee, and B. J. Lucier, “Nonlinear wavelet image processing: Variational problems, compression, and noise removal through wavelet shrinkage,” IEEE Trans. Image Process. 7, 319–335 (1998). [CrossRef]  

40. C. C. Chang, C. S. Chan, and J. Y. Hsiao, “A color image retrieval method based on local histogram,” Advances in Multimedia Information Processing – PCM 2001, Proceedings 2195, 831–836 (2001). [CrossRef]  

41. P. Suetens, Fundamentals of Medical Imaging (Cambridge University Press, 2002).

42. P. Perona and J. Malik, “Scale-Space and Edge-Detection Using Anisotropic Diffusion,” IEEE Trans. Pattern Anal. Mach. Intell. 12, 629–639 (1990). [CrossRef]  

43. A. Limaye, Drishti - Volume Exploration and Presentation Tool, ver. 0.1.7 (2007). Computer program.

44. E.W. Dijkstra, “A note on two problems in connection with graphs,” Numerische Mathematik 1, 269–271 (1959). [CrossRef]  

45. V. Perlibakas, “Automatical detection of face features and exact face contour,” Pattern Recogn. Lett. 24, 2977–2985 (2003). [CrossRef]  

46. T. F. Cootes, A. Hill, C. J. Taylor, and J. Haslam, “Use of Active Shape Models for Locating Structure in Medical Images,” Image and Vision Computing 12, 355–365 (1994). [CrossRef]  

47. T. F. Cootes, C. J. Taylor, D. H. Cooper, and J. Graham, “Active Shape Models – Their Training and Application,” Computer Vision and Image Understanding 61, 38–59 (1995). [CrossRef]  

48. T. F. Cootes, C. Beeston, G. J. Edwards, and C. J. Taylor, “A unified framework for atlas matching using Active Appearance Models,” Information Processing in Medical Imaging, Proceedings 1613, 322–333 (1999). [CrossRef]  

49. T. F. Cootes, G. J. Edwards, and C. J. Taylor, “Active appearance models,” IEEE Trans. Pattern Anal. Mach. Intell. 23, 681–685 (2001). [CrossRef]  

50. M. W. Jenkins, P. Patel, H. Deng, M. M. Montano, M. Watanabe, and A. M. Rollins, “Phenotyping transgenic embryonic murine hearts using optical coherence tomography,” Appl. Opt. 46, 1776–1781 (2007). [CrossRef]   [PubMed]  

51. M. W. Jenkins, F. Rothenberg, D. Roy, V. P. Nikolski, Z. Hu, M. Watanabe, D. L. Wilson, I. R. Efimov, and A. M. Rollins, “4D embryonic cardiography using gated optical coherence tomography,” Opt. Express 14, 736–748 (2006). [CrossRef]   [PubMed]  

52. M. W. Jenkins, O. Q. Chughtai, A. N. Basavanhally, M. Watanabe, and A. M. Rollins, “In vivo gated 4D imaging of the embryonic heart using optical coherence tomography,” J. Biomed. Opt. 12, 030505 (2007). [CrossRef]   [PubMed]  

Supplementary Material (2)

Media 1: AVI (3198 KB)     
Media 2: AVI (301 KB)     



Figures (13)

Fig. 1.
Fig. 1. Kovesi NW filtering for shot noise reduction. The Fourier-transformed image is multiplied by a non-orthogonal wavelet filter bank (equivalent to convolution in the spatial domain), and the responses are transformed back to the spatial domain by an inverse Fourier transformation. A noise threshold is identified in each scale and the filtered image is produced by reconstruction.
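As a minimal illustrative sketch of this pipeline (in Python, not the authors' implementation), the code below builds a small radial log-Gabor filter bank in the Fourier domain, soft-thresholds the complex band-pass responses so that phase is preserved, and sums the reconstructions; the filter parameters and the median-based threshold rule are assumptions.

import numpy as np

def log_gabor_bank(shape, n_scales=4, min_wavelength=3.0, mult=2.1, sigma_f=0.55):
    """Radial log-Gabor transfer functions on the FFT grid (illustrative parameters)."""
    rows, cols = shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    radius = np.sqrt(fx ** 2 + fy ** 2)
    radius[0, 0] = 1.0                              # avoid log(0) at the DC term
    bank, wavelength = [], min_wavelength
    for _ in range(n_scales):
        f0 = 1.0 / wavelength
        g = np.exp(-np.log(radius / f0) ** 2 / (2 * np.log(sigma_f) ** 2))
        g[0, 0] = 0.0                               # zero response at DC
        bank.append(g)
        wavelength *= mult
    return bank

def phase_preserving_denoise(image, k=2.0):
    """Soft-threshold the complex response at each scale while keeping its phase."""
    F = np.fft.fft2(image)
    result = np.zeros(image.shape, dtype=float)
    for g in log_gabor_bank(image.shape):
        response = np.fft.ifft2(F * g)              # complex band-pass response
        amp = np.abs(response)
        # Scale-wise noise threshold from the median amplitude (a Rayleigh-style
        # estimate); a stand-in for the threshold selection described above.
        thresh = k * np.median(amp) / np.sqrt(np.log(4))
        shrunk = np.maximum(amp - thresh, 0.0)
        result += np.real(shrunk * np.exp(1j * np.angle(response)))
    return result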
Fig. 2.
Fig. 2. Proposed optimization scheme for shot noise reduction based on the Kovesi NW filter. The resulting filter is called the Optimized Non-orthogonal Wavelet (ONW) filter. In ONW, a user-defined ROI is matched against a co-located ROI in the smallest scale reconstruction to compute an image dissimilarity measure. The parameters that generate the wavelet filter bank are varied through empirically determined ranges to minimize this dissimilarity measure.
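The optimization itself reduces to a search over the filter-bank parameters. A minimal sketch, assuming a grid search and a mean-squared dissimilarity D, is given below; reconstruct_fn stands in for the smallest-scale reconstruction of Fig. 1, and the parameter ranges are hypothetical.

import itertools
import numpy as np

def optimize_onw(image, roi, reconstruct_fn,
                 wavelengths=(2.0, 3.0, 4.0),
                 scale_ratios=(1.6, 2.1, 2.6),
                 bandwidth_ratios=(0.45, 0.55, 0.65)):
    """Grid search over (lambda, R_g, R_a) minimizing a dissimilarity D between the
    user-defined ROI of the input image and the co-located ROI of the smallest-scale
    reconstruction. reconstruct_fn(image, lam, rg, ra) is a stand-in for the wavelet
    filter-bank reconstruction; the ranges and the MSE dissimilarity are assumptions."""
    target = image[roi].astype(float)
    best_params, best_d = None, np.inf
    for lam, rg, ra in itertools.product(wavelengths, scale_ratios, bandwidth_ratios):
        recon = reconstruct_fn(image, lam, rg, ra)[roi].astype(float)
        d = np.mean((recon - target) ** 2)          # mean-squared difference as D
        if d < best_d:
            best_params, best_d = (lam, rg, ra), d
    return best_params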
Fig. 3.
Fig. 3. Laplacian Pyramid Nonlinear Diffusion (LPND) technique for speckle reduction of Fan et al. The image is decomposed into constituent images spanning different frequency bands (referred to as layers). A nonlinear diffusion step is applied in each layer to reduce speckle and the output image is reconstructed from the speckle-reduced images in each layer.
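A rough sketch of the LPND idea (not the implementation of Fan et al.) is given below: the image is decomposed into a Laplacian pyramid, a few Perona-Malik diffusion iterations are applied in each layer, and the pyramid is collapsed. The layer count, iteration count, and conductance function are assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def perona_malik(layer, n_iter=10, kappa=15.0, step=0.2):
    """A few Perona-Malik diffusion iterations with an exponential conductance function."""
    u = layer.astype(float).copy()
    for _ in range(n_iter):
        diffs = [np.roll(u, shift, axis) - u
                 for shift, axis in ((-1, 0), (1, 0), (-1, 1), (1, 1))]
        u += step * sum(np.exp(-(d / kappa) ** 2) * d for d in diffs)
    return u

def lpnd(image, n_layers=3, sigma=1.0):
    """Laplacian-pyramid decomposition, per-layer diffusion, then reconstruction."""
    layers, current = [], image.astype(float)
    for _ in range(n_layers):                       # build the pyramid
        low = gaussian_filter(current, sigma)
        down = low[::2, ::2]
        up = zoom(down, (current.shape[0] / down.shape[0],
                         current.shape[1] / down.shape[1]), order=1)
        layers.append(current - up)                 # band-pass layer
        current = down
    layers = [perona_malik(l) for l in layers]      # diffuse each band-pass layer
    current = perona_malik(current)                 # and the low-pass residual
    for l in reversed(layers):                      # collapse the pyramid
        current = zoom(current, (l.shape[0] / current.shape[0],
                                 l.shape[1] / current.shape[1]), order=1) + l
    return current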
Fig. 4.
Fig. 4. The parameter optimization scheme used in adaptive LPND (ALPND) technique. The filter kernel size N used to compute gradients for nonlinear diffusion and the diffusion threshold td are iterated through a set of values in an empirically determined range and a quantitative figure of merit μ is evaluated at each step. The optimal values for td and N are determined by maximizing μ.
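The ALPND optimization amounts to an exhaustive search over a small grid of parameter values. A minimal sketch is shown below; the filtering step and the figure-of-merit computation are passed in as stand-ins (filter_fn, merit_fn), and the parameter ranges are illustrative.

import itertools
import numpy as np

def optimize_alpnd(image, filter_fn, merit_fn,
                   td_values=np.arange(5, 55, 5), kernel_sizes=(3, 5, 7, 9)):
    """Exhaustive search for the (t_d, N) pair that maximizes the figure of merit mu.

    filter_fn(image, td, N) -> denoised image   (stand-in for the LPND step of Fig. 3)
    merit_fn(original, denoised) -> float       (stand-in for the figure of merit mu)
    Ranges for t_d and N are illustrative only."""
    best_td, best_N, best_mu = None, None, -np.inf
    for td, N in itertools.product(td_values, kernel_sizes):
        mu = merit_fn(image, filter_fn(image, td, N))
        if mu > best_mu:
            best_td, best_N, best_mu = td, N, mu
    return best_td, best_N, best_mu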
Fig. 5.
Fig. 5. ONW technique applied to OCT images of quail embryo (a-c) from (2D + time) data set and colon crypt pattern (d-f). (a) Original image of day 2 quail embryo, (b) Denoised image using the basic Kovesi NW filter, (c) Denoised image using ONW, (d) Original image of colon crypt pattern, (e) Denoised image using the basic Kovesi NW filter, and (f) Denoised image using ONW.
Fig. 6.
Fig. 6. ALPND technique applied to quail embryo OCT images (a) ONW filtered image from Fig. 5(c), (b) Result of basic LPND filtering of Fan et al. applied to the image in (a), (c) Result of proposed ALPND technique applied to the image in (a).
Fig. 7.
Fig. 7. Visual and quantitative comparison of filtered images obtained by applying various noise reduction techniques to image in (a). (b) Median filter, (c) Wiener filter, (d) the OW filter using the wdencmp function from the MATLAB® Wavelet Toolbox™, (e) the basic Kovesi NW filter, and (f) proposed ONW-ALPND technique. (g) Quantitative comparison of figure of merit of ONW-ALPND filter with median, Wiener, OW and the basic Kovesi NW filters.
Fig. 8.
Fig. 8. (3.19 MB) A (2D + time) movie of original (noisy) and ONW-ALPND denoised data from the (2D + time) data set of the stage 13 quail embryo. [Media 1]
Fig. 9.
Fig. 9. Comparison of volume renderings of one phase of the cardiac cycle from the (3D + time) data set before (a) and after (b) ONW-ALPND denoising. Volume renderings were produced using the Drishti visualization software. An isosurface for a gray level value of 60 from both noisy (c) and denoised (d) data. In (e), (301 KB) a movie is shown of the time series of original (left) and denoised (right) volumes corresponding to a complete heartbeat [Media 2]. Figures (f) – (i) show an en face 2D image slice from a different data set - the single volume 3D data set of quail embryo. The noisy image appears in (f) and the ONW-ALPND denoised image is shown in (h). It is clear that outpocketings from the endocardium (red arrows) are more clearly visible after ONW-ALPND denoising. Figures (g) and (i) show the result of the Amira tolerance based seeded region growing tool applied to (f) and (h) respectively for segmenting the cardiac jelly (red region).
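The cardiac-jelly segmentation in panels (g) and (i) uses a tolerance-based seeded region growing. A simple stand-in for the Amira tool can be sketched with scikit-image's flood fill; the seed coordinates and tolerance below are hypothetical.

from skimage.segmentation import flood

def grow_region(image, seed, tolerance=10):
    """Tolerance-based seeded region growing: returns the boolean mask of all pixels
    connected to the seed whose intensity lies within +/- tolerance of the seed value
    (a simple stand-in for the Amira tool)."""
    return flood(image, seed, tolerance=tolerance)

# Example (hypothetical seed and tolerance): segment the cardiac jelly in a denoised
# en face slice from a seed placed inside it.
# jelly_mask = grow_region(denoised_slice, seed=(120, 200), tolerance=12)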
Fig. 10.
Fig. 10. Quantitative comparison of LiveWire segmentation with human tracings using ONW-ALPND and OW filters on 90 images from the (2D + time) data set. (a) Method used for contour comparison. Scatter plots of contour distance measure (section 3.3) are shown in (b) comparing noisy data with ONW-ALPND denoised data where 64% of the images showed closer conformity of proposed technique to human tracings, and in (c) comparing OW filter denoised data with ONW-ALPND denoised data where 60% of the images showed closer conformity to human tracings.
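The contour distance measure itself is defined in section 3.3 (not reproduced here). As a stand-in, the sketch below computes a symmetric mean nearest-point distance between a semi-automatic contour and an expert tracing, one common way to score such agreement; the measure actually used in the paper may differ.

import numpy as np
from scipy.spatial import cKDTree

def mean_contour_distance(contour_a, contour_b):
    """Symmetric mean nearest-point distance (in pixels) between two (N, 2) contours."""
    a = np.asarray(contour_a, dtype=float)
    b = np.asarray(contour_b, dtype=float)
    d_ab = cKDTree(b).query(a)[0]     # each point of contour A to its nearest point of B
    d_ba = cKDTree(a).query(b)[0]     # each point of contour B to its nearest point of A
    return 0.5 * (d_ab.mean() + d_ba.mean())

# Example: distance between a LiveWire contour and an expert-traced contour, both
# given as arrays of (row, col) pixel coordinates (variable names are hypothetical).
# dist = mean_contour_distance(livewire_pts, expert_pts)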
Fig. 11.
Fig. 11. Quantitative comparison of LiveWire segmentation on the single volume 3D data set consisting of 131 2D image slices corresponding to different spatial positions within the volume. Scatter plots of contour distance measure (section 3.3) have been plotted. (a) Comparison of contours obtained from noisy data with those obtained from ONW-ALPND where 60% of images showed closer conformity of proposed technique to human tracings, (b) Comparison of ONW-ALPND with OW filter where 65% of images showed closer conformity of proposed technique to human tracings.
Fig. 12.
Fig. 12. Quantitative comparison of LiveWire segmentation to human tracings with and without shot noise reduction (ONW). Scatter plots of contour distance measure (section 3.3) have been plotted. (a) Results from the (2D + time) data set consisting of 90 images from one cardiac cycle of quail heart (where shot noise reduction helped in only 53% of the total images). (b) Results from single volume 3D data set consisting of 131 2D images from one time point of the cardiac cycle (where shot noise reduction helped in only 47% of the total images).
Fig. 13.
Fig. 13. Multi-resolution volume interaction on a single volume of quail embryonic heart from the (3D + time) data set. From a low resolution volume rendering of the heart, a region of interest can be selected (shown by bounding box) for higher resolution viewing.

Equations (3)


$$(\lambda,\, R_g,\, R_a) \;=\; \underset{\substack{(x,y)\,\in\,\Omega,\;\; \lambda_{\min}\le\lambda\le\lambda_{\max},\\ R_{g,\min}\le R_g\le R_{g,\max},\;\; R_{a,\min}\le R_a\le R_{a,\max}}}{\arg\min}\; D\!\left[\,h_{ss}(x,y;\lambda,R_g,R_a),\; I(x,y)\,\right]$$

$$\mu \;=\; \omega_1\cdot\mathrm{CNR} \;+\; \omega_2\cdot\beta \;+\; \omega_3\cdot\mathrm{SSIM}$$

$$(t_d,\, N) \;=\; \underset{t_{d,\min}\le t_d\le t_{d,\max},\;\; N_{\min}\le N\le N_{\max}}{\arg\max}\; \mu(t_d,\, N)$$
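To make the second equation concrete, the sketch below computes one plausible version of each term: CNR from user-chosen signal and background ROIs, the edge-preservation term β as the normalized correlation of Laplacian-filtered images, and SSIM via scikit-image [26]. The specific ROI choices and the weights ω1, ω2, ω3 are assumptions.

import numpy as np
from scipy.ndimage import laplace
from skimage.metrics import structural_similarity

def cnr(img, signal_roi, background_roi):
    """Contrast-to-noise ratio between a signal ROI and a background ROI (boolean masks)."""
    s, b = img[signal_roi], img[background_roi]
    return abs(s.mean() - b.mean()) / np.sqrt(s.var() + b.var())

def edge_preservation(original, denoised):
    """beta: normalized correlation of Laplacian-filtered original and denoised images."""
    lo = laplace(original.astype(float))
    ld = laplace(denoised.astype(float))
    lo, ld = lo - lo.mean(), ld - ld.mean()
    return (lo * ld).sum() / np.sqrt((lo ** 2).sum() * (ld ** 2).sum())

def figure_of_merit(original, denoised, signal_roi, background_roi, w=(1.0, 1.0, 1.0)):
    """mu = w1*CNR + w2*beta + w3*SSIM for one denoised image (weights are assumptions)."""
    ssim = structural_similarity(original, denoised,
                                 data_range=float(original.max() - original.min()))
    return (w[0] * cnr(denoised, signal_roi, background_roi)
            + w[1] * edge_preservation(original, denoised)
            + w[2] * ssim)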