Abstract
In this paper, we present an overview of three-dimensional (3D) optical imaging techniques for real-time automated sensing, visualization, and recognition of dynamic biological microorganisms. Real-time sensing and 3D reconstruction of dynamic biological microscopic objects can be performed by single-exposure on-line (SEOL) digital holographic microscopy. A coherent 3D microscope-based interferometer is constructed to record digital holograms of dynamic micro-biological events. Complex amplitude 3D images of the biological microorganisms are computationally reconstructed at different depths by digital signal processing. Bayesian segmentation algorithms are applied to identify regions of interest for further processing. A number of pattern recognition approaches are addressed to identify and recognize the microorganisms. One uses the 3D morphology of the microorganisms by analyzing 3D geometrical shapes composed of magnitude and phase. Segmentation, feature extraction, graph matching, feature selection, and training and decision rules are used to recognize the biological microorganisms. In a different approach, a 3D technique is used that is tolerant to the varying shapes of the non-rigid biological microorganisms. After segmentation, a number of sampling patches are arbitrarily extracted from the complex amplitudes of the reconstructed 3D biological microorganism. These patches are processed using a number of cost functions and statistical inference theory to test the equality of means and equality of variances between the sampling segments. Also, we discuss the possibility of employing computational integral imaging for 3D sensing, visualization, and recognition of biological microorganisms illuminated under incoherent light. Experimental results with several biological microorganisms are presented to illustrate detection, segmentation, and identification of micro-biological events.
© 2006 Optical Society of America
1. Introduction
The development of reliable, automated, and low-cost methods for real-time detection and identification of harmful bacteria and viruses is of significant benefit and is essential in combating catastrophic diseases. Such pandemics could create global disasters, and the death toll could be in the millions [1–2]. Conventional methods in practice for inspecting most bacteria or viruses involve bio-chemical processing. In general, these techniques are labor intensive, require special skills, and are not real-time. Clearly, there could be vast applications for real-time automated recognition of microorganisms in a multitude of areas, including combating biological terrorism, security and defense, diagnosis of diseases, health care, and food safety investigation.
Real-time automatic recognition of living organisms is a very difficult task for a number of reasons. Biological microorganisms are dynamic events and not rigid objects. They can move, grow, and reproduce themselves, and vary in size and shape among the same species [3]. In particular, bacteria and viruses are very small and have simple morphological traits. They may occur as a single cell or form associations of various complexities according to the environmental conditions. Conventional methods in this field have aimed to recognize cells through bio-chemical analyses. Most image-based recognition efforts for specific microorganisms have been based on two-dimensional (2D) intensity images [4–8], which may not be effective.
2D image processing and pattern recognition techniques have been extensively applied to identify objects in unknown scenes [9–18]. Recently, there has been increased interest in three-dimensional (3D) optical imaging and automatic target recognition (ATR) [19–35].
Digital holography techniques [36–41] can be used for 3D image sensing [21–27]. Previously, computer synthesized holograms were used for complex spatial filtering [42]. Holographic microscopy [40–41] is an attractive 3D imaging technique for acquisition and visualization of 3D information of the micro-biological objects. By means of digital holographic microscopy, one can obtain both magnitude and phase content of a microorganism. Single-exposure on-line (SEOL) digital holography [25–26] for 3D image recognition has benefits compared with off-axis and/or phase-shifting on-axis digital holography. In particular, the SEOL holographic setup is simpler than its off-axis counterpart and it is more robust to input object size and scale variations. Since recording a hologram in the SEOL holographic setup requires a single-exposure, it is robust to sensor noise and environmental variation, thus it can be used for monitoring and studying dynamic events of microorganisms.
In this paper, we present an overview of several techniques for real-time automated 3D sensing, detection, visualization, segmentation, and recognition of microorganisms [28–33, 43]. In particular, SEOL digital holography is employed for sensing and visualization of micro-biological objects. The optical setup of SEOL digital holography is based on the Mach-Zehnder interferometer to record the Fresnel diffraction field of microorganisms. The 3D complex amplitude of the microorganisms is computationally reconstructed at arbitrary depths along the optical axis without mechanical scanning.
Segmentation of microscopic objects can be accomplished using a number of approaches [43–46]. One technique is the bivariate jointly distributed region snakes method for segmentation of complex amplitude biological microorganism images [43]. Living organisms are non-rigid objects and vary in shape and size. Moreover, they often do not exhibit clear edges in computationally reconstructed SEOL holographic images. Thus, conventional segmentation techniques based on the edge map may fail to segment these images appropriately. We present a statistical framework based on the joint probability distribution of the magnitude and phase information of SEOL holographic microscopy images and maximum likelihood estimation of the parameters of the joint probability density function. An optimization criterion is computed by maximizing the likelihood function of the target support hypothesis [47–49]. The performance of the proposed method for the segmentation of reconstructed SEOL holographic microorganism images is presented along with experimental results.
In one 3D recognition approach [See Fig. 1(a)], after the segmentation of the microorganisms, the recognition of microorganisms can be performed by analyzing the 3D complex morphology of the computationally reconstructed holographic images. Gabor-based wavelets [50–52] extract features of the microorganisms by decomposing the reconstructed images in the spatial frequency domain. A feature matching technique follows which measures the similarity of 3D morphologies between a reference microorganism and unknown biological samples. The graph matching with Gabor-based wavelets has been used as a robust template matching which is tolerant to shift, rotation, and distortion [53–56]. We may utilize the graph matching technique with Gabor features for automatic selection of feature vectors to be used in training and testing stages. In this case, trained features of the specific microorganisms will be stored in a database [29,30].
As we discussed, automatic recognition of microorganisms is a difficult task because of their dynamic nature (moving, growing, and varying in size and shape). Therefore, an alternative recognition approach is developed that utilizes statistical inference theory for a shape-tolerant 3D recognition system as shown in Fig. 1(b). A number of sampling segments are randomly extracted from the reconstructed 3D image of microorganisms. By selecting arbitrary sampling segments and testing them through statistical inference, we can develop a recognition system which is independent of the shape of microorganisms. These sampling segments are processed using various cost functions including mean-squared distance (MSD), mean-absolute distance (MAD), and statistical inference using the sampling theory [47]. The equality of means and equality of variances between the sampling segments of a reference microorganism and unknown input biological samples are tested for recognition. Student’s t distribution and Fisher’s F distribution are, respectively, used to analyze the difference of means and the ratio of variances of reconstructed microorganism images [47,57]. After calculating statistical parameters of the microorganisms, the data can be processed by training rules and then stored in the database.
As we will show in the experiments, spatially shift-invariant recognition of biological microorganisms can be obtained through the reconstructed volumetric image of an unknown input biological scene.
In addition, 3D sensing, imaging, and recognition of biological microorganisms may be achieved by means of computational integral imaging (II). An II sensing system can operate with incoherent light to generate multi-view perspectives of a 3D scene by using a micro-lens array [19,58–70]. The volumetric information of the biological microorganism is reconstructed numerically by a ray projection method.
The research described in this paper has a number of benefits: 1) the biological microorganisms are analyzed in 3D coordinates and complex magnitude topology; 2) the single-exposure on-line holographic sensor allows optimization of the space bandwidth product of detection as well as robustness to environmental variations during the sensing process; 3) multiple exposures are not required, thus, dynamic biological events can be detected in real-time; 4) a statistical segmentation technique based on complex amplitude reconstructed holographic images is developed; 5) a graph matching technique with Gabor features measures the similarity of 3D morphologies between a reference and unknown input microorganisms; and 6) shape-tolerant 3D microorganism recognition leads to promising recognition performance independent of the geometrical shape of microorganisms.
In Section 2, we present a brief overview of SEOL digital holography and its advantages for sensing micro-organic biological events. The segmentation of the complex-valued biological microorganism images using the regional segmentation method is presented in Section 3. Microorganism recognition using 3D complex morphology of the reconstructed images is presented in Section 4. Shape-tolerant recognition technique using statistical inference is presented in Section 5. Spatially shift-invariant recognition of microorganisms is discussed in Section 6. In Section 7, experimental results are demonstrated. The possibility of computational integral imaging for 3D sensing, visualization, and recognition of biological microorganisms is discussed in Section 8. Summary and conclusions follow in Section 9.
2. Overview of SEOL holographic microscopy
The block diagram for real-time automated 3D sensing, detection, and recognition of dynamic biological micro-organic events is shown in Fig. 1. The first stage is SEOL holographic sensing and 3D reconstruction. The interference intensity pattern of a microorganism in the Fresnel diffraction field is recorded by the charge-coupled device (CCD) array as shown in Fig. 2. A beam splitter divides the laser beam into object and reference waves. The laser beam illuminates the specimen, which is magnified by the microscope objective. The SEOL digital hologram of a microorganism is generated by the reference wave and the diffracted wave-fronts of the specimen. Our system requires only a single exposure; therefore, SEOL digital holography is suitable for recognizing a moving 3D object and is tolerant to external noise factors. The complex field distribution of a microorganism at the hologram plane can be represented as follows:
$${O}_{H}\left(x,y\right)={\int}_{{d}_{0}-\delta /2}^{{d}_{0}+\delta /2}\frac{\mathrm{exp}\left(j2\pi z/\lambda \right)}{j\lambda z}\mathrm{exp}\left[j\frac{\pi}{\lambda z}\left({x}^{2}+{y}^{2}\right)\right]\left\{\int \int \mathbf{O}(\epsilon ,\eta ;z)\mathrm{exp}\left[j\frac{\pi}{\lambda z}\left({\epsilon}^{2}+{\eta}^{2}\right)\right]\mathrm{exp}\left[-j\frac{2\pi}{\lambda z}\left(x\epsilon +y\eta \right)\right]d\epsilon \phantom{\rule{.1em}{0ex}}d\eta \right\}dz,$$
where d _{0} is the distance between the center of a microorganism and the hologram plane; δ is the microorganism’s depth along the z-axis; and O(ε,η;z) is the field distribution of a microorganism at the object plane. The SEOL digital hologram of a microorganism at the hologram plane can be expressed as follows:

$$H\left(x,y\right)={\left|{O}_{H}\left(x,y\right)+R\left(x,y\right)\right|}^{2}={\left|{O}_{H}\right|}^{2}+{\left|R\right|}^{2}+{O}_{H}{R}^{\ast}+{O}_{H}^{\ast}R,$$

where R(x,y) is the reference wave; the reference beam’s intensity |R|^{2} is obtained by only a one-time measurement in the experiment; and the object beam’s intensity |O _{H}|^{2} can be approximated by means of a local averaging technique [28–33].
The reconstruction of the original microorganism is performed digitally on a computer. The field distribution of the microorganism can be numerically reconstructed from the SEOL digital hologram by the inverse Fresnel transformation:

where IFrT{·} denotes the inverse Fresnel transformation. The reconstructed image from the SEOL digital hologram inevitably contains a conjugate image. This undesired component degrades the quality of the reconstructed 3D image, but the intrinsically defocused conjugate image also contains information about the 3D microorganism. As an additional merit, SEOL digital holography allows us to capture a dynamic time-varying scene that is digitally reconstructed on the computer for monitoring and recognizing moving and growing microorganisms.
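To make the reconstruction step concrete, the sketch below implements a single-FFT Fresnel back-propagation of a recorded hologram to a chosen depth. It is a minimal illustration under assumed parameters; the function name, wavelength, pixel pitch, and the random stand-in hologram are not from the experiments reported here:

```python
import numpy as np

def fresnel_reconstruct(hologram, wavelength, pitch, z):
    """Propagate a hologram to depth z with the single-FFT Fresnel method."""
    n = hologram.shape[0]                    # assume a square N x N hologram
    x = (np.arange(n) - n / 2) * pitch
    X, Y = np.meshgrid(x, x)
    # quadratic phase (chirp) applied in the hologram plane before the FFT
    chirp = np.exp(1j * np.pi / (wavelength * z) * (X**2 + Y**2))
    return np.fft.fftshift(np.fft.fft2(np.fft.fftshift(hologram * chirp)))

# usage: reconstruct a stack of depth planes ("pages") from one hologram
holo = np.random.rand(256, 256)              # stand-in for a recorded hologram
volume = [fresnel_reconstruct(holo, 632.8e-9, 10e-6, z)
          for z in (0.05, 0.10, 0.15)]
```

Because the propagation distance z enters only through the chirp factor, the same hologram can be refocused to arbitrary depths without mechanical scanning, as described above.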
3. Microorganism segmentation using bivariate region snakes
A critical step for microorganism identification is the segmentation of reconstructed images, which can facilitate proper detection and recognition. In this section, we address the segmentation of SEOL holographic images of microorganisms using bivariate jointly distributed region snakes [43], which is based on statistically independent region snakes [44, 45]. This technique is built on a statistical framework capable of handling images with complex-valued pixels and the joint probability distribution of the magnitude and phase information of the scene. Within this framework, the optimization criterion is computed by maximizing the likelihood function of the target support hypothesis H _{w}, while no knowledge of the statistical properties of the target/background is assumed a priori. Instead, a maximum likelihood estimator estimates the necessary statistical parameters. Moreover, target and background pixels are assumed to have independent bivariate Gaussian distributions for their magnitude and phase contents, respectively.
This method uses the concept of snake active contours [43–46] for separating the target from the background scene by a target support hypothesis. A snake is essentially a closed contour that can be approximated by a multi-node polygon, which evolves during the segmentation process to minimize a certain criterion known as the snake energy [46]. This contour divides the image into inner and outer regions which are denoted by Ω_{t} (target) and Ω_{b} (background), respectively. A stochastic algorithm is utilized to carry out the optimization and guide the deformations of the snake to eventually force the snake contour to converge to the original microorganism boundary [43–45].
There are several advantages to using the bivariate jointly distributed region snake algorithm [43–45]. The bivariate joint distribution of magnitude and phase provides a more accurate image model for the reconstructed images of SEOL digital holography since it captures the correlation between each pixel’s magnitude and phase content. This is in contrast with independent distribution analysis, which treats the magnitude and phase information as independent random variables and consequently ignores the correlation between these two correlated random variables. In addition, in the region snakes regime, the evolution of the snake contour does not depend on local pixels near the contour edge as in classic snake active contours [46]; rather, the evolution process is based on the statistical distribution of the complex amplitude inside and outside the snake contour. The latter fact facilitates segmentation of objects even when they are out of focus or have jagged boundaries.
3.1 Methodology
Computational reconstruction of the SEOL hologram obtained from the interference pattern formed on the CCD involves the inverse Fresnel transform. As a result, the reconstructed holographic images have complex-valued pixels; thus, each pixel s_{i} = α_{i} exp(jφ_{i} ) is a complex number with α_{i} and φ_{i} as its magnitude and phase, respectively. The target and background pixels are assumed to follow two independent bivariate normal distributions. Each distribution has a probability density function which consists of two dependent normal random variables α and φ for magnitude and phase, respectively. The original bivariate normal probability density function is not directly separable. However, by conditioning one of the variables (α) on the second variable (φ), one can obtain the separated form of the bivariate normal probability density function as follows [47]:
where Φ(x)= (2π)^{-1/2} exp(-x ^{2}/2) denotes the standard normal density. The subscript u∊{t, b} is used to discriminate the target and background, respectively. Also, let the parameter vector Θ_{u} ={${\mu}_{\alpha}^{u}$ ,${\mu}_{\phi}^{u}$ , ${\sigma}_{\alpha}^{u}$ ,${\sigma}_{\phi}^{u}$ ,ρ_{u} } be the distribution parameters of either the target or the background. Since the separation of the two random variables in Eq. (4) is made possible by conditioning α on φ, the corresponding conditional mean and variance can be used for α as follows [47]:

$${\mu}_{\alpha \mid \phi}^{u}={\mu}_{\alpha}^{u}+{\rho}_{u}\frac{{\sigma}_{\alpha}^{u}}{{\sigma}_{\phi}^{u}}\left(\phi -{\mu}_{\phi}^{u}\right),\phantom{\rule{2em}{0ex}}{\left({\sigma}_{\alpha \mid \phi}^{u}\right)}^{2}={\left({\sigma}_{\alpha}^{u}\right)}^{2}\left(1-{\rho}_{u}^{2}\right).$$
Let w = {w_{i} |i∊[1,N]} be a binary window model that determines the support of the target such that w_{i} = 1 for target pixels and w_{i} = 0 elsewhere, where N is the total number of image pixels. Now the image can be represented as the union of disjoint target complex pixels (a) inside the binary window w and background complex pixels (b) outside the window [48,49]. Thus, we adopt the one-dimensional representation of the image as: s_{i} =a _{i} w_{i} + b _{i}[1-w_{i} ].
With these notations, the problem of segmentation reduces to finding an optimal choice for w that maximizes the hypothesis probability P[H _{w}|s] (i.e., the most likely window w of the target), where H _{w} represents the hypothesis that w is the target support. Using the Bayes rule and considering an equally likely hypothesis scenario, the maximization of the a posteriori hypothesis probability is analogous to maximizing the conditional probability, which is expressed as the likelihood function for H _{w} as follows:
where vector Θ = {Θ_{t},Θ_{b}} contains all the parameters needed to characterize the bivariate normal distributions of the target and background pixels. Since no prior knowledge of the target and background is assumed, these parameters must be estimated. Thus, a maximum likelihood estimator is utilized as follows:
$${\hat{\mu}}_{\alpha}^{u}=\frac{1}{{N}_{u}\left(\mathbf{w}\right)}\sum _{i\in {\Omega}_{u}}{\alpha}_{i},\phantom{\rule{.5em}{0ex}}{\hat{\mu}}_{\phi}^{u}=\frac{1}{{N}_{u}\left(\mathbf{w}\right)}\sum _{i\in {\Omega}_{u}}{\phi}_{i},\phantom{\rule{.5em}{0ex}}{\hat{\sigma}}_{\alpha}^{u}={\left\{\frac{1}{{N}_{u}\left(\mathbf{w}\right)}\sum _{i\in {\Omega}_{u}}{\left({\alpha}_{i}-{\hat{\mu}}_{\alpha}^{u}\right)}^{2}\right\}}^{\frac{1}{2}},\phantom{\rule{.5em}{0ex}}{\hat{\sigma}}_{\phi}^{u}={\left\{\frac{1}{{N}_{u}\left(\mathbf{w}\right)}\sum _{i\in {\Omega}_{u}}{\left({\phi}_{i}-{\hat{\mu}}_{\phi}^{u}\right)}^{2}\right\}}^{\frac{1}{2}},$$
$${\hat{\rho}}_{u}=\frac{1}{{N}_{u}\left(\mathbf{w}\right){\hat{\sigma}}_{\alpha}^{u}{\hat{\sigma}}_{\phi}^{u}}\sum _{i\in {\Omega}_{u}}\left({\alpha}_{i}-{\hat{\mu}}_{\alpha}^{u}\right)\left({\phi}_{i}-{\hat{\mu}}_{\phi}^{u}\right),\phantom{\rule{3em}{0ex}}$$
where N_{u} (w) denotes the number of pixels in the target or background window according to the script u. By substituting the bivariate joint probability distribution function in Eq. (4) into Eq. (6) and using Eqs. (5) and (7), one can see that maximization of Eq. (6) is analogous to minimization of the following criterion [43]:
Minimization of Eq. (8) leads to maximization of the likelihood function in Eq. (6); thus, this optimization forces the snake polygon (representing H _{w}) to evolve in such a way as to find the statistically optimal H _{w} for the target support.
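The maximum likelihood estimation step above can be sketched as follows, assuming a complex-valued reconstructed image s and a binary window w; the function name and data layout are illustrative, not the authors' implementation:

```python
import numpy as np

def estimate_params(s, w):
    """Maximum likelihood estimates of the bivariate (magnitude, phase)
    Gaussian parameters for the target (w == 1) and background (w == 0)
    regions of a complex-valued reconstructed image s."""
    alpha, phi = np.abs(s), np.angle(s)
    params = {}
    for label, mask in (("t", w == 1), ("b", w == 0)):
        a, p = alpha[mask], phi[mask]
        mu_a, mu_p = a.mean(), p.mean()
        sig_a, sig_p = a.std(), p.std()      # ML (1/N) standard deviations
        rho = np.mean((a - mu_a) * (p - mu_p)) / (sig_a * sig_p)
        params[label] = dict(mu_a=mu_a, mu_p=mu_p,
                             sig_a=sig_a, sig_p=sig_p, rho=rho)
    return params
```

Because the estimates use the 1/N normalization of the maximum likelihood estimator rather than the unbiased 1/(N-1) form, they match the estimators of Eq. (7) directly.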
3.2 Stochastic optimization algorithm
In order to carry out the optimization, a simple stochastic algorithm is employed. The basic idea is to model the snake by a polygon with a fixed number of nodes l and to iteratively deform the polygon nodes in such a way that the optimization criterion in Eq. (8) decreases at every iteration. This procedure is illustrated in the following diagram:
Several techniques such as multi-resolution snakes, adaptive node selection, and direction inertia are presented in [43] to increase the robustness and convergence speed of the above algorithm. The algorithm is terminated when no further reduction of J(s,w) is achieved over many consecutive iterations.
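A minimal sketch of this stochastic optimization is given below. The polygon rasterization and the stand-in criterion `j_mag` (a simple log-variance measure used in place of the criterion of Eq. (8)) are assumptions for illustration only:

```python
import numpy as np

def polygon_mask(nodes, shape):
    """Binary window w: True inside the snake polygon (ray casting)."""
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    inside = np.zeros(shape, dtype=bool)
    n = len(nodes)
    for i in range(n):
        (y1, x1), (y2, x2) = nodes[i], nodes[(i + 1) % n]
        cond = (yy > min(y1, y2)) & (yy <= max(y1, y2))
        with np.errstate(divide="ignore", invalid="ignore"):
            x_cross = x1 + (yy - y1) * (x2 - x1) / (y2 - y1)
            inside ^= cond & (xx < x_cross)
    return inside

def j_mag(s, w):
    """Stand-in criterion: sum of region sizes times log-variances of |s|."""
    a = np.abs(s)
    return sum(a[m].size * np.log(a[m].var() + 1e-12)
               for m in (w, ~w) if m.any())

def snake_optimize(s, nodes, criterion, n_iter=1500, step=3, seed=0):
    """Perturb one polygon node per iteration; keep moves that lower J."""
    rng = np.random.default_rng(seed)
    best = criterion(s, polygon_mask(nodes, s.shape))
    for _ in range(n_iter):
        trial = nodes.copy()
        k = rng.integers(len(nodes))
        trial[k] = np.clip(trial[k] + rng.integers(-step, step + 1, size=2),
                           0, np.array(s.shape) - 1)
        j = criterion(s, polygon_mask(trial, s.shape))
        if j < best:                         # accept only improving moves
            nodes, best = trial, j
    return nodes, best
```

Accepting only deformations that decrease the criterion guarantees monotone progress, which is why the optimization trace flattens once the contour reaches the object boundary.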
4. 3D complex morphology-based recognition of microorganisms
In this section, we review 3D complex morphology-based recognition of microorganisms [28–32]. 3D complex morphology pattern is defined as the complex amplitude of computationally reconstructed holographic images at arbitrary depths. In the following subsections, we present detailed processes of the recognition technique.
4.1 Feature extraction by means of Gabor-based wavelets
It is more efficient to remove the unnecessary background before processing the microorganisms for recognition. Threshold-based segmentation is performed using histogram analysis [28–32]; however, more advanced methods such as the bivariate region snake of Section 3 can be applied. After segmentation, the images are decomposed and feature vectors are extracted by Gabor-based wavelets. The Gabor-based wavelets have the form of a Gaussian envelope modulated by a complex sinusoidal function [50–52]. The impulse response (or kernel) of the Gabor-based wavelet in the 2D discrete domain is defined as:
where x is a position vector; k _{uν} is a wave number vector; and σ is proportional to the standard deviation of the Gaussian envelope. k _{uv} is defined as: k _{uν} = k _{0u}[cosϕ _{ν} sinϕ _{ν}]^{t}, k _{0u} =k _{0}/δ ^{u-1}, ϕ _{ν}=[(ν-1)/V]π, u = 1,…,U, and ν = 1,…,V, where k _{0u} is the magnitude of the wave number vector; ϕ _{ν} is the azimuth angle of the wave number vector; k _{0} is the maximum carrier frequency of the Gabor kernels; δ is the spacing factor in the frequency domain; U and V are the total numbers of decompositions along the radial and tangential axes, respectively; and the superscript t denotes the matrix transpose.
By changing the magnitude and direction of the vector k _{uν}, we can scale and rotate the Gabor kernel to produce self-similar forms. The size of the Gaussian envelope is the same in the x and y directions and is proportional to σ/|k _{uν}|. The second term in the square bracket in Eq. (9), exp(-σ ^{2}/2), subtracts the DC value so that the kernel has a zero-mean response [51]. The Gabor-based wavelets perform band-pass filtering where the spatial and orientation frequency bandwidths depend on the size of the Gaussian envelope. The carrier frequency of the band-pass filter is determined by k _{uν}. The Gaussian envelope in the Gabor-based wavelet achieves the minimum space-bandwidth product [50]. It is suitable to extract local features with high-frequency (small u) kernels and global features with low-frequency (large u) kernels.
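As an illustration of how such a kernel bank might be built, the sketch below constructs Gabor kernels following the parameterization above and sums the filtered outputs over orientations to obtain the rotation-invariant features discussed in this section. The function names, default parameter values, and the use of `scipy.signal.fftconvolve` are assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(u, v, k0=np.pi / 2, delta=2.0, V=4, sigma=np.pi, size=31):
    """2D Gabor kernel: Gaussian envelope times a DC-free complex carrier."""
    k = (k0 / delta ** (u - 1)) * np.array([np.cos((v - 1) * np.pi / V),
                                            np.sin((v - 1) * np.pi / V)])
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    k2 = k @ k
    envelope = (k2 / sigma**2) * np.exp(-k2 * (x**2 + y**2) / (2 * sigma**2))
    carrier = np.exp(1j * (k[0] * x + k[1] * y)) - np.exp(-sigma**2 / 2)
    return envelope * carrier

def node_vector(image, U=3, V=4):
    """Rotation-invariant features: sum Gabor outputs over orientations v."""
    bands = [sum(fftconvolve(image, gabor_kernel(u, v, V=V), mode="same")
                 for v in range(1, V + 1))
             for u in range(1, U + 1)]
    return np.stack(bands)   # shape (U, Ny, Nx): one U-vector per pixel
```

Summing over the V orientations collapses the tangential axis of the frequency domain, which is what makes the resulting U-dimensional per-pixel vector insensitive to in-plane rotation.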
Let y_{uν} be the filtered output (Gabor coefficients) of the image Ô after it is 2D convolved with the Gabor kernel g_{uν} :
where Ô is the complex amplitude of the segmented image; and N_{x} and N_{y} are the size of the image in the x and y directions, respectively. The magnitude of Ô is normalized between 0 and 1. A rotation-invariant vector is defined at each pixel. The rotation-invariant property can be achieved by adding up all the Gabor coefficients along the tangential axes of the frequency domain. Thus, we can define the U-dimensional rotation-invariant node vector as:
4.2 Graph matching technique
The rigid graph matching (RGM) technique [53–56] measures the similarity of 3D complex morphology between a reference microorganism and unknown input samples. The graph is defined as a set of nodes associated in the local area. Let R and S be two identical and rigid graphs placed on the reference image O _{r} and unknown sample image O _{s}, respectively. The location of the reference graph R is pre-determined by the translation vector P _{r} and the clockwise rotation angle θ_{r} . Position vectors of K nodes in the graph R are computed as:
where ${\mathbf{x}}_{k}^{o}$ and ${\mathbf{x}}_{c}^{o}$ are the position vectors of node k and of the center of the graph without any translation and rotation, respectively; and K is the total number of nodes in the graph.
In our database, the reference graph is predetermined in order to represent unique shape features of the microorganism. Assuming the graph R covers a designated characteristic shape in the reference microorganism, we search for a similar local shape by translating and rotating the graph S on unknown input images. A similarity function between the graphs R and S is defined as the summation of the normalized inner products of the two vectors v _{R}[x _{k}(P _{r},θ_{r} )] and v _{S}[x _{k}(P _{s},θ_{s} )]:
where 〈·,·〉 stands for the inner product; and v _{R}[x _{k}(P _{r},θ_{r} )] and v _{S}[x _{k}(P _{s},θ_{s} )] are the node vectors of the graph R in the reference image and the graph S in the unknown input image, respectively. We adopt a difference cost function to improve the discrimination capability between the two graphs R and S. The difference cost is defined as the magnitude of the difference between the two vectors:
The local area which is covered by the graph S is identified with the reference shape if the following two conditions are satisfied:
where α _{Γ} and α_{c} are thresholds for the similarity function and the difference cost, respectively; and $\widehat{\theta}$_{s} is obtained by searching for the best matching angle that maximizes the similarity function at the position vector p _{s}. In this subsection, we utilize the graph matching technique for the identification of unknown input objects. However, a training process can be considered as a subsequent stage after the graph matching. In the case of microorganisms, automatic selection of training data by means of graph matching can be useful when biological samples overlap and/or cluster, which makes it difficult to select individual objects. A more detailed scheme of the automatic feature selection with the training and decision rules can be found in [29,30].
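The matching decision above can be sketched as follows, assuming the node vectors have already been extracted as real-valued arrays of shape (K, U); the function names and threshold values are illustrative assumptions:

```python
import numpy as np

def similarity(vR, vS):
    """Mean normalized inner product of corresponding node vectors.
    vR, vS: arrays of shape (K, U), one U-dim feature vector per node."""
    num = np.sum(vR * vS, axis=1)
    den = np.linalg.norm(vR, axis=1) * np.linalg.norm(vS, axis=1)
    return float(np.mean(num / den))

def difference_cost(vR, vS):
    """Mean magnitude of the difference between node vectors."""
    return float(np.mean(np.linalg.norm(vR - vS, axis=1)))

def matched(vR, vS, alpha_gamma=0.9, alpha_c=0.5):
    """Identify the local area when similarity is high AND cost is low."""
    return (similarity(vR, vS) >= alpha_gamma
            and difference_cost(vR, vS) <= alpha_c)
```

The normalized inner product alone is scale-invariant, which is why the additional difference cost is needed to reject graphs whose features point in the same direction but differ in magnitude.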
5. Shape-independent recognition approach
We apply statistical algorithms to the 3D recognition system to make it independent of the shape and profile of the microorganisms [33]. The shape-independent recognition approach may be suitable for recognizing 3D microorganisms such as bacteria and biological objects that do not have well-defined shapes or profiles. For example, they may be simple, unicellular, or branched in their morphological traits. It could also be applied to cells that vary rapidly in shape and profile. For the shape-independent approach, a number of sample segments are randomly extracted from the segmented 3D image of a microorganism. These samples are processed using statistical cost functions to classify the microorganism. The sample distributions for the difference of parameters between the sample segment features of the reference and input images are calculated using statistical estimation.
First, we reconstruct the 3D microorganism as a volume image from a SEOL digital hologram corresponding to a reference microorganism. Then, we randomly extract N pixels from the reconstructed 3D image. We repeat the above steps for S specimens of the same class of microorganism. Therefore, each sampling segment consists of N by S complex values. We denote each pixel value in the trial sample patch as ${\mathbf{X}}_{\mathit{\text{Nn}}}^{S}$ [See Fig. 4]. We refer to each reconstruction plane of the 3D volume as a “page.” Now, we change the locations of each sample in a given page and repeat the above steps n times.
Similarly, we record the SEOL digital hologram of an unknown input microorganism and then restore the original input image. Next, we randomly extract N pixels n times from the unknown reconstructed 3D image and repeat the above steps for S specimens of the same microorganism. Each sampling segment consists of N by S complex values, and we have a total of n of these segments as well. We denote each pixel value in the trial sample patch as ${\mathbf{Y}}_{\mathit{\text{Nn}}}^{S}$ [See Fig. 4]. For classification and recognition of biological microorganisms, we use statistical inference for the equality of the locations and dispersions between the reference sample data and the unknown sample data using statistical sampling and estimation theory.
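The random extraction of sampling segments can be sketched as below, treating the pages of a reconstructed volume as the stacked dimension for illustration; the function name and data layout are assumptions, not the authors' implementation:

```python
import numpy as np

def sample_segments(volume, N, n, seed=0):
    """Draw n sampling segments of N randomly located pixels from each page
    of a reconstructed 3D volume (pages x rows x cols, complex-valued)."""
    rng = np.random.default_rng(seed)
    pages, rows, cols = volume.shape
    segments = np.empty((n, pages, N), dtype=volume.dtype)
    for i in range(n):
        idx = rng.choice(rows * cols, size=N, replace=False)
        r, c = np.unravel_index(idx, (rows, cols))
        segments[i] = volume[:, r, c]
    return segments
```

Because the pixel locations are drawn at random rather than from a fixed template, the extracted segments carry no geometric signature of the organism, which is what makes the subsequent tests shape-independent.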
We assume that the random variables ${\mathbf{X}}_{N}^{S}$ and ${\mathbf{Y}}_{N}^{S}$, which are elements of the reference and unknown input sample segments, are statistically independent with identical population distributions f(X) and f(Y), respectively. Also, let ${\mathbf{X}}_{N}^{S}$ be independent of ${\mathbf{Y}}_{N}^{S}$. It is noted that the reconstructed image from a SEOL hologram consists of complex values, so we perform two separate univariate hypothesis tests on the real part and the imaginary part, respectively.
From the histogram analysis of the real and imaginary parts of the reconstructed 3D images from the SEOL digital hologram, we may consider that the random variables (real or imaginary parts of the reconstructed image) in the sampling segment nearly follow a Gaussian distribution. For checking the normality of the sample data, the Chi-square goodness-of-fit test [57] can be performed.
For comparing the variances of two sample segments between the reference and the input, if the sample data are normally distributed, the following F-test can be used [47,57]:
where N_{X} and N_{Y} are the numbers of pixels in the reference and input sampling segments, respectively; V[·] denotes the variance; and V̂[·] is the unbiased sample variance. If the sample data are not normally distributed, we use the following Levene’s test [57], performing an analysis of variance on the absolute deviations of the data from their respective sample means:
where Z _{ij} = |Y _{ij} - Y̅ _{∙j}|; Y̅ _{∙j} is the sample mean of the reference (j = 1) or unknown input (j = 2); Z̅ _{∙j} is the sample mean of the Z _{ij} within group j; and Z̅ is the overall mean of the Z _{ij}.
For comparing the means of two sample segments between the reference and input images, if the sample data are normally distributed, the following t-test can be used [47,57]:
where V̄_{p} is the pooled estimator of the population variance; and E[·] denotes the expectation operator. If the sample data are not normally distributed, we use the following Mann-Whitney test [57], which does not require assumptions about the shape of the underlying distributions and is based on an analysis of the medians of the samples:
where the statistic U corresponds to the reference image; and R _{X} is the rank sum of the sample data of the reference image. If the sample size is greater than 8, the statistic U is approximately normally distributed, so Eq. (19) can be replaced by Z = (U-μ _{U})/σ _{U}, where μ _{U} and σ _{U} are the mean and standard deviation of the statistic U, respectively.
We also perform the Kolmogorov-Smirnov test (K-S test) [57] as a distribution-free test for the comparison of two populations. The statistic is given by:
where F _{∙}(u) denotes the empirical cumulative distribution function (CDF) of each of the two samples of data.
If the p-value calculated from the statistical tests in Eqs. (16)–(20) is less than the desired value at a level of significance α, we can reject the null hypothesis H _{0}. It is noted that H _{0} indicates that there is no statistically significant difference between the dispersions (variances), locations (means), or distribution functions at a given confidence level.
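The battery of tests in this section can be sketched with `scipy.stats` as follows; in practice it would be applied twice per segment pair, once to the real parts and once to the imaginary parts. The two-sided F-test p-value is computed manually since SciPy provides no direct two-sample F-test; the function name and threshold are illustrative assumptions:

```python
import numpy as np
from scipy import stats

def compare_segments(x, y, alpha=0.05):
    """Compare two real-valued sample segments with the tests of this
    section and report accept/reject of H0 (no difference) for each."""
    # two-sided F statistic for equality of variances
    F = np.var(x, ddof=1) / np.var(y, ddof=1)
    df1, df2 = len(x) - 1, len(y) - 1
    p_f = 2 * min(stats.f.cdf(F, df1, df2), stats.f.sf(F, df1, df2))
    pvals = {
        "F (variances)": p_f,
        "Levene (variances)": stats.levene(x, y).pvalue,
        "t (means)": stats.ttest_ind(x, y).pvalue,
        "Mann-Whitney (medians)": stats.mannwhitneyu(
            x, y, alternative="two-sided").pvalue,
        "K-S (distributions)": stats.ks_2samp(x, y).pvalue,
    }
    return {name: ("reject H0" if p < alpha else "accept H0")
            for name, p in pvals.items()}
```

A segment pair drawn from the same class of microorganism should accept H0 across the battery, while a mismatch in location, dispersion, or overall distribution triggers a rejection in the corresponding test.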
6. Shift-invariant recognition approach
From SEOL digital holographic microscopy, we can reconstruct cross-sectional images of biological microorganisms along the longitudinal direction. This enables us to obtain focused images of microorganisms located at different reconstruction distances, as shown in Fig. 5. By applying correlation techniques to the volumetric intensity image of an unknown input microorganism and a reference intensity image, we can recognize the unknown input and locate its focused image [24]. This makes the recognition system shift-invariant.
The cross-correlation function Corr(x, y, p) between the reference image and the unknown input section images is given by:
where FT denotes the Fourier transform; p is the page (section) number; and U_{X}(x, y, p) and U_{Y}(x, y) are the amplitude field distributions of the unknown input and the reference, respectively.
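The FFT implementation of this cross-correlation can be sketched as follows; the volumetric input here is random stand-in data, with one section reused as the reference so that the strongest peak identifies the matching page:

```python
import numpy as np

def cross_correlation(u_x, u_y):
    """FFT-based cross-correlation of one reconstructed section u_x with
    the reference field u_y (both 2-D arrays)."""
    return np.fft.ifft2(np.fft.fft2(u_x) * np.conj(np.fft.fft2(u_y)))

# Stand-in volumetric input: P = 3 reconstructed sections stacked on axis 0.
rng = np.random.default_rng(2)
volume = rng.standard_normal((3, 64, 64))
reference = volume[1]  # reuse one section as the "focused" reference
peaks = [np.abs(cross_correlation(sec, reference)).max() for sec in volume]
best_page = int(np.argmax(peaks))  # page index with the strongest peak
```

Because the correlation is computed over the full section, the peak location is unchanged when the object shifts laterally, which is the shift-invariance property exploited here.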
7. Experimental results
7.1 Segmentation results
In this section, some experimental results of the bivariate region snake segmentation described in Section 3 are presented. Computationally reconstructed images of several microorganisms from SEOL holographic microscopy are used. As discussed earlier, the bivariate jointly distributed region snake incorporates the magnitude and phase information simultaneously since the holographic images are complex; however, only the magnitude images are shown in the figures hereafter. The snake contour is modeled as a polygon with l vertices, and the binary window function w is set to 1 inside and 0 outside the polygon. The images in the first column of Fig. 6(a) show two different diatom algae over which the snakes are initialized with 4 nodes. Although the initial contour is completely different from the target boundaries, the bivariate region snake is able to capture the microorganism body after approximately 1500 iterations [see Fig. 6(b)]. As can be seen in Fig. 6(c), the optimization traces attain a reasonable slope and show very slight progress after the 1500th iteration, which can serve as a criterion to stop the iterations.
In the next example, the segmentation of a sphacelaria alga is illustrated. This alga has a branch-like structure. The initialization captures a small portion of the living organism, and throughout the iterations the snake creeps outward to capture its whole body. Since the structure of the alga requires many snake nodes, and the optimization algorithm’s speed is inversely proportional to the number of snake nodes, more iterations are needed to complete the segmentation. Figure 7(a) is intentionally reconstructed out-of-focus from a SEOL hologram, so it appears blurred and without well-defined edges to make the task more challenging; nevertheless, the bivariate region snake shows promising results in Figs. 7(b) and 7(c).
The next experiment in Fig. 8 shows the segmentation of a diatom, where the boundaries of the microorganism are traced by the snake. The introduction of slight structural mutation on the snake results in small peaks in the optimization profile as shown in Fig. 8(d). These structural mutations are imposed by eliminating unnecessary nodes which lie close to the line segment connecting their previous and next nodes. The optimization plot in Fig. 8(d) shows how mutations can help the snake find its way through narrow passages.
7.2 Experimental results for 3D morphology-based recognition
To test the recognition performance, we generate 9 holograms each for the sphacelaria alga and tribonema aequale alga samples. We denote the 9 sphacelaria alga samples as A_{1},…,A_{9} and the 9 tribonema aequale alga samples as B_{1},…,B_{9}. To test the robustness of the proposed algorithm, the position of the CCD is changed during the experiments, resulting in different depths for the focused reconstruction image. Samples A_{1}–A_{3} are reconstructed at 180 mm, A_{4}–A_{7} at 200 mm, and A_{8} and A_{9} at 300 mm; all tribonema aequale samples (B_{1}~B_{9}) are reconstructed at 180 mm for the focused images.
Magnitude and phase parts of the computationally reconstructed complex images are cropped and reduced to 256 × 256 pixels with a reduction ratio of 0.25. During the segmentation, we assume that less than 20% of the lower-magnitude region of the complex image is occupied by the microorganisms and that the magnitude of the microorganisms is less than 45% of the background diffraction field. The parameters for the Gabor-based wavelets are set as σ = π, k_{0} = π/4, δ = √2, U = 3, and V = 6. Figure 9 shows the node vector components for u = 1, 2, and 3. Only the real parts of y_{uν} in Eq. (10) are used for the feature extraction.
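A sketch of one common Gabor-based wavelet family (the Lades et al. form of Ref. [53]) using the parameter values quoted above; the exact normalization used in the paper is not reproduced here, so treat this as illustrative:

```python
import numpy as np

def gabor_kernel(u, v, size=31, sigma=np.pi, k0=np.pi / 4,
                 delta=np.sqrt(2), V=6):
    """One Gabor-based wavelet at scale index u and orientation index v,
    following the Lades et al. form; the normalization is an assumption."""
    k = (k0 / delta**u) * np.array([np.cos(np.pi * v / V),
                                    np.sin(np.pi * v / V)])
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    k2 = k @ k
    envelope = (k2 / sigma**2) * np.exp(-k2 * (x**2 + y**2) / (2 * sigma**2))
    carrier = np.exp(1j * (k[0] * x + k[1] * y)) - np.exp(-sigma**2 / 2)
    return envelope * carrier  # complex-valued; real part used for features

# U = 3 scales (u = 1, 2, 3) and V = 6 orientations, as in the experiments
kernels = [gabor_kernel(u, v) for u in range(1, 4) for v in range(6)]
```

Filtering the reconstructed image with each of the 18 kernels and sampling the responses at the graph nodes yields the node feature vectors used below.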
To recognize two filamentous objects with different thicknesses and distributions, we select two different reference graphs and place them on samples A_{1} and B_{1}. A rectangular grid is selected as the reference graph for the sphacelaria alga, which shows regular thickness in the reconstructed images. The reference graph is composed of 25×3 nodes, with a distance of 4 pixels between nodes in the x and y directions; the total number of nodes in the graph is therefore 75. The reference graph R is placed with p_{r} = [81 75]^{t} and θ_{r} = 135° in sample A_{1}, as shown in Fig. 10(a). Only the threshold α_{Γ}, set at 0.6, is used. The threshold is selected heuristically to produce better results.
Considering the computational load, the graph S is translated by 3 pixels at a time in the x and y directions for measuring the similarity to, and difference from, the graph R. To search for the best matching angles, the graph S is rotated by 7.5° from 0° to 180° at every translated location. When the positions of rotated nodes are not integers, they are replaced with the nearest-neighbor nodes. Figure 10(b) shows another sphacelaria alga sample (A_{9}) as the input image with the graph matching results. The reference shapes are detected 65 times along the filamentous objects. Figure 10(c) shows the number of detections for the 9 true-class and 9 false-class samples. The detection number for A_{1}~A_{9} varies from 27 to 220, showing strong similarity between the reference sample (A_{1}) and the input samples (A_{2}~A_{9}) of the true-class microorganism. No detection is found in samples B_{1}~B_{9}, the false-class microorganisms. Figure 10(d) shows the maximum similarity and the minimum difference cost for all samples.
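The translation/rotation search described above can be sketched as a brute-force loop; `score_fn` stands in for the feature-vector similarity measure of the graph matching stage and is an assumption of this sketch:

```python
import numpy as np

def rotate_nodes(nodes, theta_deg, center):
    """Rotate node coordinates about `center`, rounding the rotated
    positions to the nearest-neighbor pixel as described in the text."""
    t = np.deg2rad(theta_deg)
    R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    return np.rint((nodes - center) @ R.T + center).astype(int)

def search_matches(score_fn, nodes, img_shape, step=3, dtheta=7.5,
                   thresh=0.6):
    """Translate the graph every `step` pixels and rotate it every
    `dtheta` degrees in [0, 180), recording locations whose similarity
    score exceeds the threshold."""
    h, w = img_shape
    hits = []
    for ty in range(0, h, step):
        for tx in range(0, w, step):
            shifted = nodes + np.array([tx, ty])
            center = shifted.mean(axis=0)
            for theta in np.arange(0.0, 180.0, dtheta):
                cand = rotate_nodes(shifted, theta, center)
                if score_fn(cand) > thresh:
                    hits.append((tx, ty, theta))
    return hits
```

The number of candidate poses grows with the image area divided by the step size, times the 24 rotation angles, which is the computational trade-off discussed in Section 7.4.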
To recognize the tribonema aequale alga, a wider rectangular grid is selected to identify its thin filamentous structure. The reference graph is composed of 20×3 nodes, with a distance of 4 pixels between nodes in the x direction and 8 pixels in the y direction; the total number of nodes in the graph is therefore 60. The reference graph R is placed with p_{r} = [142 171]^{t} and θ_{r} = 90° in sample B_{1}, as shown in Fig. 11(a). The thresholds α_{Γ} and α_{c} are set at 0.8 and 0.7, respectively. Figure 11(b) shows another sample (B_{2}) of the true-class input image with the graph matching results. The reference shapes are detected 30 times along the thin filamentous object. Figure 11(c) shows the number of detections for the 9 true-class and 9 false-class microorganisms. The detection number for the true-class samples B_{1}~B_{9} varies from 6 to 49. Four false detections are found in one of the false-class samples, A_{8}. Figure 11(d) shows the maximum similarity and the minimum difference cost for all samples.
7.3 Experimental results for shape-tolerant and shift-invariant 3D microorganism recognition
In this subsection, we conduct statistical estimation and inference to test the performance of our shape-tolerant 3D microorganism recognition system using SEOL digital holography. First, 100 trial sampling segments are produced by randomly selecting pixel values in the segmented 3D image of oscillatoria bacteria, which serves as the reference microorganism; the size of each trial sampling segment is varied among 30, 100, and 200 pixels. We apply the Sobel edge-detection method to the segmented 3D images.
Similarly, a number of sampling segments are randomly selected in the oscillatoria bacteria 3D image as the true-class inputs and in the diatom alga image as the false-class inputs. We produce 100 true-class and 100 false-class input sampling segments. The reference and input images are reconstructed at a distance d = 270 mm, as shown in Fig. 12.
Table 1(a) shows the experimental results of the F-test for comparing the variances of the reference and unknown input. As shown in Table 1(a), the average p-values for the true-class input are around 0.4534 and 0.5166 in the real and imaginary parts at sample size 100, respectively, while for the false-class input they are around 0.0080 and 0.0226. Table 1(b) shows the results of Levene's test for the difference of scale parameters between the reference and input. The average p-values for the true-class input are around 0.7068 and 0.6856 in the real and imaginary parts at sample size 100, respectively, and for the false-class input around 0.0688 and 0.0156.
(b) Levene’s Test (any continuous distribution); each cell gives test statistic / p-value

| Sample size | Real part, True Class | Real part, False Class | Imaginary part, True Class | Imaginary part, False Class |
|---|---|---|---|---|
| 30 | 1.7120 / 0.3686 | 1.5140 / 0.4060 | 0.5240 / 0.5440 | 2.0960 / 0.3700 |
| 100 | 0.2160 / 0.7068 | 5.9300 / 0.0688 | 0.3000 / 0.6856 | 8.2300 / 0.0156 |
(b) Mann-Whitney Test (any continuous distribution); p-values

| Sample size | Real part, True Class | Real part, False Class | Imaginary part, True Class | Imaginary part, False Class |
|---|---|---|---|---|
| 30 | 0.5217 | 0.4534 | 0.5100 | 0.5091 |
| 100 | 0.5143 | 0.3839 | 0.4567 | 0.5075 |
| 200 | 0.5336 | 0.3092 | 0.5626 | 0.5067 |
The experimental results of the t-test for comparing the means of the reference and input are shown in Table 2(a). The average p-values for the true-class input are around 0.5530 and 0.5677 in the real and imaginary parts at sample size 200, respectively, and for the false-class input around 0.2141 and 0.5009. Table 2(b) shows the results of the nonparametric test for the difference of the medians between the reference and input. The average p-values for the true-class input are around 0.5336 and 0.5626 in the real and imaginary parts at sample size 200, respectively, and for the false-class input around 0.3092 and 0.5067. Table 3 shows the experimental results of the distribution-free test for comparing the two populations. The average maximum differences between the cumulative distributions for the true-class input are around 0.0710 and 0.0770 in the real and imaginary parts at sample size 200, respectively, and for the false-class input around 0.1320 and 0.1240.
We calculate the correlation coefficient between the diatom alga image reconstructed at d = 400 mm, used as the reference, and the unknown input image to test the shift-invariance of our recognition system, where the longitudinal position of the microorganism is moved using an xy-translation stage. As shown in Fig. 13, we obtain the correlation peak at a reconstruction distance of around 270 mm for the true-class input, whereas for the false-class input the correlation value is less than about 0.1.
7.4 Discussion of real-time processing
For real-time applications, computational complexity should be considered. Since SEOL holography requires only a single exposure, real-time sensing is possible. For computational reconstruction of holographic images, the computational time is of the same order as the fast Fourier transform (FFT), which is O(N log_{2} N), where N is the total number of pixels in the holographic image. Therefore, with high-speed electronics, real-time detection is possible.
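To make the O(N log_{2} N) claim concrete, a generic single-FFT Fresnel reconstruction is sketched below. This is a textbook formulation rather than the paper’s exact implementation, and the wavelength and pixel-pitch values are placeholders:

```python
import numpy as np

def fresnel_reconstruct(hologram, wavelength, dx, d):
    """Single-FFT Fresnel reconstruction of a digital hologram at
    distance d: multiply by a quadratic-phase chirp, then take one
    2-D FFT, so the cost is dominated by the N log2 N transform."""
    ny, nx = hologram.shape
    k = 2 * np.pi / wavelength
    y, x = np.mgrid[-ny // 2:ny // 2, -nx // 2:nx // 2]
    chirp = np.exp(1j * k / (2 * d) * ((x * dx)**2 + (y * dx)**2))
    return np.fft.fftshift(np.fft.fft2(hologram * chirp))

# Placeholder values: 64 x 64 hologram, 632.8 nm wavelength, 9 um pixels
field = fresnel_reconstruct(np.random.default_rng(0).standard_normal((64, 64)),
                            632.8e-9, 9e-6, 0.27)
```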
For segmentation using bivariate region snakes, the optimization of the cost function is carried out by a stochastic algorithm [43]. The computational complexity of the process depends on several parameters, including the image size, the number of polygon vertices, the step size, and the initial position of the snake contour. However, due to the small deformation of the contour between successive iterations, it is possible to use the statistical information computed in previous iterations, along with the pixel information inside the deformation area, to derive the exact current statistical information needed [43]. The simulations for this work are implemented on a PC with an Intel Pentium IV processor, taking between 1 and 10 seconds for the final results. It should be noted that dedicated hardware can improve this speed dramatically.
For the morphology-based recognition, the computational time of the Gabor filtering is of the same order as the FFT. For the graph matching, the computational time depends on the shape and size of the graph, the dimension of the feature vector, and the search steps for the translation vector and rotation angle. Since the most time-consuming operation is searching the graphs, which is O(N²), the system requires quadratic computational complexity.
For the shift-invariant recognition approach, the cross-correlation function can be obtained with the same order of complexity as the FFT. Therefore, real-time processing can be achieved by developing specialized hardware or by parallel processing.
8. 3D visualization and recognition using integral imaging
In this section, we present a brief discussion of 3D sensing and visualization of biological microorganisms using integral imaging (II) [19,58–70], which can be combined with the identification algorithms presented in this paper. In contrast to holography, II uses incoherent illumination to record the information of a 3D scene. II is a promising technique based on recording the multi-view directional information of a 3D scene. A micro-lens array projects the 3D scene onto a detector array, generating a set of elemental images. Captured micro-objects have different perspective and location information in each elemental image. The scene in II can be illuminated under ambient or incoherent light. Reconstruction is the reverse of the sensing process. In computational reconstruction, the elemental images are numerically projected through a virtual lens array to reproduce the original 3D object by means of a geometrical ray projection method [66–68]. Therefore, volumetric scenes can be reconstructed at different longitudinal distances. Computational reconstruction of II can mitigate the image quality degradation caused by optical devices [68]. There are several advantages to 3D object recognition using II. One advantage is that II allows multiple-perspective imaging in a single shot; the depth and perspective information in the multiple perspectives can be utilized to build a compact 3D recognition system. Another advantage is that II is a passive sensor using incoherent light. Moreover, since computational II reconstructs volumetric scenes at different depths, we are able to recognize objects of interest located at different longitudinal distances.
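A toy version of the computational ray back-projection can be sketched as follows; the integer-disparity shift model and all parameter names here are simplifying assumptions for illustration, not the cited reconstruction algorithms of Refs. [66–68]:

```python
import numpy as np

def cii_reconstruct(elemental, pitch, d, g):
    """Toy computational II reconstruction: each elemental image is
    back-projected with an integer-pixel disparity proportional to its
    lenslet position (pitch * g / d) and the overlaps are averaged."""
    K, L, h, w = elemental.shape  # K x L grid of h x w elemental images
    out = np.zeros((h, w))
    for i in range(K):
        for j in range(L):
            sy = int(round(i * pitch * g / d))
            sx = int(round(j * pitch * g / d))
            out += np.roll(np.roll(elemental[i, j], sy, axis=0), sx, axis=1)
    return out / (K * L)

# A 3 x 3 set of flat 8 x 8 elemental images reconstructs to a flat plane
plane = cii_reconstruct(np.ones((3, 3, 8, 8)), pitch=4, d=10.0, g=2.0)
```

Sweeping the reconstruction distance d and repeating the back-projection yields the stack of depth planes in which in-focus objects can be searched.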
The experimental system uses a micro-lens array and a pick-up camera, as shown in Fig. 14. A filamentous microorganism, sphacelaria alga, with a size of 50~100 μm, is used in the experiments. A set of elemental images is captured with one exposure. Reconstructed microorganism images at different depths (d) are shown in Fig. 15.
9. Summary and conclusions
Automatic recognition of biological microorganisms is very challenging because of their strong resemblance and dynamic nature such as moving, growing, and varying in size and shape. There are broad applications of real-time 3D surveillance and identification of dynamic microscopic bio-organic events. This paper is an overview of techniques for 3D sensing, imaging, segmentation, and recognition of biological microorganisms including SEOL holographic microscopy. 3D sensing and reconstruction by means of SEOL holographic microscopy is suitable for inspection of dynamic biological microscopic events. The sensing stage is robust to dynamic movement of microscopic objects and environmental conditions as compared with the multiple-exposure phase-shifting digital holography. The setup of SEOL digital holography is simpler than off-axis holography and more robust to object size and scale variations.
A number of approaches are presented for the recognition of the biological microorganisms. Segmentation extracts regions of interest for further processing. A number of techniques are discussed for segmentation of biological microorganisms sensed by SEOL holographic microscopy. In particular, bivariate jointly distributed region snake is developed as a statistical segmentation method maximizing the conditional probability of the target hypothesis assuming the joint Gaussian distribution for the complex amplitude pixels.
One 3D recognition approach examines the simple morphological traits comprising the complex amplitude of biological microorganisms. Feature extraction by Gabor-based wavelets and a graph matching technique are used to localize the specific 3D shape of reference microorganisms. A scheme for automated feature vector selection is presented, and experimental results for the graph matching technique are reported.
Shape-tolerant 3D recognition of microorganisms using the statistical cost functions and inference is presented. A number of sampling segments are randomly extracted from the microorganism and processed with cost functions and statistical inference theory. By investigating the Gaussian property of the holographically reconstructed images of microorganisms, we are able to distinguish the sampling segments of the true-class object in the database from the different classes of microorganisms presented at the input.
Using SEOL digital holographic microscopy, we can numerically reconstruct focused sectional images of biological microorganisms along the longitudinal direction. We have shown by experiments that spatially shift-invariant recognition of biological microorganisms can be achieved throughout the reconstructed volumetric image of the input biological scene.
In addition, 3D sensing and imaging of biological microorganisms can be achieved by means of computational II, followed by the recognition algorithms. II records multi-view perspectives of 3D microorganisms by using a micro-lens array. The volumetric information of the biological microorganism can be numerically reconstructed by the ray projection method. The volumetric reconstruction allows us to search for microorganisms in 3D space.
We have presented several different approaches and image processing techniques based on SEOL holography for 3D segmentation and recognition of biological microorganisms. Although these techniques are applied separately to different classes of microorganisms, the combination of these techniques may enhance the performance for the sensing, segmentation, and identification of unknown microorganisms.
Acknowledgments
This work has been supported by Defense Advanced Research Projects Agency (DARPA). We wish to thank Dr. Seung-Hyun Hong for his assistance.
References and links
1. The largely forgotten Influenza in 1918, a. k. a. “Spanish Flu” or “La Grippe” killed an estimated 40 million people worldwide, and an estimated 600,000 in the USA. It infected an estimated 20% of the world population. See Alfred Crosby, “America’s Forgotten Pandemic: The Influenza of 1918,” (Cambridge University Press, Cambridge, 1989).
2. http://www.pbs.org/wgbh/amex/influenza/
3. J. W. Lengeler, G. Drews, and H. G. Schlegel, Biology of the prokaryotes, (New York, Blackwell science,1999).
4. M. G. Forero, F. Sroubek, and G. Cristobal, “Identification of tuberculosis bacteria based on shape and color,” Real-Time Imaging 10, 251–262 (2004). [CrossRef]
5. J. Alvarez-Borrego, R. R. Mourino-Perez, G. Cristobal-Perez, and J. L. Pech-Pacheco, “Invariant recognition of polychromatic images of Vibrio cholerae 01,” Opt. Eng. 41, 827–833 (2002). [CrossRef]
6. A. L. Amaral, M. da Motta, M. N. Pons, H. Vivier, N. Roche, M. Moda, and E. C. Ferreira, “Survey of protozoa and metazoa populations in wastewater treatment plants by image analysis and discriminant analysis,” Environmetrics 15, 381–390 (2004). [CrossRef]
7. S.-K. Treskatis, V. Orgeldinger, H. Wolf, and E. D. Gilles, “Morphological characterization of filamentous microorganisms in submerged cultures by on-line digital image analysis and pattern recognition,” Biotechnol. Bioeng. 53, 191–201 (1997). [CrossRef] [PubMed]
8. T. Luo, K. Kramer, D. B. Goldgof, L. O. Hall, S. Samson, A. Remsen, and T. Hopkins, “Recognizing plankton images from the shadow image particle profiling evaluation recorder,” IEEE Trans. Syst. Man. Cybern. Part B 34, 1753–1762 (2004). [CrossRef]
9. A. Mahalanobis, R. R. Muise, S. R. Stanfill, and A. V. Nevel, “Design and application of quadratic correlation filters for target detection,” IEEE Trans. Aerosp. Electron. Syst. 40, 837–850 (2004). [CrossRef]
10. F. A. Sadjadi, “Infrared target detection with probability density functions of wavelet transform subbands,” Appl. Opt. 43, 315–323 (2004). [CrossRef] [PubMed]
11. A. K. Jain, Fundamentals of digital image processing, (Prentice Hall,1989).
12. R. O. Duda, P. E. Hart, and D. G. Stork, Pattern classification 2^{nd} , (NewYork, Wiley Interscience,2001).
13. C.M. Bishop, Neural networks for pattern recognition, (New York, Oxford University Press,1995).
14. B. Javidi and P. Refregier, eds., Optical pattern recognition, (SPIE,1994).
15. H. Kwon and N. M. Nasrabadi, “Kernel RX-algorithm: a nonlinear anomaly detector for hyperspectral imagery,” IEEE Trans. Geosci. Remote Sens. 43, 388–397 (2005). [CrossRef]
16. F. Sadjadi, ed., Milestones in performance evaluations of signal and image processing systems, (SPIE Press,1993).
17. P. Refregier, V. Laude, and B. Javidi, “Nonlinear joint transform correlation: an optimum solution for adaptive image discrimination and input noise robustness,” J. Opt. Lett. 19, 405–407 (1994).
18. F. Sadjadi, “Improved target classification using optimum polarimetric SAR signatures,” IEEE Trans. Aerosp. Electron. Syst. 38, 38–49 (2002). [CrossRef]
19. B. Javidi and F. Okano, eds., Three-dimensional television, video, and display technologies, (New York, Springer,2002).
20. B. Javidi, ed., Image Recognition and Classification: Algorithms, Systems, and Applications, (New York, Marcel Dekker,2002). [CrossRef]
21. B. Javidi and E. Tajahuerce, “Three dimensional object recognition using digital holography,” Opt. Lett. 25, 610–612 (2000). [CrossRef]
22. O. Matoba, T. J. Naughton, Y. Frauel, N. Bertaux, and B. Javidi, “Real-time three-dimensional object reconstruction by use of a phase-encoded digital hologram,” Appl. Opt. 41, 6187–6192 (2002). [CrossRef] [PubMed]
23. Y. Frauel and B. Javidi, “Neural network for three-dimensional object recognition based on digital holography,” Opt. Lett. 26, 1478–1480 (2001). [CrossRef]
24. E. Tajahuerce, O. Matoba, and B. Javidi, “Shift-invariant three-dimensional object recognition by means of digital holography,” Appl. Opt. 40, 3877–3886 (2001). [CrossRef]
25. B. Javidi and D. Kim, “Three-dimensional-object recognition by use of single-exposure on-axis digital holography,” Opt. Lett. 30, 236–238 (2005). [CrossRef] [PubMed]
26. D. Kim and B. Javidi, “Distortion-tolerant 3-D object recognition by using single exposure on-axis digital holography,” Opt. Express 12, 5539–5548 (2005). [CrossRef]
27. S. Yeom and B. Javidi, “Three-dimensional object feature extraction and classification with computational holographic imaging,” Appl. Opt. 43, 442–451 (2004). [CrossRef] [PubMed]
28. B. Javidi, I. Moon, S. Yeom, and E. Carapezza, “Three-dimensional imaging and recognition of microorganism using single-exposure on-line (SEOL) digital holography,” Opt. Express 13, 4492–4506 (2005). [CrossRef] [PubMed]
29. S. Yeom, I. Moon, and B. Javidi, “Real-time 3D sensing, visualization and recognition of dynamic biological micro-organisms,” Proceedings of IEEE 94, 550–566 (2006). [CrossRef]
30. S. Yeom and B. Javidi, “Three-dimensional recognition of microorganisms,” J. Biomed. Opt. 11, 024017-1~8 (2006). [CrossRef]
31. S. Yeom, I. Moon, and B. Javidi, “Two approaches of 3D microorganism recognition using single exposure online digital holography,” in F. Sadjadi and B. Javidi (eds.), Physics of Automatic Target Recognition, (Springer,2006).
32. B. Javidi, I. Moon, and S. Yeom, “3D microorganism sensing, visualization and recognition using single exposure on-line digital holography,” Optics and Photonics News 17, 16–21 (2006). [CrossRef]
33. I. Moon and B. Javidi, “Shape-tolerant three-dimensional recognition of microorganisms using digital holography,” Opt. Express 13, 9612–9622 (2005). [CrossRef] [PubMed]
34. S. Kishk and B. Javidi, “Improved resolution 3D object sensing and recognition using time multiplexed computational integral imaging,” Opt. Express 11, 3528–3541 (2003). [CrossRef] [PubMed]
35. S. Yeom, B. Javidi, and E. Watson, “Photon counting passive 3D image sensing for automatic target recognition,” Opt. Express 13, 9310–9330 (2005). [CrossRef] [PubMed]
36. T. Kreis, ed., Handbook of Holographic Interferometry, (Wiley, VCH,2005).
37. J. W. Goodman, Introduction to Fourier Optics 2^{nd}, (Boston, McGraw Hill,1996). [PubMed]
38. J. W. Goodman and R. W. Lawrence, “Digital image holograms,” Appl. Phys. Lett. 11, 77–79 (1967). [CrossRef]
39. I. Yamaguchi and T. Zhang, “Phase-shifting digital holography,” Opt. Lett. 22, 1268–1270 (1997). [CrossRef] [PubMed]
40. P. Ferraro, S. Grilli, D. Alfieri, S. D. Nicola, A. Finizio, G. Pierattini, B. Javidi, G. Coppola, and V. Striano, “Extended focused image in microscopy by digital holography,” Opt. Express 13, 6738–6749 (2005). [CrossRef] [PubMed]
41. T. Zhang and I. Yamaguchi, “Three-dimensional microscopy with phase-shifting digital holography,” Opt. Lett. 23, 1221–1223 (1998). [CrossRef]
42. B. R. Brown and A. W. Lohmann, “Complex spatial filtering with binary masks,” Appl. Opt. 5, 967–969 (1966). [CrossRef] [PubMed]
43. M. DaneshPanah and B. Javidi “Segmentation of 3D holographic images using bivariate jointly distributed region snake,” Opt. Express (submitted).
44. O. Germain and P. Refregier “Optimal snake-based segmentation of a random luminance target on a spatially disjoint background,” Opt. Lett. 21 (1996). [CrossRef] [PubMed]
45. C. Chesnaud, V. Page, and P. Refregier , “Improvement in robustness of the statistically independent region snake-based segmentation method of target-shape tracking,” Opt. Lett. 23, 488–490 (1998). [CrossRef]
46. M. Kass, A. Witkin, and D. Terzopoulus, “Snakes: Active contour models,” Int. J. Comput. Vision 1, 321–331 (1987). [CrossRef]
47. N. Mukhopadhyay, Probability and Statistical Inference, (New York, Marcel Dekker,2000).
48. B. Javidi and J. Wang, “Limitations of the classic definition of the signal-to-noise ratio in matched filter based optical pattern recognition,” Appl. Opt. 31, 6826–6829 (1992). [CrossRef] [PubMed]
49. B. Javidi and J. Wang, “Optimum distortion invariant filters for detecting a noisy distorted target in background noise,” J. Opt. Soc. Am. 12, 2604–2614 (1995). [CrossRef]
50. J. G. Daugman, “Uncertainty relation for resolution in space, spatial frequency, and orientation optimized by two-dimensional visual cortical filters,” J. Opt. Soc. Am. 2, 1160–1169 (1985). [CrossRef]
51. T. S. Lee, “Image representation using 2D Gabor wavelets,” IEEE Trans. Pattern. Anal. Mach. Intell. 18, 959–971 (1996). [CrossRef]
52. J. G. Daugman, “How iris recognition works,” IEEE Trans. Circuits Syst. for Video. Tech. 14, 21–30, (2004). [CrossRef]
53. M. Lades, J. C. Vorbruggen, J. Buhmann, J. Lange, C. v.d. Malsburg, R. P. Wurtz, and W. Konen, “Distortion invariant object recognition in the dynamic link architecture,” IEEE Trans. Comput. 42, 300–311 (1993). [CrossRef]
54. R. P. Wurtz, “Object recognition robust under translations, deformations, and changes in background,” IEEE Trans. Pattern. Anal. Mach. Intell. 19, 769–775 (1997). [CrossRef]
55. B. Duc, S. Fischer, and J. Bigun, “Face authentification with Gabor information on deformable graphs,” IEEE Trans. Image Process. 8, 504–516 (1999). [CrossRef]
56. S. Yeom, B. Javidi, Y. J. Roh, and H. S. Cho, “Three-dimensional object recognition using x-ray imaging,” Opt. Eng. 43, 027201-1~23 (2005). [CrossRef]
57. G.W. Snedecor and W.G. Cochran, Statistical Methods, (Iowa State University Press,1989).
58. M. G. Lippmann, “Epreuves reversibles donnant la sensation du relief,” J. Phys. 7, 821–825 (1908).
59. H. E. Ives, “Optical properties of a Lippmann lenticuled sheet,” J. Opt. Soc. Am. 21, 171–176 (1931). [CrossRef]
60. Okoshi, Three-Dimensional Imaging Techniques, (New York, Academic,1976).
61. C. B. Burckhardt, “Optimum parameters and resolution limitation of integral photography,” J. Opt. Soc. Am. 58, 71–76 (1968). [CrossRef]
62. F. Okano, H. Hoshino, J. Arai, and I. Yuyama, “Real-time pickup method for a three-dimensional image based on Integral Photography,” Appl. Opt. 36, 1598–1603 (1997). [CrossRef] [PubMed]
63. F. Jin, J. Jang, and B. Javidi, “Effects of device resolution on three-dimensional integral imaging,” Opt. Lett. 29, 1345–1347 (2004). [CrossRef] [PubMed]
64. J. S. Jang and B. Javidi, “Three-dimensional integral imaging of micro-objects,” Opt. Lett. 29, 1230–1232 (2004). [CrossRef] [PubMed]
65. R. Martínez-Cuenca, G. Saavedra, M. Martínez-Corral, and B. Javidi, “Enhanced depth of field integral imaging with sensor resolution constraints,” Opt. Express 12, 5237–5242 (2004). [CrossRef] [PubMed]
66. H. Arimoto and B. Javidi, “Integral three-dimensional imaging with digital reconstruction,” Opt. Lett. 26, 157–159 (2001). [CrossRef]
67. A. Stern and B. Javidi, “3D image sensing and reconstruction with time-division multiplexed computational integral imaging (CII),” Appl. Opt. 42, 7036–7042 (2003). [CrossRef] [PubMed]
68. S. Hong and B. Javidi, “Improved resolution 3D object reconstruction using computational integral imaging with time multiplexing,” Opt. Express 12, 4579 – 4588 (2004). [CrossRef] [PubMed]
69. Y. Frauel, E. Tajahuerce, O. Matoba, A. Castro, and B. Javidi, “Comparison of passive ranging integral imaging and active imaging digital holography for three-dimensional object recognition,” Appl. Opt. 43, 452–462 (2004). [CrossRef] [PubMed]
70. A. Stern and B. Javidi, “Three-Dimensional image sensing, visualization, and processing using integral imaging,” Proceedings of the IEEE 94, 591–607(2006). [CrossRef]