## Abstract

In this paper, we describe the bivariate jointly distributed region snake method for segmentation of microorganisms in Single Exposure On-Line (SEOL) holographic microscopy images. 3D images of the microorganisms are digitally reconstructed and numerically focused at any arbitrary depth from a single recorded digital hologram, without mechanical scanning. Living organisms are non-rigid and vary in shape and size. Moreover, they often do not exhibit clear edges in digitally reconstructed SEOL holographic images, so conventional segmentation techniques based on the edge map may fail on these images. However, SEOL holographic microscopy provides both magnitude and phase information about the sample specimen, which can be helpful in the segmentation process. In this paper, we present a statistical framework based on the joint probability distribution of the magnitude and phase information of SEOL holographic microscopy images and maximum likelihood estimation of the image probability density function parameters. An optimization criterion is computed by maximizing the likelihood function of the target support hypothesis. In addition, a simple stochastic algorithm is adapted to carry out the optimization, and several boosting techniques are employed to enhance its performance. Finally, the proposed method is applied to segmentation of biological microorganisms in SEOL holographic images, and experimental results are presented.

© 2006 Optical Society of America

## 1. Introduction

Image segmentation is a useful operation in image processing [1, 2]. According to Haralick and Shapiro [3], the objective of segmentation is to break down the input image into partitions that are homogeneous with respect to some characteristic such as gray tone or texture. Indeed, the characteristics mentioned above are primitive, and many problems require much more sophisticated measures of homogeneity [4]. In this paper, we address segmentation of 3D complex holographic images in a statistical framework.

Holographic images can be recorded digitally with optoelectronic sensor arrays for future reconstruction, recognition or compression tasks [5–19]. There are several approaches to obtaining a three dimensional (3D) digital hologram of an object, including off-axis, phase shift and Single Exposure On-Line (SEOL) digital holography. SEOL holography uses a 3D coherent Mach-Zehnder interferometer (see Fig. 1) to record a single on-line Fresnel interference pattern of the reference and object beams on a solid state imaging sensor such as a charge-coupled device [12–16]. 3D microorganism imaging and recognition using SEOL was reported recently in [14–16]. The single exposure method has some advantages over its multiple exposure counterpart. In particular, it eliminates the need for multiple interferogram recordings with phase shifts in the reference beam [17]. This property is important in dynamic imaging of objects [14–16], since the scene may change during the phase shift recordings.

Off-axis holography also eliminates the need for multiple recordings [18, 19]. However, this method imposes some limitations. Particularly, only a fraction of the space-bandwidth product of the photo sensor is utilized, which results in inefficient usage of the imaging device and reduced quality of visualization. Also, in off-axis holography, the reconstructed image size is a function of the angle between the object and reference beams, thus its application is restricted to objects with limited dimensions.

For segmentation of SEOL holographic images, we develop a statistical segmentation method based on the concept of region snakes [20–23]. We extend the method of statistically independent region snakes [20] to a framework handling images with complex valued pixels (i.e. images resulting from digital reconstruction of SEOL holograms). A statistical framework is developed, and an optimization criterion is computed by maximizing the likelihood function of the target support hypothesis. In this approach, bivariate Gaussian distributions are assumed as probability models for the target and background pixels. However, no prior knowledge of the statistical moments of the pixels’ probability density functions is assumed; instead, a maximum likelihood estimator is used to estimate the necessary statistical parameters.

Clearly, for holographic images, a bivariate joint magnitude-phase distribution provides a more accurate model than independent univariate Gaussian analysis, since it provides a mechanism to exploit the correlation between each pixel’s magnitude and phase for better segmentation. Thus, this method can be used for segmentation of any 3D holographic image with jointly distributed magnitude and phase.

The paper is organized in six sections. Background and previous work are reviewed in section 2. The bivariate jointly distributed region snake and the optimization criterion are discussed in section 3. The stochastic optimization algorithm, along with discussions of robustness and speed, is presented in section 4. Experimental results on segmentation of microorganisms in SEOL holographic images of biological specimens are illustrated in section 5, and the paper concludes in section 6.

## 2. Background

Building on the work of Kass *et al.* [24], the region snake was introduced in [20] as a reformulation of *active contours* in a statistical framework, deriving an optimization criterion in analogy with the conventional *snake energy* of active contour terminology. In addition, an accompanying stochastic algorithm was proposed to carry out this optimization [20–23]. The region snake provides a statistically optimal segmentation scheme for images with strong additive or multiplicative noise (such as synthetic aperture radar images [23]), where classic snake active contours exhibit poor performance due to their local processing nature.

Thus, the contribution of the statistical region snake was to change the strategy of contour evolution from local pixel processing to a region-based criterion. A classic difficulty with the original snake active contours is that the snake can easily get trapped in local minima of the image surface when the snake boundary is driven by the gradient of the pixels neighboring the contour [24]. The situation worsens when the image surface is jagged. This major drawback requires the snake contour to be initialized close to the true boundaries of the target for successful convergence. Although other methods [25, 26] have been proposed to overcome the locality issue of snake active contours, they are more or less specialized for certain types of images and may fail to address a wide range of problems.

Employing the statistical information of regions for segmentation was originally introduced in [27, 28]. In [20], however, the statistical region snake uses the non-overlapping property of the target image and background noise [29, 30] and develops a statistical framework to estimate the target support. This approach brings the region snake into the scope of Synthetic Aperture Radar (SAR) images [23], where speckle noise defeats most classic edge-based approaches and degrades the performance of more sophisticated methods such as watersheds through over-segmentation [31]. Hence, the region snake inherits the free-shape target detection ability of snake active contours while broadening its applicability to more sophisticated image models by introducing noise tolerance.

## 3. Bivariate jointly distributed region snake

In the context of snake active contours, a snake is essentially a closed contour that evolves to minimize a certain criterion known as the *snake energy* [24]. The contour divides the image into inner and outer regions, which will be denoted by Ω_{t} and Ω_{b}. Throughout this paper, subscripts *t* and *b* refer to target and background, respectively.

The statistical region snake’s primary objective is to find the Ω_{t} that best matches the original target support. For this purpose, we use hypothesis testing from statistical decision theory. We adopt the common assumption that the probability distribution of the complex amplitudes of the object pixels is statistically independent of that of the background, while no prior knowledge of the distribution parameters is available. In addition, the snake contour is modeled as a polygon of *N*_{p} nodes, where the number of nodes is arbitrary and depends on the desired resolution. For simplicity, a one-dimensional image model is used: **s**={*s*_{i} | *i*∈[1,*N*]}, where *N* is the total number of complex pixels. As mentioned before, reconstructed holographic images are complex, so each pixel value *s*_{i} is a complex number whose magnitude and phase we denote by *α*_{i} and *φ*_{i}, respectively. Let **w**={*w*_{i} | *i*∈[1,*N*]} be a binary window that determines the support of the target, such that *w*_{i}=1 for target pixels and *w*_{i}=0 elsewhere. The image can then be represented as the union of disjoint target complex pixels (**a**) inside **w** and background complex pixels (**b**) outside **w** [29, 30], which can be written in the compact model *s*_{i}=*a*_{i}*w*_{i}+*b*_{i}(1-*w*_{i}).
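As a concrete (hypothetical) illustration of this disjoint image model, the composition and the magnitude/phase decomposition can be written in a few lines of NumPy; all array names and parameter values here are our own, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                          # total number of complex pixels

# Binary support window: w_i = 1 for target pixels, 0 elsewhere.
w = np.zeros(N)
w[30:60] = 1.0

# Illustrative complex target (a) and background (b) pixels.
a = rng.normal(2.0, 0.3, N) * np.exp(1j * rng.normal(1.0, 0.1, N))
b = rng.normal(0.5, 0.2, N) * np.exp(1j * rng.normal(0.2, 0.1, N))

# Compact disjoint model: s_i = a_i * w_i + b_i * (1 - w_i)
s = a * w + b * (1 - w)

alpha = np.abs(s)                # per-pixel magnitude alpha_i
phi = np.angle(s)                # per-pixel phase phi_i
```

Inside the window the image equals the target pixels exactly, and outside it equals the background pixels, which is the non-overlapping property the framework relies on.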

#### 3.1. Bivariate Gaussian distribution model

In [22], several optimal criterion laws were derived for situations in which the statistical behavior of target and background belongs to the exponential family (e.g. Gaussian, Gamma, etc.). However, as mentioned earlier, in digitally reconstructed holographic images one deals with complex pixels with distinct magnitude and phase, which are jointly distributed in the spatial domain. A single-variable approach therefore cannot capture the correlation between the magnitude and phase of each pixel. It remains reasonable, however, to assume that the target and background regions have independent distributions.

Considering the one-dimensional, non-overlapping image model, and assuming a bivariate Gaussian distribution for the magnitude (*α*) and phase (*φ*) random variables inside the target region, the target’s joint probability density function can be written as:

$${f}_{t}\left({\alpha}_{i},{\phi}_{i}\right)=\frac{1}{{\sigma}_{\phi}^{t}{\sigma}_{\alpha \mid \phi}^{t}}\Phi \left(\frac{{\phi}_{i}-{\mu}_{\phi}^{t}}{{\sigma}_{\phi}^{t}}\right)\Phi \left(\frac{{\alpha}_{i}-{\mu}_{\alpha \mid \phi}^{t}}{{\sigma}_{\alpha \mid \phi}^{t}}\right),$$

where *α*_{i}=|*s*_{i}| and *φ*_{i}=∠*s*_{i} are the magnitude and phase of pixel *s*_{i}, respectively, and the function Φ(*x*)=(2*π*)^{-1/2}exp(-*x*^{2}/2) is the standard normal probability density function.
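A numerical sketch of this factored density (the helper names are ours, not code from the paper): the joint density is evaluated as the marginal Gaussian of φ times the conditional Gaussian of α given φ, which together form a bivariate Gaussian.

```python
import numpy as np

def std_normal_pdf(x):
    """Phi(x) = (2*pi)^(-1/2) * exp(-x^2 / 2)."""
    return np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)

def joint_pdf(alpha, phi, mu_a, mu_p, sig_a, sig_p, rho):
    """f(alpha, phi): marginal Gaussian in phi times the conditional
    Gaussian of alpha given phi (together, a bivariate Gaussian)."""
    mu_cond = mu_a + rho * sig_a * (phi - mu_p) / sig_p    # conditional mean
    sig_cond = sig_a * np.sqrt(1.0 - rho**2)               # conditional std
    return (std_normal_pdf((phi - mu_p) / sig_p) / sig_p *
            std_normal_pdf((alpha - mu_cond) / sig_cond) / sig_cond)
```

Because the conditional decomposition of a bivariate normal is exact, this factored form agrees with the closed-form bivariate Gaussian density at every point.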

Let the parameter vector for this distribution be:

$${\Theta}_{t}=\left\{{\mu}_{\alpha}^{t},{\mu}_{\phi}^{t},{\sigma}_{\alpha}^{t},{\sigma}_{\phi}^{t},{\rho}_{t}\right\},$$

where *µ*, *σ* and *ρ* are the marginal means, standard deviations and correlation coefficient of the magnitude (*α*) and phase (*φ*) random variables of the target, respectively. In the same vein, the background is assumed to have another bivariate Gaussian distribution with a different, independent parameter set Θ_{b} and joint probability density function *f*_{b}(*α*, *φ*). Since the separation of the two random variables in Eq. (1) is made possible by conditioning *α* on *φ*, the corresponding conditional mean and variance of *α* are:

$${\mu}_{\alpha \mid \phi}^{u}={\mu}_{\alpha}^{u}+\frac{{\rho}_{u}{\sigma}_{\alpha}^{u}\left({\phi}_{i}-{\mu}_{\phi}^{u}\right)}{{\sigma}_{\phi}^{u}}\phantom{\rule{.5em}{0ex}},\phantom{\rule{.5em}{0ex}}{\left({\sigma}_{\alpha \mid \phi}^{u}\right)}^{2}={\left({\sigma}_{\alpha}^{u}\right)}^{2}\left(1-{\rho}_{u}^{2}\right),$$

for *u*∈{*t*, *b*}. For generality, we assume the parameter vector Θ={Θ_{t}, Θ_{b}} is *a priori* unknown.

#### 3.2. Maximum likelihood hypothesis testing

The problem of segmentation is analogous to estimating the most likely binary window (**w**) best representing the target support. Considering a hypothesis testing approach with hypotheses *H*_{w}, one needs to maximize the *a posteriori* conditional probability *P*[*H*_{w}|**s**]. Assuming the general case of equiprobable hypotheses, the Bayes rule states that:

$$P\left[{H}_{\mathbf{w}}\mid \mathbf{s}\right]=\frac{P\left[\mathbf{s}\mid {H}_{\mathbf{w}}\right]P\left[{H}_{\mathbf{w}}\right]}{P\left[\mathbf{s}\right]}\propto P\left[\mathbf{s}\mid {H}_{\mathbf{w}}\right].$$

Thus, maximizing the *a posteriori* probability *P*[*H*_{w}|**s**] is equivalent to maximizing the conditional probability *P*[**s**|*H*_{w}], which corresponds to the likelihood of the hypothesis *H*_{w}.

Since the bivariate Gaussian probability density functions of the target and background are assumed to be independent, the likelihood function for the hypothesis *H*_{w} can be written as the product of the corresponding joint probability density functions inside and outside the binary window (**w**):

$$P\left[\mathbf{s}\mid {H}_{\mathbf{w}}\right]=\prod _{i=1}^{N}{f}_{t}{\left({\alpha}_{i},{\phi}_{i}\right)}^{{w}_{i}}{f}_{b}{\left({\alpha}_{i},{\phi}_{i}\right)}^{1-{w}_{i}}.$$
Using the above likelihood model and substituting (1) in (5), one gets:

$$P\left[\mathbf{s}\mid {H}_{\mathbf{w}}\right]={\left(\frac{1}{\sqrt{2\pi}{\sigma}_{\phi}^{t}{\sigma}_{\alpha \mid \phi}^{t}}\right)}^{{N}_{t}\left(\mathbf{w}\right)}\mathrm{exp}\left[-\frac{\sum _{i=1}^{N}{\left({\phi}_{i}-{\mu}_{\phi}^{t}\right)}^{2}.{w}_{i}}{2{\left({\sigma}_{\phi}^{t}\right)}^{2}}\right]\times \mathrm{exp}\left[-\frac{\sum _{i=1}^{N}{\left({\alpha}_{i}-{\mu}_{\alpha \mid \phi}^{t}\right)}^{2}.{w}_{i}}{2{\left({\sigma}_{\alpha \mid \phi}^{t}\right)}^{2}}\right]$$

$${\times \left(\frac{1}{\sqrt{2\pi}{\sigma}_{\phi}^{b}{\sigma}_{\alpha \mid \phi}^{b}}\right)}^{{N}_{b}\left(\mathbf{w}\right)}\mathrm{exp}\left[-\frac{\sum _{i=1}^{N}{\left({\phi}_{i}-{\mu}_{\phi}^{b}\right)}^{2}.\left(1-{w}_{i}\right)}{2{\left({\sigma}_{\phi}^{b}\right)}^{2}}\right]\times \mathrm{exp}\left[-\frac{\sum _{i=1}^{N}{\left({\alpha}_{i}-{\mu}_{\alpha \mid \phi}^{b}\right)}^{2}.\left(1-{w}_{i}\right)}{2{\left({\sigma}_{\alpha \mid \phi}^{b}\right)}^{2}}\right],$$

where *N*_{t}(**w**) and *N*_{b}(**w**) are the numbers of hypothetical target and background pixels according to the current **w**. Taking the natural log of the above expression and incorporating the conditional mean and variance of (3) yields:

$$l\left(\mathbf{w}\right)=-{N}_{t}\left(\mathbf{w}\right)\mathrm{log}\left(\sqrt{2\pi}{\sigma}_{\phi}^{t}{\sigma}_{\alpha}^{t}\sqrt{1-{\rho}_{t}^{2}}\right)-\frac{1}{{2\left({\sigma}_{\phi}^{t}\right)}^{2}}\sum _{i=1}^{N}{\left({\phi}_{i}-{\mu}_{\phi}^{t}\right)}^{2}.{w}_{i}$$

$$-\frac{1}{{2\left({\sigma}_{\alpha |\phi}^{t}\right)}^{2}}\sum _{i=1}^{N}{\left({\alpha}_{i}-{\mu}_{\alpha}^{t}-\frac{{\rho}_{t}{\sigma}_{\alpha}^{t}\left({\phi}_{i}-{\mu}_{\phi}^{t}\right)}{{\sigma}_{\phi}^{t}}\right)}^{2}.{w}_{i}$$

$$-{N}_{b}\left(\mathbf{w}\right)\mathrm{log}\left(\sqrt{2\pi}{\sigma}_{\phi}^{b}{\sigma}_{\alpha}^{b}\sqrt{1-{\rho}_{b}^{2}}\right)-\frac{1}{{2\left({\sigma}_{\phi}^{b}\right)}^{2}}\sum _{i=1}^{N}{\left({\phi}_{i}-{\mu}_{\phi}^{b}\right)}^{2}.\left(1-{w}_{i}\right)$$

$$-\frac{1}{{2\left({\sigma}_{\alpha |\phi}^{b}\right)}^{2}}\sum _{i=1}^{N}{\left({\alpha}_{i}-{\mu}_{\alpha}^{b}-\frac{{\rho}_{b}{\sigma}_{\alpha}^{b}\left({\phi}_{i}-{\mu}_{\phi}^{b}\right)}{{\sigma}_{\phi}^{b}}\right)}^{2}.\left(1-{w}_{i}\right).$$

Eq. (7) gives the log-likelihood of the hypothesis *H*_{w} with parameter vector Θ, which is a priori unknown. To proceed, one has to estimate the unknown parameters; this can be done in several ways, including the maximum likelihood method.

To derive the maximum likelihood estimates of the unknown parameters, one takes partial derivatives of the likelihood (or log-likelihood) function with respect to the elements of Θ and sets each to zero, eventually obtaining the estimated parameter vector $\widehat{\Theta}$={$\widehat{\Theta}$_{t}, $\widehat{\Theta}$_{b}}. The resulting sample means, variances and correlation are:

$${\hat{\mu}}_{\alpha}^{u}=\frac{1}{{N}_{u}\left(\mathbf{w}\right)}\sum _{i\in {\Omega}_{u}}{\alpha}_{i}\phantom{\rule{.5em}{0ex}},\phantom{\rule{.5em}{0ex}}{\hat{\mu}}_{\phi}^{u}=\frac{1}{{N}_{u}\left(\mathbf{w}\right)}\sum _{i\in {\Omega}_{u}}{\phi}_{i},$$

$${\left({\hat{\sigma}}_{\alpha}^{u}\right)}^{2}=\frac{1}{{N}_{u}\left(\mathbf{w}\right)}\sum _{i\in {\Omega}_{u}}{\left({\alpha}_{i}-{\hat{\mu}}_{\alpha}^{u}\right)}^{2}\phantom{\rule{.5em}{0ex}},\phantom{\rule{.5em}{0ex}}{\left({\hat{\sigma}}_{\phi}^{u}\right)}^{2}=\frac{1}{{N}_{u}\left(\mathbf{w}\right)}\sum _{i\in {\Omega}_{u}}{\left({\phi}_{i}-{\hat{\mu}}_{\phi}^{u}\right)}^{2},$$

$${\hat{\rho}}_{u}=\frac{1}{{N}_{u}\left(\mathbf{w}\right){\hat{\sigma}}_{\alpha}^{u}{\hat{\sigma}}_{\phi}^{u}}\sum _{i\in {\Omega}_{u}}\left({\alpha}_{i}-{\hat{\mu}}_{\alpha}^{u}\right)\left({\phi}_{i}-{\hat{\mu}}_{\phi}^{u}\right),$$

where *u* denotes either of the target or background regions.
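These estimators are just the per-region sample moments, so they can be sketched directly in NumPy (function and variable names are ours):

```python
import numpy as np

def ml_estimates(alpha, phi, mask):
    """ML (1/N) sample means, standard deviations, and correlation of
    (alpha, phi) over one region; `mask` is the region indicator
    (w == 1 for the target, w == 0 for the background)."""
    a, p = alpha[mask], phi[mask]
    mu_a, mu_p = a.mean(), p.mean()
    sig_a = np.sqrt(((a - mu_a) ** 2).mean())
    sig_p = np.sqrt(((p - mu_p) ** 2).mean())
    rho = ((a - mu_a) * (p - mu_p)).mean() / (sig_a * sig_p)
    return mu_a, mu_p, sig_a, sig_p, rho
```

Note that the 1/N normalization of the variances (rather than 1/(N-1)) cancels in the correlation coefficient, which therefore matches the usual Pearson correlation.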

#### 3.3. Optimization criterion

Substituting the maximum likelihood estimates of Eq. (8) into (7), one can simplify and rewrite the log-likelihood function as:

$$\hat{l}\left(\mathbf{w}\right)=-N\mathrm{log}\left(\sqrt{2\pi}\right)-{N}_{t}\left(\mathbf{w}\right)\mathrm{log}\left({\hat{\sigma}}_{\phi}^{t}{\hat{\sigma}}_{\alpha}^{t}\sqrt{1-{\hat{\rho}}_{t}^{2}}\right)-{N}_{b}\left(\mathbf{w}\right)\mathrm{log}\left({\hat{\sigma}}_{\phi}^{b}{\hat{\sigma}}_{\alpha}^{b}\sqrt{1-{\hat{\rho}}_{b}^{2}}\right)-N.$$

Since the first and last terms in Eq. (9) are independent of the choice of the binary window function (**w**), maximization of (9) is equivalent to minimization of the following criterion with respect to **w**:

$$J\left(\mathbf{w},\mathbf{s}\right)={N}_{t}\left(\mathbf{w}\right)\mathrm{log}\left({\hat{\sigma}}_{\phi}^{t}{\hat{\sigma}}_{\alpha}^{t}\sqrt{1-{\hat{\rho}}_{t}^{2}}\right)+{N}_{b}\left(\mathbf{w}\right)\mathrm{log}\left({\hat{\sigma}}_{\phi}^{b}{\hat{\sigma}}_{\alpha}^{b}\sqrt{1-{\hat{\rho}}_{b}^{2}}\right).$$
Thus, the hypothesis *H*_{w} that minimizes the above criterion yields the optimal binary window and hence the optimal segmentation of the target in a complex image. Note that Eq. (10) depends on parameter estimates over the jointly distributed magnitude and phase random variables inside and outside the window function. Moreover, to show the analogy with energy-based snake terminology [24], Eq. (10) can be interpreted as the *snake energy*, since its minimization guides the snake contour to enclose the target.
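A direct (non-incremental) numerical evaluation of this criterion can be sketched as follows: each region contributes its pixel count times the log of its estimated spread, computed from the ML moments. The function name and test data are ours:

```python
import numpy as np

def snake_energy(alpha, phi, w):
    """Criterion of Eq. (10): sum over target (w == 1) and background
    (w == 0) of N_u(w) * log(sigma_phi * sigma_alpha * sqrt(1 - rho^2)),
    with ML (1/N) estimates computed per region."""
    J = 0.0
    for mask in (w.astype(bool), ~w.astype(bool)):
        a, p = alpha[mask], phi[mask]
        sig_a, sig_p = a.std(), p.std()        # ML standard deviations
        rho = ((a - a.mean()) * (p - p.mean())).mean() / (sig_a * sig_p)
        J += a.size * np.log(sig_a * sig_p * np.sqrt(1.0 - rho**2))
    return J
```

A window aligned with the true support yields small within-region spreads, and hence a lower energy than a misaligned window that mixes the two populations.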

## 4. Stochastic optimization algorithm

The optimization criterion derived in Eq. (10) defines a surface over 2*N*_{p} dimensions (with *N*_{p} denoting the number of snake polygon nodes) and is clearly nonlinear with respect to **w**. Minimization of such a criterion can be accomplished by several stochastic nonlinear optimization methods. In this paper, we use a simple yet effective method to carry out the minimization of the criterion in Eq. (10) [20–22].

Let us approximate the closed snake contour by a polygon of *l* nodes (*l*: constant). With this polygonal representation, the following stochastic algorithm iteratively seeks a better binary window (**w**) that decreases the optimization criterion. The algorithm carries out the following steps:

1. Randomly select a node *p*_{i}, *i*∈[1…*l*], from the current snake node set.

2. Consider a random direction displacement with length *d* (*d*: constant) for the selected node, rebuild the binary window with the new node set and denote it as ${\mathbf{w}}_{\mathrm{k}+1}^{\prime}$.

3. Compute *J*(${\mathbf{w}}_{k+1}^{\prime}$, **s**) using the slightly deformed new window. If *J*(${\mathbf{w}}_{k+1}^{\prime}$, **s**)<*J*(**w**_{k}, **s**), let **w**_{k+1}=${\mathbf{w}}_{k+1}^{\prime}$ (i.e., accept the deformation). Otherwise, discard the deformation and start the steps over.

This simple three-step algorithm forces the cost function to decrease, but it clearly does not guarantee global optimization, since the initial snake nodes are set manually. The process terminates when there is no further progress in the minimization of *J*(**w**, **s**). The algorithm evidently needs proper initialization, as its convergence depends on the starting contour. Nevertheless, owing to its region-based statistical strategy, this approach is still less sensitive to initialization than the snake active contours approach, as illustrated in the results section.
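To make the three steps concrete, here is a deliberately simplified, hypothetical 1-D analogue (consistent with the paper's one-dimensional image model): the "contour" is reduced to the two endpoints of an interval, the criterion is a single-variable version of the snake energy, and a move is kept only if the energy decreases. All names and parameters are our own:

```python
import numpy as np

def energy_1d(signal, left, right):
    """Single-variable snake energy: N_u * log(sigma_u) summed over the
    inside [left, right) and outside regions."""
    inside = signal[left:right]
    outside = np.concatenate([signal[:left], signal[right:]])
    val = 0.0
    for region in (inside, outside):
        if region.size > 1:
            val += region.size * np.log(region.std() + 1e-12)
    return val

def stochastic_descent(signal, left, right, iters=2000, step=2, seed=0):
    """Steps 1-3 of the algorithm: pick a random node (endpoint), try a
    random displacement of fixed length, accept only if energy decreases."""
    rng = np.random.default_rng(seed)
    best = energy_1d(signal, left, right)
    for _ in range(iters):
        node = rng.integers(2)                     # step 1: random node
        d = step * rng.choice([-1, 1])             # step 2: random direction
        l2, r2 = (left + d, right) if node == 0 else (left, right + d)
        if not 0 < l2 < r2 < signal.size:
            continue                               # reject degenerate windows
        trial = energy_1d(signal, l2, r2)
        if trial < best:                           # step 3: keep improvements
            left, right, best = l2, r2, trial
    return left, right
```

With a well-separated target, the interval endpoints descend to the true support boundaries even when the initialization is far from them, illustrating the reduced sensitivity to initialization.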

#### 4.1. Parameter estimation over large images

Estimating the distribution parameters [see Eq. (8)] in every iteration of the stochastic algorithm is computationally intense. In this section, we propose a method to reduce this computational burden by employing information from the previous iteration, which speeds up this time-consuming procedure on large images. We exploit the fact that moving a single node in each step introduces a quadrilateral deformation between the current window candidate ${\mathbf{w}}_{k+1}^{\prime}$ and the previous window **w**_{k} (see Fig. 2). We show that it is possible to evaluate *J*(${\mathbf{w}}_{k+1}^{\prime}$, **s**) using the statistical information from the *k*th iteration along with the pixel values inside the aforementioned quadrilateral region.

Assume that node ${p}_{i}^{k}$ has been selected in the node selection step and moved randomly to ${p}_{i}^{k+1}$ (see Fig. 2) in such a way that region Ω_{d} is joined to Ω_{a} (i.e., it increases the area of Ω_{a}). Now let the combination Ω_{a}+Ω_{d} define the new window function ${\mathbf{w}}_{k+1}^{\prime}$, for which we wish to evaluate the optimization criterion *J*(${\mathbf{w}}_{k+1}^{\prime}$, **s**). To calculate the new cost function, one can use the statistical information from the previous iteration together with the pixel information of the newly added Ω_{d}, as follows.

For the sake of clarity, the mathematical manipulation is described here for the single-variable situation; generalizing the procedure to the bivariate case is straightforward. In the following, superscripts denote the iteration number and subscripts indicate the region. Using the above assumptions, we wish to find ${\hat{\sigma}}_{a}^{k+1}$ and ${\hat{\sigma}}_{b}^{k+1}$.

It can be shown that the statistics from iteration *k* and the pixel values in region Ω_{d} suffice to find ${\hat{\sigma}}_{a,b}^{k+1}$:

$${\left({\hat{\sigma}}_{a}^{k+1}\right)}^{2}=\frac{1}{{N}_{{\Omega}_{a}^{k}+{\Omega}_{d}}}\sum _{{s}_{i}\in {\Omega}_{a}^{k}+{\Omega}_{d}}{\left({s}_{i}-{\hat{\mu}}_{a}^{k+1}\right)}^{2}$$

$$=\frac{1}{{N}_{{\Omega}_{a}^{k}+{\Omega}_{d}}}\left\{\left({N}_{{\Omega}_{a}^{k}+{\Omega}_{d}}.{\left(\Delta \mu \right)}^{2}\right)+\sum _{{s}_{i}\in {\Omega}_{a}^{k}}\left[{\left({s}_{i}-{\hat{\mu}}_{a}^{k}\right)}^{2}+2\left(\Delta \mu \right)\left({s}_{i}-{\hat{\mu}}_{a}^{k}\right)\right]+\sum _{{s}_{i}\in {\Omega}_{d}}\left[{\left({s}_{i}-{\hat{\mu}}_{a}^{k}\right)}^{2}+2\left(\Delta \mu \right)\left({s}_{i}-{\hat{\mu}}_{a}^{k}\right)\right]\right\}$$

$$=\frac{1}{{N}_{{\Omega}_{a}^{k}+{\Omega}_{d}}}\left\{\left({N}_{{\Omega}_{a}^{k}+{\Omega}_{d}}.{\left(\Delta \mu \right)}^{2}\right)+{N}_{{\Omega}_{a}^{k}}.{\left({\hat{\sigma}}_{a}^{k}\right)}^{2}+\sum _{{s}_{i}\in {\Omega}_{d}}{\left({s}_{i}-{\hat{\mu}}_{a}^{k}\right)}^{2}+2\left(\Delta \mu \right)\sum _{{s}_{i}\in {\Omega}_{d}}\left({s}_{i}-{\hat{\mu}}_{a}^{k}\right)\right\},$$

where

$$\Delta \mu ={\hat{\mu}}_{a}^{k}-{\hat{\mu}}_{a}^{k+1}\phantom{\rule{.5em}{0ex}},\phantom{\rule{.5em}{0ex}}{\hat{\mu}}_{a}^{k+1}=\frac{{N}_{{\Omega}_{a}^{k}}.{\hat{\mu}}_{a}^{k}+\sum _{{s}_{i}\in {\Omega}_{d}}{s}_{i}}{{N}_{{\Omega}_{a}^{k}+{\Omega}_{d}}}.$$

Since Ω_{d} is added to Ω_{a}, it must be subtracted from Ω_{b}. Following the same steps as in Eq. (11), the corresponding result for the background region is:

$${\left({\hat{\sigma}}_{b}^{k+1}\right)}^{2}=\frac{1}{{N}_{{\Omega}_{b}^{k}-{\Omega}_{d}}}\left\{\left({N}_{{\Omega}_{b}^{k}-{\Omega}_{d}}.{\left(\Delta {\mu}_{b}\right)}^{2}\right)+{N}_{{\Omega}_{b}^{k}}.{\left({\hat{\sigma}}_{b}^{k}\right)}^{2}-\sum _{{s}_{i}\in {\Omega}_{d}}{\left({s}_{i}-{\hat{\mu}}_{b}^{k}\right)}^{2}-2\left(\Delta {\mu}_{b}\right)\sum _{{s}_{i}\in {\Omega}_{d}}\left({s}_{i}-{\hat{\mu}}_{b}^{k}\right)\right\},$$

with

$$\Delta {\mu}_{b}={\hat{\mu}}_{b}^{k}-{\hat{\mu}}_{b}^{k+1}\phantom{\rule{.5em}{0ex}},\phantom{\rule{.5em}{0ex}}{\hat{\mu}}_{b}^{k+1}=\frac{{N}_{{\Omega}_{b}^{k}}.{\hat{\mu}}_{b}^{k}-\sum _{{s}_{i}\in {\Omega}_{d}}{s}_{i}}{{N}_{{\Omega}_{b}^{k}-{\Omega}_{d}}}.$$

It is likewise straightforward to derive the corresponding expressions when the selected node moves such that Ω_{d} is added to the background region and subtracted from the target region. As shown above, the new statistical parameters can be derived as a function of the *k*th parameter vector $\widehat{\Theta}$^{k} and the pixel values in region Ω_{d}.
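The single-variable update can be verified numerically. The sketch below (our own names) absorbs the pixels of Ω_{d} into region Ω_{a} using only the region's previous mean, variance, and size, with no rescan of Ω_{a}:

```python
import numpy as np

def absorb_region(mu_a, var_a, n_a, d):
    """Update (mean, variance, size) of region Omega_a after absorbing the
    pixel values d of Omega_d, without rescanning Omega_a."""
    n = n_a + d.size
    mu_new = (n_a * mu_a + d.sum()) / n          # pooled mean
    dmu = mu_a - mu_new                          # the Delta-mu term
    var_new = (n * dmu**2
               + n_a * var_a
               + ((d - mu_a) ** 2).sum()
               + 2.0 * dmu * (d - mu_a).sum()) / n
    return mu_new, var_new, n
```

The same bookkeeping, run with subtraction instead of addition, handles the background region that loses Ω_{d}.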

#### 4.2. Adaptive node selection and direction inertia

In every iteration of the original stochastic method, a node is selected and moved in a uniform random fashion. However, for most arbitrary initializations of the snake contour, enclosing the target appropriately requires some nodes to be moved more often than others, and for an individual node some movement directions are more significant than others. This is particularly true when the snake has to creep along the structure of an intricate target such as the *sphacelaria* alga (Fig. 4). A uniform node selection/movement scheme is therefore far from optimal and slows down convergence substantially.

Since we have no prior knowledge of the target shape, we have no information about which nodes or directions are more significant. However, as the snake evolves, the significant nodes can be identified by tracking the number of successful movements of each node relative to the total number of successful deformations. We exploit this fact in the following adaptive node selection scheme. Let all nodes be selected with equal probability in the first iteration, that is, with probability 1/*l* each. For every successful deformation of the snake, the selection probability vector **P**_{k} is updated by the following rule:

In the same vein, it is beneficial for each node to have some sense of which direction to move toward. This suggests assigning each node a *memory* for its movement direction. The role of this memory is to set the probability of each movement direction according to its effectiveness in past trials: for every node, the direction most effective in minimizing *J*(**w**, **s**) is given the highest chance of selection the next time the algorithm visits that node.

Practically, the above can be implemented by initializing equiprobable directions for all nodes. Each time a successful deformation happens in discrete direction *j* of node *p* (${\alpha}_{p}^{j}$), the direction probability vector **Q**_{k} is updated using the following rule:

Note that all directions still retain a chance of being selected. Thus, even if a node has a strong inertia toward one direction and some other direction later becomes a better choice for that node, the algorithm adapts and gradually shifts the inertia toward the best direction. With this scheme, the number of successful modifications per run increases, and the snake segments the target faster.
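The paper's exact update rules are not reproduced in this text, so the following increment-and-renormalize scheme is our own plausible realization of the described behavior: every node and direction keeps a nonzero selection probability, and successful moves are rewarded.

```python
import numpy as np

def uniform_probs(l):
    """Equiprobable selection over l nodes (or directions) at start-up."""
    return np.full(l, 1.0 / l)

def reward(probs, winner, gain=0.1):
    """After a successful deformation, boost the winning node/direction and
    renormalize. Losers keep a nonzero probability, so the scheme can
    re-adapt if another choice later becomes more effective."""
    out = probs.copy()
    out[winner] += gain
    return out / out.sum()
```

The `gain` parameter is an assumption of ours; it trades off how quickly the selection distribution concentrates on the recently successful choices against the ability to re-adapt.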

## 5. Experimental results

In this section, experimental results on the application of bivariate jointly distributed region snakes are presented, using digitally reconstructed images of several microorganisms from the SEOL holographic microscopy experiment [see Fig. 1(a)] [14]. As discussed earlier, the bivariate region snake incorporates the magnitude and phase information simultaneously; since the holographic images are complex, only the magnitude images are shown here.

The first image, in Fig. 3(a), shows a *diatom* alga over which the snake is initialized with 4 nodes. Although the initial contour is completely different from the target boundaries, the bivariate region snake is able to capture the object after approximately 1500 iterations, ending up with 24 nodes [Fig. 3(b)]. As can be seen in Fig. 3(c), the optimization trace levels off and shows very slight progress after the 1500th iteration.

The next experiment illustrates segmentation of a *sphacelaria* alga (Fig. 4). This alga has a branch-like structure; the initialization captures only a small portion of the living organism, and through the iterations the snake creeps outward to capture its whole body.

In this case, the image is intentionally reconstructed out of focus from the SEOL hologram, so it appears blurred and lacks well-defined edges, in order to demonstrate the robustness of the proposed algorithm for holographic images. Even so, the bivariate region snake shows promising segmentation results.

## 6. Conclusion

A novel bivariate jointly distributed region snake in a statistical framework has been proposed for segmentation of digitally reconstructed SEOL holographic images. The advantage of holographic images is that they provide both phase and magnitude information from the scene; this additional information aids in more reliable segmentation. The proposed method extends the region snake concept [20–23] to images with complex pixel values by exploiting the joint, correlated magnitude-phase distribution, and offers an optimization criterion for snake-based segmentation. Moreover, the stochastic algorithm normally used to carry out the optimization process of region snakes is modified with boosting techniques to increase robustness and convergence rate and to ensure proper segmentation. We believe this technique can be used for any 3D digital holographic image with complex pixel values. Future work may include (but is not limited to) estimation of a reliable initial snake contour through other boundary detection schemes. The experimental results presented for biological samples demonstrate segmentation of 3D holographic images.

## Acknowledgments

We wish to thank Inkyu Moon for his assistance with optical experiments.

## References and Links

**1. **A. K. Jain, *Fundamentals of digital image processing*, (Prentice Hall, 1989).

**2. **W. K. Pratt, *Digital Image Processing*, (Wiley, 2001). [CrossRef]

**3. **R. M. Haralick and L. G. Shapiro, “Image segmentation techniques,” Computer Vision, Graphics, and Image Processing **29**, 100–132 (1985). [CrossRef]

**4. **R. O. Duda, P. E. Hart, and D. G. Stork, *Pattern classification*, 2nd ed. (Wiley Interscience, New York, 2000).

**5. **J. W. Goodman and R. W. Lawrence, “Digital image formation from electronically detected holograms,” App. Phys. Lett. **11**, 77–79 (1967). [CrossRef]

**6. **J. H. Bruning, D. R. Herriott, J. E. Gallagher, D. P. Rosenfeld, A. D. White, and D. J. Brangaccio, “Digital wavefront measuring interferometer for testing optical surfaces and lenses” Appl. Opt. **13**, 2693–2703 (1974). [CrossRef]

**7. **U. Schnars and W. P. O. Juptner, “Direct recording of holograms by a CCD target and numerical reconstruction,” Appl. Opt. **33**, 179–181 (1994). [CrossRef] [PubMed]

**8. **T. Nomura, A. Okazaki, M. Kameda, Y. Morimoto, and B. Javidi, “Image reconstruction from compressed encrypted digital hologram,” Opt. Eng. **44** (2005). [CrossRef]

**9. **T. J. Naughton, Y. Frauel, B. Javidi, and E. Tajahuerce, “Compression of digital holograms for three-dimensional object reconstruction and recognition,” Appl. Opt. **41**, 4124–4132 (2002). [CrossRef] [PubMed]

**10. **T. J. Naughton, A. E. Shortt, and B. Javidi, “Nonuniform quantization compression of digital holograms,” Opt. Lett. (2006) (submitted).

**11. **O. Matoba, T. J. Naughton, Y. Frauel, N. Bertaux, and B. Javidi, “Real-time three-dimensional object reconstruction by use of a phase-encoded digital hologram,” Appl. Opt. **41**, 6187–6192 (2002). [CrossRef] [PubMed]

**12. **B. Javidi and D. Kim, “Three-dimensional-object recognition by use of single-exposure on-axis digital holography,” Opt. Lett. **30**, 236–238 (2005). [CrossRef] [PubMed]

**13. **D. Kim and B. Javidi, “Distortion-tolerant 3-D object recognition by using single exposure on-axis digital holography,” Opt. Express **12**, 5539–5548 (2005). [CrossRef]

**14. **B. Javidi, I. Moon, S. Yeom, and E. Carapezza, “Three-dimensional imaging and recognition of microorganism using single-exposure on-line (SEOL) digital holography,” Opt. Express **13**, 4492–4506 (2005). [CrossRef] [PubMed]

**15. **B. Javidi, S. Yeom, I. Moon, and M. Daneshpanah, “Real-time automated 3D sensing, detection, and recognition of dynamic biological micro-organic events,” Opt. Express **14**, 3806–3829 (2006). [CrossRef] [PubMed]

**16. **I. Moon and B. Javidi, “Shape-tolerant three-dimensional recognition of biological microorganisms using digital holography,” Opt. Express **13**, 9612–9622 (2005). [CrossRef] [PubMed]

**17. **T. Zhang and I. Yamaguchi, “Three-dimensional microscopy with phase-shifting digital holography,” Opt. Lett. **23**, 1221–1223 (1998). [CrossRef]

**18. **T. Kreis, ed., *Handbook of Holographic Interferometry*, (Wiley, VCH, 2005).

**19. **H. J. W. Goodman, *Introduction to Fourier Optics*, 2nd ed. (McGraw Hill, New York, 1996).

**20. **O. Germain and P. Refregier, “Optimal snake-based segmentation of a random luminance target on a spatially disjoint background,” Opt. Lett. **21**, 1845–1847 (1996). [CrossRef] [PubMed]

**21. **C. Chesnaud, V. Page, and P. Refregier, “Improvement in robustness of the statistically independent region snake-based segmentation method of target-shape tracking,” Opt. Lett. **23**, 488–490 (1998). [CrossRef]

**22. **C. Chesnaud, P. Refregier, and V. Boulet, “Statistical region snake-based segmentation adapted to different physical noise models,” IEEE Trans. on Pattern Analysis and Machine Intelligence **21**, 1145–1157 (1999). [CrossRef]

**23. **O. Germain and P. Refregier, “Edge detection and location in SAR images: Contribution of statistical deformable models,” in *Image Recognition and Classification: Algorithms, Systems, and Applications*,
B. Javidi, ed. (Marcel Dekker, New York, 2002), Chap. 4.

**24. **M. Kass, A. Witkin, and D. Terzopoulus, “Snakes: Active contour models,” Int. J. Comput. Vis. **1**, 321–331 (1987). [CrossRef]

**25. **C. Xu and J. L. Prince, “Snakes, shapes, and gradient vector flow,” IEEE Trans. Image Process. **7**, 359–369 (1998). [CrossRef]

**26. **L. D. Cohen, “On active contour models and balloons,” CVGIP: Image Understanding **53**, 211–218 (1991). [CrossRef]

**27. **C. Kervrann and F. Heitz, “A hierarchical statistical framework for the segmentation of deformable objects in image sequences,” in Proceedings of *IEEE Conf. on Computer Vision and Pattern Recognition*, (Institute of Electrical and Electronics Engineers, Seattle, 1994), pp. 724–728.

**28. **R. Deriche, “Using Canny’s criteria to derive a recursively implemented optimal edge detector,” Int. J. Comp.Vis. **1**, 167–187 (1987). [CrossRef]

**29. **B. Javidi and J. Wang, “Limitations of the classic definition of the signal-to-noise ratio in matched filter based optical pattern recognition,” Appl. Opt. **31**, 6826–6829 (1992). [CrossRef] [PubMed]

**30. **B. Javidi and J. Wang, “Optimum distortion invariant filters for detecting a noisy distorted target in background noise,” J. Opt. Soc. Am. A **12**, 2604–2614 (1995). [CrossRef]

**31. **L. Vincent and P. Soille, “Watersheds in digital spaces: an efficient algorithm based on immersion simulations,” IEEE Trans. on Pattern Analysis and Machine Intelligence **13**, 583–598 (1991). [CrossRef]