## Abstract

We address three-dimensional (3D) visualization and recognition of microorganisms using single-exposure on-line (SEOL) digital holography. A coherent 3D microscope-based Mach-Zehnder interferometer records a single on-line Fresnel digital hologram of microorganisms. Three-dimensional microscopic images are reconstructed numerically at different depths by an inverse Fresnel transformation. For recognition, microbiological objects are segmented by processing the background diffraction field. Gabor-based wavelets extract feature vectors with multi-oriented and multi-scaled Gabor kernels. We apply a rigid graph matching (RGM) algorithm to localize predefined shape features of biological samples. Preliminary experimental and simulation results using sphacelaria alga and tribonema aequale alga microorganisms are presented. To the best of our knowledge, this is the first report on 3D visualization and recognition of microorganisms using on-line digital holography with single-exposure.

© 2005 Optical Society of America

## 1. Introduction

Optical information systems have proven to be very useful in the design of two-dimensional (2D) pattern recognition systems [1–6]. Recently, interest in three-dimensional (3D) optical information systems has increased because of their vast potential in applications such as object recognition, image encryption, and 3D display [4,5]. Digital holography is attractive for visualization and acquisition of 3D information for these various applications [7–13].

In this paper, we address real-time 3D imaging and shape-based recognition of microorganisms. The automatic recognition of living organisms is accompanied by various challenges. First of all, they are not rigid objects; they vary in size and shape, and they can move, grow, and reproduce depending on growth conditions [14]. In particular, bacteria and algae are very small and have relatively simple morphological traits for image intensity-based recognition and identification. They may occur as single cells or form associations of various complexities according to environmental conditions. Therefore, special consideration of the morphological and physiological characteristics of algae and bacteria is needed to enhance the recognition system.

The applications of 3D imaging and recognition systems are very broad. First of all, it may be used to diagnose an infection caused by specific bacteria or detect biological weapons for security and defense. Identification and quantification of microorganisms are important in wastewater treatment. Monitoring of plankton in the ocean may be another application of the microorganism imaging and recognition system.

Previously, various studies have sought to recognize specific 2D shapes of microorganisms based on image intensity. The recognition and identification of tuberculosis bacteria [15] and vibrio cholera [16] have been studied based on their colors and 2D shapes. In [17], bacteria in a wastewater treatment plant are identified by morphological descriptors. The aggregation of streptomyces is classified into different phases by measuring the aggregation size and reaction time [18]. In [19], plankton recognition is performed using pre-selected geometrical features. More research on image analysis and recognition of microorganisms can be found in [20].

Our research focuses on a new approach for real-time 3D visualization, monitoring, and recognition of microorganisms using single-exposure on-line (SEOL) digital holography. Off-axis digital holography has been extensively studied in recent years because it requires only a single exposure to separate the original image from the undesired DC and conjugate images. However, off-axis digital holography has a number of drawbacks. Only a fraction of the space-bandwidth product of the photo sensor is used to reconstruct the 3D image, which substantially reduces the quality of visualization, compromises resolution, and in turn reduces the accuracy of object recognition. In addition, the angle between the object beam and the reference beam during holographic synthesis is a function of the reconstructed image size, which creates problems in monitoring dynamic scenes containing objects with varying dimensions. Phase-shifting on-line digital holography has been proposed to avoid these problems. This technique requires multiple interferogram recordings with phase shifts in the reference beam; the multiple exposures are used to remove the DC and conjugate images from the interferogram, yielding the Fresnel diffraction field of the 3D object. However, this procedure is not suitable for dynamic events, such as moving 3D microorganisms, and is sensitive to external noise factors, such as environmental vibration and fluctuation. Recently, SEOL digital holography for 3D object recognition was presented to solve these problems associated with phase-shifting digital holography [21, 22]. SEOL holography can be used for dynamic events because it requires only a single exposure.
An additional benefit of SEOL digital holography for monitoring a dynamic, time-varying 3D scene is that various slices of the 3D microorganism and the 3D scene can be digitally reconstructed and numerically focused without the mechanical focusing required by conventional microscopy. Another important benefit of the proposed technique is that microorganism 3D images are recorded in both magnitude and phase, which may provide better classification of algae or bacteria.

In this paper, we visualize and recognize two filamentous microorganisms (sphacelaria alga and tribonema aequale alga) using SEOL digital holography. Assuming that the microorganisms are individually segmented or they are sparsely aggregated, we identify two different microbiological objects with their morphological traits.

Our system is composed of several stages, as shown in Fig. 1. In the first stage, SEOL digital holography performs 3D imaging of micro objects. Utilizing a Mach–Zehnder interferometer, the system opto-electronically records the complex amplitude distribution generated by Fresnel diffraction at a single plane. The 3D information of the wave transmitted through the microorganisms can be reconstructed from the hologram at an arbitrary depth plane. In the next stage, reconstructed images are resized and objects of interest are segmented; we segment foreground objects using histogram analysis. Gabor-based wavelets then extract salient features by decomposing the images in the spatial frequency domain [23, 24].

Rigid graph matching (RGM) is a feature-matching technique for identifying reference shapes. During RGM, we search for shapes similar to those of the reference data by measuring the similarity and difference between feature vectors. The feature vectors are defined at the nodes of two identical graphs placed on the reference and input images, respectively. RGM combined with Gabor-based wavelets has proven to be a robust template-matching technique that is invariant to shift, rotation, and distortion [25].

In our database, two reference graphs are predetermined to represent unique shape features of the microorganisms. After the graph matching, the number of detections and the values of the feature vectors can be used in further training processes with a pool of training data [26]. In this paper, we present experimental and simulation results as a preliminary step toward a generic, human-aided 3D image-based recognition system for microorganisms.

The proposed work is beneficial in a number of ways: 1) the microorganisms are analyzed in 3D topology and coordinates; 2) the single-exposure on-line computational holographic sensor allows optimization of the space-bandwidth product for detection as well as robustness to environmental variations during the sensing process; 3) multiple exposures are not required, so moving bacteria can be sensed within the time constant of the detector; 4) the complex amplitude of reconstructed holographic images is decomposed in the spatial frequency domain by Gabor-based wavelets to extract distinguishable features; 5) a pattern-matching technique measures the similarity of 3D geometrical shapes between a reference microorganism and an unknown sample.

In Section 2, we present principles of SEOL digital holography and its advantages. The segmentation and Gabor-based wavelets are presented in Sections 3 and 4, respectively. The graph matching technique is described in Section 5. In Section 6, experimental and simulation results are demonstrated. The conclusions follow in Section 7.

## 2. Single exposure on-line (SEOL) digital holography

In the following, we present the SEOL technique and its advantages over conventional methods. The 3D optical monitoring system using the SEOL digital holographic recording setup is depicted in Fig. 2. Polarized light from an Argon laser with a center wavelength (λ) of 514.5 *nm* is expanded by use of a spatial filter and a collimating lens to provide spatial coherence. A beam splitter divides the expanded beam into object and reference beams. The object beam illuminates the microorganism sample, and the microscope objective produces a magnified image positioned at the image plane of the microscope [see Fig. 3]. The reference beam and the light diffracted by the microorganism sample form an on-axis interference pattern, which is recorded by the CCD camera. Our system uses no optical components for the phase retardation in the reference beam that the phase-shifting digital holography technique requires; also, only a single exposure is recorded. In the following, we describe both on-axis phase-shifting digital holography and SEOL.

We start by describing on-axis phase-shifting digital holography [12]. The hologram recorded on the CCD can be represented as follows:

$${H}_{p}(x,y)={A}_{H}{(x,y)}^{2}+{{A}_{R}}^{2}+2{A}_{H}(x,y){A}_{R}\cos\left[{\Phi}_{H}(x,y)-{\varphi}_{R}+\Delta {\varphi}_{p}\right],$$

where *A*_{H}(*x*,*y*) and Φ_{H}(*x*,*y*) are the amplitude and phase, respectively, of the Fresnel complex-amplitude distribution of the micro objects at the recording plane generated by the object beam; *A*_{R} is the amplitude of the reference distribution; *φ*_{R} denotes the constant phase of the reference beam; and Δ*φ*_{p} is the phase shift, where the subscript *p* is an integer from 1 to 4, denoting the four possible phase shifts required for on-axis phase-shifting digital holography. The desired biological object Fresnel wave function, i.e., *A*_{H}(*x*,*y*) and Φ_{H}(*x*,*y*), can be obtained by use of the four interference patterns with phase shifts Δ*φ*_{p}=0, *π*/2, *π*, and 3*π*/2.

In this paper, phase-shifting on-axis digital holography with double exposure and SEOL digital holography are implemented to obtain experimental results for the visualization and recognition of 3D biological objects. The SEOL results are compared with multiple-exposure phase-shifting digital holographic results. The double-exposure method requires 1) two interference patterns that have a *π*/2 phase difference, 2) information about the reference beam, and 3) information about the diffracted biological object beam intensity. The complex amplitude of the microscopic 3D biological object wave at the hologram plane obtained from the double-exposure method is represented by:

$${U}_{h}(x,y)=\left\{{H}_{1}(x,y)-{A}_{H}{(x,y)}^{2}-{{A}_{R}}^{2}\right\}/\left(2{A}_{R}\right)+j\left\{{H}_{2}(x,y)-{A}_{H}{(x,y)}^{2}-{{A}_{R}}^{2}\right\}/\left(-2{A}_{R}\right),$$

where *H*_{1}(*x*,*y*) and *H*_{2}(*x*,*y*) can be obtained from Eq. (1). We assume that the recording conditions for the two holograms are uniform and that the reference beam is a plane wave. The former assumption requires a stable recording environment and stationary objects.

SEOL digital holography is suitable for recording dynamic fast events [21]. It needs to record only one hologram to gain information about the complex amplitude of the 3D biological object. The information about the wave front of a 3D biological object contained in the SEOL digital hologram is represented by the following term:

$${U}_{h\text{'}}(x,y)\cong {H}_{1}(x,y)-{A}_{H}{(x,y)}^{2}-{{A}_{R}}^{2}=2{A}_{H}(x,y){A}_{R}\cos\left[{\Phi}_{H}(x,y)-{\varphi}_{R}\right].$$

In Eq. (3), *H*_{1}(*x*,*y*) can be obtained from Eq. (1). To remove the DC terms in Eq. (3), the reference beam intensity |*A*_{R}|^{2} is removed by a one-time measurement in the experiment, and the object beam intensity |*A*_{H}(*x*,*y*)|^{2} can be considerably reduced by use of signal processing (for example, an averaging technique). Even though the SEOL digital hologram originally contains a conjugate image, we can utilize the conjugate image in the interferogram in recognition experiments, since it carries information about the biological object. Thus, the 3D biological object wave function *U*_{h′}(*x*,*y*) in Eq. (3), including a conjugate component, can be obtained by use of SEOL digital holography. In this paper, we show that *U*_{h′}(*x*,*y*) in Eq. (3), obtained from a SEOL hologram, can be used for 3D biological object recognition and 3D image formation. The results are compared with those of *U*_{h}(*x*,*y*) in Eq. (2), obtained by on-line phase-shifting holography, which requires multiple recordings. The microscopic 3D biological object can be restored by Fresnel propagation of *U*_{h′}(*x*,*y*), the biological object wave information at the hologram plane. We can numerically reconstruct 3D section images on any plane perpendicular to the optical axis by computing the following Fresnel transformation with a 2D FFT algorithm:

$${U}_{o\text{'}}(m\text{'},n\text{'})=\sum _{m=1}^{{N}_{x}}\sum _{n=1}^{{N}_{y}}{U}_{h\text{'}}(m,n)\mathrm{exp}\left[-j\frac{\pi}{\lambda d}\left(\Delta {x}^{2}{m}^{2}+\Delta {y}^{2}{n}^{2}\right)\right]\mathrm{exp}\left[j2\pi \left(\frac{mm\text{'}}{{N}_{x}}+\frac{nn\text{'}}{{N}_{y}}\right)\right],$$

where *U*_{o′}(*m′*,*n′*) and (Δ*X*,Δ*Y*) are the reconstructed complex amplitude distribution and the resolution at the reconstruction plane in the biological object beam, respectively; *U*_{h′}(*m*,*n*) and (Δ*x*,Δ*y*) are the object wave function (including a conjugate component) and the resolution at the hologram plane, respectively; and *d* represents the distance between the image plane and the hologram plane.

## 3. Segmentation

In the following, we present the segmentation of digitally reconstructed holographic images. Since the coherent light is scattered by the semi-transparent objects, the intensity in the object region becomes lower than that of the background diffraction field. Therefore, for recognition, it is more efficient to filter out the unnecessary background from computationally reconstructed holographic images.

In this paper, the threshold for the segmentation is obtained by histogram analysis. The segmented image (*o*) is defined as:

$$o(m,n)=\begin{cases}o\text{'}(m,n), & \text{if } o\text{'}(m,n)\le {I}_{s}\\ 0, & \text{otherwise,}\end{cases}$$

where *o′*(*m*,*n*) is the intensity of the holographic image; and *m* and *n* are 2D discrete coordinates in the *x* and *y* directions, respectively. The threshold *I*_{s} is decided from the histogram analysis and the maximum intensity rate:

$${I}_{s}={r}_{\max}\,{\tau}_{{\kappa}_{\min}},$$

where *r*_{max} is the maximum intensity rate of coherent light after scattering by the microorganisms. The threshold ${\tau}_{{\kappa}_{min}}$ is the minimum value satisfying the following equation:

$$\frac{1}{{N}_{T}}\sum _{i=1}^{{\kappa}_{\min}}h({\tau}_{i})\ge {P}_{s},$$

where *P*_{s} is a predetermined probability; *N*_{T} is the number of pixels; *h*(*τ*_{i}) is the histogram, i.e., the number of pixels whose intensity lies between *τ*_{i-1} and *τ*_{i}; *τ*_{i} is the *i*-th quantized intensity level; and *κ*_{min} is the minimum number of intensity levels that satisfies Eq. (7). For the experiments, the total number of intensity levels is set at 256. *P*_{s} and *r*_{max} can be decided according to prior knowledge of the spatial distribution and transmittance of the microorganisms.
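A minimal sketch of this segmentation, assuming the reading that the threshold combines the histogram level *τ*_{κmin} and the rate *r*_{max} as *I*_{s}=*r*_{max}*τ*_{κmin} (function and parameter names are ours):

```python
import numpy as np

def segment(o_prime, P_s=0.25, r_max=0.45, levels=256):
    """Histogram-based segmentation (sketch of Eqs. (5)-(7)): keep only the
    darker object pixels whose intensity falls below the threshold I_s."""
    hist, edges = np.histogram(o_prime, bins=levels,
                               range=(o_prime.min(), o_prime.max()))
    cum = np.cumsum(hist) / o_prime.size
    kappa_min = int(np.searchsorted(cum, P_s))     # smallest level meeting Eq. (7)
    tau = edges[kappa_min + 1]                     # tau_{kappa_min}
    I_s = r_max * tau                              # Eq. (6)
    return np.where(o_prime <= I_s, o_prime, 0.0)  # Eq. (5)
```

With the values used later in the experiments (*P*_{s}=0.25, *r*_{max}=0.45), dark filamentous regions are kept and the bright background diffraction field is zeroed.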

## 4. Gabor-based wavelets and feature vector extraction

In this section, we provide a brief review of Gabor-based wavelets and present feature vectors. Gabor-based wavelets are composed of multi-oriented and multi-scaled Gaussian-form kernels which are suitable for local spectral analysis.

#### 4.1 Gabor-based wavelets

The Gabor-based wavelets have the form of a Gaussian envelope modulated by complex sinusoidal functions. The impulse response (or kernel) of the Gabor-based wavelet is:

$$g(\mathbf{x})=\frac{{\Vert \mathbf{k}\Vert}^{2}}{{\sigma}^{2}}\mathrm{exp}\left(-\frac{{\Vert \mathbf{k}\Vert}^{2}{\Vert \mathbf{x}\Vert}^{2}}{2{\sigma}^{2}}\right)\left[\mathrm{exp}\left(j\mathbf{k}\cdot \mathbf{x}\right)-\mathrm{exp}\left(-\frac{{\sigma}^{2}}{2}\right)\right],$$

where **x** is a position vector, **k** is a wave number vector, and *σ* is the standard deviation of the Gaussian envelope; the second term in the brackets removes the DC response. By changing the magnitude and direction of the vector **k**, we can scale and rotate the Gabor kernel to make self-similar forms.

We can define a discrete version of the Gabor kernel as *g*_{uv}(*m*,*n*) at **k**=**k**_{uv} and **x**=(*m*,*n*), where *m* and *n* are discrete coordinates in 2D space in the *x* and *y* directions, respectively. Sampling of **k** is done as **k**_{uv}=*k*_{0u}[cos*ϕ*_{v} sin*ϕ*_{v}]^{t}, *k*_{0u}=*k*_{0}/*δ*^{u-1}, and *ϕ*_{v}=[(*v*-1)/*V*]*π*, for *u*=1,…,*U* and *v*=1,…,*V*, where *k*_{0u} is the magnitude of the wave number vector; *ϕ*_{v} is the azimuth angle of the wave number vector; *k*_{0} is the maximum carrier frequency of the Gabor kernels; *δ* is the spacing factor in the frequency domain; *u* and *v* are the indices of the Gabor kernels; *U* and *V* are the total numbers of decompositions along the radial and tangential axes, respectively; and *t* stands for the matrix transpose.

The Gaussian envelope of the Gabor filter achieves the minimum space-bandwidth product [23]. Therefore, it is suitable for extracting local features with high-frequency (small *u*) kernels and global features with low-frequency (large *u*) kernels. Note that the Gabor-based wavelet responds strongly to edges when the wave number vector **k** is perpendicular to the edge direction.
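As an illustration (assuming the standard DC-free Gabor-wavelet form used in DLA-style matching; the helper and its defaults are ours), a discrete kernel can be generated as:

```python
import numpy as np

def gabor_kernel(u, v, size=33, k0=np.pi / 2, delta=2 * np.sqrt(2),
                 sigma=np.pi, V=6):
    """Discrete Gabor kernel g_uv on a size x size grid: Gaussian envelope
    times a complex carrier at magnitude k_0u and azimuth phi_v, with the
    DC response subtracted."""
    k_mag = k0 / delta ** (u - 1)                  # k_0u = k_0 / delta^(u-1)
    phi = (v - 1) * np.pi / V                      # phi_v = [(v-1)/V] * pi
    kx, ky = k_mag * np.cos(phi), k_mag * np.sin(phi)
    half = size // 2
    m, n = np.meshgrid(np.arange(-half, half + 1),
                       np.arange(-half, half + 1), indexing="ij")
    envelope = (k_mag ** 2 / sigma ** 2) \
        * np.exp(-k_mag ** 2 * (m ** 2 + n ** 2) / (2 * sigma ** 2))
    carrier = np.exp(1j * (kx * m + ky * n)) - np.exp(-sigma ** 2 / 2)
    return envelope * carrier                      # zero-mean complex kernel
```

Small *u* gives a high-frequency, spatially tight kernel; large *u* gives a low-frequency, broad one, matching the local/global feature roles noted above.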

#### 4.2 Feature vector extraction

Let *h*_{uv}(*m*,*n*) be the filtered output of the image *o*(*m*,*n*) after it is convolved with the Gabor kernel *g*_{uv}(*m*,*n*):

$${h}_{uv}(m,n)=\sum _{m\text{'}=1}^{{N}_{m}}\sum _{n\text{'}=1}^{{N}_{n}}o(m\text{'},n\text{'}){g}_{uv}(m-m\text{'},n-n\text{'}),$$

where *o*(*m*,*n*) is the image normalized between 0 and 1 after segmentation; and *N*_{m} and *N*_{n} are the sizes of the reconstructed images in the *x* and *y* directions, respectively. *h*_{uv}(*m*,*n*) is also called the "Gabor coefficient."

A feature vector defined at a pixel (*m*,*n*) is composed of a set of the Gabor coefficients and the segmented image. The rotation-invariant property can be achieved simply by summing all the Gabor coefficients along the tangential axis in the frequency domain. Therefore, we define a rotation-invariant feature vector **v** as:

$$\mathbf{v}(m,n)={\left[o(m,n),\ \sum _{v=1}^{V}{h}_{1v}(m,n),\ \dots ,\ \sum _{v=1}^{V}{h}_{Uv}(m,n)\right]}^{t}.$$

Therefore, the dimension of the feature vector **v** is *U*+1. In the experiments, we use only the real parts of the feature vector, since they are more suitable for recognizing filamentous structures. There is no optimal way to choose the parameters of the Gabor kernels, but several values are widely used heuristically depending on the application. In this paper, the parameters are set at *σ*=*π*, *k*_{0}=*π*/2, *δ*=2√2, *U*=3, and *V*=6.
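The feature extraction above can be sketched as follows (our own helper; the convolution is done in the frequency domain and is circular, which is a simplification of Eq. (9)):

```python
import numpy as np

def feature_vectors(o, kernels):
    """Rotation-invariant feature map (sketch of Eq. (10)): at every pixel,
    stack the segmented image o with the real parts of the Gabor
    coefficients summed over all V orientations at each scale u.
    kernels is a list of U lists, each holding V complex kernels."""
    H, W = o.shape
    O = np.fft.fft2(o)
    feats = [o]                                    # first component: o(m, n)
    for row in kernels:                            # one row per scale u
        acc = np.zeros((H, W))
        for g in row:                              # orientations v = 1..V
            G = np.fft.fft2(g, s=(H, W))           # zero-padded kernel
            acc += np.real(np.fft.ifft2(O * G))    # circular convolution h_uv
        feats.append(acc)
    return np.stack(feats, axis=-1)                # shape (H, W, U + 1)
```

With *U*=3 this yields the (*U*+1)-dimensional vector **v** at every pixel, ready for the graph matching of Section 5.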

## 5. Rigid Graph Matching (RGM)

In this section, we present the RGM technique. Originally, RGM was part of a dynamic link architecture (DLA) that allows elastic deformation of the graph [25]. However, we adopt only the rigid graph matching part for our microscopic analysis. RGM realizes robust template matching between two graphs that is tolerant to translation, rotation, and distortion caused by noisy data.

A graph is defined as a set of nodes associated in a local area. Let *R* and *S* be two identical, rigid graphs placed on the reference object (*o*_{r}) and the unknown input image (*o*_{s}), respectively. The location of the reference graph *R* is pre-determined by the translation vector **p**_{r} and the clockwise rotation angle *θ*_{r}. A position vector of node *k* in the graph *R* is:

$${\mathbf{x}}_{k}^{r}={\mathbf{p}}_{r}+\mathbf{A}({\theta}_{r})\left({\mathbf{x}}_{k}^{o}-{\mathbf{x}}_{c}^{o}\right),\quad k=1,\dots ,K,$$

where ${\mathbf{x}}_{k}^{o}$ and ${\mathbf{x}}_{c}^{o}$ are, respectively, the position of node *k* and the center of the graph when it is located at the origin without rotation; *K* is the total number of nodes in the graph; and **A**(*θ*) is a rotation matrix.

Assuming the graph *R* covers a designated shape representing a characteristic of the reference microorganism, we search for similar local shapes by translating and rotating the graph *S* on unknown input images. We describe any rigid motion of the graph *S* by a translation vector **p** and a clockwise rotation angle *θ*:

$${\mathbf{x}}_{k}^{s}(\theta ,\mathbf{p})=\mathbf{p}+\mathbf{A}(\theta)\left({\mathbf{x}}_{k}^{o}-{\mathbf{x}}_{c}^{o}\right),\quad k=1,\dots ,K,$$

where ${\mathbf{x}}_{k}^{s}$ is a position vector of node *k* in the graph *S*. The transformation in Eq. (13) provides robustness in the detection of rotated and shifted reference objects.

A similarity function between the graphs *R* and *S* is defined as:

$${\Gamma}_{rs}(\theta ;\mathbf{p})=\frac{1}{K}\sum _{k=1}^{K}\gamma \left({\mathbf{x}}_{k}^{r},{\mathbf{x}}_{k}^{s}(\theta ,\mathbf{p})\right),$$

where the similarity at one node is the normalized inner product of two feature vectors:

$$\gamma \left({\mathbf{x}}_{k}^{r},{\mathbf{x}}_{k}^{s}(\theta ,\mathbf{p})\right)=\frac{\left\langle \mathbf{v}[{\mathbf{x}}_{k}^{r}],\mathbf{v}[{\mathbf{x}}_{k}^{s}(\theta ,\mathbf{p})]\right\rangle}{\left\Vert \mathbf{v}[{\mathbf{x}}_{k}^{r}]\right\Vert \left\Vert \mathbf{v}[{\mathbf{x}}_{k}^{s}(\theta ,\mathbf{p})]\right\Vert}.$$

In Eq. (15), 〈·,·〉 stands for the inner product of two vectors; and **v**[${\mathbf{x}}_{k}^{r}$] and **v**[${\mathbf{x}}_{k}^{s}$(*θ*,**p**)] are the feature vectors defined at ${\mathbf{x}}_{k}^{r}$ and ${\mathbf{x}}_{k}^{s}$(*θ*,**p**), respectively.

To improve the discrimination capability between the two graphs *R* and *S*, we define a difference cost function as:

$${C}_{rs}(\theta ;\mathbf{p})=\frac{1}{K}\sum _{k=1}^{K}c\left({\mathbf{x}}_{k}^{r},{\mathbf{x}}_{k}^{s}(\theta ,\mathbf{p})\right),$$

where the cost at one node is the norm of the difference of two feature vectors:

$$c\left({\mathbf{x}}_{k}^{r},{\mathbf{x}}_{k}^{s}(\theta ,\mathbf{p})\right)=\left\Vert \mathbf{v}[{\mathbf{x}}_{k}^{r}]-\mathbf{v}[{\mathbf{x}}_{k}^{s}(\theta ,\mathbf{p})]\right\Vert .$$

To utilize the depth information of the SEOL hologram, we use multiple reference images simultaneously. The similarity function ${\Gamma}_{{r}_{j}s}({\theta}_{j};\mathbf{p})$ and the difference cost ${C}_{{r}_{j}s}({\theta}_{j};\mathbf{p})$ are measured between the feature vectors of the graph *R* on the image ${o}_{{r}_{j}}$ and the graph *S* on the image *o*_{s}. The graph *R* covers a fixed region in the reference images ${o}_{{r}_{j}}$, *j*=1,…,*J*, where *J* is the total number of reference images reconstructed at different depths.
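A toy sketch of evaluating one placement of the graphs (our own conventions: the reference graph sits unrotated at **p**_{r}, node offsets are given relative to the graph center, and off-grid nodes are rounded to the nearest pixel):

```python
import numpy as np

def rgm_scores(feat_r, feat_s, offsets, p_r, p, theta):
    """Graph similarity (Eq. (14)) and difference cost (Eq. (16)) for one
    placement of the search graph S, rotated clockwise by theta and
    translated to p over the feature map feat_s."""
    c, s = np.cos(theta), np.sin(theta)
    A = np.array([[c, s], [-s, c]])               # clockwise rotation matrix
    sims, costs = [], []
    for x in offsets:
        xr = np.round(p_r + x).astype(int)        # node of R (cf. Eq. (12))
        xs = np.round(p + A @ x).astype(int)      # node of S (cf. Eq. (13))
        vr, vs = feat_r[tuple(xr)], feat_s[tuple(xs)]
        sims.append(vr @ vs / (np.linalg.norm(vr) * np.linalg.norm(vs) + 1e-12))
        costs.append(np.linalg.norm(vr - vs))
    return float(np.mean(sims)), float(np.mean(costs))
```

Sweeping **p** over the image and *θ* from 0 to 180° and thresholding the two scores reproduces the detection rule of this section.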

The graph *S* is identified with the reference shape covered by the graph *R* if the following two conditions are satisfied:

$${\Gamma}_{{r}_{\hat{j}}s}({\hat{\theta}}_{\hat{j}};\mathbf{p})>{\alpha}_{\Gamma}\quad \text{and}\quad {C}_{{r}_{\hat{j}}s}({\hat{\theta}}_{\hat{j}};\mathbf{p})<{\alpha}_{C},$$

where $\hat{j}$ is the index of the reference image that produces the maximum similarity between the graph *R* and the graph *S* for the translation vector **p** and the rotation angle ${\hat{\theta}}_{j}$; *α*_{Γ} and *α*_{C} are thresholds for the similarity function and the difference cost, respectively; and ${\hat{\theta}}_{j}$ is obtained by searching for the best matching angle to maximize the similarity function:

$${\hat{\theta}}_{j}=\underset{\theta}{\arg \max}\ {\Gamma}_{{r}_{j}s}(\theta ;\mathbf{p}).$$

## 6. Experiments and simulation results

We present experimental results on the visualization and recognition of two filamentous algae (sphacelaria alga and tribonema aequale alga). First, we present 3D imaging of the algae using SEOL holography, compared with phase-shifting on-line digital holography. Second, the recognition process, using feature extraction and graph matching, is presented to localize the predefined shapes of the two microorganisms.

#### 6.1 3D imaging with SEOL digital holography

In this subsection, we experimentally compare 3D algae visualization by SEOL digital holography with that by multiple-exposure phase-shifting on-line digital holography. In the experiments presented in this paper, the images are reconstructed from digital holograms with 2048×2048 pixels and a pixel size of 9 *µm*×9 *µm*. The microorganisms are sandwiched between two transparent cover slips. The diameter of the samples is around 10–50 *µm*. We generate two holograms for the alga samples. The microscopic 3D biological object was placed at a distance of 500 *mm* from the CCD array, as shown in Fig. 2. The images reconstructed from the holograms of the alga samples are shown in Fig. 4. Figures 4(a) and (b) show sphacelaria's 2D image and its digital hologram recorded by the SEOL digital holography technique, respectively. Figures 4(c) and (d) are sphacelaria's reconstructed images from the blurred digital holograms at distances of *d*=180 *mm* and 190 *mm*, respectively, using SEOL digital holography. Figure 4(e) shows sphacelaria's reconstructed image at distance *d*=180 *mm* using phase-shifting on-line digital holography with two interferograms, and Fig. 4(f) is tribonema aequale's reconstructed image at distance *d*=180 *mm* using SEOL digital holography. In the experiments, we use a weak reference beam, and the conjugate image overlaps the original image. As shown in Fig. 4, we obtained the sharpest reconstruction at a distance *d* between 180 *mm* and 190 *mm* for both holographic methods. The reconstruction results indicate that we obtain a focused image by use of SEOL digital holography as well as by phase-shifting digital holography. We will show that SEOL digital holography may be a useful method for 3D biological object recognition, because the conjugate image in the hologram contains information about the 3D biological object. In addition, SEOL digital holography can be performed without stringent environmental stability requirements.

#### 6.2 3D Microorganism reconstruction and feature extraction

To test the recognition performance, we generate 8 hologram samples each of sphacelaria and tribonema aequale. We denote the 8 sphacelaria samples as A1,…,A8 and the 8 tribonema aequale samples as B1,…,B8. To test the robustness of the proposed algorithm, we changed the position of the CCD during the experiments, resulting in different depths for the sharpest reconstructed image. The samples A1–A3 are reconstructed at 180 *mm*, A4–A6 at 200 *mm*, and A7 and A8 at 300 *mm*; all samples of tribonema aequale (B1–B8) are reconstructed at 180 *mm* for the sharpest images.

Computationally reconstructed holographic images are cropped and reduced to 256×256 pixels by a reduction ratio of 0.25. The probability *P*_{s} and the maximum intensity rate *r*_{max} for the segmentation are set at 0.25 and 0.45, respectively; that is, we assume that microorganisms occupy less than 25% of the lower-intensity region and that the intensity of the microorganisms is less than 45% of the background diffraction field. Figures 5(a) and (b) show the reconstructed and segmented images of a sphacelaria sample (A1), respectively. Figures 5(c)–(e) show the real parts of the Gabor coefficients of Section 4.2 for *u*=1, 2, and 3.

To recognize the two filamentous objects, which have different thicknesses and distributions, we select two different reference graphs and place them on the samples A1 and B1. The results of the recognition process follow in the next subsections.

#### 6.3 Recognition of sphacelaria alga

A rectangular grid is selected as the reference graph for sphacelaria, which shows regular thickness in the reconstructed images. The reference graph is composed of 25×3 nodes, and the distance between nodes is 4 pixels in the *x* and *y* directions; therefore, the total number of nodes in the graph is 75. The reference graph *R* is located in the sample A1 with **p**_{r}=[81, 75]^{t} and *θ*_{r}=135°, as shown in Fig. 6(a). To utilize the depth information, 4 reference images are used, reconstructed at *d*=170, 180, 190, and 200 *mm*, respectively. The thresholds *α*_{Γ} and *α*_{C} are set at 0.65 and 1, respectively; they are selected heuristically to produce good results.

Considering the computational load, the graph *S* is translated in steps of 3 pixels in the *x* and *y* directions when measuring its similarity and difference with the graph *R*. To search for the best matching angles, the graph *S* is rotated in steps of 7.5° from 0 to 180° at every translated location. When the positions of rotated nodes are not integers, they are replaced with their nearest neighbors.

Figure 6(b) shows one sample (A8) of the test images with the RGM process. The reference shapes are detected 62 times along the filamentous objects. Figure 6(c) shows the number of detections for all 16 samples. The detection number for A1–A8 varies from 31 to 251, showing strong similarity between the reference image (A1) and the test images (A2–A8) of the same microorganism. No detection is found in B1–B8. Figure 6(d) shows the maximum similarity and the minimum difference cost for all samples.

#### 6.4 Recognition of tribonema aequale alga

To recognize tribonema aequale, a wider rectangular grid is selected to identify its thin filamentous structure. The reference graph is composed of 20×3 nodes, and the distance between nodes is 4 pixels in the *x* direction and 8 pixels in the *y* direction; therefore, the total number of nodes in the graph is 60. The reference graph *R* is located in the sample B1 with **p**_{r}=[142, 171]^{t} and *θ*_{r}=90°, as shown in Fig. 7(a). Four reference images are used, reconstructed at *d*=170, 180, 190, and 200 *mm*, respectively. The thresholds *α*_{Γ} and *α*_{C} are set at 0.8 and 0.65, respectively.

Figure 7(b) shows one sample (B2) of the test images with the RGM process. The reference shapes are detected 26 times along the thin filamentous object. Figure 7(c) shows the number of detections for all 16 samples. The detection number for B1–B8 varies from 5 to 47. One false detection is found in the sample A7. Figure 7(d) shows the maximum similarity and the minimum difference cost for all samples. As a result, we are able to recognize hologram samples of the two different microorganisms by counting the number of detections of each reference shape.

For real-time applications, computational complexity should be considered. For numerical reconstruction of the holographic image and for Gabor filtering, the computational time is of the same order as that of the fast Fourier transform (FFT), i.e., O(*N*log_{2}*N*), where *N* is the total number of pixels in the holographic image. For the graph matching, the computational time depends on the shape and size of the graph, the dimension of the feature vector, and the search steps for the translation vector and the rotation angle. Since the largest cost is incurred in searching over the translation vector, which is O(*N*^{2}), the proposed system requires quadratic computational complexity; real-time processing can therefore be achieved with parallel processing. Real-time operation is possible because SEOL holography requires only a single exposure; thus, with high-speed electronics, real-time detection is feasible. This would not be possible with phase-shifting holography, which requires multiple exposures.

## 7. Conclusion

In this paper, we have presented preliminary results for human-aided recognition of microorganisms by examining their simple morphological traits. Three-dimensional visualization and recognition of microbiological objects by single-exposure on-line (SEOL) digital holography has been described. 3D imaging and recognition with SEOL digital holography are robust to movement of objects and to environmental conditions during recording, as compared with multiple-exposure phase-shifting digital holography. Feature extraction is performed by segmentation and Gabor filtering, followed by a feature-matching technique to localize specific shape features of two different microorganisms. In this paper, we only detect the reference shapes in unknown samples; however, the detection results can be used in further training procedures. Indeed, several morphological traits can be combined to recognize different classes of microorganisms more efficiently.

Implementation of a fully-automated recognition system of small living organisms presents many challenges due to their spatial and temporal variations. Future work should consider four-dimensional imaging (with consideration of time frames of 3D imaging) which may be a good solution for this task. Also, advanced segmentation techniques using phase distribution can be considered for feature extraction and graph matching. The proposed approach may have great benefits in medicine, environmental monitoring, and defense applications.

## References and links

**1. **A. Mahalanobis, R. R. Muise, S. R. Stanfill, and A. V. Nevel, “Design and application of quadratic correlation filters for target detection,” IEEE Trans. on AES. **40**, 837–850 (2004)

**2. **F. A. Sadjadi, “Infrared target detection with probability density functions of wavelet transform subbands,” Appl. Opt. **43**, 315–323 (2004). [CrossRef] [PubMed]

**3. **H. Sjoberg, F. Goudail, and P. Refregier, “Optimal algorithms for target location in nonhomogeneous binary images,” J. Opt. Soc. Am. A. **15**, 2976–2985 (1998) [CrossRef]

**4. **B. Javidi, ed., *Image Recognition and Classification: Algorithms, Systems, and Applications* (Marcel Dekker, New York, 2002). [CrossRef]

**5. **B. Javidi and E. Tajahuerce, “Three dimensional object recognition using digital holography,” Opt. Lett. **25**, 610–612 (2000). [CrossRef]

**6. **O. Matoba, T. J. Naughton, Y. Frauel, N. Bertaux, and B. Javidi, “Real-time three-dimensional object reconstruction by use of a phase-encoded digital hologram,” Appl. Opt. **41**, 6187–6192 (2002). [CrossRef] [PubMed]

**7. **F. Sadjadi, “Improved target classification using optimum polarimetric SAR signatures,” IEEE Trans. Aerosp. Electron. Syst. **38**, 38–49 (2002).

**8. **B. Javidi and F. Okano, eds., *Three-Dimensional Television, Video, and Display Technologies* (Springer, New York, 2002).

**9. **J. W. Goodman and R. W. Lawrence, “Digital image formation from electronically detected holograms,” Appl. Phys. Lett. **11**, 77–79 (1967). [CrossRef]

**10. **T. M. Kreis and W. P. O. Juptner, “Suppression of the dc term in digital holography,” Opt. Eng. **36**, 2357–2360 (1997). [CrossRef]

**11. **G. Pedrini and H. J. Tiziani, “Short-coherence digital microscopy by use of a lensless holographic imaging system,” Appl. Opt. **41**, 4489–4496 (2002). [CrossRef] [PubMed]

**12. **T. Zhang and I. Yamaguchi, “Three-dimensional microscopy with phase-shifting digital holography,” Opt. Lett. **23**, 1221 (1998). [CrossRef]

**13. **A. Stadelmaier and J. H. Massig, “Compensation of lens aberrations in digital holography,” Opt. Lett. **25**, 1630 (2000). [CrossRef]

**14. **J. W. Lengeler, G. Drews, and H. G. Schlegel, *Biology of the Prokaryotes* (Blackwell Science, New York, 1999).

**15. **M. G. Forero, F. Sroubek, and G. Cristobal, “Identification of tuberculosis bacteria based on shape and color,” Real-Time Imaging **10**, 251–262 (2004). [CrossRef]

**16. **J. Alvarez-Borrego, R. R. Mourino-Perez, G. Cristobal-Perez, and J. L. Pech-Pacheco, “Invariant recognition of polychromatic images of Vibrio cholerae 01,” Opt. Eng. **41**, 872–833 (2002). [CrossRef]

**17. **A. L. Amaral, M. da Motta, M. N. Pons, H. Vivier, N. Roche, M. Moda, and E. C. Ferreira, “Survey of protozoa and metazoa populations in wastewater treatment plants by image analysis and discriminant analysis,” Environmetrics **15**, 381–390 (2004). [CrossRef]

**18. **S.-K. Treskatis, V. Orgeldinger, H. Wolf, and E. D. Gilles, “Morphological characterization of filamentous microorganisms in submerged cultures by on-line digital image analysis and pattern recognition,” Biotechnol. Bioeng. **53**, 191–201 (1997). [CrossRef] [PubMed]

**19. **T. Luo, K. Kramer, D. B. Goldgof, L. O. Hall, S. Samson, A. Remsen, and T. Hopkins, “Recognizing plankton images from the shadow image particle profiling evaluation recorder,” IEEE Trans. Syst. Man Cybern. B **34**, 1753–1762 (2004). [CrossRef]

**20. **J. M. S. Cabral, M. Mota, and J. Tramper, eds., *Multiphase Bioreactor Design*, Chap. 2, “Image analysis and multiphase bioreactor” (Taylor & Francis, London, 2001). [CrossRef]

**21. **B. Javidi and D. Kim, “Three-dimensional-object recognition by use of single-exposure on-axis digital holography,” Opt. Lett. **30**, 236–238 (2005). [CrossRef] [PubMed]

**22. **D. Kim and B. Javidi, “Distortion-tolerant 3-D object recognition by using single exposure on-axis digital holography,” Opt. Express **12**, 5539–5548 (2004), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-12-22-5539 [CrossRef]

**23. **J. G. Daugman, “Uncertainty relation for resolution in space, spatial frequency, and orientation optimized by two-dimensional visual cortical filters,” J. Opt. Soc. Am. A **2**, 1160–1169 (1985). [CrossRef]

**24. **T. S. Lee, “Image representation using 2D Gabor wavelets,” IEEE Trans. Pattern Anal. Mach. Intell. **18**, 959–971 (1996). [CrossRef]

**25. **M. Lades, J. C. Vorbruggen, J. Buhmann, J. Lange, C. von der Malsburg, R. P. Wurtz, and W. Konen, “Distortion invariant object recognition in the dynamic link architecture,” IEEE Trans. Computers **42**, 300–311 (1993). [CrossRef]

**26. **S. Yeom and B. Javidi, “Three-dimensional object feature extraction and classification with computational holographic imaging,” Appl. Opt. **43**, 442–451 (2004). [CrossRef] [PubMed]