## Abstract

In this paper, we discuss the reconstruction and recognition of partially occluded objects using photon counting integral imaging (II). Irradiance scenes are numerically reconstructed for the reference target in three-dimensional (3D) space. Photon counting scenes are estimated for unknown input objects using maximum likelihood estimation (MLE) of the Poisson parameter. We propose nonlinear matched filtering in 3D space to recognize partially occluded targets. The recognition performance is substantially improved over nonlinear matched filtering of elemental images without 3D reconstruction. The discrimination capability is analyzed in terms of the Fisher ratio (FR) and receiver operating characteristic (ROC) curves.

© 2007 Optical Society of America

## 1. Introduction

Imaging systems using a photon counting detector have advantages for detection at very low light levels, requiring less power than conventional imaging systems. Photon counting imaging systems have been applied to various fields such as night vision, laser radar imaging, radiological imaging, stellar imaging, and medical imaging [1–10]. Automatic target recognition (ATR) [11–13] with a photon counting detector was proposed in [2]. Photon counting recognition techniques have been applied to infrared imaging [3]. Three-dimensional (3D) active sensing by LADAR has been studied in [4–7]. In [8], the Bhattacharyya distance was investigated for target detection. Recently, nonlinear matched filtering and photon counting linear discriminant analysis were proposed in [9] and [10], respectively.

Integral imaging (II) records and visualizes 3D information using a lenslet array [14–22]. Captured elemental images have different perspectives of the objects. 3D scenes are reconstructed optically or numerically. Partially obstructed objects can be visualized by computational reconstruction of sectional images at different depth levels. There has been research on overcoming partial occlusion for the purpose of object recognition [21,22]. Reconstruction of occluded objects using an array of cameras is found in [23].

In this paper, we propose photon counting nonlinear matched filtering in 3D space. Photon counting nonlinear matched filtering measures the correlation between reconstructed irradiance scenes and estimated photon counting scenes in 3D space. We modify the computational reconstruction method of [19] to generate irradiance scenes. The photon counting scenes are composed of the Poisson parameters estimated by maximum likelihood estimation (MLE). This nonlinear matched filtering has the same first and second order statistical properties as the ordinary nonlinear matched filtering, which is performed without reconstruction [9]. We show that the proposed system substantially improves the recognition performance in terms of Fisher ratios and ROC curves.

The organization of the paper is as follows. In Sec. 2, we present the modified reconstruction method for irradiance scenes and the newly proposed reconstruction method for photon counting scenes. The nonlinear matched filtering in 3D space is proposed in Sec. 3. Photon counting images are simulated from experimentally derived irradiance information. In Sec. 4, these simulated images are used to compare the performance of nonlinear matched filtering on reconstructed images in 3D space with that on the elemental images. Conclusions follow in Sec. 5.

## 2. Computational reconstruction of integral imaging

In this section, the computational reconstruction of II is described. The estimation of photon counting scenes is also discussed.

#### 2.1 Reconstruction of irradiance information

As illustrated in Fig. 1(a), the II recording system is composed of a micro-lens array, an imaging lens, and a CCD (charge-coupled device) camera. The micro-lens array is composed of a large number of small convex lenslets. The ray information captured by each lenslet appears as a small two-dimensional (2D) image with a different point of view. The imaging lens is used to compensate for the short focal length of the micro-lenslets. Except for the magnification factor and negligible aberration of the imaging lens, the image on the image plane of the micro-lens array and the image captured by the CCD camera are assumed to be identical.

In the computational reconstruction of II, the volumetric information of 3D objects is numerically reconstructed by means of the ray projection method [19]. Elemental images are projected through a virtual pinhole array onto reconstruction planes at arbitrary depths. Volumetric scenes are thus reconstructed to reproduce the original 3D information.
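The ray projection reconstruction can be sketched numerically as a shift-and-average operation: each elemental image is back-projected to the chosen depth, and overlapping pixel contributions are averaged. The sketch below is a minimal illustration, not the authors' implementation; the parameter names (`gap`, `pitch_px`) and the nearest-pixel integer shifts are simplifying assumptions.

```python
import numpy as np

def reconstruct_plane(elem, gap, z, pitch_px):
    """Shift-and-average back-projection of elemental images to depth z.

    elem     : (K, L, h, w) array, grid of elemental images
    gap      : lenslet-to-image-plane distance (g)
    z        : reconstruction depth
    pitch_px : lenslet pitch expressed in sensor pixels
    """
    K, L, h, w = elem.shape
    shift = pitch_px * gap / z               # per-lenslet disparity (pixels)
    H = h + int(round(shift * (K - 1)))
    W = w + int(round(shift * (L - 1)))
    acc = np.zeros((H, W))
    cnt = np.zeros((H, W))
    for k in range(K):
        for l in range(L):
            r, c = int(round(k * shift)), int(round(l * shift))
            acc[r:r + h, c:c + w] += elem[k, l]
            cnt[r:r + h, c:c + w] += 1
    return acc / np.maximum(cnt, 1)          # average over overlapping rays
```

Averaging the overlapping contributions is what suppresses the occluding foreground: rays from the occluder only align at the occluder's depth, so at the target depth they blur out while the target stays sharp.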

Suppose that a point $A$ in Fig. 1(b) is located at $[i_A, j_A, z_A]$ on the 3D object surface. The power density at the point $A$ is denoted as $x_A$. Let $x_n$ be the captured irradiance corresponding to the point $A$ on the imaging plane of the $n$-th micro-lenslet. The points are located at $[i_n, j_n, g]$, $n=1,\dots,N_A$, and assumed to have a unit area; $o_n$ is the center of the $n$-th micro-lenslet. Under the assumption that the distance ($z_A$) between the point $A$ and the micro-lens array is large, the same power transferred from $x_A$ is collected for ${x}_{1},\dots ,{x}_{{N}_{A}}$; that is, the scale factors between $x_A$ and $x_n$, $n=1,\dots,N_A$, are approximately the same. Therefore, $x_A$ can be estimated as proportional to the average of $x_n$.

#### 2.2 Reconstruction of photon counting information

Assuming the fluctuations in irradiance are small compared to the fluctuations produced by the quantized nature of the radiation, the Poisson distribution is adopted for the photon counting model. In this case, the Poisson parameter for the photon counts at each pixel can be assumed to be proportional to the irradiance at that pixel of the detector [9,10]. Therefore, the probability of the photon event at pixel $i$ is given by

$$\Pr\big(y(i)\big)=\frac{e^{-\lambda(i)}\,[\lambda(i)]^{y(i)}}{y(i)!},\qquad \lambda(i)=N_P\,\frac{x(i)}{\sum_{j=1}^{N_T}x(j)},$$

where $y(i)$ is the number of photons detected at pixel $i$, $N_P$ is the expected number of photon counts in the scene, $x(i)$ is the irradiance at pixel $i$, and $N_T$ is the total number of pixels in the scene. Let $y_n$, $n=1,\dots,N_A$, be the photon counts detected with the parameter $\lambda_n$, which is associated with $x_n$. Let us consider $\lambda_A$, which is associated with $x_A$. With the assumption that ${x}_{1},\dots ,{x}_{{N}_{A}}$ are measurements of $x_A$, ${\lambda}_{1},\dots ,{\lambda}_{{N}_{A}}$ also originate from $\lambda_A$. Therefore, the joint probability density function of the photon counts is calculated as

$$P\big(y_1,\dots,y_{N_A}\,\big|\,\lambda_A\big)=\prod_{n=1}^{N_A}\frac{e^{-\lambda_A}\,\lambda_A^{\,y_n}}{y_n!}.$$

_{n}The MLE (maximum likelihood estimation) [24] of *λ _{A}* is

## 3. Recognition of occluded objects from the photon counting scene

Matched filtering for photon counting recognition estimates the correlation between the irradiance image of a reference target and an unknown input image obtained during the photon counting event. The nonlinear matched filtering is defined as the nonlinear correlation normalized by the power $v$ of the photon counting image [9]:

$$C_{rs}(v)=\frac{\sum_{i=1}^{N_T}x_r(i)\,y_s(i)}{\Big[\sum_{i=1}^{N_T}y_s(i)\Big]^{v}},\qquad(6)$$

where $x_r(i)$ is the irradiance at pixel $i$ of the reference image of target $r$, and $y_s(i)$ is the photon count at pixel $i$ of the unknown input object $s$. The first and second order characteristics of the nonlinear matched filtering are that the mean of $C_{rs}(1)$ is constant and the variance is approximately proportional to $1/N_P$ [9].

We propose nonlinear matched filtering in 3D space as

$$C_{rs}(v)=\frac{\sum_{d=1}^{N_d}\sum_{i\in\Omega_d}\hat{x}_r(i;z_d)\,\hat{\lambda}_s(i;z_d)}{\Big[\sum_{d=1}^{N_d}\sum_{i\in\Omega_d}\hat{\lambda}_s(i;z_d)\Big]^{v}},\qquad(7)$$

where $\hat{x}_r(i;z_d)$ is the reconstructed irradiance information of the reference target at depth $z_d$; $i$ denotes the position of the virtual voxels on the reconstruction plane $\Omega_d$; thus, voxels on the plane $\Omega_d$ have the same longitudinal distance ($z_d$) from the micro-lens array; and $\hat{\lambda}_s(i;z_d)$ is the estimated Poisson parameter on the reconstruction plane $\Omega_d$ for the unknown input target. It is noted that the scaling factors between $x_A$ and $x_n$, $n=1,\dots,N_A$, are set at 1 without loss of generality for reconstruction and recognition purposes.

_{A}Since there is no information of the object location in the 3D space, the nonlinear matched filtering is performed on the entire reconstruction space Ω*d*, *d*=1,…,*N _{d}*, where

*N*is the total number of depth levels. The nonlinearity decided by

_{d}*v*in the denominator in Eq. (7) causes the same first and second order characteristics with Eq. (6) since

*x̂*(

_{r}*i*) and

*$\widehat{\lambda}$*(

_{s}*i*) are merely the linear combination of the irradiance and the photon counts in the elemental images.

We employ the Fisher ratio (*FR*) as the performance metric [3,9]. Receiver operating characteristic (ROC) curves are also presented in the next section to investigate the discrimination capability of the proposed system.
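The filter and metric can be sketched as follows. The extracted text does not preserve Eq. (6) verbatim, so the normalization below (cross-correlation divided by total counts raised to the power $v$) and the FR form (squared difference of class means over the sum of class variances) are assumptions consistent with the surrounding description and with [3,9].

```python
import numpy as np

def nonlinear_mf(x_r, y_s, v=1.0):
    """Nonlinear correlation of reference irradiance x_r with photon
    counts y_s, normalized by the total counts raised to power v
    (assumed form of the nonlinear matched filter)."""
    return float(np.sum(x_r * y_s) / np.sum(y_s) ** v)

def fisher_ratio(c_true, c_false):
    """FR = (m_true - m_false)^2 / (var_true + var_false), computed
    from correlation outputs of the two classes."""
    num = (np.mean(c_true) - np.mean(c_false)) ** 2
    return float(num / (np.var(c_true) + np.var(c_false)))
```

For the 3D version, `x_r` and `y_s` would simply be replaced by the flattened reconstructed irradiance $\hat{x}_r(i;z_d)$ and estimated parameters $\hat{\lambda}_s(i;z_d)$ over all depth planes.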

## 4. Experimental and simulation results

In this section, we present the experimental and simulation results of the reconstruction and the recognition of occluded objects in 3D space.

#### 4.1 Reconstruction results

The II recording system is composed of a micro-lens array and a pick-up camera, as shown in Fig. 1(a). The pitch of each micro-lenslet is 1.09 *mm*, and the focal length of each micro-lenslet is about 3 *mm*. The size of the toy cars is about 4.5 *cm*×2.5 *cm*×2.5 *cm*. To simulate partial occlusion, a tree model is placed between the toy car and the micro-lens array. The distance between the imaging lens and the micro-lens array is 9.5 *cm*, the distance between the micro-lens array and the occluding objects is 4~5 *cm*, and the distance between the micro-lens array and the toy car is 9.5 *cm*. Figure 2 shows the elemental images of the white car used as the reference, the partially occluded white toy car used as the true-class target, and the yellow car used as the false-class target. The size of the elemental image array is 1161×1419 pixels, and the number of elemental images is 18×22. Figure 3 shows the central parts of the elemental images in Fig. 2. The movies in Fig. 4 show the reconstructed sectional images in 3D space for the partially occluded true-class target. The reconstruction distance for Fig. 4(a) is 30~50 *mm* and for Fig. 4(b) is 66~80 *mm*.

#### 4.2 Recognition results

In this subsection, we show the recognition results for the reference target and two unknown input objects. We generate photon counting images from each elemental image array in Fig. 3 with a random number of photons. One hundred photon counting images for each object are generated to compute the means and variances of the nonlinear matched filtering when $v=1$. It is noted that only the gray level of the irradiance in Fig. 3 is used for the photon-count simulation. Several values of $N_P$, the mean number of photon counts in the entire image, are used to test the recognition performance. Figures 5(a) and 5(b) show the experimental results of nonlinear matched filtering for the elemental images, that is, without reconstruction [see Eq. (6)], and with reconstruction in 3D space [see Eq. (7)], respectively. The red solid line represents the sample mean for the true-class target with occlusion, and the blue dotted line is the sample mean for the false-class target. Error bars stand for $m_{rs}\pm\sigma_{rs}$, where $m_{rs}$ and $\sigma_{rs}$ are the sample mean and the sample standard deviation of the matched filtering, respectively. $N_d$ in Eq. (7) is set at 1, where the depth of the reconstruction plane is 74 *mm*.

Figure 6(a) shows the Fisher ratios. The Fisher ratio decreases when a lower number of photon counts is used. Figures 6(b) and 6(c) show the ROC curves when $N_P=500$ and $N_P=100$, respectively. The recognition performance of the proposed technique is shown to be substantially improved compared with the conventional method.
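An ROC curve of the kind shown in Fig. 6 can be traced by sweeping a decision threshold over the correlation outputs of the two classes. The helper below is an illustrative sketch (function name and interface are ours, not from the paper):

```python
import numpy as np

def roc_points(c_true, c_false):
    """Sweep a decision threshold over the correlation outputs and
    return (false-positive rate, true-positive rate) pairs."""
    c_true, c_false = np.asarray(c_true), np.asarray(c_false)
    thresholds = np.sort(np.concatenate([c_true, c_false]))[::-1]
    return np.array([(np.mean(c_false >= t), np.mean(c_true >= t))
                     for t in thresholds])
```

A curve hugging the upper-left corner (high true-positive rate at low false-positive rate) corresponds to the well-separated correlation distributions produced by the 3D reconstruction.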

In Eq. (7), it is noted that the estimated Poisson parameters of the unknown input object and the reconstructed irradiance information of the reference object are used instead of the photon-count and irradiance elemental images, respectively. When the object is partially occluded, we can visualize the target by II computational reconstruction, as shown in Fig. 4(b). This reconstruction process might be equivalent to extracting “good” features for matched filtering recognition. For photon counting matched filtering, 3D reconstruction of irradiance scenes and estimation of photon counting scenes are shown to be proper feature extraction techniques, resulting in substantially improved performance compared with the conventional method. Another merit of these reconstruction techniques is that the nonlinear matched filtering in 3D space has the same first and second order statistical properties as the ordinary nonlinear matched filtering, which is superior to linear matched filtering in terms of recognition performance [9].

## 5. Conclusions

In this paper, we have proposed nonlinear matched filtering in 3D space using the computational reconstruction of elemental images. Computational II visualizes a partially occluded object by generating sectional images at different depth planes. Computationally reconstructed irradiance scenes and estimated photon counting scenes are nonlinearly correlated for pattern recognition purposes. We have observed in the experiments that the proposed nonlinear matched filtering with reconstruction in 3D space provides better performance than the nonlinear matched filtering between elemental images in 2D space.

## Acknowledgements

This research was supported in part by the Daegu University Research Grant 2007.

## References and links

**1. **J. W. Goodman, *Statistical Optics* (John Wiley & Sons, Inc., 1985), Chap. 9.

**2. **G. M. Morris, “Scene matching using photon-limited images,” J. Opt. Soc. Am. A. **1**, 482–488 (1984). [CrossRef]

**3. **E. A. Watson and G. M. Morris, “Comparison of infrared upconversion methods for photon-limited imaging,” J. Appl. Phys. **67**, 6075–6084 (1990). [CrossRef]

**4. **D. Stucki, G. Ribordy, A. Stefanov, H. Zbinden, J. G. Rarity, and T. Wall, “Photon counting for quantum key distribution with Peltier cooled InGaAs/InP APDs,” J. Mod. Opt. **48**, 1967–1981 (2001). [CrossRef]

**5. **P. A. Hiskett, G. S. Buller, A. Y. Loudon, J. M. Smith, I. Gontijo, A. C. Walker, P. D. Townsend, and M. J. Robertson, “Performance and design of InGaAs/InP photodiodes for single-photon counting at 1.55 μm,” Appl. Opt. **39**, 6818–6829 (2000). [CrossRef]

**6. **L. Duraffourg, J.-M. Merolla, J.-P. Goedgebuer, N. Butterlin, and W. Rhods, “Photon counting in the 1540-nm wavelength region with a Germanium avalanche photodiode,” IEEE J. Quantum Electron. **37**, 75–79 (2001). [CrossRef]

**7. **J. G. Rarity, T. E. Wall, K. D. Ridley, P. C. M. Owens, and P. R. Tapster, “Single-photon counting for the 1300–1600-nm range by use of Peltier-cooled and passively quenched InGaAs avalanche photodiodes,” Appl. Opt. **39**, 6746–6753 (2000). [CrossRef]

**8. **P. Refregier, F. Goudail, and G. Delyon, “Photon noise effect on detection in coherent active images,” Opt. Lett. **29**, 162–164 (2004). [CrossRef] [PubMed]

**9. **S. Yeom, B. Javidi, and E. Watson, “Photon counting passive 3D image sensing for automatic target recognition,” Opt. Express **13**, 9310–9330 (2005), http://www.opticsinfobase.org/abstract.cfm?URI=oe-13-23-9310. [CrossRef] [PubMed]

**10. **S. Yeom, B. Javidi, and E. Watson, “Three-dimensional distortion-tolerant object recognition using photon-counting integral imaging,” Opt. Express **15**, 1513–1533 (2007), http://www.opticsinfobase.org/abstract.cfm?URI=oe-15-4-1513. [CrossRef] [PubMed]

**11. **F. Sadjadi, ed., *Selected Papers on Automatic Target Recognition*, (SPIE-CDROM, 1999).

**12. **A. Mahalanobis, R. R. Muise, S. R. Stanfill, and A. V. Nevel, “Design and application of quadratic correlation filters for target detection,” IEEE Trans. on Aerosp. Electron. Syst. **40**, 837–850 (2004). [CrossRef]

**13. **H. Kwon and N. M. Nasrabadi, “Kernel RX-algorithm: a nonlinear anomaly detector for hyperspectral imagery,” IEEE Trans. on Geosci. Remote Sens. **43**, 388–397 (2005). [CrossRef]

**14. **J.-S. Jang and B. Javidi, “Time-multiplexed integral imaging for 3D sensing and display,” Optics and Photonics News **15**, 36–43 (2004).

**15. **J. Y. Son, V. V. Saveljev, J. S. Kim, S. S. Kim, and B. Javidi, “Viewing zones in three-dimensional imaging systems based on lenticular, parallax-barrier, and microlens-array plates,” Appl. Opt. **43**, 4985–4992 (2004). [CrossRef] [PubMed]

**16. **A. Stern and B. Javidi, “Three-dimensional image sensing, visualization, and processing using integral imaging,” Proceedings of the IEEE **94**, 591–607 (2006). [CrossRef]

**17. **Y. Frauel, E. Tajahuerce, O. Matoba, M.A. Castro, and B. Javidi, “Comparison of passive ranging integral imaging and active imaging digital holography for 3D object recognition,” Appl. Opt. **43**, 452–462 (2004). [CrossRef] [PubMed]

**18. **R. Martinez-Cuenca, A. Pons, G. Saavedra, M. Martinez-Corral, and B. Javidi, “Optically-corrected elemental images for undistorted integral image display,” Opt. Express **14**, 9657–9663 (2006), http://www.opticsinfobase.org/abstract.cfm?URI=oe-14-21-9657. [CrossRef] [PubMed]

**19. **S. H. Hong, J. S. Jang, and B. Javidi, “Three-dimensional volumetric object reconstruction using computational integral imaging,” Opt. Express **12**, 483–491 (2004), http://www.opticsinfobase.org/abstract.cfm?URI=oe-12-3-483. [CrossRef] [PubMed]

**20. **J.-H. Park, J. Kim, and B. Lee, “Three-dimensional optical correlator using a sub-image array,” Opt. Express **13**, 5116–5126 (2005), http://www.opticsinfobase.org/abstract.cfm?URI=oe-13-13-5116. [CrossRef] [PubMed]

**21. **B. Javidi, R. Ponce-Diaz, and S-. H. Hong, “Three-dimensional recognition of occluded objects by using computational integral imaging,” Opt. Lett. **31**, 1106–1108 (2006). [CrossRef] [PubMed]

**22. **S. H. Hong and B. Javidi, “Distortion-tolerant 3D recognition of occluded objects using computational integral imaging,” Opt. Express **14**, 12085–12095 (2006), http://www.opticsinfobase.org/abstract.cfm?URI=oe-14-25-12085. [CrossRef] [PubMed]

**23. **V. Vaish, R. Szeliski, C. L. Zitnick, S. B. Kang, and M. Levoy, “Reconstructing occluded surfaces using synthetic apertures: stereo, focus and robust measures,” in Proceedings of IEEE CVPR (2006).

**24. **S. M. Kay, *Fundamentals of Statistical Signal Processing* (Prentice Hall, New Jersey, 1993).