## Abstract

In this paper, a statistical approach is presented for three-dimensional (3D) visualization and recognition of objects with a very small number of photons, based on a parametric estimator. A truncated Poisson probability density function is assumed for modeling the distribution of the small photon-count observations. For 3D visualization and recognition of photon-limited objects, an integral imaging system is employed. We utilize virtual geometrical ray propagation for 3D reconstruction of objects. A maximum likelihood estimator (MLE) and statistical inference algorithms are applied to the photon-counted elemental images captured with integral imaging. We demonstrate that the MLE using a truncated Poisson model for estimating the average number of photons for each voxel of a photon-starved 3D object has a smaller estimation error than the MLE using a Poisson model. We also present experiments investigating the effect of 3D sensing parallax on the recognition performance under a fixed mean number of photons.

©2009 Optical Society of America

## 1. Introduction

There are many potential benefits and applications for three-dimensional (3D) sensing, visualization and recognition of objects using integral imaging (II), including photon-counting scenarios [1-8]. In [6], photon counting 3D object recognition using II was shown. Recently, 3D visualization of photon-limited objects by II was reported, in which a maximum likelihood estimator (MLE) [9-10] was applied to reconstruct the irradiance of the 3D scene pixels under a Poisson distribution [7]. However, in very low illumination environments the scaling factor of a Poisson distribution may overestimate or underestimate the dispersion of the observed photon counts. Therefore, the estimates of the Poisson parameter provided by the MLE may be inaccurate. In this paper, we focus on a statistical approach that can provide 3D visualization and recognition of objects with a small number of photons. For 3D sensing and visualization of photon-starved objects, multi-view imaging, that is, integral imaging, is utilized. We consider cases where the average total number of photons in the scene is very small (less than 50). The image detector array then senses a very small number of photons in each elemental image (or view image). Thus, only a small number of pixels of an elemental image may detect a photon, the vast majority of the pixels will not detect any photons, and the likelihood that a pixel detects more than one photon becomes extremely small. For these reasons, we consider a truncated Poisson distribution [11] in modeling the photon-counted elemental images. The MLE of the truncated Poisson distribution is applied to visualize the photon-starved objects. For 3D recognition, statistical sampling algorithms [9] are developed to measure statistical characteristics of the photon-counted objects, and statistical hypothesis testing is used to analyze the difference between the sampling distributions of two populations.

## 2. Analysis

Typically, a Poisson model is used for modeling the distribution of a discrete count observation or for approximating its distribution. The probability of counting *C* photons at an observation area or a detector pixel over a time interval of *T* seconds is well known to follow a Poisson distribution [12]:

$$P(C=c)=\frac{{(\rho T)}^{c}{e}^{-\rho T}}{c!},\quad c=0,1,2,\cdots ,$$ (1)

where *ρ* is the intensity parameter measured in photons per second, and the random variable *C* takes the values 0, 1, 2, · · ·. We assume that the observation areas have uniform size and are distributed to cover the imaging plane.

In an II system, we assume that a camera array of regular square elements with uniform size records the view images of a 3D object illuminated by a photon beam. The irradiance at a voxel (pixel point) of the 3D object is projected and recorded at the corresponding pixel position of each elemental (view) image. The values of such pixels are assumed to be random variables following a Poisson density function, and the Poisson parameter *ρ* in Eq. (1) for the photon counts at each pixel of the photon-limited image is assumed to be proportional to the irradiance of that pixel in the elemental image. Therefore, the photon-limited images can be simulated from the elemental images by generating a Poisson random number ${C}_{p}$ with mean parameter $\tilde{N}{I}_{p}$ at each pixel of the irradiance image, where the subscript *p* denotes the pixel index, $\tilde{N}$ is the mean number of photons in the photon-limited image, and ${I}_{p}$ is the normalized irradiance [6]. For computational reconstruction [7] of a photon-limited object in II, geometrical ray optics computationally simulates the reverse of the pickup process. In this method, a 2D sectional image of the 3D object located at a particular distance from the sensor is reconstructed by back-propagating the elemental images through a virtual pinhole array.
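As a minimal sketch of the simulation step just described, the following draws a Poisson count with mean $\tilde{N}{I}_{p}$ at each pixel of a normalized irradiance image. The sampler, function names, and the toy irradiance pattern are our own illustrative choices, not the paper's data or code.

```python
import math
import random

def poisson_sample(mean, rng=random):
    """Draw one Poisson random number with Knuth's method (fine for small means)."""
    threshold = math.exp(-mean)
    count, product = 0, rng.random()
    while product > threshold:
        count += 1
        product *= rng.random()
    return count

def photon_limited_image(irradiance, n_photons):
    """Simulate a photon-counted elemental image: each pixel count C_p is drawn
    from Poisson(N_tilde * I_p), with I_p the irradiance normalized to sum to 1."""
    total = float(sum(sum(row) for row in irradiance))
    return [[poisson_sample(n_photons * value / total) for value in row]
            for row in irradiance]

# Toy 4x4 irradiance pattern (placeholder data, not the paper's elemental images)
irradiance = [[1, 2, 3, 4] for _ in range(4)]
counts = photon_limited_image(irradiance, n_photons=10)
```

With $\tilde{N}$ this small, most pixels of `counts` come out zero, matching the photon-starved regime described in the text.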

It is possible that the dispersion of the Poisson model overestimates or underestimates the dispersion of the observed number of photons in photon-starved environments, because the single parameter of a Poisson distribution is often insufficient to represent the population. In modeling discrete counts, it can be assumed that the observed counts come from a mixture density. A truncated Poisson distribution [11] is one in which only particular count classes are observed from the sample space. Since the image channels in an II system are illuminated by a few photon beams, a truncated Poisson density function can be applied for modeling the distribution of the count observation as follows:

$$P(C=c)=\frac{{\lambda }^{c}/c!}{{\sum }_{j=0}^{k}{\lambda }^{j}/j!},\quad c=0,1,\cdots ,k,$$ (2)

where $\lambda =\rho T$ and only the count classes 0, 1, · · ·, *k* are observed. With ${S}_{m}(\lambda )={\sum }_{j=0}^{m}{\lambda }^{j}/j!$, the mean and variance of the truncated Poisson distribution are given by:

$$E[C]=\lambda \frac{{S}_{k-1}(\lambda )}{{S}_{k}(\lambda )},\qquad \mathrm{Var}[C]={\lambda }^{2}\frac{{S}_{k-2}(\lambda )}{{S}_{k}(\lambda )}+E[C]-{(E[C])}^{2}.$$ (3)
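The truncated model can be checked numerically with a short sketch; it assumes, as in the text, that only the classes 0, …, *k* are observed and renormalizes the Poisson weights accordingly (parameter values are illustrative).

```python
import math

def truncated_poisson_pmf(c, lam, k):
    """pmf of a Poisson law renormalized over the observed count classes 0..k."""
    norm = sum(lam ** j / math.factorial(j) for j in range(k + 1))
    return (lam ** c / math.factorial(c)) / norm

lam, k = 0.5, 2
pmf = [truncated_poisson_pmf(c, lam, k) for c in range(k + 1)]
mean = sum(c * p for c, p in enumerate(pmf))
variance = sum(c * c * p for c, p in enumerate(pmf)) - mean ** 2
# The pmf sums to 1 over 0..k; discarding counts above k pulls the mean
# below lam and makes the distribution underdispersed (variance < mean).
```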

For 3D visualization of an object under extremely low illumination, we estimate each voxel value of the 3D object from the samples obtained from the one pixel of each photon-limited elemental image that corresponds to that point in the image reconstruction plane, using an MLE based on the truncated Poisson distribution. The log likelihood function is given by [9]:

$$L(\tilde{N}{I}_{p})={\sum }_{i=1}^{N}\left[{C}_{i}\ln (\tilde{N}{I}_{p})-\ln ({C}_{i}!)-\ln {S}_{k}(\tilde{N}{I}_{p})\right],$$ (4)

where *N* = ${N}_{x}{N}_{y}$ is the total number of elemental images and ${S}_{m}(\lambda )={\sum }_{j=0}^{m}{\lambda }^{j}/j!$. Equation (4) provides the MLE, $MLE(\tilde{N}{I}_{p})$, as the solution of:

$$\frac{1}{N}{\sum }_{i=1}^{N}{C}_{i}=\tilde{N}{I}_{p}\frac{{S}_{k-1}(\tilde{N}{I}_{p})}{{S}_{k}(\tilde{N}{I}_{p})},$$ (5)

where *k* is set to 2, and

*z* is the distance between the arbitrary point of the 3D object and the camera array [7]. It is noted that the MLE of the irradiance at the arbitrary point of the original 3D object, $MLE(\tilde{N}{I}_{p})$, is greater than the estimate $\overline{MLE}(\tilde{N}{I}_{p})$ obtained with the standard Poisson model. To evaluate the performance of $MLE(\tilde{N}{I}_{p})$, we assume that the estimate obtained with the truncated Poisson distribution is $\overline{MLE}(\tilde{N}{I}_{p})\times \gamma $, where the factor *γ* is an arbitrary value greater than or equal to 1 ($\gamma \ge 1$). Therefore, the estimator error can be defined as follows [9]:

where ${z}_{\alpha /2}$ is the upper (*α*/2)% point of the standard normal distribution and $\overline{MLE}(\tilde{N}{I}_{p})=\tilde{N}{\widehat{I}}_{p}$. If $\gamma =1$, the confidence interval of $MLE(\tilde{N}{I}_{p})$ is equal to that of $\overline{MLE}(\tilde{N}{I}_{p})$. The estimation error is proportional to $1/\sqrt{\gamma }$ according to Eq. (7). It is noted that the MLE using a truncated Poisson model for estimating the average number of photons for each voxel of a photon-starved 3D object has a smaller estimation error than the MLE using a Poisson model. In the following, we describe the design procedure for 3D recognition of objects with a very small number of photons. The integrated image $I{I}_{p}$ is reconstructed

*n* times by using the computational II reconstruction algorithm and the parametric MLE [9], with the photon counts modeled by the truncated Poisson density function. Let $I{I}_{p}^{i}$ denote the *i*-th integrated image reconstructed from the corresponding photon-limited elemental image set, where each photon-limited elemental image is generated with a random number of photons. The statistical sampling distribution [9] of the dispersion parameter for the reconstructed integrated image is obtained by calculating the statistical standard deviation of each $I{I}_{p}^{i}$, where *i* = *1,…,n*. Then, given *n* ordered data samples $X(1),\text{\hspace{0.17em}}X(2),$ $\cdots \text{\hspace{0.17em}},X(n)$, the empirical cumulative density function (ECDF) [13] for the statistical distribution is computed. A statistical K-S test [13] is applied to reach a statistical decision about the populations on the basis of the statistical sampling distribution information. The statistical method calculates the maximum distance between the ECDFs ${F}^{r}(u)$ and ${F}^{i}(u)$ of the reference and unknown input data sets.
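The per-voxel estimation step can be sketched numerically as follows. We assume here that the truncated-Poisson MLE is found by matching the sample mean of the *N* photon counts to the truncated-Poisson mean and solving for the parameter by bisection; the closed form (if any) used in the paper is not reproduced, and the toy counts and helper names are our own.

```python
import math

def partial_exp_sum(m, lam):
    """S_m(lam) = sum_{j=0}^{m} lam^j / j!, the truncated-model normalizer."""
    return sum(lam ** j / math.factorial(j) for j in range(m + 1))

def truncated_mle(counts, k=2, lo=1e-9, hi=50.0):
    """Estimate lam = N_tilde * I_p for one voxel from the counts observed across
    the elemental images, by solving mean(counts) = lam*S_{k-1}(lam)/S_k(lam)."""
    target = sum(counts) / len(counts)
    def gap(lam):
        return lam * partial_exp_sum(k - 1, lam) / partial_exp_sum(k, lam) - target
    for _ in range(100):              # bisection: gap() is increasing in lam
        mid = 0.5 * (lo + hi)
        if gap(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Counts seen at one voxel across N = 10 elemental images (toy data, mostly zeros)
voxel_counts = [0, 0, 1, 0, 2, 0, 1, 0, 0, 0]
lam_hat = truncated_mle(voxel_counts, k=2)
```

Consistent with the text, the truncated-model estimate comes out above the plain Poisson MLE (the raw sample mean, 0.4 here), since counts above *k* are never observed and the model compensates for the discarded mass.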

## 3. Experimental results

Experiments verifying the proposed statistical approach for 3D visualization and recognition of objects with a very small number of photons are presented. We recorded a 3D object’s elemental image set by moving a CCD camera transversally in both the *x* and *y* directions in an II system [7]. The image sensor array has 2028×2044 pixels with a pixel size of 9μm×9μm. Each imaging channel in the II system captured a 2D elemental image containing directional information about the object (a toy car), and 10×10 elemental images were generated. In the experiments, elemental image sets of the rear and front views of the toy car were recorded (see Fig. 1). The front and rear views are used as two different classes of data for classification.

The photon-counted elemental images of the toy car were generated according to a photon counting detection model [5-6]. Figure 2 shows the sectional images of the 3D scene for the front and rear views of the toy car, reconstructed from the corresponding photon-counted elemental image sets using the proposed truncated photon counting model, where the cars were reconstructed at distance *z _{0}* = 100cm and the expected number of photons in each elemental image was set to $\tilde{N}$=10 (*k* in Eq. (4) is set to 2). The reconstructed images are statistically analyzed in order to evaluate the enhancement in 3D recognition of objects with a very small number of photons. A statistical K-S test is applied to assess the 3D recognition performance. We first computationally reconstruct one sectional image of the 3D scene located at distance *z _{0}* = 100cm from the photon-limited elemental image set using the truncated photon counting method. The process is repeated *n* times, and each time the statistical standard deviation of the reconstructed sectional image is computed. We define this statistical standard deviation as the random variable *σ*. As a result, we have *n* samples of the random variable *σ*, from which its statistical sampling distribution, or histogram, is formed.

The statistical K-S test is performed to compare the two histograms of the reference and unknown input data to reach a statistical decision. The K-S statistic tests the null hypothesis (${\text{H}}_{0}:{F}^{r}(u)={F}^{i}(u)$) that the two histograms follow the same distribution at a given level of significance. Figure 3(a) shows the computed K-S test p-value versus the sample size *n* for the random variable *σ*, where we changed the sample size *n* used in the calculation of *σ* from *n*=100 to *n*=1000 with an interval of 100. The expected number of photons and the total number of elemental images were set to $\tilde{N}$=10 and ${N}_{e}$=4, respectively. As shown in Fig. 3(a), the statistical p-value calculated by the proposed truncated photon counting method was less than the statistical p-value given by the conventional photon counting method over the range of sample sizes $100\le n\le 1000$. The statistical p-value is a measure of how much evidence we have against the null hypothesis: a small p-value is evidence against the null hypothesis, while a large p-value means little or no evidence against it. A hypothesis may be rejected if the p-value is less than 0.01. It is noted in Fig. 3(a) that the statistical p-value computed by the proposed truncated photon counting method for the random variable *σ* is smaller than 0.01 in the range of sample sizes $500\le n\le 1000$ and decreases very rapidly as the sample size increases, while the statistical p-values computed by the conventional Poisson counting method were larger than 0.19 in the same range. The K-S test statistic calculated by the truncated photon counting method thus discriminates between the two different classes with over 99% accuracy, whereas the conventional Poisson model with the K-S test states only with 81% confidence that the two classes of data are the same.

Varying the parallax in II affects the photon counting 3D recognition performance for a fixed number of photons. We reconstruct the 3D images with a varying number of elemental images (parallax) while keeping the same average number of photons, $\tilde{N}$=10. Figure 3(b) shows the computed K-S test p-value versus the sample size *n* for the random variable *σ*, where ${N}_{e}$=25. As shown in Fig. 3(b), the statistical p-value computed using the truncated photon counting method is much lower than that of the Poisson distribution, that is, 1.54×10^{−66} versus 2.14×10^{−56} for sample size *n*=1000. The statistical p-values computed using both the truncated photon counting and the Poisson distribution decrease rapidly as the total number of elemental images increases; however, the truncated photon counting method retains a much lower statistical p-value than the Poisson distribution method, and the difference between the statistical p-values of the two methods becomes more substantial as the total number of elemental images (parallax) increases. Therefore, for a given sample size *n*, the test statistic for the random variable *σ* calculated by the truncated photon counting method discriminates between the two different data sets with much better accuracy than the Poisson method. Figure 4 shows the cumulative density function plots of the statistical sampling distributions of the known reference and unknown input data from the toy car for the random variable *σ*, where the sample size *n* was 1000, the total number of elemental images (parallax) ${N}_{e}$ is 4, and the mean number of photons $\tilde{N}$ is 10. The graph shows that the K-S distance *Λ* obtained by the truncated photon counting method is larger than that of the Poisson distribution method: the maximum difference between the cumulative distributions was 0.0480 for the Poisson distribution method and 0.1297 for the proposed truncated photon counting method. Thus, the truncated photon counting method may provide better results for statistical classification.
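The K-S comparison used throughout this section can be sketched as follows: build the ECDFs of the two *σ* samples and take the maximum vertical gap *Λ* between them. The two small samples below are synthetic placeholders, not the measured histograms from the experiment.

```python
import bisect

def ks_distance(sample_a, sample_b):
    """Two-sample K-S statistic: the maximum vertical gap between the two ECDFs,
    evaluated over the pooled sample points."""
    a, b = sorted(sample_a), sorted(sample_b)
    def ecdf(sorted_sample, u):
        # fraction of the sample that is <= u
        return bisect.bisect_right(sorted_sample, u) / len(sorted_sample)
    return max(abs(ecdf(a, u) - ecdf(b, u)) for u in a + b)

# Two synthetic sigma samples standing in for the reference / input distributions
reference = [0.10, 0.11, 0.12, 0.13, 0.14]
unknown = [0.12, 0.13, 0.14, 0.15, 0.16]
distance = ks_distance(reference, unknown)
```

A larger distance between the two ECDFs makes it easier to reject the null hypothesis that the reference and input data come from the same population, which is why the larger *Λ* of the truncated model translates into better discrimination.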

## 4. Conclusion

We have proposed multi-view 3D visualization and recognition of objects with a small number of photons using the truncated photon counting method. For 3D visualization and recognition of photon counting objects, the sectional image of a 3D scene is estimated using a parametric MLE of the photon counts modeled by the truncated Poisson density function. Statistical inference algorithms are applied for a statistical decision about the populations on the basis of the statistical sampling distribution. It is shown that the parametric MLE using a truncated Poisson model for estimating the average number of photons for each voxel of a photon-starved 3D object has a smaller estimation error than the MLE using a Poisson model. The 3D recognition performance for objects with a small number of photons can be enhanced by using the truncated photon counting method. We have also investigated the effects of 3D imaging parallax on the recognition performance when a fixed mean number of photons is used.

## References and links

**1. **B. Javidi, F. Okano, and J. Sun, 3D Imaging, Visualization, and Display Technologies (Springer, 2008).

**2. **G. Lippmann, “La photographie intégrale,” C. R. Acad. Sci. **146**, 446–451 (1908).

**3. **H. Hoshino, F. Okano, H. Isono, and I. Yuyama, “Analysis of resolution limitation of integral photography,” J. Opt. Soc. Am. A **15**(8), 2059–2065 (1998). [CrossRef]

**4. **A. Stern and B. Javidi, “Three-dimensional image sensing, visualization, and processing using integral imaging,” Proc. IEEE **94**(3), 591–607 (2006). [CrossRef]

**5. **O. Matoba, E. Tajahuerce, and B. Javidi, “Real-time three-dimensional object recognition with multiple perspectives imaging,” Appl. Opt. **40**(20), 3318–3325 (2001). [CrossRef]

**6. **S. Yeom, B. Javidi, and E. Watson, “Three-dimensional distortion-tolerant object recognition using photon-counting integral imaging,” Opt. Express **15**(4), 1513–1533 (2007). [CrossRef] [PubMed]

**7. **I. Moon and B. Javidi, “Three-dimensional recognition of photon-starved events using computational integral imaging and statistical sampling,” Opt. Lett. **34**(6), 731–733 (2009). [CrossRef] [PubMed]

**8. **Special issue on 3-D Technologies for Imaging and Displays, Proc. IEEE **94** (2006).

**9. **N. Mukhopadhyay, Probability and Statistical Inference (Marcel Dekker, New York, 2000).

**10. **M. Guillaume, P. Melon, P. Refregier, and A. Llebaria, “Maximum-likelihood estimation of an astronomical image from a sequence at low photon levels,” J. Opt. Soc. Am. A **15**(11), 2841–2848 (1998). [CrossRef]

**11. **V. Rohatgi and A. Ehsanes Saleh, An Introduction to Probability and Statistics (Wiley, 2000).

**12. **J. Goodman, Statistical Optics (John Wiley & Sons, 1985).

**13. **M. Hollander and D. Wolfe, Nonparametric Statistical Methods (Wiley, 1999).