## Abstract

A microlens-based optical detector was developed for small animal optical imaging. In this paper we present an iterative reconstruction algorithm yielding improved image quality and spatial resolution as compared to conventional inverse mapping. The reconstruction method utilizes the compressive sensing concept to cope with the undersampled nature of the problem. Each iteration of the algorithm contains two separate steps to ensure both the convergence of the least-squares solution and the minimization of the *l*₁-norm of the sparsifying transform. The results estimated from measurements, employing a Derenzo-like pattern and a Siemens star phantom, show significant improvements in contrast and spatial resolution in comparison to results calculated by inverse mapping.

© 2011 OSA

## 1. Introduction

Optical imaging, including bioluminescence imaging (BLI) as well as fluorescence imaging and tomography (FLI and FMT), has become widely used in preclinical research to investigate biological processes in small animals [1]. In comparison with other modalities such as PET – often considered the gold standard in molecular imaging [2, 3] – optical imaging has the advantages of high target sensitivity and low cost but, at the same time, rather poor spatial localization capabilities because of the strong scattering of light photons in tissue. Moreover, it is difficult to translate preclinical optical imaging studies directly to clinical studies for various reasons [4]. A microlens-based optical detector was proposed and assembled in our lab [5] to overcome several limitations of conventional imaging techniques employing a lens-based photon sensor, enabling fully 3D data acquisition and multimodal operation (Fig. 1).

While this detector addresses the aforementioned problems, an image is not immediately available but needs to be calculated due to the multilens structure of the detector [5, 6]. An appropriate reconstruction procedure is therefore necessary to calculate a projectional view of the object. The rationale for such reconstruction is related to the techniques used in integral imaging (II) to synthesize the captured three-dimensional object [7, 8]. In general, two kinds of reconstruction methods have been developed in II: optical II reconstruction (OIIR) and computational II reconstruction (CIIR) [9]. In OIIR, the detected elemental images are first displayed on a screen and afterwards optically reconstructed to form the original object by attaching another microlens-based optical system (Fig. 2) [10–12]. This technique can achieve real-time reconstruction without any computational burden. However, it involves further image degradation due to the physical limitations of optical systems [12], and it requires additional optical devices to perform the processing and to record the data digitally. In CIIR, the real images are generated in a computer through digital simulation of the ray geometry [13, 14]. In [13], a set of low-resolution images is generated with the pixels extracted periodically from each lens unit, representing images acquired from different view angles (Fig. 3(a)). The spatial resolution of this approach is limited by the size of the individual microlens units, and moreover, the overlap of rays from different lens units is not considered. In [14], the approach is based on inverse mapping and superposition of all the elemental images to a specific reconstruction plane through a hypothetical pinhole array (Fig. 3(b)). This approach yields higher resolution than the previous one because all the detected pixels are considered for each reconstructed image plane.
However, due to the overlap of the mapped elemental images from the individual lens units, the direct inverse mapping method yields image blurring and degradation of spatial resolution. Furthermore, the varying numbers of inversely mapped images contributing to different pixels result in square-shaped intensity irregularities in the reconstructed images. Even though these irregularities can be slightly reduced by normalization [15], this kind of artifact still degrades image quality and resolution. Nevertheless, the inverse mapping method is widely used in CIIR to generate depth-slice images [16]. In a previous study, we adapted this inverse mapping approach to our specific microlens-based optical detector [6].

Various techniques have recently been proposed in the field of CIIR to improve the spatial resolution and quality of the reconstructed images over the original inverse mapping method. A nonstationary lens array was adopted to improve the image resolution [15, 17]. Shin and Yoo suggested the magnification of elemental images by interpolation before inverse mapping to enhance spatial resolution [18]. Hwang et al. introduced an intermediate-view reconstruction technique to increase the number of elemental images and, thus, also improve image resolution [19]. Lee et al. used a blur metric to detect and eliminate unfocused parts in the reconstruction plane which could lead to image degradation [20]. Saavedra et al. applied Fourier filtering to the elemental images before inverse mapping reconstruction in order to further suppress blurred images [21].

Drawing on the expertise in image reconstruction techniques in medical imaging, iterative reconstruction algorithms play a crucial role in achieving better image quality and increased robustness against noise and/or data incompleteness [22]. Recently, the concept of compressive sensing (CS) has also been applied to image reconstruction problems in which the acquired data are undersampled [23, 24]. In this paper, we propose an iterative reconstruction method based on CS theory as an alternative to the classic CIIR approach to improve the estimation of the projection data acquired from the microlens-based optical detector.

## 2. Methods

#### 2.1. System Description

The microlens-based optical detector used in this study is built of three layers: a large-area CMOS imaging sensor (Rad-icon Imaging Corp., Santa Clara, CA), a septum mask and a microlens array (MLA) (Fig. 1) [5]. The photon sensor is equipped with a 512 × 1024 matrix of photodiodes at 48 μm pixel size. It can be operated at frame rates of 0.01–4.5 frames per second and has a dynamic range of 12 bits. The overall size of the MLA matches the size of the photon sensor, 24.6 mm × 49.2 mm. Each individual lens unit on the MLA has dimensions of 480 μm × 480 μm, corresponding to local fields of 10 × 10 pixels on the photon sensor. All lenses have a focal length of 2.2 mm, forming a focal plane at which the photon sensor is aligned. The septum mask is placed between the photon sensor and the MLA, with a borehole diameter of 400 μm and a borehole pitch of 480 μm. This mask is co-aligned with the MLA in order to avoid cross-talk between the light fields of adjacent lens units on the MLA.

With this setup, all microlens units are focused at infinite object distance. According to geometrical optics, the hyperfocal distance *H* of the system, beyond which all objects are acceptably sharp, can be calculated with [25]

*H* = *f*² / (*N* · *c*), (1)

where *N* is the f-number, given by the ratio of the focal length *f* to the borehole diameter *d* of the septum mask, *N* = *f*/*d*, and *c* is the circle of confusion, which is in this case limited by the pixel size on the photon sensor, 0.048 mm. Therefore, the hyperfocal distance of this system as calculated from Eq. 1 is 18.3 mm, which means that all objects placed farther than 18.3 mm from the MLA are adequately focused. Applying ray tracing theory with the thin lens approximation, the MLA can then be simplified to a pinhole array for all imaged objects beyond this plane.
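As a quick numerical check, the system parameters above reproduce the quoted hyperfocal distance (a minimal sketch using the approximation *H* = *f*²/(*N c*); variable names are illustrative):

```python
f = 2.2      # microlens focal length, mm
d = 0.4      # borehole diameter of the septum mask, mm
c = 0.048    # circle of confusion = sensor pixel size, mm

N = f / d            # f-number: 2.2 / 0.4 = 5.5
H = f**2 / (N * c)   # hyperfocal distance
print(round(H, 1))   # 18.3 mm, in agreement with the text
```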

#### 2.2. Inverse Mapping Method

The inverse mapping method is a straightforward solution to calculate projection images at any depth slice of the object volume [6, 14, 15]. As shown in Fig. 3(b), the reconstructed image plane is located at a distance *z* from the MLA plane, and the photon sensor is aligned at the focal plane at a distance *f* from the lens units. Assuming that the MLA is modeled as a pinhole array, the inverse mapping method is implemented in this work with the following three steps:

- Each elemental image is inversely mapped to the reconstructed image plane with a magnification factor *M*, calculated by *M* = *z*/*f*.
- All inversely projected images are summed on that plane.
- Each reconstructed pixel is normalized by the number of inversely mapped elemental images contributing to it.
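The three steps above can be sketched for a one-dimensional lens array as follows (a simplified illustration under the pinhole-array assumption; function and variable names are our own, not from the detector software):

```python
import numpy as np

def inverse_map(elemental, lens_pitch, z, f):
    """Inverse-map 1-D elemental images to the plane at distance z.

    elemental  : (n_lenses, pixels_per_lens) detected intensities
    lens_pitch : lens (and elemental image) pitch in sensor pixels
    z, f       : object distance and focal length (same units)
    """
    n_lens, n_pix = elemental.shape
    M = z / f                                # magnification factor M = z/f
    width = int(np.ceil(n_lens * lens_pitch + (M - 1) * n_pix))
    recon = np.zeros(width)
    counts = np.zeros(width)
    for k in range(n_lens):
        center = k * lens_pitch + lens_pitch / 2     # pinhole position
        for p in range(n_pix):
            off = (p + 0.5) - n_pix / 2              # offset from lens axis
            idx = int(round(center - M * off))       # magnified, mirrored map
            if 0 <= idx < width:
                recon[idx] += elemental[k, p]        # step 2: summation
                counts[idx] += 1
    counts[counts == 0] = 1
    return recon / counts                            # step 3: normalization
```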

#### 2.3. Iterative Reconstruction Method

Here, we reformulate the reconstruction problem in an iterative scenario. Assume that a vector **X** is the image to be estimated and a vector **Y** refers to the measured elemental images on the photon sensor. From the geometry shown in Fig. 3(b), we can construct a system matrix **A**, which maps any pixel at the reconstructed image plane to the sensor plane. Since the sizes of the raw detector data and the reconstructed image in our case are 512 × 1000, the dimension of the matrix would be 512000 × 512000. However, since only a few pixels in the reconstructed image correspond to one specific detector pixel, this matrix is very sparse. We therefore store the matrix in a sparse manner with an index vector and a value vector for each detector pixel. The reconstruction problem is then to find **X**, which conforms to **AX** = **Y**.
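The per-pixel index/value storage can be illustrated with a toy forward projection (sizes here are toy numbers, not the real 512000 × 512000 system; names are ours):

```python
import numpy as np

# For each detector pixel, store the indices of the few image pixels it
# sees (index vector) and the corresponding matrix entries (value vector).
n_det, n_img = 8, 8
indices = [np.array(ix) for ix in
           ([0, 1], [1], [2, 3], [3], [4], [5], [6], [7])]
values = [np.ones(len(ix)) for ix in indices]

def forward(X):
    """Sparse forward model Y = A X using the index/value vectors."""
    return np.array([v @ X[ix] for ix, v in zip(indices, values)])

Y = forward(np.ones(n_img))   # detector pixel 0 sees two image pixels
```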

Since each pixel in **X** can be mapped to various pixel positions in **Y**, which causes redundancy in the measured image, the solution of this inverse problem is not unique. Assuming a simple one-dimensional example with 5 microlenses as shown in Fig. 4, a relatively large light source will generate the same elemental images as a very small point source. In other words, the acquisition is considered undersampled if a high-resolution image is to be reconstructed. The compressive sensing (CS) theory can deal with such data incompleteness [26, 27]. In our application, the target objects are biomedical samples, particularly nude mice, which spread out, or more precisely, scatter the incoming or self-generated light, yielding a severely smoothed light output at the surface. Therefore, we can assume that the “right” **X** is more likely to be smooth, representing the upper large light source in Fig. 4 rather than the lower sharp point source. With this assumption, the resulting image is compressible and can be sparsified with a gradient operation. The estimation of **X** with given **A** and **Y** can then be set in the framework of CS image reconstruction by solving the following *l*₁-minimization problem:

min ||Φ**X**||₁ subject to **AX** = **Y**,

where Φ denotes the gradient-based sparsifying transform and **X**(*i*, *j*) is the value of **X** at pixel (*i*, *j*).
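One plausible realization of such a gradient-based sparsifier, and the *l*₁-norm being minimized, is sketched below (forward differences with an isotropic magnitude; the paper's exact Φ may differ):

```python
import numpy as np

def gradient_l1(X):
    """l1-norm of a discrete-gradient sparsifying transform Phi X."""
    dx = np.diff(X, axis=0, append=X[-1:, :])   # vertical differences
    dy = np.diff(X, axis=1, append=X[:, -1:])   # horizontal differences
    return float(np.sum(np.sqrt(dx**2 + dy**2)))

flat = np.ones((4, 4))                       # smooth image: zero norm
step = np.zeros((4, 4)); step[:, 2:] = 1.0   # sharp edge: nonzero norm
```

A smooth image has a small *l*₁-norm under Φ, so the minimization favors a large smooth source over a sharp point source, as in Fig. 4.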

In our application, however, data inconsistencies induced by signal noise must be considered. Therefore, we do not strictly enforce agreement with the forward model (**AX** = **Y**). Instead, the minimization is numerically implemented in an iterative process with two separate steps for each iteration:

In this algorithm, the first step is the standard simultaneous iterative reconstruction technique (SIRT), which converges to a least-squares solution of **AX** = **Y**. The second is a steepest descent step which minimizes the *l*₁-norm ||Φ**X**||₁. The parameter *α* defines the step length for the *l*₁-minimization in each iteration. The reconstruction is terminated when the convergence criterion ||**X**^{(k+1)} − **X**^{(k)}||^{2} < *ε* is reached.
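In code, one iteration of this two-step scheme might look as follows (a 1-D sketch; the SIRT weighting by inverse row/column sums and the subgradient of the *l*₁-term are standard choices, assumed rather than taken from the paper):

```python
import numpy as np

def reconstruct(A, Y, n_iter=50, alpha=0.1, eps=1e-8):
    """SIRT step followed by steepest descent on ||Phi X||_1 (Phi = diff)."""
    m, n = A.shape
    row = A.sum(axis=1); row[row == 0] = 1   # inverse row-sum weighting
    col = A.sum(axis=0); col[col == 0] = 1   # inverse column-sum weighting
    X = np.zeros(n)
    for _ in range(n_iter):
        X_old = X.copy()
        # Step 1: SIRT update toward the least-squares solution of AX = Y
        X = X + (A.T @ ((Y - A @ X) / row)) / col
        # Step 2: steepest descent on the l1-norm of the gradient of X
        s = np.sign(X[1:] - X[:-1])          # subgradient of |Phi X|
        grad = np.zeros(n)
        grad[:-1] -= s
        grad[1:] += s
        X = X - alpha * grad
        # Convergence criterion ||X_new - X_old||^2 < eps
        if np.sum((X - X_old) ** 2) < eps:
            break
    return X
```

For a trivial identity system the iteration recovers the measurement exactly; for overlapping lens mappings the *α*-weighted descent steers the least-squares solution toward locally smooth estimates.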

#### 2.4. Experimental Setup

In order to investigate the performance of the iterative reconstruction algorithm we first adopted a planar light source phantom with Derenzo-like geometry, with circles ranging in diameter from 0.3 mm to 0.8 mm in 0.1 mm steps (Fig. 5(a)). This pattern was placed over an electro-luminescent light screen (El-Light, Strausberg, Germany) with an overall size of 10 cm × 10 cm. The optical detector was positioned at a distance of 27 mm from the pattern object. Two different acquisitions were performed with the electro-luminescent light screen powered at 6 V and 2 V, respectively. In the first case, the light screen generated enough photons for the optical detector to work in its optimal dynamic range. In the second case, the light source was set so weak that only a few counts above background noise could be detected. The exposure time was set to 1 s in both cases.

In the second experimental setup a Siemens star pattern made of glass (Edmund Optics GmbH, Karlsruhe, Germany, see Fig. 5(b)) was placed over the same light screen to quantify the intrinsic spatial resolution of the system. This Siemens star has a sector pattern of 5 cm in diameter with 72 wedge pairs (5 degrees per wedge pair) and was measured from distances of 25 mm, 30 mm and 35 mm, respectively. The power supply for the light screen was set to 5 V, which led to an optimal light output for the optical detector at 1 s exposure time.

## 3. Results

#### 3.1. Experiment with Derenzo-like Geometry

Fig. 6(a) shows the raw sensor data detected from the pattern with Derenzo-like geometry under adequate illumination. The maximal intensity detected (2082) is about half of the maximal detector range (2^{12} = 4096). Fig. 6(b) shows the result calculated with the inverse mapping method, and Fig. 6(c) is the result estimated with the iterative algorithm. The parameter *α* was set to 0.25, and the reconstruction stopped after 37 iterations. Both reconstruction results were normalized to the same total sum of pixel intensities and are shown on a 0–1 scale for better comparison. Fig. 6(d) shows the profiles for both reconstruction results. When *α* was set to 0 (identical to SIRT reconstruction), the reconstruction process converged after 338 iterations; the result is shown in Fig. 7.

When the power supply for the electro-luminescent light screen was set to 2 V, the raw sensor data became noisy and the highest intensity of the elemental images was reduced to 7 counts (after subtraction of dark noise data, see Fig. 8(a)). Fig. 8(b) and 8(c) show the results calculated with the inverse mapping method and estimated with the iterative approach, respectively. In this case, the parameter *α* for the iterative method was chosen as 0.00075, and the reconstruction process converged after 15 iterations.

#### 3.2. Experiment with Siemens Star

Fig. 9 shows the reconstruction results from both algorithms applied to the same acquisitions of the Siemens star pattern. During the iterative reconstruction process, the parameter *α* was set to 0.4, 0.3 and 0.25 for the measurements at distances of 25 mm, 30 mm and 35 mm, respectively. The spatial resolutions were then calculated in line pairs per millimeter according to the minimum detectable contrast, as listed in Table 1.
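For orientation, the line-pair frequency of a Siemens star at a given radius follows directly from its geometry: the 72 wedge pairs repeat over the circumference 2π*r*. A minimal sketch (the paper's contrast-based detection criterion is not reproduced here):

```python
import math

def siemens_lp_per_mm(n_wedge_pairs, radius_mm):
    """Spatial frequency (line pairs / mm) of a Siemens star at radius r."""
    return n_wedge_pairs / (2 * math.pi * radius_mm)

# At the outer rim of the 5-cm star (r = 25 mm):
rim = siemens_lp_per_mm(72, 25.0)   # about 0.46 lp/mm
```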

## 4. Discussion and Conclusion

In our application, the typical magnification factor *M* used in the inverse mapping approach is about 10–20. With such magnification the enlarged mappings of the elemental images at the selected reconstruction plane overlap greatly with their neighbors. This overlapping and superposition can cause blurring of the reconstructed images and degradation of the resolution of the system. The iterative algorithm approaches the problem the other way round. It searches for the optimized estimate that minimizes the difference between the computed elemental images and the measured sensor data. Since the solution of the inverse problem is not unique, the proposed algorithm looks for an estimate that is locally smooth, in accordance with the theory of compressive sensing. The sparsifying transform Φ in the algorithm is designed to ensure local smoothness and, at the same time, to reduce the square-like artifact inherent to microlens-based imaging [15, 28].

The Siemens star is a phantom that is widely used to test the resolution of optical instruments. Due to its radially oriented structures, the square-like artifacts are more prominent than in the data acquired from the Derenzo-like pattern. As shown in Fig. 9, this artifact is slightly intensified with the iterative approach in comparison with the inverse mapping results. However, both image contrast and intrinsic resolution are greatly improved by the iterative process (Fig. 9 and Table 1). One should note, though, that the Siemens star phantom represents a pattern that is not realistic in *in vivo* imaging applications. It has been included here to illustrate the potential of the proposed algorithm in general.

The Derenzo phantom, on the other hand, represents biomedical data much better. Its geometry is also frequently used in nuclear medicine image reconstruction studies. In the results of the measurements of the Derenzo-like geometry shown in Figs. 6 and 8, the resolution limit for the results calculated by the inverse mapping method is 0.5 mm in both cases, while the 0.4-mm hot rods can still be resolved when the iterative algorithm is applied. In biomedical applications such as bioluminescence imaging (BLI), the light output generated by biological processes is usually very weak. The second acquisition of the Derenzo-like pattern simulates such a situation, in which the counts of the acquired raw data are just above the minimum detectable range. In this case, the noise in the background is more prominent than in the acquisition with adequate illuminance (compare Fig. 6(a) with Fig. 8(a)). In the inverse mapping process, all detected noisy pixels are collected and added to the reconstructed result, leading to a high background noise level and a loss of resolvable detail (Fig. 8(b)). In the iterative reconstruction algorithm, however, the noisy data are suppressed by taking neighboring pixels into account, and the reconstructed data can still resolve the 0.4-mm rods (Fig. 8(c)).

It is worth noting that the selection of the parameter *α* depends on the overall illuminance level of the acquisition. If *α* is too large, the estimated image becomes uniform, losing the information from the raw detected data. On the other hand, if the parameter is too small (set to 0, as the extreme case), the algorithm reduces to SIRT reconstruction. In that case, the reconstruction converges to a result that is filled with sharp speckles (Fig. 7). For the results shown in this article we selected *α* between 1/10000 and 1/8000 of the maximum grey value of the raw detector data, based on various tests. This selection keeps a good balance between the SIRT step and the steepest gradient descent of the *l*₁-norm. The method appears robust against slight changes of the *α* parameter. Further investigation of the parameter selection may improve the performance.
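The reported window can be expressed as a simple rule of thumb (a sketch of the paper's empirical 1/10000 to 1/8000 range; the function name is ours):

```python
def alpha_window(raw_max):
    """Empirical window for the step length alpha: between 1/10000 and
    1/8000 of the maximum grey value of the raw detector data."""
    return raw_max / 10000, raw_max / 8000

lo, hi = alpha_window(2082)   # adequate-illumination acquisition (Fig. 6)
# the alpha = 0.25 used in that experiment lies inside (lo, hi)
```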

As discussed in Section 2.1, this microlens-based optical detector has a rather wide depth of field, from 18.3 mm to infinity. In our application, the typical distance between the imaging object and the MLA is 20–40 mm. Therefore, the presented algorithm, which is designed to reconstruct planes within the focal range, is well suited to our specific applications. More generally, for any reconstructed image plane that lies outside the focal range, the method presented in [29] can be applied to calculate a modified system matrix **A′** instead of assuming a pinhole array for the ray tracing simulation. The remaining iteration steps presented in this paper stay the same.

A precondition of this algorithm is that the distance *z* between the imaging object and the MLA is known. This precondition can easily be met, as our optical detector is designed to acquire data simultaneously with other imaging modalities, such as PET and MRI. Hence, the distance information of a three-dimensional object surface can be extracted from the complementary modality.

In conclusion, a new iterative algorithm to estimate the projection images as detected from a microlens-based optical detector is presented. In comparison with the traditional inverse mapping method, this approach yields improved contrast and spatial resolution of the reconstructed images.

## References and links

**1. **N. Henriquez, P. van Overveld, I. Que, J. Buijs, R. Bachelier, E. Kaijzel, C. Löwik, P. Clezardin, and G. van der Pluijm, “Advances in optical imaging and novel model systems for cancer metastasis research,” Clin. Exp. Metastasis **24**, 699–705 (2007). [CrossRef] [PubMed]

**2. **D. Rowland, J. Lewis, and M. Welch, “Molecular imaging: the application of small animal positron emission tomography,” J. Cell. Biochem. **87**, 110–115 (2002). [CrossRef]

**3. **G. Kelloff, K. Krohn, S. Larson, R. Weissleder, D. Mankoff, J. Hoffman, J. Link, K. Guyton, W. Eckelman, H. Scher, J. O’Shaughnessy, B. D. Cheson, C. C. Sigman, J. L. Tatum, G. Q. Mills, D. C. Sullivan, and J. Woodcock, “The progress and promise of molecular imaging probes in oncologic drug development,” Clin. Cancer Res. **11**, 7967–7985 (2005). [CrossRef] [PubMed]

**4. **R. Weissleder and V. Ntziachristos, “Shedding light onto live molecular targets,” Nat. Med. **9**, 123–128 (2003). [CrossRef] [PubMed]

**5. **J. Peter, D. Unholtz, R. Schulz, J. Doll, and W. Semmler, “Development and initial results of a tomographic dual-modality positron/optical small animal imager,” IEEE Trans. Nucl. Sci. **54**, 1553–1560 (2007). [CrossRef]

**6. **D. Unholtz, W. Semmler, O. Dössel, and J. Peter, “Image formation with a microlens-based optical detector: a three-dimensional mapping approach,” Appl. Opt. **48**, D273–D279 (2009). [CrossRef] [PubMed]

**7. **R. Martinez-Cuenca, G. Saavedra, M. Martinez-Corral, and B. Javidi, “Progress in 3-D multiperspective display by integral imaging,” Proc. IEEE **97**, 1067–1077 (2009). [CrossRef]

**8. **M. Cho, M. Daneshpanah, I. Moon, and B. Javidi, “Three-dimensional optical sensing and visualization using integral imaging,” Proc. IEEE **99**, 556–575 (2011). [CrossRef]

**9. **A. Stern and B. Javidi, “Three-dimensional image sensing, visualization, and processing using integral imaging,” Proc. IEEE **94**, 591–607 (2006). [CrossRef]

**10. **N. Davies, M. McCormick, and M. Brewin, “Design and analysis of an image transfer system using microlens arrays,” Opt. Eng. **33**, 3624–3633 (1994). [CrossRef]

**11. **S. Min, B. Javidi, and B. Lee, “Enhanced three-dimensional integral imaging system by use of double display devices,” Appl. Opt. **42**, 4186–4195 (2003). [CrossRef] [PubMed]

**12. **J. Arai, F. Okano, H. Hoshino, and I. Yuyama, “Gradient-index lens-array method based on real-time integral photography for three-dimensional images,” Appl. Opt. **37**, 2034–2045 (1998). [CrossRef]

**13. **H. Arimoto and B. Javidi, “Integral three-dimensional imaging with digital reconstruction,” Opt. Lett. **26**, 157–159 (2001). [CrossRef]

**14. **S. Hong, J. Jang, and B. Javidi, “Three-dimensional volumetric object reconstruction using computational integral imaging,” Opt. Express **12**, 483–491 (2004). [CrossRef] [PubMed]

**15. **S. Hong and B. Javidi, “Improved resolution 3D object reconstruction using computational integral imaging with time multiplexing,” Opt. Express **12**, 4579–4588 (2004). [CrossRef] [PubMed]

**16. **J. Park, K. Hong, and B. Lee, “Recent progress in three-dimensional information processing based on integral imaging,” Appl. Opt. **48**, 77–94 (2009). [CrossRef]

**17. **J. Jang and B. Javidi, “Improved viewing resolution of three-dimensional integral imaging by use of nonstationary micro-optics,” Opt. Lett. **27**, 324–326 (2002). [CrossRef]

**18. **D. Shin and H. Yoo, “Image quality enhancement in 3D computational integral imaging by use of interpolation methods,” Opt. Express **15**, 12039–12049 (2007). [CrossRef] [PubMed]

**19. **D. Hwang, J. Park, S. Kim, D. Shin, and E. Kim, “Magnification of 3D reconstructed images in integral imaging using an intermediate-view reconstruction technique,” Appl. Opt. **45**, 4631–4637 (2006). [CrossRef] [PubMed]

**20. **K. Lee, D. Hwang, S. Kim, and E. Kim, “Blur-metric-based resolution enhancement of computationally reconstructed integral images,” Appl. Opt. **47**, 2859–2869 (2008). [CrossRef] [PubMed]

**21. **G. Saavedra, R. Martinez-Cuenca, M. Martinez-Corral, H. Navarro, M. Daneshpanah, and B. Javidi, “Digital slicing of 3D scenes by Fourier filtering of integral images,” Opt. Express **16**, 17154–17160 (2008). [CrossRef] [PubMed]

**22. **J. Qi and R. Leahy, “Iterative reconstruction techniques in emission computed tomography,” Phys. Med. Biol. **51**, R541–R578 (2006). [CrossRef] [PubMed]

**23. **E. Sidky, C. Kao, and X. Pan, “Accurate image reconstruction from few-views and limited-angle data in divergent-beam CT,” J. X-Ray Sci. Technol. **14**, 119–139 (2006).

**24. **G. Chen, J. Tang, and S. Leng, “Prior image constrained compressed sensing (PICCS): a method to accurately reconstruct dynamic CT images from highly undersampled projection data sets,” Med. Phys. **35**, 660–663 (2008). [CrossRef] [PubMed]

**25. **M. Katz, *Introduction to Geometrical Optics* (World Scientific Pub. Co. Inc., 2002).

**26. **E. Candès, J. Romberg, and T. Tao, “Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Inf. Theory **52**, 489–509 (2006). [CrossRef]

**27. **D. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory **52**, 1289–1306 (2006). [CrossRef]

**28. **M. Martínez-Corral, B. Javidi, R. Martínez-Cuenca, and G. Saavedra, “Multifacet structure of observed reconstructed integral images,” J. Opt. Soc. Am. A **22**, 597–603 (2005). [CrossRef]

**29. **D. Shin, E. Kim, and B. Lee, “Computational reconstruction of three-dimensional objects in integral imaging using lenslet array,” Jpn. J. Appl. Phys. **44**, 8016–8018 (2005). [CrossRef]