## Abstract

We propose a computational synthetic aperture integral imaging (CompSAII) technique that increases the field of view (FOV) and viewing angle of integral imaging systems. The synthetic aperture is obtained by relative movement of the imaging system and the object in a plane perpendicular to the optical axis. Integral images (IIs) captured during the scanning process are combined to create a synthetic aperture integral image (SAII) that has an enlarged effective FOV. Three-dimensional (3D) images are computed digitally from the constructed SAII in the same way as they are computed from a single digital II. Since the synthetic aperture is obtained by a scanning process, the proposed method is suitable when the integral imaging system is located on a moving platform such as an aircraft, or when the object is in motion, as on an assembly line. CompSAII allows reconstruction of the images at different distances from the lenslets using backward or forward light propagation to position the viewing plane at any arbitrary position. To the best of our knowledge, this is the first time a computational synthetic aperture technique has been applied to integral imaging.

©2003 Optical Society of America

## 1. Introduction

Integral photography is a 3D imaging technology based on a pinhole array or a lenslet array that captures incoherent light rays arriving from different directions [1]. In traditional integral photography the rays emerging from a 3D object pass through a lenslet or pinhole array and are captured on photographic film. The reconstruction is then carried out optically by generating inverse-propagating rays with a system similar to the recording one. In [2] we used the term “integral imaging” to describe this technique, since we are performing imaging. If the II is recorded digitally, the 3D images can be reconstructed by computational methods [2–5]. A typical example of a digital II recording setup is shown in Fig. 1.

In this paper we present experimental results for CompSAII. In addition, we investigate the FOV and viewing angle limitations of integral imaging systems. We analyze those limitations for a typical integral imaging system using a lenslet array. In general the FOV is limited by the aperture of the system. In order to increase the aperture we implement a synthetic aperture approach in which we produce a relative motion between the object and the II system. This process can be viewed as laterally scanning the object, or alternatively as moving the object across the viewing area. An all-optical 3D synthetic aperture technique for II was demonstrated in Ref. [6]. The reconstruction in Ref. [6] is performed optically, which requires synchronizing the movement of the display with that of the II pickup. In this work we combine the synthetic aperture approach with computer reconstruction of 3D images from a SAII.

During the recording step, multiple laterally displaced exposures of the 3D object are taken. The elemental IIs obtained this way are combined to form a SAII. The SAII is equivalent to an II captured with an integral imaging system that has an enlarged aperture. The SAII is a compact representation of the information from the multiple exposures; since it has the same format as a regular II, it can be further manipulated with tools developed for IIs. In this work we demonstrate perspective reconstruction of a 3D object from the SAII by applying digital computation methods developed for regular IIs [2–4].

## 2. FOV limitation in integral images

The following analysis is carried out in one dimension to simplify the mathematical expressions. We assume a one-dimensional integral imaging system, but the results can easily be extended to a two-dimensional system.

The FOV for any integral image (traditional or digital) is primarily limited by the maximum lenslet exit angle *θ _{l}* [7]:

$$\mathrm{FOV}\le 2{\theta }_{l},\qquad (1)$$

given by:

$${\theta }_{l}=t{g}^{-1}\left(\frac{p}{2{f}_{l}}\right),\qquad (2)$$

where *f _{l}* is the lenslet focal length and *p* is the lenslet lattice pitch. This limitation arises from the need to avoid overlapping of the elemental images. Elemental images, that is, the images obtained from the individual lenslets, must not overlap in order to avoid spatial information cross-talk. This angle also determines the maximal viewing area, defined as the range in which the observer can move laterally [6]. The maximum size of an object *L _{0max}* that does not cause image overlapping is:

$${L}_{0\mathrm{max}}=2{z}_{1}\,tg\,{\theta }_{l}=\frac{p\,{z}_{1}}{{f}_{l}},\qquad (3)$$

where *z _{1}* is the distance from the object to the lenslet array. An object obeying Eq. (3) can be placed in a lateral range (Fig. 2):

$$\left|{x}_{0}\right|\le \frac{{L}_{A}+{L}_{0\mathrm{max}}}{2},\qquad (4)$$

where *L _{A}*=(*n*-1)*p* is the lenslet array effective size and *n* is the number of lenslets per dimension.
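As a quick numerical check, the limits of Eqs. (2) and (3) can be evaluated for representative parameters (the values below are those of the experiment in Section 4; the helper names are ours, not from the text):

```python
import math

def lenslet_exit_angle(p, f_l):
    """Maximum lenslet exit angle theta_l = tg^-1(p / (2 f_l)), Eq. (2)."""
    return math.atan(p / (2.0 * f_l))

def max_object_size(p, f_l, z1):
    """Largest object that avoids elemental-image overlap, Eq. (3)."""
    return p * z1 / f_l

p, f_l, z1 = 1.1, 5.2, 70.0                  # mm (Section 4 values)
theta_l = lenslet_exit_angle(p, f_l)
print(f"theta_l = {math.degrees(theta_l):.1f} deg")   # ~6.0 deg
print(f"L0_max  = {max_object_size(p, f_l, z1):.1f} mm")  # ~14.8 mm
```

Note that the resulting *L _{0max}* ≈ 14.8 mm is consistent with the 15 mm die used in the experiment of Section 4.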

The object size restriction given by Eq. (3) can be lifted by preventing the overlap of elemental images with a gradient-index lens array or with barriers [8]. In this case, an object covering the entire range of Eq. (4) can be imaged and reconstructed completely from at least one viewing direction.

In general the FOV is further limited by the finite aperture of optical elements following the lenslet array, such as that of the pickup lens or the recording medium. Figure 1 illustrates a typical case in which a pickup lens with a finite aperture is placed behind the lenslet array. The amount of limitation depends on the location of the viewing point during the reconstruction. The limitation is maximal for a viewing point located at an infinite distance from the lenslet array; in that case the viewing rays are parallel, each crossing the center of its lenslet. Computer reconstruction for a virtual viewing point located at infinity is obtained simply by sampling the II on a grid with a lattice constant equal to the pitch [2,4]. Other virtual viewing points, closer to the lenslet array, can be simulated by sampling a set of converging rays. In such a case larger viewing angles are permitted at the expense of angular resolution.
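The parallel-projection reconstruction just described, taking one pixel from each elemental image at the same relative position, can be sketched as follows (a minimal illustration on a synthetic II array; the pixel counts and function name are our assumptions):

```python
import numpy as np

def parallel_projection(ii, pixels_per_lenslet, offset):
    """Reconstruct a parallel-projection view from a digital integral image.

    One pixel is sampled from each elemental image at the same relative
    position `offset`; varying `offset` changes the viewing direction.
    """
    n = ii.shape[0] // pixels_per_lenslet          # lenslets per dimension
    view = np.empty((n, n), dtype=ii.dtype)
    for i in range(n):
        for j in range(n):
            view[i, j] = ii[i * pixels_per_lenslet + offset,
                            j * pixels_per_lenslet + offset]
    return view

# Synthetic II: 53x53 lenslets, 8x8 pixels behind each lenslet (assumed)
ii = np.random.rand(53 * 8, 53 * 8)
view = parallel_projection(ii, 8, offset=4)
print(view.shape)   # (53, 53)
```

Sampling at a different `offset` selects a different parallel ray direction, i.e., a different projective view.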

Figure 3 illustrates the FOV limitation for the parallel projective case (viewing point far from the lenslet array). The object is denoted by a striped arrow of lateral size *L _{0}*. For a 3D object, *L _{0}* can be approximated as the projection of the object onto a plane perpendicular to the optical axis at distance *z _{1}* from the lenslet array. Parallel rays emerging from the object pass through the lenslet array to a pickup lens located at distance *z _{2}* from the lenslet array. The pickup lens has an aperture diameter *D*. Let us define an observation angle as an angle for which a finite-size object located somewhere in the FOV can be completely reconstructed. The observation angle range for this setup is:

$$\left|\theta \right|\le \mathrm{min}\left\{{\theta }_{l},{\theta }_{m}\right\},\qquad (5)$$

where *θ _{l}* is the maximum lenslet array exit angle as defined in Eq. (2) and *θ _{m}* is an additional angle limitation given by:

$${\theta }_{m}=t{g}^{-1}\left[\frac{D-{L}_{0}}{2\left({z}_{1}+{z}_{2}\right)}\right].\qquad (6)$$

The lateral location of the object is restricted in this case to the range:

$$\left|{x}_{0}\right|\le \frac{\mathrm{min}\left\{{L}_{A},D\right\}-{L}_{0}}{2}.$$

The observation angles defining the FOV should be distinguished from the viewing (or perspective) angle. We define a viewing angle as an angle from which an object at a given location can be completely seen. Figure 4 shows the viewing angle range for the optical setup of Fig. 3. The maximal viewing angle is $t{g}^{-1}\left[\frac{D-{L}_{0}}{2\left({z}_{1}+{z}_{2}\right)}\right]$ for a small pickup lens aperture and $t{g}^{-1}\left(\frac{{L}_{A}-{L}_{0}}{2{z}_{1}}\right)$ for a large pickup lens aperture. Consequently the viewing angle range is given by:

$$2\,\mathrm{min}\left\{t{g}^{-1}\left[\frac{D-{L}_{0}}{2\left({z}_{1}+{z}_{2}\right)}\right],\;t{g}^{-1}\left(\frac{{L}_{A}-{L}_{0}}{2{z}_{1}}\right)\right\}.\qquad (7)$$
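The viewing angle range of Eq. (7) is straightforward to evaluate; the sketch below plugs in the experimental parameters of Section 4 (the function name is ours):

```python
import math

def viewing_angle_range(D, L_A, L0, z1, z2):
    """Full viewing-angle range of Eq. (7), returned in degrees."""
    small_aperture = math.atan((D - L0) / (2.0 * (z1 + z2)))   # pickup-lens limit
    large_aperture = math.atan((L_A - L0) / (2.0 * z1))        # lenslet-array limit
    return 2.0 * math.degrees(min(small_aperture, large_aperture))

# Section 4 values (mm)
print(f"{viewing_angle_range(D=36, L_A=58.3, L0=15, z1=70, z2=110):.1f} deg")  # ~6.7 deg
```

For these parameters the pickup-lens term is the smaller of the two, so the aperture *D* is the binding limitation, in agreement with the approximately 6° range reported in Section 4.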

## 3. Increasing the FOV and viewing angles by CompSAII

From Eqs. (3)–(7) it can be seen that the FOV and the viewing angle range of an integral image depend on the aperture of the system, determined by the lenslet array aperture *L _{A}* and the pickup lens aperture *D*. Therefore the FOV and the viewing angle range can be increased by increasing the II system aperture. This can be done synthetically by taking multiple exposures while moving the II system relative to the object in a plane perpendicular to the optical axis. Figure 5 illustrates the principle of the synthetic aperture obtained by scanning the object in the *x* direction. If the displacement between two consecutive exposures is *Δx*, then the effective lenslet array aperture is increased to *L _{A}*+*Δx* and the effective pickup lens aperture is increased to *D*+*Δx*. Consequently, the FOV is increased according to Eqs. (3)–(7), and the projective angle range is increased, as demonstrated by the solid ray in Fig. 5, which views the object at an angle *β _{m}* larger than the original angle *α _{m}*. The relative displacement *Δx* (of either the object or the II system) has to be smaller than the critical aperture of the system (the minimum of *L _{A}* and *D*).
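The aperture growth with displacement *Δx* is immediate to verify numerically (a minimal sketch using the Section 4 values; the helper name is ours):

```python
def enlarged_apertures(L_A, D, dx):
    """Effective apertures after a synthetic-aperture displacement dx (mm)."""
    return L_A + dx, D + dx

L_A, D, dx = 58.3, 36.0, 25.0                 # mm (Section 4 values)
assert dx < min(L_A, D)                       # dx must stay below the critical aperture
L_A_syn, D_syn = enlarged_apertures(L_A, D, dx)
print(f"L_A' = {L_A_syn:.1f} mm, D' = {D_syn:.1f} mm")   # 83.3 mm, 61.0 mm
```

These are the enlarged effective apertures quoted in Section 4.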

The IIs captured by scanning the object can be combined to form a SAII. The SAII is similar to the II that would have been obtained from an II system with an enlarged aperture. In order to construct the SAII, the displaced IIs need to be registered properly, that is, realigned to compensate for the image motion in the detector plane. The exact displacement vector (DV) in the detector plane needs to be known. If the motion of the II system during the scanning is mechanically controlled, the DV can be calculated from the precisely known motion steps. Alternatively, the DV can be estimated from the displaced IIs by motion estimation methods [9].
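The DV estimation step can be sketched with a simple exhaustive block-matching search (a minimal numpy illustration, not the specific algorithm of Ref. [9]; the names and search range are our assumptions):

```python
import numpy as np

def estimate_dv(ref, moved, search=10):
    """Estimate the displacement vector between two images by exhaustive
    block matching: minimize the sum of absolute differences (SAD) over a
    central block of `ref` for every candidate shift in [-search, search]^2."""
    h, w = ref.shape
    block = ref[search:h - search, search:w - search]   # central block
    best, best_sad = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = moved[search + dy:h - search + dy,
                         search + dx:w - search + dx]
            sad = np.abs(block - cand).sum()
            if sad < best_sad:
                best_sad, best = sad, (dx, dy)
    return best                                          # (dx, dy) in pixels

rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(2, 5), axis=(0, 1))       # known displacement
print(estimate_dv(img, shifted))                        # (5, 2): dx=5, dy=2
```

A full-sized II would use a hierarchical or subregion search rather than this brute-force scan, but the matching criterion is the same.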

## 4. Experimental results

In Figs. 6(a) and 6(b) the elemental images from two displaced IIs are shown. Only the central part, consisting of 5×18 elemental images, is shown. Note that elemental images distant from the center appear truncated due to the finite system aperture. The IIs were captured with the optical setup described in Fig. 1. The objects are two dice with linear dimensions of *L _{0}*=15 mm and 7 mm, placed *z _{1}*=70 mm from the lenslet array. The lenslet array has 53×53 lenslets. The lenslet pitch is approximately *p*≈1.1 mm and the lenslet array width is *L _{A}*=58.3 mm. The focal length of the lenslets is *f _{l}*=5.2 mm. The pickup lens has an aperture of *D*=36 mm and was placed *z _{2}*=110 mm from the lenslet array. The two IIs were captured by moving the II system *Δx*=25 mm in the horizontal direction. The DV was estimated in the II plane by using a block matching algorithm [9]. The DV was found to be **d**=(*d _{x}*, *d _{y}*)=(743, 1) pixels, where (*d _{x}*, *d _{y}*) are the horizontal and vertical DV components. The II presented in Fig. 6(b) is shown after shifting it by -**d** pixels in order to align it with the II shown in Fig. 6(a). Using the two IIs in Figs. 6(a) and 6(b) we formed the SAII shown in Fig. 6(c). It can be seen that the truncated elemental images in Figs. 6(a) and 6(b) appear complete in Fig. 6(c).
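The align-and-merge step that produces the SAII can be sketched as follows (a minimal numpy illustration; keeping the brighter pixel is our crude stand-in for a proper validity mask, and `np.roll` wraps at the borders, which a real implementation would crop):

```python
import numpy as np

def form_saii(ii_a, ii_b, d):
    """Form a synthetic aperture integral image from two displaced IIs.

    ii_b is shifted by -d = -(dx, dy) pixels to align it with ii_a; the
    aligned images are then merged. Here the merge keeps the brighter
    pixel as a stand-in for selecting the non-truncated content.
    """
    dx, dy = d
    aligned_b = np.roll(ii_b, shift=(-dy, -dx), axis=(0, 1))  # wraps at borders
    return np.maximum(ii_a, aligned_b)

# Toy example: each exposure sees only part of the scene
a = np.zeros((8, 8)); a[:, :4] = 1.0      # first exposure: left half visible
b = np.zeros((8, 8)); b[:, 2:6] = 1.0     # displaced exposure, dx = 2
saii = form_saii(a, b, (2, 0))            # align b back and merge
print(saii[:, :4].min())                  # 1.0: the left half is now fully covered
```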

The viewing angle of the IIs shown in Figs. 6(a) or 6(b) is limited according to Eq. (7) to $2t{g}^{-1}\left[\frac{D-{L}_{0}}{2\left({z}_{1}+{z}_{2}\right)}\right]\approx {6}^{\circ }$. Figure 7 demonstrates computer reconstructions from a single II and from the SAII. The upper row shows perspective images for different viewing angles reconstructed from the II shown in Fig. 6(a). The lower row shows reconstructions from the SAII of Fig. 6(c) for the same viewing angles. Reconstructions from the SAII can be viewed in an enlarged range of approximately 11 degrees, which is consistent with Eq. (7). According to Eqs. (7) and (2), the viewing angle with the synthetically enlarged apertures *L′ _{A}*=*L _{A}*+*Δx*=83.3 mm and *D′*=*D*+*Δx*=61 mm is limited by the maximum lenslet exit angle: 2*θ _{l}*=11.2°.

## 5. Conclusion

In this paper we studied the FOV and viewing angle limitations of the integral imaging technique. The main limiting sources are the finite apertures of the elements of the imaging system. In order to relax those limitations we use a computational synthetic aperture technique in which the integral imaging system and the object are moved with respect to one another. Integral images captured during the scanning process are combined to create a synthetic aperture integral image that has an enlarged effective FOV. The SAII is a compact representation of the captured information and has the same format as a regular II; it can therefore be manipulated in the same way as an II. 3D computed images reconstructed from the SAII are demonstrated to have a wider viewing angle range than those from a single II.

## References and links

**1. **G. Lippmann, “La photographie intégrale,” C. R. Acad. Sci. **146**, 446–451 (1908).

**2. **H. Arimoto and B. Javidi, “Integral Three-dimensional Imaging with Computed Reconstruction” Opt. Lett. **26**, 157–159 (2001) [CrossRef]

**3. **T. Naemura, T. Yoshida, and H. Harashima, “3-D computer graphics based on integral photography” Opt. Express **8**, 255–262 (2001), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-8-4-255 [CrossRef] [PubMed]

**4. **A. Stern and B. Javidi, “3D Image Sensing and Reconstruction with Time-Division Multiplexed Computational Integral Imaging (CII)” to appear in Applied Optics.

**5. **B. Javidi and F. Okano, eds., *Three Dimensional Television, Video, and Display Technologies* (Springer, Berlin, 2002).

**6. **J. S. Jang and B. Javidi, “Three-dimensional synthetic aperture integral imaging” Opt. Lett. **27**, 1144–1146 (2002). [CrossRef]

**7. **H. Hoshino, F. Okano, H. Isono, and I. Yuyama, “Analysis of resolution limitation of integral photography,” J. Opt. Soc. Am. A **15**, 2059–2065 (1998). [CrossRef]

**8. **J. Arai, F. Okano, H. Hoshino, and I. Yuyama, “Gradient-index lens-array method based on real-time integral photography for three-dimensional images,” Appl. Opt. **37**, 2034–2045 (1998). [CrossRef]

**9. **C. Stiller and J. Konrad, “Estimating motion in image sequences: A tutorial on modeling and computation of 2D motion,” IEEE Signal Process. Mag. **16**, 70–91 (1999). [CrossRef]