Recent progress in three-dimensional information processing based on integral imaging

Open Access

Abstract

Recently developed integral imaging techniques are reviewed. Integral imaging captures and reproduces the light rays from the object space, enabling the acquisition and the display of the three-dimensional information of the object in an efficient way. Continuous effort on integral imaging has been improving the performance of the capture and display process in various aspects, including distortion, resolution, viewing angle, and depth range. Digital data processing of the captured light rays can now visualize the three-dimensional structure of the object with a high degree of freedom and enhanced quality. This recent progress is of high interest for both industrial applications and academic research.

© 2009 Optical Society of America

Data sets associated with this article are available at http://hdl.handle.net/10376/1451. Links such as View 1 that appear in figure captions and elsewhere will launch custom data views if ISP software is present.

1. Introduction

Three-dimensional (3D) information processing covers the entire data processing stream, including acquisition, processing, and display. Although various techniques have been studied since Wheatstone first suggested the stereoscope one and a half centuries ago [1], no technique has yet reached a satisfactory performance level with sufficient practical value. The techniques developed so far can be ordered by the amount of data they address. Stereoscopy and holography sit at the opposite ends of that list, as shown in Table 1. Stereoscopy accesses the 3D information by using two view images. The required bandwidth is only twice that of the two-dimensional (2D) case, and the system requirement is also relatively simple: a stereo camera for acquisition and view-splitting optics such as a parallax barrier or a lenticular lens for display. The explicit extraction of 3D data, however, requires massive image processing and is generally prone to errors, since the depth is only implicitly encoded in the disparity between the two view images. The display of 3D images also causes eye fatigue or discomfort, since only limited depth cues are provided to the viewer. Holography directly addresses the wavefront of the light from the object scene. Since the whole data extent of the object light can be captured and reproduced without loss, the 3D information processing can be achieved in a complete way. However, the required bandwidth is enormous, and no device is currently available for handling the holographic data in real time with satisfactory resolution and viewing angle.

Integral imaging is an interesting alternative to stereoscopy and holography. Integral imaging addresses the spatioangular distribution of light rays. Although the exact amount depends on the sampling density, it is safe to say that the data extent of integral imaging is larger than that of stereoscopy and smaller than that of holography. Hence integral imaging has higher practical value than holography and provides a more comprehensive form of 3D information than stereoscopy.

Integral imaging was first invented by Lippmann in 1908 [2] but attracted only limited attention due to the lack of devices that could handle the required data bandwidth. The recent progress of high-resolution cameras and spatial light modulators (SLMs) solves this issue to some extent, and thus integral imaging has been actively studied for the past decade. The graph in Fig. 1 shows the number of OSA journal and conference publications on integral imaging over roughly the past decade. The number increases sharply from the year 2001, reflecting the great recent attention to this technique. The development of integral imaging covers all aspects of acquisition, processing, and display of 3D information and enhances various parameters including distortion, resolution, field of view (FOV), and depth range. In this review, we discuss recent progress in integral imaging technology.

2. Principle of Integral Imaging

Before discussing recent accomplishments, the basic principle of integral imaging is briefly introduced in this section. Figure 2 shows the concept of integral imaging. For the 3D information acquisition, the object is captured by an image sensor such as a charge-coupled device (CCD) through a lens array. The lens array consists of many identical lenses, i.e., elemental lenses, and forms an array of the images of the object that are called elemental images. These elemental images are captured and stored by a CCD. For 3D data processing, the captured elemental images are digitally processed to extract 3D data explicitly or to visualize the 3D structure of the object for other applications. For the 3D display, the elemental images are presented by an SLM and observed through the lens array. The light rays from the elemental images are integrated by the lens array such that they form a 3D image of the captured object.

The capability of 3D information processing of integral imaging can be simply explained using ray optics. Here we assume a pinhole array instead of a lens array for simplicity. Suppose there are two object points. Each object point emits or reflects light rays in all directions as shown in Fig. 3a. If we capture these two object points using a pinhole array, the object light rays are sampled at the pinhole positions and recorded in the elemental images as shown in Fig. 3b. The captured rays in the elemental images show different disparities according to the depth of the object points as shown in Fig. 3c. Hence the depth information of the object is encoded in the form of the disparity, and this disparity is extracted explicitly or used implicitly for the 3D data processing. Figure 3d shows the optical reproduction of the light rays. The elemental images are displayed on the SLM with diffusive back illumination. Each elemental image point emits light rays in all directions. Among them, only rays that join the elemental image points and the pinholes pass through the pinhole array. These rays are a replica of the captured sampled light rays of Fig. 3b and intersect at the object points. Therefore, the continuous light rays of the object points are sampled [Fig. 3b] and reproduced [Fig. 3d] by a pinhole array, enabling 3D information capture, processing, and display.
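
To make the geometry above concrete, the following short sketch (an illustrative example, not taken from the paper; the pitch, gap, and point positions are assumed values) computes where two object points at different depths are recorded behind a one-dimensional pinhole array and prints the resulting per-pinhole disparity, which is larger for the closer point.

```python
# Illustrative sketch of pinhole-array pickup geometry (assumed parameters).
import numpy as np

pitch = 1.0                              # pinhole (elemental lens) pitch [mm]
gap = 3.0                                # distance from pinhole array to sensor [mm]
pinhole_x = np.arange(-5, 6) * pitch     # 11 pinholes along one axis

def record(point_x, point_z):
    """Sensor position of the ray from object point (point_x, point_z)
    passing through each pinhole (similar triangles about each pinhole)."""
    return pinhole_x - gap * (point_x - pinhole_x) / point_z

near = record(0.0, 30.0)                 # closer object point
far = record(0.0, 60.0)                  # farther object point

# Shift of the recorded position from one elemental image to the next
# (the disparity); it is larger for the closer point, encoding depth.
print("disparity per pinhole, near:", np.diff(near)[0] - pitch)   # 0.1
print("disparity per pinhole, far :", np.diff(far)[0] - pitch)    # 0.05
```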

Although integral imaging can capture and reproduce the light rays using a lens array, the basic configuration shown in Fig. 3 has limitations in terms of sampling rate and angular range. The spatial sampling rate of the light rays is determined by the elemental lens pitch as shown in Fig. 3b. The angular sampling rate indicates the number of light rays that can be captured by one elemental lens; it is determined by the number of pixels in one elemental image and is limited by the finite pixel size of the CCD or SLM. The angular range of the light rays that can be captured and reproduced is also restricted by the FOV of each elemental lens. These factors pose a fundamental limitation on the 3D data processing and display. The various techniques discussed below modify the system configuration to use the limited information more effectively or to increase the total information extent by using a multiplexing scheme. In the following sections, we describe recently developed integral imaging techniques.

3. Three-Dimensional Information Acquisition

3A. Pickup Methods

The first stage of integral imaging is the acquisition of the spatioangular light ray distribution, i.e., the elemental images, which is referred to as the pickup process. The basic configuration, in which the recording medium has the same size as the lens array, is simple, as shown in the pickup part of Fig. 2. In practice, however, the CCD sensor used as the recording medium is much smaller than the lens array, requiring modification of the basic configuration. The immediate modification would be the addition of an imaging lens for demagnification of the elemental images as shown in Fig. 4. Typical issues associated with this pickup system include (1) crosstalk between neighboring elemental images, (2) nonparallel pickup directions, and (3) difficulty of simultaneous pickup of real and virtual objects. Crosstalk means overlapping of the elemental images on the CCD plane as shown in Fig. 4. The overlapped elemental images cannot be separated in later steps, and this eventually degrades the quality of the reproduced 3D images. The pickup direction means the direction from which the object is captured by a given elemental lens. If one draws the trajectory of a chief ray that passes through the principal points of an elemental lens and the imaging lens as shown in Fig. 4, all the other rays refracted by the elemental lens will be evenly distributed with respect to that chief ray. Hence the direction of the chief ray in the object space can be regarded as the pickup direction [3]. The pickup directions should be parallel, since the display system of integral imaging has parallel directions for all elemental lenses. Nonparallel pickup directions as shown in Fig. 4 cause depth-dependent distortion of the reconstructed images [3, 4]. Moreover, the basic configuration shown in Fig. 4 can capture only real objects, and the simultaneous pickup of real and virtual objects is not possible.

Recent progress in the pickup system makes it possible to solve these issues. For the nonparallel pickup directions, adding a large-aperture field lens after the aerial elemental image plane and locating the imaging lens at the focal length of the field lens as shown in Fig. 5a can be one solution [3]. By controlling the size of the imaging lens aperture, reduction of the crosstalk is also possible to some extent. However, recent analysis shows that the crosstalk cannot be completely eliminated by the setup of Fig. 5a [5]. An enhanced system is shown in Fig. 5b [4]. In this configuration, a telecentric lens system behind the lens array aligns the pickup directions parallel to each other. The aperture stop also eliminates the crosstalk. Hence clear and distortion-free elemental images can be captured. However, only real objects can be captured, and simultaneous pickup of real and virtual objects is not possible yet. The configuration shown in Fig. 5c tackles these three issues at the same time [6]. As shown in Fig. 5c, a telecentric lens system is used behind the lens array to make the pickup directions parallel and prevent crosstalk as before. The unique point is the use of the 4-f optics in front of the lens array. The 4-f optics, which consists of five planes, i.e., the critical plane, the first lens, the aperture plane, the second lens, and the rear focal plane, each separated from the next by the focal length, relays the object to the lens array space while maintaining the parallel pickup directions and the no-crosstalk condition. Therefore the objects located around the critical plane are relayed by the 4-f optics to the space around the lens array and captured, spanning the real and virtual fields simultaneously without any geometrical distortion. Dynamic control of the lateral location of the aperture at the Fourier plane of the 4-f optics can also change the angular range captured in each elemental image, making it possible to increase the viewing angle of the 3D images by time multiplexing afterwards [5].
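
The role of the 4-f relay can be checked with a simple paraxial ray-transfer (ABCD) calculation. The short sketch below (an illustrative check with an assumed focal length, not the design of Ref. [6]) multiplies the matrices of a 4-f relay and recovers the textbook result that it images the input plane with magnification -1 while leaving ray angles unchanged up to sign, which is why parallel pickup directions survive the relay.

```python
# ABCD check of a 4-f relay (illustrative focal length).
import numpy as np

f = 50.0  # focal length of both relay lenses [mm]

def propagate(d):                 # free-space propagation over distance d
    return np.array([[1.0, d], [0.0, 1.0]])

def lens(focal):                  # thin lens of focal length `focal`
    return np.array([[1.0, 0.0], [-1.0 / focal, 1.0]])

# critical plane -> f -> first lens -> 2f -> second lens -> f -> rear focal plane
M = propagate(f) @ lens(f) @ propagate(2 * f) @ lens(f) @ propagate(f)
print(M)  # expected [[-1, 0], [0, -1]]: unit-magnification, inverted, angle-preserving relay
```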

Another issue with the pickup system is pseudoscopic–orthoscopic conversion. When objects are captured by a pickup system and reproduced by a display system, the depth order of the objects is reversed. To the viewer, the farther object appears to occlude the closer object, which is unnatural. A simple way to remedy this is to rotate each elemental image by 180° [7]. The real image is then converted to a virtual image with the corrected depth order. The elemental image rotation can be done digitally or optically. For optical operation, several systems using a gradient-index lens array [7] or overlaid multiple lens arrays [8] have been proposed, as shown in Figs. 6a, 6b. Instead of rotating each elemental image, it is also possible to invert the depth order of the objects using an optical depth converter that usually consists of multiple lens arrays, as shown in Fig. 6c [9, 10]. A digital second pickup, as shown in Fig. 6d, has also been proposed, where not only the depth order but also the depth range can be controlled [11, 12].
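
The digital version of the 180° rotation is straightforward; a minimal sketch is given below (array layout and function names are assumptions), where the captured image is split into its elemental images and each block is rotated in place.

```python
# Digital pseudoscopic-to-orthoscopic conversion by 180-degree rotation
# of every elemental image (illustrative implementation).
import numpy as np

def rotate_elemental_images(capture, num_x, num_y):
    """capture: full captured image of shape (num_y*h, num_x*w).
    Returns a copy with each elemental image rotated by 180 degrees."""
    H, W = capture.shape[:2]
    h, w = H // num_y, W // num_x
    out = capture.copy()
    for j in range(num_y):
        for i in range(num_x):
            block = capture[j*h:(j+1)*h, i*w:(i+1)*w]
            out[j*h:(j+1)*h, i*w:(i+1)*w] = block[::-1, ::-1]   # 180-degree rotation
    return out

# Example: 10 x 10 elemental images of 32 x 32 pixels each
converted = rotate_elemental_images(np.random.rand(320, 320), 10, 10)
```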

For practical applications of the pickup system, compact implementation of the overall system is one of the major issues. Recently, some progress has been reported. In one study, a microlens array was inserted in the main body of the camera such that the overall system looks like an ordinary hand-held camera [13]. Direct integration with a multiaperture complementary metal-oxide-semiconductor image sensor has also been reported [14].

3B. Postprocessing

For the acquisition of the 3D information, the lens array should be aligned precisely with respect to the CCD. Misalignment of the lens array distorts the elemental images, which deteriorates the reproduced 3D images in the display process. Many 3D data processing methods are applied to each of the elemental images, requiring them to be identified with high precision. Several studies have been conducted on local and global misalignment of the lens array [15] and on statistical position uncertainty of individual elemental images [16]. The results show that these errors cause image splitting, shifting along the lateral or longitudinal direction, or blurring, depending on the reconstruction distance, in the optical or computational reconstruction process. Therefore exact lens array alignment and elemental image identification are the most important factors for clear reconstruction.

Automated methods that can identify each elemental image have been reported. For the automated identification, a regular grid structure of the lens array is assumed. The image discontinuity at the boundary between neighboring elemental images is exploited to detect the grid structure, which may have rotational distortion and a translational offset. The first step is edge detection with median filtering. A Hough transform is then applied to the obtained edge image. The rotation angle is detected by finding the θ column that has the maximum number of strong peaks in the Hough transform parameter space (θ, x) [17, 18]. Using the detected rotation angle, the captured elemental image array is rotated back so that it is aligned properly along horizontal and vertical lines. The offset and the elemental lens pitch are then detected by selecting the best sequence in the thresholded horizontal and vertical projection profiles of the rotation-compensated edge image [18]. Using the detected offset, elemental lens pitch, and rotation angle, the initially captured elemental images can be compensated, and each elemental image is identified for later display or processing steps.
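
A rough sketch of this detection pipeline is given below; it follows the spirit of Refs. [17, 18] (median filtering, edge detection, a Hough transform, and a vote over the angle axis) but is a simplified illustration rather than the published implementation, and all thresholds are assumed values.

```python
# Simplified lattice-rotation estimation for an elemental image array.
# Expects an 8-bit grayscale image; thresholds are illustrative.
import cv2
import numpy as np

def estimate_rotation(capture):
    """Return the estimated in-plane rotation angle (degrees) of the lens array grid."""
    filtered = cv2.medianBlur(capture, 3)                 # suppress texture noise
    edges = cv2.Canny(filtered, 50, 150)                  # elemental image boundaries
    lines = cv2.HoughLines(edges, 1, np.pi / 1800, 200)   # 0.1-degree angular resolution
    if lines is None:
        return 0.0
    thetas = lines[:, 0, 1]                               # angle of each detected line
    # The grid produces two dominant line families (horizontal and vertical);
    # fold them together and take the most populated angle bin as the rotation.
    hist, bin_edges = np.histogram(thetas % (np.pi / 2), bins=900)
    return float(np.degrees(bin_edges[np.argmax(hist)]))
```

The remaining steps (offset and pitch detection from the projection profiles of the rotation-compensated edge image) follow the same pattern and are omitted for brevity.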

4. Three-Dimensional Data Processing

Using captured elemental images, the 3D information can be visualized in various ways, extracted explicitly, or utilized for the reconstruction of the wavefront. In this section, various digital processing methods of the elemental images are reviewed.

4A. Depth Slice Image Generation

A representative method of 3D visualization using the elemental images is depth slice generation, which is also called computational integral imaging reconstruction (CIIR). Figure 7 shows the concept of CIIR. Suppose that there are elemental images that correspond to two object points at different depths as shown in Fig. 7. CIIR computationally projects all elemental image points through the lens array to a given depth plane. The elemental image points that correspond to an object point in the given depth plane are superposed at the same position (black dots in the closer depth plane and red dots in the farther depth plane in Fig. 7), while the other elemental image points are scattered in the depth plane (red dots in the closer depth plane and black dots in the farther depth plane in Fig. 7). In practice, the 3D object consists of numerous object points, and this scattering of the corresponding elemental image points leads to blurring of the image. Consequently, CIIR can generate depth slice images where the in-plane part of the object is focused and the out-of-plane part is blurred. The intensity of the projected elemental image points is sometimes normalized according to (1) the distance between the elemental image point and the reconstruction point in the depth plane, which can be neglected in most paraxial cases [19], or (2) the number of superpositions at each reconstruction point [20]. Figure 8 shows an example of a set of elemental images and the depth slice images reconstructed from them.
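
A toy version of this back-projection is sketched below (a simplified illustration, not the code of Refs. [19, 20]: it shifts and averages the elemental images with a depth-dependent disparity and normalizes by the number of superpositions, omitting the per-elemental-image magnification used in the published methods; all names and units are assumptions).

```python
# Simplified CIIR: shift-and-average reconstruction of one depth slice.
import numpy as np

def ciir_slice(ei, pitch_px, gap, z):
    """ei: elemental images, shape (ny, nx, h, w); pitch_px: lens pitch in pixels;
    gap: lens-to-sensor distance; z: reconstruction depth (same units as gap)."""
    ny, nx, h, w = ei.shape
    shift = pitch_px * gap / z                    # disparity per elemental image at depth z
    H = h + int(round(shift * (ny - 1)))
    W = w + int(round(shift * (nx - 1)))
    acc = np.zeros((H, W))
    cnt = np.zeros((H, W))
    for j in range(ny):
        for i in range(nx):
            y0, x0 = int(round(j * shift)), int(round(i * shift))
            acc[y0:y0 + h, x0:x0 + w] += ei[j, i]
            cnt[y0:y0 + h, x0:x0 + w] += 1
    return acc / np.maximum(cnt, 1)               # normalize by number of superpositions

# Objects near depth z appear sharp in ciir_slice(ei, pitch_px, gap, z);
# objects at other depths are averaged out, i.e., blurred.
```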

In essence, the CIIR method reconstructs the depth slice image by averaging the magnified and shifted elemental images. Thus the resolution of the generated depth slice image depends on the number and the resolution of the elemental images. In this sense, methods for enhancing the resolution of CIIR by preinterpolating each elemental image [20] or by increasing the number of elemental images using an intermediate-view reconstruction technique [21] have been proposed. A curved lens array system can gather more elemental images than a flat lens array system; hence CIIR for a curved lens array has also been proposed for resolution enhancement [22, 23].

In the basic CIIR method, the object close to the reconstruction depth plane is synthesized clearly while the object away from the depth plane is blurred. In many cases, for example depth detection, it is desirable that the blurred images be eliminated, leaving the focused image unchanged. A digital image processing method has been proposed in which the blurred portion in the reconstructed depth slice image is detected using a blur metric and eliminated [24]. Fourier filtering of the set of elemental images can also be used for suppression of the blurred images [25]. Each object point appears regularly in the set of elemental images with a depth-dependent spacing, i.e., disparity; thus Fourier filtering can select image components corresponding to a particular depth. By applying CIIR to the Fourier-filtered elemental images, blurred images can be suppressed. An optical method has also been developed [26]. In the optical method, random pattern illumination is used to encode complex-valued elemental images, which add destructively for the off-focused objects, suppressing blurred images.

The CIIR method was originally developed for elemental images captured by a lens array with a regular grid structure. Recently, CIIR has been generalized beyond elemental images to multiple view images captured at random positions [27]. CIIR has also been applied to imaging of an object behind a scattering medium [28], near-infrared imaging [29], underwater imaging [30], and 3D microscopy [31].

The volumetric reconstruction of CIIR enables one to recognize a 3D object with higher precision. Many techniques developed for 2D image-based recognition can be extended and applied to the volumetric reconstruction of CIIR. Recent reports include recognition of biological micro-organisms [32], recognition under photon-starved conditions [33, 34, 35], and distortion-tolerant recognition [36]. Another important feature of CIIR is recovery of an occluded object. Since CIIR uses the light rays collectively, even when an object is located behind a partially occluding object, the original shape of the occluded object can be reproduced unless all the light rays from the occluded object point are blocked by the front occluding object. Using this feature, recognition or tracking of occluded objects has also been proposed [37, 38, 39].

4B. View Image Generation

Elemental images can be exploited to generate a view image at an arbitrary view point. Actually, each elemental image itself represents a view image of perspective projection geometry at the principal point of the corresponding elemental lens. Its limited FOV and resolution, however, degrade its usefulness. Here, the FOV is defined as the lateral extent at a given depth plane in the object space that is included in the image. A small FOV of the elemental image means that only a limited part of the object scene is captured by each elemental image. The FOV limitation can be partially relaxed by subimage generation. A subimage is a view image of orthographic projection geometry. Figure 9 shows the concept of subimage generation. From the geometry shown in Fig. 9, each point in the elemental image plane represents a light ray passing through the principal point of the corresponding elemental lens. Hence the pixels at the same local position in every elemental image constitute a view image of the object at a particular viewing direction, which is called a subimage. Since each subimage is generated from every elemental image, the FOV of the subimage can exceed that of the elemental image. Specifically, the FOV of a subimage is given by the lateral size of the lens array, and it is larger than the FOV of each elemental image for objects that are not located too far from the lens array [40]. However, the resolution is still a problem. Since only one pixel is extracted from each elemental image, the resolution of the generated subimage is given by the number of elemental images, which is generally not enough for a crisp image. Also, the projection geometry of the generated subimage is orthographic, where all the projection lines are parallel to each other. Although the orthographic projection geometry is useful in some cases [40, 41, 42, 43, 44], it is more desirable to have full freedom in the projection geometry.
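
Because a subimage is just a regrouping of pixels, its generation reduces to a single axis permutation of the elemental image stack, as in the minimal sketch below (array shapes and names are assumptions).

```python
# Subimage generation: collect the pixel at the same local position (u, v)
# from every elemental image into one orthographic view.
import numpy as np

def subimages(ei):
    """ei: elemental images of shape (ny, nx, h, w).
    Returns an array of shape (h, w, ny, nx); element [u, v] is the
    ny x nx-pixel orthographic view built from local pixel (u, v)."""
    return np.transpose(ei, (2, 3, 0, 1))

ei = np.random.rand(20, 20, 16, 16)      # 20 x 20 elemental images, 16 x 16 pixels each
sub = subimages(ei)
print(sub[8, 8].shape)                   # one 20 x 20-pixel subimage
```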

A few methods have been reported to enhance the resolution and obtain full freedom in projection geometry [45, 46]. The basic idea is to apply a correspondence-matching technique to the elemental images to get explicit information on the 3D shape of the object. Based on the extracted 3D shape, the pixels in the elemental images can be mapped onto the reconstructed object surface and the view images can be synthesized. Since all pixels of the elemental images can contribute to the synthesis of the view image, the resolution is significantly enhanced. Also the view point and the projection geometry can be set arbitrarily [46]. Although the use of correspondence matching increases the complexity of the process, it is possible to control the accuracy of 3D shape extraction and the processing complexity since the algorithm is scalable [45]. Figure 10 shows an example of the views generated following the method presented in Ref. [46].

4C. Depth Map Calculation

A depth map of the object space can be estimated by analyzing the disparity between the elemental images. The elemental images can be considered multiple-perspective images captured at different locations; hence the multi-baseline stereo matching techniques developed in the image processing field can be applied with minimal modification [47]. Subimages can also be used for stereo matching instead of elemental images. In this case, one has to consider the different disparity characteristics of the elemental images and subimages, which originate from their different projection geometries, i.e., perspective for elemental images and orthographic for subimages [40]. Figure 11 shows the disparity characteristics of elemental images and subimages. In the elemental images shown in Fig. 11a, the disparity is inversely proportional to the object depth, while the dependency is reversed in the subimages as shown in Fig. 11b. This difference can be exploited to enhance the accuracy of depth mapping using integral imaging.

The simplest method for generating a depth map using integral imaging is the application of the multi-baseline stereo matching algorithm to the elemental images. One of the elemental images is selected as a reference image, and a depth map is generated by estimating the disparities between the reference image and all other (or a selected set of) elemental images [48]. The primary problem with this method is the low resolution of the elemental image. Moreover, this method is limited by the narrow FOV of the elemental image, which means the depth of only a small portion of the object scene can be detected. Instead of setting one elemental image as a reference image, one can distribute reference images over the entire set of elemental images [45]. The disparities of the central pixels of every elemental image are estimated and used to construct a triangular mesh model of the object. This initial mesh model can be refined by taking other pixels in every elemental image and extracting their disparity information. Since the disparity is extracted from the entire set of elemental images, the FOV limitation of the simplest method can be remedied. Figure 12 shows the 3D mesh model of the object reconstructed from the elemental images following the method presented in Ref. [45].
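
The structure of the multi-baseline search is illustrated by the toy sketch below (written in the spirit of Refs. [47, 48], not their implementation): because the disparity scales linearly with the baseline, a single depth hypothesis fixes the shift of every elemental image relative to the reference, and the summed matching cost is minimized over the depth candidates. Windowed cost aggregation and border handling are omitted for brevity.

```python
# Toy multi-baseline depth estimation over one horizontal row of elemental images.
import numpy as np

def depth_map(ei_row, pitch_px, gap, depths):
    """ei_row: elemental images in one row, shape (nx, h, w).
    Returns, per pixel of the central elemental image, the index of the
    best-matching depth in `depths`."""
    nx, h, w = ei_row.shape
    ref_idx = nx // 2
    ref = ei_row[ref_idx]
    cost = np.zeros((len(depths), h, w))
    for d_i, z in enumerate(depths):
        for k in range(nx):
            if k == ref_idx:
                continue
            # disparity of elemental image k relative to the reference at depth z
            shift = int(round((k - ref_idx) * pitch_px * gap / z))
            shifted = np.roll(ei_row[k], -shift, axis=1)   # wrap-around ignored in this toy
            cost[d_i] += (ref - shifted) ** 2              # sum of squared differences
    return np.argmin(cost, axis=0)
```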

The use of subimages instead of elemental images can also alleviate the FOV limitation, since the FOV of the subimage is larger than that of the elemental image in the general case [42]. The conventional multi-baseline algorithm can be applied to the subimages with little modification, considering the different disparity characteristics of the subimages. Since the disparity can be assumed to be piecewise continuous, consideration of the neighborhood disparity information can enhance the detection accuracy [43].

The different characteristics of the elemental images and the subimages can be exploited more deliberately. The reversed depth dependency of the disparity between elemental images and subimages brings different quantization error characteristics in depth estimation [40, 41]. By selectively using either an elemental image or a subimage, the quantization error can be kept small over a large range of depth. Combined use of the elemental image and the subimage can also reduce ambiguity in the correspondence matching process, leading to precise depth estimation [40].

In a typical integral imaging pickup system, what one captures is an abundant number of low-resolution elemental images. The subpixel disparity between the elemental images can be detected by using a large number of elemental images, which allows high-resolution elemental images to be estimated [49]. This resolution enhancement of each elemental image eventually contributes to the accuracy of the depth estimation.

4D. Hologram Generation

A hologram represents the wavefront of the object wave. A given wavefront can be regarded as a set of light rays propagating normal to the wavefront. Hence it is also possible to reconstruct the wavefront from a given spatioangular distribution of light rays. The elemental images of integral imaging are, in essence, the angular light ray distribution at many sampled positions. Therefore one can synthesize a hologram using the elemental images.

One method to generate a hologram is to simulate the optical reconstruction process of integral imaging. In the optical reconstruction process, i.e., the display process, of integral imaging, every elemental image is imaged by the corresponding elemental lens and integrated into a 3D image in the reconstruction space. Using the Fresnel diffraction formula, the propagation of the light emanating from each elemental image can be calculated. The calculated optical fields of all elemental images are superposed at one plane in the reconstruction space, resulting in a Fresnel hologram [50]. The generation of a Fourier hologram is also possible. The elemental images are the view images of the object at different viewing points. Each point in the Fourier hologram accounts for the directional information of the object wave. Therefore, roughly speaking, each elemental image can be matched to one point in the Fourier hologram. Based on this observation, Shaked et al. calculated each point in the Fourier hologram by integrating the corresponding elemental image after multiplying it with a slanted plane wave [51]. More strictly speaking, since the light rays emanating from a single point in the Fourier hologram are collimated by the Fourier transform lens, a subimage, which has an orthographic projection geometry (parallel projection), is more appropriate for Fourier hologram generation than an elemental image, which has a perspective projection geometry. Hence by using subimages instead of elemental images, a Fourier hologram can be generated with minimal approximations [44, 52]. The simple parallel geometry of the projection lines of the subimage also makes it possible to consider each subimage as a slanted plane wave. By considering the phase change of each plane wave, one can generate a Fresnel hologram as well [52]. Figure 13 shows an example of the Fourier hologram calculated from the elemental images following Ref. [52] and its reconstructions at various distances.
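
The sketch below illustrates the display-simulation route in a heavily simplified form (an illustrative example, not the implementation of Ref. [50]; all optical parameters are assumed and the elemental image phase is taken as zero): the elemental image plane is propagated to the lens array with the angular spectrum method, multiplied by a repeated quadratic lens phase, and propagated again to a plane where the complex field is kept as a Fresnel hologram.

```python
# Simplified hologram synthesis by simulating the integral imaging display.
import numpy as np

def asm_propagate(u, d, wavelength, dx):
    """Angular spectrum propagation of a square complex field u over distance d."""
    n = u.shape[0]
    fx = np.fft.fftfreq(n, dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = np.maximum(0.0, (1.0 / wavelength) ** 2 - FX ** 2 - FY ** 2)
    return np.fft.ifft2(np.fft.fft2(u) * np.exp(1j * 2 * np.pi * d * np.sqrt(arg)))

def lens_array_phase(n, dx, lens_px, focal, wavelength):
    """Quadratic phase of one elemental lens tiled over the whole array."""
    k = 2 * np.pi / wavelength
    local = (np.arange(lens_px) - (lens_px - 1) / 2) * dx
    X, Y = np.meshgrid(local, local)
    one_lens = np.exp(-1j * k * (X ** 2 + Y ** 2) / (2 * focal))
    return np.tile(one_lens, (n // lens_px, n // lens_px))

# Illustrative numbers: 512 x 512 elemental image plane, 32-pixel elemental lenses.
n, lens_px = 512, 32
dx, wavelength = 10e-6, 633e-9                      # 10 um pixels, HeNe wavelength
focal, gap, hologram_dist = 3e-3, 3.3e-3, 30e-3

elemental_images = np.random.rand(n, n)             # stand-in for captured elemental images
field = asm_propagate(elemental_images.astype(complex), gap, wavelength, dx)
field *= lens_array_phase(n, dx, lens_px, focal, wavelength)
hologram = asm_propagate(field, hologram_dist, wavelength, dx)   # complex Fresnel hologram
```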

These hologram generation methods are promising in that a hologram of 3D objects can be generated without any coherent light source. Using a simple pickup device, i.e., a lens array plus a camera, the hologram of real existing objects can be captured under regular white illumination, which eliminates a great portion of the difficulty of capturing a hologram. However, the currently available number and resolution of elemental images are not enough for high-resolution holography, requiring further development.

4E. Compression of Elemental Images

One of the practical issues of integral imaging is the large data size of the elemental images. Since the elemental images contain 3D information, their data amount for a given object scene is much larger than that of a simple 2D projection of the object space. This high bandwidth of the elemental images should be reduced for storing and transmitting elemental image data. The reduction can be achieved by exploiting redundancy in the elemental images. Since the elemental images are perspective projections of the object space from slightly different positions, a large amount of information is shared by neighboring elemental images. Note also that each elemental image itself has some degree of pixel-value consistency, as usual 2D images do. Therefore there are two kinds of redundancy, i.e., intraelemental-image and interelemental-image redundancy. This situation resembles that of a 2D moving picture, which has intraframe and interframe redundancy. Therefore, by applying well-developed conventional 2D moving picture compression techniques, the elemental images can be coded with a reduced data amount [53, 54, 55]. Table 2 summarizes the 3D data processing techniques for elemental images.
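
Only the reordering step is sketched below (an assumed serpentine scan, in the spirit of the traversal schemes studied in Refs. [53, 54, 55]); the resulting stack of "frames" would then be handed to any off-the-shelf 2D video encoder so that interframe prediction exploits the interelemental redundancy.

```python
# Reordering elemental images into a pseudo video sequence for 2D codecs.
import numpy as np

def to_frame_sequence(ei):
    """ei: elemental images of shape (ny, nx, h, w) -> frames of shape (ny*nx, h, w)."""
    ny, nx = ei.shape[:2]
    frames = []
    for j in range(ny):
        cols = range(nx) if j % 2 == 0 else range(nx - 1, -1, -1)   # serpentine scan
        for i in cols:
            frames.append(ei[j, i])
    return np.stack(frames)
```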

5. Displays

5A. Principle

The last stage of integral imaging is the optical reconstruction of the light rays. The light rays of the object space are reconstructed using a lens array, forming 3D images in the reconstruction space by intersecting a large number of rays at every 3D image point. Each eye of the viewer sees a different ray of the same 3D image point, and hence the depth is perceived. Because each 3D image point is reproduced with a large number of rays, the motion parallax is presented smoothly. The 2D array structure of the lens array enables not only horizontal parallax but also vertical parallax to be provided.

The current implementation of integral imaging displays, however, has limitations in various viewing parameters such as viewing angle, resolution, and depth range due to the limited resolution of the display panel. In the following, these limitations are briefly discussed.

5B. Viewing Parameters of an Integral Imaging Display

Although many different kinds of integral imaging displays have been developed, most of them can be classified into two categories, i.e., the real/virtual display mode and the focused display mode. The two display modes of integral imaging are shown in Fig. 14. The difference between them lies only in the gap between the lens array and the display panel. In the real/virtual mode, the gap is set to be larger or smaller than the focal length of the lens array. In the focused mode, on the contrary, the display panel is positioned at the focal length of the lens array. In both display modes, the light rays emanating from the elemental image points are redirected by the corresponding elemental lenses such that they intersect at the desired position, forming a 3D image point there. Since a 3D image can be regarded as a set of 3D image points, any 3D image can be displayed by this principle.

In the real/virtual mode, the light emanating from a single elemental image point can be considered a Gaussian beam whose waist is located at the focal distance of the lens array. Note that the focal distance of an individual light beam from each elemental image is determined by the focal length of the lens array and the gap between the lens array and the display panel. Hence the focal distance is not necessarily the same as the 3D image point distance, which is determined by the intersection of every light beam from the elemental images. On the contrary, in the focused mode, the light beam from each elemental image point is collimated by the corresponding elemental lens, keeping the beam diameter constant if we neglect diffraction and the spreading due to the nonzero elemental image pixel size. This difference leads to different characteristics of the viewing parameters.

The size of the spot that constitutes the generated 3D image is determined by the size of the intersection of the light beams from the elemental images. In the real/virtual mode, the spot size is minimum at the focal plane, providing the best image quality, because each individual light beam from an elemental lens has its minimum waist at the focal plane. In this sense, the focal plane is also called the central depth plane (CDP). As the 3D image point moves away from the focal plane, the spot size increases, degrading the image quality. The depth range, which is defined by the tolerable limit of the image quality, is given by a range around the CDP, as shown in Fig. 15a. In the focused mode, on the contrary, the diameter of the individual beam is minimum at the lens array plane and increases as the beam propagates, due to the diffraction and the spreading that originate from the nonzero elemental image pixel size. Hence the depth range of the focused mode is given by a range around the lens array plane including the real and virtual fields, as shown in Fig. 15b. Although many factors affect the image quality and the depth range, in many cases the real/virtual mode is better in image quality and the focused mode is better in depth range.
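
For reference, the location of the CDP follows from the thin-lens imaging relation applied to the elemental image plane (a standard result restated here for illustration; symbols follow the text, with gap g and elemental lens focal length f):

```latex
\frac{1}{g} + \frac{1}{l_{\mathrm{CDP}}} = \frac{1}{f}
\quad\Longrightarrow\quad
l_{\mathrm{CDP}} = \frac{fg}{g - f}.
```

For g > f the CDP lies in the real field in front of the lens array, for g < f it lies in the virtual field, and for g = f the beams are collimated, which corresponds to the focused mode described above.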

The viewing angle is determined by the angle between the two extreme rays that contribute to the integration of a given 3D image point, as shown in Fig. 15c. Each elemental lens has its corresponding area in the elemental image plane, and the elemental images should be located inside the corresponding area in order to prevent image flipping. In the basic configuration, this area in the elemental image plane has the same size and position as the corresponding elemental lens. This limits the maximum number of elemental image points for a given 3D image point and eventually limits the viewing angle. The exact value of the viewing angle depends on the position of the 3D image point. Roughly speaking, however, it is given by 2 tan⁻¹(w/2g) regardless of the display mode, where w is the elemental lens pitch and g is the distance between the display panel and the lens array.
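
As a quick numerical illustration of this relation (the numbers are assumed, not taken from the text), an elemental lens pitch of w = 1 mm and a gap of g = 3 mm give

```latex
\theta \approx 2\tan^{-1}\!\left(\frac{w}{2g}\right)
       = 2\tan^{-1}\!\left(\frac{1\ \mathrm{mm}}{6\ \mathrm{mm}}\right)
       \approx 18.9^{\circ},
```

which shows why the viewing angle of the basic configuration is narrow and why the enhancement methods of Subsection 5C are needed.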

The three viewing parameters discussed above are not independent but have a trade-off relationship [56, 57, 58, 59, 60, 61, 62, 63]. The simultaneous enhancement of the viewing parameters is possible only by increasing the bandwidth of the information presented on the display panel, which is accomplished by reducing the pixel size and increasing the number of pixels of the display panel or by using a temporal/spatial multiplexing scheme.

5C. Viewing Quality Enhancement

There has been intensive research on enhancing the viewing quality of integral imaging display systems. The proposed systems enhance the viewing parameters by increasing the information bandwidth using temporal or spatial multiplexing or by modifying the configuration such that the limited information bandwidth contributes more to a specific viewing parameter while minimally sacrificing the others.

The depth range is one of the essential parameters of an integral imaging display since it characterizes the 3D nature of integral imaging. One possible method for depth range enhancement is to combine a floating display with integral imaging, as shown in Fig. 16a. The floating display relays the object or image to the observer space. It is possible to design the relay optics so that the image is magnified along the longitudinal direction during the relay. Therefore, combined with integral imaging, the insufficient depth range of the integral imaging display can be enhanced, giving a much improved depth sensation to the observer [64, 65, 66, 67]. Creating multiple CDPs, as shown in Fig. 16b, is another solution. Since the depth range is formed around a CDP, the available depth range is widened by creating multiple CDPs. This is achieved by moving the elemental image plane [68], using a birefringent plate [69], overlaying multiple liquid crystal display panels [70], or using multiple electrically controllable active diffuser screens made of polymer-dispersed liquid crystal (PDLC) [71].

The viewing angle enhancement is achieved by enlarging the area in the elemental image plane that corresponds to each elemental lens or by arranging the system such that more elemental images can contribute to the integration of the 3D images. Elemental lens switching using an orthogonal polarization mask was an early but very effective method [72]. The curved lens array structure shown in Fig. 17a can further increase the horizontal viewing angle [73, 74]. A horizontal viewing angle of 66° for real 3D images was achieved experimentally using a curved screen and lens array [75]. The use of the multiple-axis telecentric relay system shown in Fig. 17b can provide the elemental images to the lens array with proper directions, increasing the viewing angle [76]. Head tracking, shown in Fig. 17c, is another approach [77]. Instead of enlarging the static viewing angle, the head tracking system can be used to dynamically adapt the system to the observer, enhancing the effective viewing angle. Although it is not practical yet, it has also been reported that a lens array made of negative-refractive-index material can have a much smaller f-number, and hence the viewing angle can be enhanced [78].

Resolution enhancement is mainly achieved by presenting more information using a higher-resolution display panel or by using a temporal/spatial multiplexing scheme. Okano et al. used an ultrahigh-definition video system of over 4000 scan lines to develop a high-resolution integral imaging system [79]. The use of multiple projectors, shown in Fig. 18a, has also been proposed to increase the resolution of the elemental images [80]. The time multiplexing scheme is usually combined with movement of the lens array in an effort to reduce the grid pattern that is visible due to the lens array structure and to increase the effective resolution of the display panel as well. The moving lenslet array technique was the first reported time-multiplexing resolution enhancement method [81, 82]. The lens array, however, should be mechanically scanned along two directions, which makes actual implementation difficult. A rotating prism sheet in front of the lens array, shown in Fig. 18b, can relax this limitation [83], but mechanical movement is still required. A recently proposed electrically controllable pinhole array, shown in Fig. 18c, eliminates this requirement completely [84]. Low light efficiency, however, remains a problem. Table 3 summarizes the techniques for enhancing the viewing parameters of integral imaging displays.

5D. Other Systems

2D/3D convertibility is another important aspect of integral imaging. Nowadays, 2D/3D convertibility is considered a key feature for the penetration of 3D displays into the current 2D display market. A few approaches have been reported. In one approach, the lens array is placed behind the SLM and an electrically controllable diffuser made of PDLC is attached to the lens array. In 3D mode, the active diffuser is controlled to be transparent without diffusing, and the collimated light passes through the diffuser and is focused by the lens array to form a point light source array. The light from this point light source array is modulated by the SLM to form 3D images. In 2D mode, the active diffuser is set to be diffusive, and the lens array cannot form a point light source array but just relays the diffused light to the SLM. Therefore regular 2D images are displayed with the full resolution of the SLM itself [85, 86]. This configuration has been further developed to enhance the viewing angle or the resolution and to make the system more compact [87]. Figure 19a shows a compact 3D/2D convertible display using electroluminescent film. A different method that places a lens array between two display panels has also been proposed [88]. By using the front panel as either a 2D display panel or a transparent plate, 2D/3D conversion is achieved.

In addition to direct-view integral imaging systems, optical screens for projection-type integral imaging have also been proposed and demonstrated [89]. Recently, an integral imaging system using a plastic optical fiber bundle was proposed [90]. This system provides better flexibility, making it easier to install in another host system, as shown in Fig. 19b. Finally, the use of boundary folding mirrors has been proposed recently to enhance the uniformity of the angular resolution by effectively using all elemental image regions without loss, as shown in Fig. 19c [91].

6. Conclusion

Recent progress in integral imaging was reviewed. Integral imaging captures and reconstructs the light ray distribution using a lens array, providing an efficient way to access the 3D information of the object space. Under the unified framework of integral imaging, the 3D information of the object space can be captured, digitally processed, and optically reproduced for 3D display.

In the 3D information capture stage, the crosstalk between the elemental images, nonparallel pickup directions, and simultaneous pickup of real and virtual fields are the main issues, along with the resolution of each elemental image. Recently developed systems largely alleviate these issues, making it possible to capture elemental images with minimal distortion.

The captured elemental images can be digitally processed for various applications. Depth slice reconstruction and view reconstruction have been successfully demonstrated, and many methods for enhancing the reconstruction quality have been reported. The reconstructed set of depth slices has also been applied to the recognition of 3D objects, improving recognition accuracy. Explicit estimation of the depth map from the elemental images has also been addressed. Moreover, recently proposed techniques that generate a hologram from the elemental images provide an alternative to the conventional complex hologram acquisition process.

3D display techniques using integral imaging have also been actively developed. The limitations of viewing angle, resolution, and depth range have been tackled by various methods. Along with the inherent advantages of integral imaging, which include continuous horizontal and vertical motion parallax, these recent developments are making integral imaging an attractive 3D display technology.

Integral imaging was originally proposed as a technique for 3D display of static images. Now the technical field of integral imaging is not limited to 3D display but has expanded to all aspects of 3D information processing. With continuing effort, it is believed that integral imaging will not be limited to academic interest but will be extended to industrial applications in the near future.

This work was supported by the National Research Foundation and Ministry of Education, Science and Technology of Korea through the Creative Research Initiatives Program (#R16-2007-030-01001-0).

Table 1. Comparison between Stereoscopy, Integral Imaging and Holography

Table 2. 3D Data Processing based on Integral Imaging

Table 3. Viewing Parameter Enhancement of 3D Displays based on Integral Imaging

Fig. 1 Number of integral imaging papers published by OSA. Both journal and conference papers are included.

Fig. 2 Concept of integral imaging.

Fig. 3 Principle of integral imaging: (a) continuous light rays from the object points, (b) sampled light rays by a pinhole array, (c) disparity between elemental images, (d) optical reproduction of the sampled light rays using a pinhole array.

Fig. 4 Pickup system using a single imaging lens.

Fig. 5 Configurations of pickup system: (a) imaging lens at focal length of the field lens, (b) telecentric relay, (c) telecentric relay plus 4-f optics.

Fig. 6 Pickup system for pseudoscopic to orthoscopic conversion: (a) GRIN lens array, (b) overlaid lens arrays, (c) optical depth converter, (d) digital second pickup.

Fig. 7 Concept of depth slice reconstruction (CIIR).

Fig. 8 Example of the reconstructed depth slice images (View 1).

Fig. 9 Subimage generation.

Fig. 10 Example of the view generation (following the method presented in Ref. [46]): (a) elemental images, (b) generated view images (View 2).

Fig. 11 Disparity dependency on the object depth: (a) disparity between elemental images, (b) disparity between subimages.

Fig. 12 3D mesh model of the object reconstructed from the elemental images (following the method presented in Ref. [45]): (a) elemental images, (b) reconstructed mesh model (View 3).

Fig. 13 Example of the Fourier hologram calculated from the elemental images following Ref. [52]: (a) elemental images, (b) calculated Fourier hologram, (c) hologram reconstruction at various distances (View 4).

Fig. 14 Display modes of integral imaging: (a) real/virtual mode, (b) focused mode.

Fig. 15 Depth range and viewing angle of integral imaging display: (a) depth range of real mode, (b) depth range of focused mode, (c) viewing angle.

Fig. 16 Examples of depth range enhancing configuration: (a) floating display, (b) multiple CDPs.

Fig. 17 Examples of viewing angle enhancing configuration: (a) curved lens array, (b) multiaxis telecentric system, (c) head tracking.

Fig. 18 Examples of resolution enhancing configuration: (a) multiprojection, (b) rotating prism, (c) electrically movable pinhole array.

Fig. 19 Other advanced integral imaging display systems: (a) 3D/2D convertible display, (b) flexible configuration using a plastic fiber array, (c) uniform angular resolution by boundary mirrors.

1. C. Wheatstone, “Contributions to the physiology of vision.—Part the first. On some remarkable, and hitherto unobserved, phenomena of binocular vision,” Philos. Trans. R. Soc. London 128, 371–394 (1838). [CrossRef]  

2. G. Lippmann, “Epreuves reversible donnant la sensation du relief,” J. Phys. 7, 821–825 (1908).

3. F. Okano, H. Hoshino, J. Arai, and I. Yuyama, “Real-time pickup method for a three-dimensional image based on integral photography,” Appl. Opt. 36, 1598–1603 (1997). [CrossRef]  

4. R. Martinez-Cuenca, A. Pons, G. Saavedra, M. Martinez-Corral, and B. Javidi, “Optically-corrected elemental images for undistorted integral image display,” Opt. Express 14, 9657–9663 (2006). [CrossRef]  

5. K. Yamamoto, T. Mishina, R. Oi, T. Senoh, and M. Okui, “Cross talk elimination using an aperture for recording elemental images of integral photography,” J. Opt. Soc. Am. A 26, 680–690 (2009). [CrossRef]  

6. J. Hahn, Y. Kim, E.-H. Kim, and B. Lee, “Undistorted pickup method of both virtual and real objects for integral imaging,” Opt. Express 16, 13969–13978 (2008). [CrossRef]  

7. J. Arai, F. Okano, H. Hoshino, and I. Yuyama, “Gradient-index lens-array method based on real-time integral photography for three-dimensional images,” Appl. Opt. 37, 2034–2045 (1998). [CrossRef]  

8. J. Arai, H. Kawai, and F. Okano, “Microlens arrays for integral imaging system,” Appl. Opt. 45, 9066–9078 (2006). [CrossRef]  

9. N. Davies, M. McCormick, and L. Yang, “3D imaging systems: a new development,” Appl. Opt. 27, 4520–4528 (1988). [CrossRef]  

10. S.-W. Min, J. Hong, and B. Lee, “Analysis of an optical depth converter used in a three-dimensional integral imaging system,” Appl. Opt. 43, 4539–4549 (2004). [CrossRef]  

11. J. Arai, H. Kawai, M. Kawakita, and F. Okano, “Depth-control method for integral imaging,” Opt. Lett. 33, 279–281 (2008). [CrossRef]  

12. M. Martinez-Corral, B. Javidi, R. Martínez-Cuenca, and G. Saavedra, “Formation of real, orthoscopic integral images by smart pixel mapping,” Opt. Express 13, 9175–9180 (2005). [CrossRef]  

13. R. Ng, M. Levoy, M. Bredif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Stanford Tech. Rep. CTSR 2005-02 (Stanford University, 2005).

14. K. Fife, A. E. Gamal, and H.-S. P. Wong, “A multiaperture image sensor with 0.7 µm pixels in 0.11 µm CMOS technology,” IEEE J. Solid-State Circuits 43, 2990–3005 (2008). [CrossRef]  

15. J. Arai, M. Okui, M. Kobayashi, and F. Okano, “Geometrical effects of positional errors in integral photography,” J. Opt. Soc. Am. A 21, 951–958 (2004). [CrossRef]  

16. B. Tavakoli, M. Daneshpanah, B. Javidi, and E. Watson, “Performance of 3D integral imaging with position uncertainty,” Opt. Express 15, 11889–11902 (2007). [CrossRef]  

17. A. Aggoun, “Pre-processing of integral images for 3-D displays,” J. Display Technol. 2, 393–400 (2006).

18. N. P. Sgouros, S. S. Athineos, M. S. Sangriotis, P. G. Papageorgas, and N. G. Theofanous, “Accurate lattice extraction in integral images,” Opt. Express 14, 10403–10409 (2006). [CrossRef]  

19. S.-H. Hong, J.-S. Jang, and B. Javidi, “Three-dimensional volumetric object reconstruction using computational integral imaging,” Opt. Express 12, 483–491 (2004). [CrossRef]  

20. D.-H. Shin and H. Yoo, “Image quality enhancement in 3D computational integral imaging by use of interpolation methods,” Opt. Express 15, 12039–12049 (2007). [CrossRef]  

21. D.-C. Hwang, J.-S. Park, S.-C. Kim, D.-H. Shin, and E.-S. Kim, “Magnification of 3D reconstructed images in integral imaging using an intermediate-view reconstruction technique,” Appl. Opt. 45, 4631–4637 (2006). [CrossRef]  

22. J.-B. Hyun, D.-C. Hwang, D.-H. Shin, and E.-S. Kim, “Curved computational integral imaging reconstruction technique for resolution-enhanced display of three-dimensional object images,” Appl. Opt. 46, 7697–7708 (2007). [CrossRef]  

23. D.-H. Shin and H. Yoo, “Signal model and granular-noise analysis of computational image reconstruction for curved integral imaging systems,” Appl. Opt. 48, 827–833 (2009). [CrossRef]  

24. K.-J. Lee, D.-C. Hwang, S.-C. Kim, and E.-S. Kim, “Blur- metric-based resolution enhancement of computationally reconstructed integral images,” Appl. Opt. 47, 2859–2869 (2008). [CrossRef]  

25. G. Saavedra, R. Martinez-Cuenca, M. Martinez-Corral, H. Navarro, M. Daneshpanah, and B. Javidi, “Digital slicing of 3D scenes by Fourier filtering of integral images,” Opt. Express 16, 17154–17160 (2008). [CrossRef]  

26. G. Baasantseren, J.-H. Park, and N. Kim, “Depth discrimination enhanced computational integral imaging using random pattern illumination,” Jpn. J. Appl. Phys. 48, 020216 (2009).

27. M. DaneshPanah, B. Javidi, and E. Watson, “Three dimensional imaging with randomly distributed sensors,” Opt. Express 16, 6368–6377 (2008). [CrossRef]  

28. I. Moon and B. Javidi, “Three-dimensional visualization of objects in scattering medium by use of computational integral imaging,” Opt. Express 16, 13080–13089 (2008). [CrossRef]  

29. B. Javidi and Y. S. Hwang, “Passive near-infrared 3D sensing and computational reconstruction with synthetic aperture integral imaging,” J. Display Technol. 4, 3–5 (2008).

30. R. Schulein and B. Javidi, “Underwater multi-view three- dimensional imaging,” J. Display Technol. 4, 351–353 (2008).

31. M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz, “Light field microscopy,” ACM Trans. Graphics (Proc. SIGGRAPH) 25, 924–934 (2006).

32. B. Javidi, I. Moon, and S. Yeom, “Three-dimensional identification of biological microorganism using integral imaging,” Opt. Express 14, 12096–12108 (2006). [CrossRef]  

33. I. Moon and B. Javidi, “Three-dimensional recognition of photon-starved events using computational integral imaging and statistical sampling,” Opt. Lett. 34, 731–733 (2009). [CrossRef]  

34. S. Yeom, B. Javidi, and E. Watson, “Three-dimensional distortion-tolerant object recognition using photon-counting integral imaging,” Opt. Express 15, 1513–1533 (2007). [CrossRef]  

35. B. Tavakoli, B. Javidi, and E. Watson, “Three dimensional visualization by photon counting computational integral imaging,” Opt. Express 16, 4426–4436 (2008). [CrossRef]  

36. C. M. Do, R. Martínez-Cuenca, and B. Javidi, “Three-dimensional object-distortion-tolerant recognition for integral imaging using independent component analysis,” J. Opt. Soc. Am. A 26, 245–251 (2009).

37. B. Javidi, R. Ponce-Díaz, and S.-H. Hong, “Three-dimensional recognition of occluded objects by using computational integral imaging,” Opt. Lett. 31, 1106–1108 (2006). [CrossRef]  

38. M. Cho and B. Javidi, “Three-dimensional tracking of occluded objects using integral imaging,” Opt. Lett. 33, 2737–2739 (2008). [CrossRef]  

39. T.-C. Wei, D.-H. Shin, and B.-G. Lee, “Resolution-enhanced reconstruction of 3D object using depth-reversed elemental images for partially occluded object recognition,” J. Opt. Soc. Korea 13, 139–145 (2009).

40. J.-H. Park, S. Jung, H. Choi, Y. Kim, and B. Lee, “Depth extraction by use of a rectangular lens array and one- dimensional elemental image modification,” Appl. Opt. 43, 4882–4895 (2004). [CrossRef]  

41. J.-H. Park, S. Jung, H. Choi, and B. Lee, “A novel depth extraction algorithm incorporating a lens array and a camera by reassembling pixel columns of elemental images,” Proc. SPIE 4929, 49–58 (2002).

42. C. Wu, A. Aggoun, M. McCormick, and S. Y. Kung, “Depth extraction from unidirectional integral image using a modified multi-baseline technique,” Proc. SPIE 4660, 135–143 (2002).

43. C. Wu, M. McCormick, A. Aggoun, and S. Y. Kung, “Depth mapping of integral images through viewpoint image extraction with a hybrid disparity analysis algorithm,” J. Display Technol. 4, 101–108 (2008).

44. M.-S. Kim, G. Baasantseren, N. Kim, and J.-H. Park, “Hologram generation of 3D objects using multiple orthographic view images,” J. Opt. Soc. Korea 12, 269–274 (2008).

45. G. Passalis, N. Sgouros, S. Athineos, and T. Theoharis, “Enhanced reconstruction of three-dimensional shape and texture from integral photography images,” Appl. Opt. 46, 5311–5320 (2007). [CrossRef]  

46. J.-H. Park, G. Baasantseren, N. Kim, G. Park, J.-M. Kang, and B. Lee, “View image generation in perspective and orthographic projection geometry based on integral imaging,” Opt. Express 16, 8800–8813 (2008). [CrossRef]  

47. M. Okutomi and T. Kanade, “A multiple-baseline stereo,” IEEE Trans. Patt. Anal. Machine Intell. 15, 353–363 (1993).

48. J.-H. Park, S.-W. Min, S. Jung, and B. Lee, “A new stereovision scheme using a camera and a lens array,” Proc. SPIE 4471, 73–80 (2001).

49. K. Hong, J. Hong, J.-M. Kang, J.-H. Jung, J.-H. Park, and B. Lee, “Improved three-dimensional depth extraction using super resolved elemental image set,” in Digital Holography and Three-Dimensional Imaging (DH) (Optical Society of America, 2009), paper DWB1.

50. T. Mishina, M. Okui, and F. Okano, “Calculation of holograms from elemental images captured by integral photography,” Appl. Opt. 45, 4026–4036 (2006). [CrossRef]  

51. N. T. Shaked, J. Rosen, and A. Stern, “Integral holography: white-light single-shot hologram acquisition,” Opt. Express 15, 5754–5760 (2007). [CrossRef]  

52. J.-H. Park, M.-S. Kim, G. Baasantseren, and N. Kim, “Fresnel and Fourier hologram generation using orthographic projection images,” Opt. Express 17, 6320–6334 (2009). [CrossRef]  

53. S. Yeom, A. Stern, and B. Javidi, “Compression of 3D color integral images,” Opt. Express 12, 1632–1642 (2004). [CrossRef]  

54. N. Sgouros, I. Kontaxakis, and M. Sangriotis, “Effect of different traversal schemes in integral image coding,” Appl. Opt. 47, D28–D37 (2008). [CrossRef]  

55. E. Elharar, A. Stern, O. Hadar, and B. Javidi, “A hybrid compression method for integral images using discrete wavelet transform and discrete cosine transform,” J. Display Technol. 3, 321–325 (2007).

56. S.-W. Min, J. Kim, and B. Lee, “New characteristic equation of three-dimensional integral imaging system and its applications,” Jpn. J. Appl. Phys. 44, L71–L74 (2005).

57. J.-H. Park, S.-W. Min, S. Jung, and B. Lee, “Analysis of viewing parameters for two display methods based on integral photography,” Appl. Opt. 40, 5217–5232 (2001). [CrossRef]  

58. X. Wang, L. He, and Q. Bu, “Performance characterization of integral imaging systems based on human vision,” Appl. Opt. 48, 183–188 (2009). [CrossRef]  

59. M. Kawakita, H. Sasaki, J. Arai, F. Okano, K. Suehiro, Y. Haino, M. Yoshimura, and M. Sato, “Geometric analysis of spatial distortion in projection-type integral imaging,” Opt. Lett. 33, 684–686 (2008). [CrossRef]  

60. J.-Y. Son, S.-H. Kim, D.-S. Kim, B. Javidi, and K.-D. Kwack, “Image-forming principle of integral photography,” J. Display Technol. 4, 324–331 (2008).

61. F. Okano, J. Arai, and M. Kawakita, “Wave optical analysis of integral method for three-dimensional images,” Opt. Lett. 32, 364–366 (2007). [CrossRef]  

62. R. Martínez-Cuenca, G. Saavedra, A. Pons, B. Javidi, and M. Martínez-Corral, “Facet braiding: a fundamental problem in integral imaging,” Opt. Lett. 32, 1078–1080 (2007). [CrossRef]  

63. V. V. Saveljev and S.-J. Shin, “Layouts and cells in integral photography and point light source model,” J. Opt. Soc. Korea 13, 131–138 (2009).

64. J. Kim, S.-W. Min, and B. Lee, “Floated image mapping for integral floating display,” Opt. Express 16, 8549–8556 (2008). [CrossRef]  

65. J. Kim, S.-W. Min, and B. Lee, “Viewing window expansion of integral floating display,” Appl. Opt. 48, 862–867 (2009). [CrossRef]  

66. J. Kim, S.-W. Min, Y. Kim, and B. Lee, “Analysis on viewing characteristics of an integral floating system,” Appl. Opt. 47, D80–D86 (2008). [CrossRef]  

67. J. Kim, S.-W. Min, and B. Lee, “Viewing region maximization of an integral floating display through location adjustment of viewing window,” Opt. Express 15, 13023–13034 (2007). [CrossRef]  

68. B. Lee, S. Jung, S.-W. Min, and J.-H. Park, “Three-dimensional display by use of integral photography with dynamically variable image planes,” Opt. Lett. 26, 1481–1482 (2001). [CrossRef]  

69. J.-H. Park, S. Jung, H. Choi, and B. Lee, “Integral imaging with multiple image planes using a uniaxial crystal plate,” Opt. Express 11, 782 (2003). [CrossRef]  

70. Y. Kim, J.-H. Park, H. Choi, J. Kim, S.-W. Cho, and B. Lee, “Depth-enhanced three-dimensional integral imaging by use of multilayered display devices,” Appl. Opt. 45, 4334–4343 (2006). [CrossRef]  

71. Y. Kim, H. Choi, J. Kim, S.-W. Cho, Y. Kim, G. Park, and B. Lee, “Depth-enhanced integral imaging display system with electrically variable image planes using polymer-dispersed liquid-crystal layers,” Appl. Opt. 46, 3766–3773 (2007). [CrossRef]  

72. B. Lee, S. Jung, and J.-H. Park, “Viewing-angle-enhanced integral imaging by lens switching,” Opt. Lett. 27, 818–820 (2002). [CrossRef]  

73. J.-H. Jung, Y. Kim, Y. Kim, J. Kim, K. Hong, and B. Lee, “Integral imaging system using an electroluminescent film backlight for three-dimensional-two-dimensional convertibility and a curved structure,” Appl. Opt. 48, 998–1007 (2009). [CrossRef]  

74. D.-H. Shin, B. Lee, and E.-S. Kim, “Multidirectional curved integral imaging with large depth by additional use of a large-aperture lens,” Appl. Opt. 45, 7375–7381 (2006). [CrossRef]  

75. Y. Kim, J.-H. Park, S.-W. Min, S. Jung, H. Choi, and B. Lee, “Wide-viewing-angle integral three-dimensional imaging system by curving a screen and a lens array,” Appl. Opt. 44, 546–552 (2005). [CrossRef]  

76. R. Martínez-Cuenca, H. Navarro, G. Saavedra, B. Javidi, and M. Martinez-Corral, “Enhanced viewing-angle integral imaging by multiple-axis telecentric relay system,” Opt. Express 15, 16255–16260 (2007). [CrossRef]

77. G. Park, J. Hong, Y. Kim, and B. Lee, “Enhancement of viewing angle and viewing distance in integral imaging by head tracking,” in Digital Holography and Three-Dimensional Imaging (DH) (Optical Society of America, 2009), paper DWB27.

78. H. Kim, J. Hahn, and B. Lee, “The use of a negative index planoconcave lens array for wide-viewing angle integral imaging,” Opt. Express 16, 21865–21880 (2008). [CrossRef]  

79. J. Arai, M. Okui, T. Yamashita, and F. Okano, “Integral three-dimensional television using a 2000-scanning-line video system,” Appl. Opt. 45, 1704–1712 (2006). [CrossRef]  

80. H. Liao, M. Iwahara, N. Hata, and T. Dohi, “High-quality integral videography using a multiprojector,” Opt. Express 12, 1067–1076 (2004). [CrossRef]  

81. J.-S. Jang and B. Javidi, “Improved viewing resolution of three-dimensional integral imaging by use of nonstationary micro-optics,” Opt. Lett. 27, 324–326 (2002). [CrossRef]  

82. X. Wang and H. Hua, “Theoretical analysis for integral imaging performance based on microscanning of a microlens array,” Opt. Lett. 33, 449–451 (2008). [CrossRef]  

83. H. Liao, T. Dohi, and M. Iwahara, “Improved viewing resolution of integral videography by use of rotated prism sheets,” Opt. Express 15, 4814–4822 (2007). [CrossRef]  

84. Y. Kim, J. Kim, J.-M. Kang, J.-H. Jung, H. Choi, and B. Lee, “Point light source integral imaging with improved resolution and viewing angle by the use of electrically movable pinhole array,” Opt. Express 15, 18253–18267 (2007). [CrossRef]  

85. J.-H. Park, H.-R. Kim, Y. Kim, J. Kim, J. Hong, S.-D. Lee, and B. Lee, “Depth-enhanced three-dimensional-two-dimensional convertible display based on modified integral imaging,” Opt. Lett. 29, 2734–2736 (2004). [CrossRef]  

86. J.-H. Park, J. Kim, J.-P. Bae, Y. Kim, and B. Lee, “Viewing angle enhancement of three-dimension/two-dimension convertible integral imaging display using double collimated or non-collimated illumination,” Jpn. J. Appl. Phys. 44, L991–L994 (2005).

87. S.-W. Cho, J.-H. Park, Y. Kim, H. Choi, J. Kim, and B. Lee, “Convertible two-dimensional-three-dimensional display using an LED array based on modified integral imaging,” Opt. Lett. 31, 2852–2854 (2006). [CrossRef]  

88. H. Choi, Y. Kim, J. Kim, S.-W. Cho, and B. Lee, “Depth- and viewing-angle-enhanced 3-D/2-D switchable display system with high contrast ratio using multiple display devices and a lens array,” J. Soc. Info. Display 15, 315–320 (2007).

89. M. Okui, J. Arai, Y. Nojiri, and F. Okano, “Optical screen for direct projection of integral imaging,” Appl. Opt. 45, 9132–9139 (2006). [CrossRef]  

90. Y. Kim, H. Choi, S.-W. Cho, Y. Kim, J. Kim, G. Park, and B. Lee, “Three-dimensional integral display using plastic optical fibers,” Appl. Opt. 46, 7149–7154 (2007). [CrossRef]  

91. J. Hahn, Y. Kim, and B. Lee, “Uniform angular resolution integral imaging display with boundary folding mirrors,” Appl. Opt. 48, 504–511 (2009). [CrossRef]  

Figures (19)

Fig. 1 Number of integral imaging papers published by OSA. Both journal and conference papers are included.
Fig. 2 Concept of integral imaging.
Fig. 3 Principle of integral imaging: (a) continuous light rays from the object points, (b) light rays sampled by a pinhole array, (c) disparity between elemental images, (d) optical reproduction of the sampled light rays using a pinhole array.
Fig. 4 Pickup system using a single imaging lens.
Fig. 5 Configurations of the pickup system: (a) imaging lens at the focal length of the field lens, (b) telecentric relay, (c) telecentric relay plus 4-f optics.
Fig. 6 Pickup systems for pseudoscopic-to-orthoscopic conversion: (a) GRIN lens array, (b) overlaid lens arrays, (c) optical depth converter, (d) digital second pickup.
Fig. 7 Concept of depth slice reconstruction (CIIR).
Fig. 8 Example of the reconstructed depth slice images (View 1).
Fig. 9 Subimage generation.
Fig. 10 Example of view generation (following the method presented in Ref. [46]): (a) elemental images, (b) generated view images (View 2).
Fig. 11 Disparity dependence on the object depth: (a) disparity between elemental images, (b) disparity between subimages.
Fig. 12 3D mesh model of the object reconstructed from the elemental images (following the method presented in Ref. [45]): (a) elemental images, (b) reconstructed mesh model (View 3).
Fig. 13 Example of the Fourier hologram calculated from the elemental images following Ref. [52]: (a) elemental images, (b) calculated Fourier hologram, (c) hologram reconstruction at various distances (View 4).
Fig. 14 Display modes of integral imaging: (a) real/virtual mode, (b) focused mode.
Fig. 15 Depth range and viewing angle of an integral imaging display: (a) depth range of the real mode, (b) depth range of the focused mode, (c) viewing angle.
Fig. 16 Examples of depth-range-enhancing configurations: (a) floating display, (b) multiple CDPs.
Fig. 17 Examples of viewing-angle-enhancing configurations: (a) curved lens array, (b) multiaxis telecentric system, (c) head tracking.
Fig. 18 Examples of resolution-enhancing configurations: (a) multiprojection, (b) rotating prism, (c) electrically movable pinhole array.
Fig. 19 Other advanced integral imaging display systems: (a) 3D/2D convertible display, (b) flexible configuration using a plastic fiber array, (c) uniform angular resolution by boundary mirrors.

Datasets

Datasets associated with ISP articles are stored in an online database called MIDAS. Clicking a "View" link in an Optica ISP article will launch the ISP software (if installed) and pull the relevant data from MIDAS. Visit MIDAS to browse and download the datasets directly. A package containing the PDF article and full datasets is available in MIDAS for offline viewing.

Tables (3)

Table 1 Comparison between Stereoscopy, Integral Imaging and Holography

Table 2 3D Data Processing based on Integral Imaging

Table 3 Viewing Parameter Enhancement of 3D Displays based on Integral Imaging
