Abstract

In this paper, we propose a computational integral imaging reconstruction (CIIR) method that uses image interpolation algorithms to improve the visual quality of 3D reconstructed images. We first investigate how the conventional CIIR method behaves as a function of the distance between the lenslet array and the objects, and we observe that the visual quality of the reconstructed images degrades periodically, with an experimentally observed period of half the size of an elemental image. To remedy this problem, we focus on the interpolation step in computational integral imaging: several interpolation methods are applied to the conventional CIIR method and their performances are analyzed. To evaluate the proposed CIIR method objectively, we introduce an experimental framework for the computational pickup process and the CIIR process using a Gaussian function. We also carry out experiments on real objects to evaluate the proposed method subjectively. Experimental results indicate that our method outperforms the conventional CIIR method and reduces the grid noise from which the conventional CIIR method suffers.

© 2007 Optical Society of America

1. Introduction

Integral imaging has been one of the most attractive autostereoscopic three-dimensional (3D) display techniques since it was proposed by Lippmann in 1908 [1]. It has attracted many researchers because of merits such as full parallax, continuous viewing angle, and full-color display. In general, an integral imaging system consists of two parts: pickup and reconstruction. In the pickup part, the rays coming from a 3D object through a lenslet array are recorded as elemental images representing different perspectives of the object. In the reconstruction part, two kinds of techniques exist: optical integral imaging reconstruction (OIIR) [1–9] and computational integral imaging reconstruction (CIIR) [10–12]. In the OIIR techniques, the recorded elemental images are displayed on a display panel, and a 3D image is then reconstructed and observed optically through a lenslet array. However, the OIIR techniques produce low-resolution 3D images because of the insufficient number of elemental images, and physical limitations of the optical devices further degrade the quality of the 3D images.

Recently, to overcome these drawbacks of the OIIR techniques and to extract voxel (volumetric pixel) information of 3D objects, a CIIR method has been introduced [10–12]. It has also been utilized in a recognition system [15], which is an actual optical system for computational integral imaging. The basic structure of a computational integral imaging (CII) system is composed of an optical pickup process and a CIIR process. In the optical pickup process, elemental images are recorded by a lenslet array and an image sensor. In the CIIR process, the voxel information of 3D objects is digitally reconstructed from the elemental images by a computer, without optical devices. The extracted voxels can be used for 3D visualization, as in OIIR [12,13], and for object recognition using correlation methods to recognize occluded 3D objects [15,16].

The principle of CIIR, which is based on the pinhole-array model, is that 3D images are digitally reconstructed at the required output planes by superposition of all the inversely mapped and magnified elemental images, where the magnification factor increases in proportion to the distance of the output plane. When a 3D object is recorded through a square-shaped lens array in the pickup process, the reconstructed images of CIIR show intensity irregularities with grid noise, so the visual quality of the reconstructed image is degraded. Several studies have addressed this problem [13,14]. Among them, Hong and Javidi reported a method using the hybrid moving lenslet array technique, in which they normalized the intensity of the pixels of the reconstructed images by the number of overlapping elemental images and increased the resolution of the reconstructed images by moving the lenslet array [13]. However, it is a sophisticated process involving movement of the camera sensor, and their results provided only a visual evaluation, not an objective one.

In this paper, we focus on the behavior of interpolation methods in CIIR to reduce both the intensity irregularity along the distance from the virtual pinhole array and the grid noise in the reconstructed images. The conventional CIIR method magnifies the elemental images before superimposing them; this magnification amounts to zero-order interpolation, i.e., pixel replication. Numerous interpolation methods exist in the image processing literature [17–21], and several are widely applied because they offer a good tradeoff between image quality and implementation cost. The zero-order interpolation method is the simplest and is used in real-time applications, although it produces blocking artifacts. The linear interpolation method is simple and provides moderate image quality, so it is applied in many areas such as image compression, image display, and computer graphics, but it still suffers from serious blurring. To overcome the blurring and obtain a substantial gain in image quality, higher-order interpolation methods were developed, such as Keys' cubic convolution interpolation (CCI). Although the CCI method provides better image quality than the two previous methods, its computational cost is much higher, so it is limited to non-real-time or high-performance applications. We employ these three interpolation methods for magnifying elemental images in the CIIR method and analyze their characteristics.

To objectively evaluate the three methods, we introduce an experimental framework for the computational pickup and CIIR processes using a Gaussian function. An optical experiment is also carried out to subjectively evaluate the three methods. Experimental results indicate that the proposed interpolation-based CIIR methods improve the quality of reconstructed images and reduce the grid noise from which the conventional CIIR method suffers.

2. Proposed CIIR Methods

2.1 The conventional CIIR method

Figure 1 shows the conventional CIIR method based on the pinhole-array model [12]. As shown in Fig. 1(a), a 3D object is recorded as elemental images through a lenslet array. In the computational reconstruction process shown in Fig. 1(b), the elemental images are digitally reconstructed by a computer, so 3D images can easily be reconstructed at any output plane without optical devices. Each elemental image is inversely mapped onto the output plane through its pinhole and magnified by a factor of z/g, where z is the distance between the reconstructed output plane and the virtual pinhole array and g is the distance between the elemental images and the virtual pinhole array. The magnified elemental images overlap one another, and a reconstructed image is finally produced at the output plane at distance z. Repeating this computation while varying z yields a series of images along the z axis; this series is the so-called CIIR image.
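To make the reconstruction procedure above concrete, the following NumPy sketch back-projects a set of elemental images through a virtual pinhole array and normalizes by the overlap count. It is a minimal illustration of the pinhole model, not the authors' implementation; the dictionary layout, pixel pitch, and border handling are assumptions.

```python
import numpy as np

def ciir_plane(elemental, pitch, z, g):
    """Reconstruct one output plane by back-projecting all elemental
    images through their pinholes and superimposing them (pinhole model).

    elemental : dict mapping pinhole index (i, j) -> 2D elemental image
    pitch     : pinhole spacing in pixels on the output plane (assumed)
    z, g      : plane distance and pickup gap; magnification M = z / g
    """
    M = int(round(z / g))
    k = next(iter(elemental.values())).shape[0]      # elemental image size
    n = len({i for i, _ in elemental})               # pinholes per side
    size = (n - 1) * pitch + M * k
    acc = np.zeros((size, size))
    hits = np.zeros((size, size))                    # overlap counter
    for (i, j), e in elemental.items():
        # zero-order magnification: replicate each pixel into an M x M block
        big = np.repeat(np.repeat(e, M, axis=0), M, axis=1)
        r, c = i * pitch, j * pitch
        acc[r:r + M * k, c:c + M * k] += big
        hits[r:r + M * k, c:c + M * k] += 1
    return acc / np.maximum(hits, 1)                 # normalize by overlap
```

For example, a set of identical constant elemental images reconstructs to a constant plane, since every output pixel is covered by at least one back-projection and then divided by its overlap count.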

The simple magnification of elemental images used in the conventional CIIR method is shown in Fig. 2(a). The size of a single elemental image is set to 2×2 pixels and the magnification factor to z/g=2. Each pixel is simply magnified into a 2×2 block, so the intensity values of these four pixels are equal. This is the zero-order interpolation algorithm of 2D image processing; the magnification process of the conventional CIIR is therefore identical to zero-order interpolation.
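The 2×2 example of Fig. 2(a) can be reproduced with pixel replication, which is exactly the zero-order interpolation used by the conventional CIIR (a small NumPy illustration; the pixel values are arbitrary):

```python
import numpy as np

e = np.array([[1, 2],
              [3, 4]])          # a 2x2 elemental image
M = 2                           # magnification factor z / g

# Zero-order interpolation: each pixel becomes an M x M block of equal values.
big = np.repeat(np.repeat(e, M, axis=0), M, axis=1)
# big == [[1, 1, 2, 2],
#         [1, 1, 2, 2],
#         [3, 3, 4, 4],
#         [3, 3, 4, 4]]
```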

When a 3D object is recorded with a square-shaped lens array in the pickup process, as shown in Fig. 1(a), the reconstructed images of CIIR exhibit intensity irregularities with grid noise, and the image quality is degraded. An example is shown in Fig. 2(b). The grid noise in the reconstructed image is caused by the square-shaped elemental images and the simple magnification mapping [12,13].

Fig. 1. Principle of conventional CIIR method (a) Pickup (b) Display.

Fig. 2. (a) Simple magnification in the conventional CIIR. (b) Example of reconstructed image with grid noise.

2.2 Proposed CIIR method by use of an image interpolation algorithm

The block diagram of the proposed CIIR method, which additionally uses an image interpolation algorithm, is shown in Fig. 3. The main difference from the conventional CIIR method is the magnification process: resolution-enhanced elemental images are obtained by applying an image interpolation algorithm. Each picked-up elemental image is magnified using the interpolation algorithm and superimposed on the reconstructed output plane, as shown in Fig. 3. Image interpolation has been used for magnifying 2D images since the beginning of digital image processing; its basic principle is to interpolate new pixels between neighboring pixels of the original image. In the proposed method, popular interpolation algorithms such as standard linear interpolation [19] and the CCI [21] are used to magnify the elemental images, while the conventional CIIR method corresponds to zero-order interpolation. We therefore label the CIIR methods by the interpolation method they employ.

To explain the interpolation technique, we consider a one-dimensional function; the extension to two dimensions is straightforward. Let f(x_k) be the sampled version of a continuous function f(x). The Shannon sampling theorem states that sinc interpolation perfectly reconstructs the continuous function f(x) from its samples f(x_k) if the sampling frequency is greater than twice the maximum frequency of f(x). The relationship between f(x) and its samples f(x_k) can be written as

\[ f(x) = \sum_{k=0}^{N-1} f(x_k)\,\beta(x - x_k), \tag{1} \]

where β(x) is the interpolation kernel; for ideal reconstruction it is the sinc function, sin(πx)/(πx). Unfortunately, a practical implementation of Eq. (1) with the sinc kernel is impossible because of its infinite support, so finite-support kernels have been studied; in particular, a short support is desirable in image processing for computational efficiency. We now introduce the three interpolation kernels mentioned in the previous section. The kernel of the zero-order interpolation used in the conventional CIIR method is defined as

\[ \beta_0(x) = \begin{cases} 1, & 0 \le |x| < 0.5 \\ 0, & \text{elsewhere.} \end{cases} \tag{2} \]

The linear interpolation kernel is defined as

\[ \beta_1(x) = \begin{cases} 1 - |x|, & 0 \le |x| < 1 \\ 0, & \text{elsewhere.} \end{cases} \tag{3} \]

And the CCI kernel is defined as

\[ \beta_3(x) = \begin{cases} \frac{3}{2}|x|^3 - \frac{5}{2}|x|^2 + 1, & 0 \le |x| < 1 \\ -\frac{1}{2}|x|^3 + \frac{5}{2}|x|^2 - 4|x| + 2, & 1 \le |x| < 2 \\ 0, & \text{elsewhere.} \end{cases} \tag{4} \]

Note that the supports of the three interpolation kernels β0(x), β1(x), and β3(x) are one, two, and four samples, respectively. It can easily be seen that the complexity of interpolation increases as the support of the kernel increases. We apply the linear interpolation and the CCI to the conventional CIIR method as image magnification techniques.
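The three kernels just defined can be transcribed directly as functions, with the Keys kernel using the common parameter a = -1/2:

```python
import numpy as np

def beta0(x):
    """Zero-order (nearest-neighbour) kernel; support of one sample."""
    x = np.abs(x)
    return np.where(x < 0.5, 1.0, 0.0)

def beta1(x):
    """Linear kernel; support of two samples."""
    x = np.abs(x)
    return np.where(x < 1, 1 - x, 0.0)

def beta3(x):
    """Keys cubic convolution kernel (a = -1/2); support of four samples."""
    x = np.abs(x)
    return np.where(x < 1, 1.5 * x**3 - 2.5 * x**2 + 1,
           np.where(x < 2, -0.5 * x**3 + 2.5 * x**2 - 4 * x + 2, 0.0))
```

All three kernels form a partition of unity (their integer translates sum to one at every point), which is what makes a constant signal survive magnification unchanged.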

In CIIR, each elemental image superimposed on the reconstructed output plane is magnified by a factor of M=z/g, so an elemental image of K×K pixels becomes MK×MK pixels after interpolation. The magnified elemental images overlap one another to reconstruct the 3D image at the output plane. To completely reconstruct a 3D plane image, the same process is applied to all elemental images through their corresponding pinholes. As shown in Fig. 3, the proposed method uses resolution-improved elemental images, thanks to the interpolation algorithm, and thus improves the reconstructed 3D images compared with the conventional method.
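The magnification step can be sketched for the 1D case as kernel-weighted resampling; `magnify_1d`, the inline linear kernel, and the border clamping are illustrative choices, not the paper's code:

```python
import numpy as np

def linear_kernel(x):
    """Linear interpolation kernel (1 - |x| inside its support)."""
    x = abs(x)
    return 1 - x if x < 1 else 0.0

def magnify_1d(f, M, kernel, support):
    """Magnify a 1D elemental image f by an integer factor M using an
    interpolation kernel with the given one-sided support (in samples)."""
    N = len(f)
    out = np.zeros(N * M)
    for i in range(N * M):
        x = i / M                            # output sample in input coordinates
        k0 = int(np.floor(x))
        for k in range(k0 - support + 1, k0 + support + 1):
            kc = min(max(k, 0), N - 1)       # clamp indices at the borders
            out[i] += f[kc] * kernel(x - k)
    return out
```

With `support=1` this performs linear interpolation; a cubic kernel with `support=2` drops in the same way. At integer positions the kernel reproduces the original samples exactly.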

Fig. 3. Process of proposed method

3. Experiments and Results

3.1 Experiments using Gaussian function

In this paper, to numerically evaluate the proposed interpolation-based CIIR methods and to investigate the characteristics of the CIIR methods, we introduce a framework consisting of a computational pickup and a CIIR process using a one-dimensional (1D) Gaussian function, as shown in Fig. 4. In the pickup process, a 1D Gaussian function G(x), taken as the continuous function f(x) and sampled with N pixels, is located at distance z. The Gaussian function used in this paper is defined by

\[ G(x) = e^{-x^2/2}. \tag{5} \]

The pinhole array used in this experimental setup is composed of 30 pinholes and is located at z=0 mm; the interval between pinholes is 1.08 mm and the gap g between the elemental images and the pinhole array is 3 mm. The 1D elemental images are then computed by computational pickup [9] and used in the CIIR process with an interpolation algorithm: the elemental images are magnified by a factor of z/g using the interpolation algorithm and superimposed on the reconstruction plane at z. Here we consider distances z equal to multiples of g. Finally, the reconstructed function R(x_k) at z is obtained after superposition of all elemental images. To objectively evaluate the quality of a reconstructed function (image), we calculate the mean square error (MSE) defined as

\[ \mathrm{MSE} = \frac{1}{N} \sum_{k=1}^{N} \left| G(x_k) - R(x_k) \right|^2. \tag{6} \]

For comparison, three interpolation algorithms were used: zero-order interpolation, linear interpolation, and CCI. By varying z, a series of R(x_k) is calculated and compared with the original function in terms of MSE.
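A compressed version of this comparison can be scripted as follows. For brevity the pinhole pickup is replaced by plain subsampling, an assumed simplification, so the script only illustrates how the MSE defined above separates a zero-order reconstruction from a linear one on the Gaussian test signal:

```python
import numpy as np

def mse(g, r):
    """Mean square error between original and reconstructed signals."""
    return np.mean((g - r) ** 2)

x = np.linspace(-4, 4, 240)
g = np.exp(-x**2 / 2)                # the 1D Gaussian test signal

# Stand-in for pickup + CIIR at one plane: subsample by M, then magnify
# back with zero-order vs. linear interpolation.
M = 8
samples = g[::M]
zero_order = np.repeat(samples, M)   # pixel replication
linear = np.interp(np.arange(len(g)) / M,
                   np.arange(len(samples)), samples)
```

On this smooth signal the linear reconstruction always attains a lower MSE than the zero-order one, mirroring the ordering of the curves in Fig. 5.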

Fig. 4. Experimental structure for performance evaluation of Gaussian function.

3.2 Analysis of Gaussian function test

The MSE results of the reconstructed images as a function of the distance z for the three interpolation algorithms are shown in Fig. 5, for several pixel numbers p of each elemental image; the distance is normalized to z/g. The conventional CIIR method, which is identical to the zero-order interpolation algorithm, shows large variations of the MSE value because serious intensity irregularities appear. These large variations occur periodically, with a period of half the elemental image size, p/2. In contrast, the CCI algorithm gives better results regardless of the distance z. Figure 6 shows two examples of the reconstructed images at z/g=14 and 15 for p=30. Figure 5(a) indicates that the performances of the zero-order, linear, and CCI methods are similar to each other at z/g=14; this is illustrated in Fig. 6(a), where the reconstructed 1D images of the three methods look similar. At z/g=15, however, the performances of the three methods differ markedly, as illustrated in Fig. 6(b): the reconstruction of the zero-order method suffers from a much stronger blocking artifact than the others. Thus the previous method, based on zero-order interpolation, suffers from this artifact at some locations, whereas the two proposed methods do not. The CCI algorithm is well known for its good resolution-enhancement performance, and this agrees with the behavior of our proposed CIIR method. The results of Fig. 6 show that the proposed CIIR method reduces the intensity variation in the reconstructed images and improves image quality.

Fig. 5. Comparison of MSE according to three types of interpolation algorithm. Round mark (blue line): zero-order interpolation. Diamond mark (red line): linear interpolation. Star mark (black line): CCI (a) p=30 (b) p=40 (c) p=50 (d) p=60.

Fig. 6. Examples of reconstructed images when p=30. (a) z/g=14. (b) z/g=15. Dotted blue line: original Gaussian function. Dashed red line: reconstructed image.

3.3 Experiments using 3D objects

To show the usefulness of the proposed CIIR method, experiments on the reconstruction of 3D objects in a scene were performed; the experimental structure is shown in Fig. 7. The 3D test scene is composed of two patterns, 'tree' and 'car', each with 1020×750 pixels. The 'tree' and 'car' patterns are longitudinally located at z=18 mm and z=45 mm, respectively.

The elemental images were synthesized by computational pickup based on simple ray geometry [9]; the resulting elemental images are shown on the right side of Fig. 7. The 3D images were then reconstructed computationally from the synthesized elemental images with the CIIR method using each of the three interpolation algorithms. Figure 8 shows the computationally reconstructed images at z=45 mm, where the 'car' pattern was originally located, for the three interpolation algorithms. Here we can see the nature of the CIIR method: the image reconstructed at the output plane where the 'car' pattern was originally located is clearly focused, whereas the 'tree' pattern is out of focus. The 'car' image is clearly visible in all three reconstructed images, and parts of the 'tire' images are enlarged for visual inspection. With the zero-order interpolation algorithm, the reconstructed image is composed of large square-type pixels, which is why the intensity variation of Fig. 5 appears. With the linear interpolation and CCI algorithms, the reconstructed images are smoother, so the intensity variation is reduced.

To quantitatively estimate the improvement in viewing quality of the reconstructed images in the proposed method, the MSE was calculated between the original image and each reconstructed image; the values are presented in Fig. 9. For the three interpolation algorithms, the MSE values were computed for the two reconstructed images, respectively. From the MSE results, the image quality obtained with the CCI algorithm is improved by about 12% on average compared with the zero-order interpolation algorithm. In particular, the largest improvement was obtained for the 'car' image located at z/g=15, which equals the pixel number of the elemental image.

Fig. 7. Experimental Structure

Fig. 8. Images reconstructed at z=45 mm (z/g=15) by using three interpolation algorithms. (a) Zero-order interpolation (b) Linear interpolation (c) CCI

Fig. 9. MSE results for 3D objects

3.4 Experiments using real 3D objects

Next, an experiment using elemental images of a real object captured by optical pickup was carried out, as shown in Fig. 10(a). The real test object is composed of two mark patterns, 'Mark1' and 'Mark2', longitudinally located at z=30 mm and z=45 mm, respectively. The lenslet array with 34×25 lenslets is located at z=0 mm; each lenslet size d is 1.08 mm, and a single elemental image is composed of 60×60 pixels. The elemental images obtained through the optical pickup are shown in Fig. 10(b).

Figure 11 shows the computationally reconstructed images at z=30 mm and 45 mm for the conventional method and the proposed method, respectively. Note the improvement of visual quality between the two reconstructed images. In the result of the conventional method in Fig. 11(a), intensity irregularities with a grid structure, caused by the square-shaped mapping of the elemental images, are visible; this is why the visual quality of the 3D reconstructed image is degraded in the conventional method. The proposed method provides better results, as shown in Fig. 11(b), owing to the superposition of resolution-improved elemental images obtained by applying the CCI algorithm to each magnified elemental image.

Fig. 10. (a) Structure of optical pickup. (b) Picked-up elemental images.

Fig. 11. Experiments by optical pickup. (a) Conventional CIIR method. (b) Proposed CIIR method.

4. Conclusions

In this paper, we have proposed interpolation-based CIIR methods to improve the viewing quality of 3D reconstructed images. The conventional CIIR method can be regarded as the zero-order interpolation-based CIIR method; we applied two well-known interpolation techniques to the CIIR method and analyzed the characteristics of the three interpolation-based CIIR methods. To evaluate the three methods objectively, we introduced an experimental framework that compares them in terms of MSE; to our knowledge, this is the first objective evaluation of its kind in the literature. The evaluation indicates that the images reconstructed along the distance axis by the conventional CIIR method are periodically degraded, with a dominant degrading period estimated to be half the size of the elemental images. In addition, the reconstructed images suffer from grid noise caused by the superposition of square-shaped elemental images. Experimental results showed that these problems are largely eliminated by the proposed interpolation-based CIIR techniques, which is why we focus on the interpolation method in CIIR. An efficient interpolation method therefore plays a key role in computational integral imaging for improving the viewing quality of reconstructed images.

References and links

1. G. Lippmann, “La photographie intégrale,” Comptes-Rendus Acad. Sci. 146, 446–451 (1908).

2. F. Okano, H. Hoshino, J. Arai, and I. Yuyama, “Real-time pickup method for a three-dimensional image based on integral photography,” Appl. Opt. 36, 1598–1603 (1997). [CrossRef]   [PubMed]  

3. B. Lee, S. Jung, and J.-H. Park, “Viewing-angle-enhanced integral imaging by lens switching,” Opt. Lett. 27, 818–820 (2002). [CrossRef]  

4. J.-S. Jang and B. Javidi, “Formation of orthoscopic three-dimensional real images in direct pickup one-stepintegral imaging,” Opt. Eng. 42, 1869–1870 (2003). [CrossRef]  

5. A. Stern and B. Javidi, “Three-dimensional image sensing and reconstruction with time-division multiplexed computational integral imaging,” Appl. Opt. 42, 7036–7042 (2003). [CrossRef]   [PubMed]  

6. D.-H. Shin, M. Cho, and E.-S. Kim, “Computational implementation of asymmetric integral imaging by use of two crossed lenticular sheets,” ETRI Journal 27, 289–293 (2005). [CrossRef]  

7. M. Martínez-Corral, B. Javidi, R. Martínez-Cuenca, and G. Saavedra, “Integral imaging with improved depth of field by use of amplitude modulated microlens array,” Appl. Opt. 43, 5806–5813 (2004). [CrossRef]   [PubMed]  

8. J.-H. Park, J. Kim, Y. Kim, and B. Lee, “Resolution-enhanced three-dimension/two-dimension convertible display based on integral imaging,” Opt. Express 13, 1875–1884 (2005). [CrossRef]   [PubMed]  

9. D.-H. Shin, B. Lee, and E.-S. Kim, “Multi-direction-curved integral imaging with large depth by additional use of a large-aperture lens,” Appl. Opt. 45, 7375–7381 (2006). [CrossRef]   [PubMed]  

10. H. Arimoto and B. Javidi, “Integral three-dimensional imaging with digital reconstruction,” Opt. Lett. 26, 157–159 (2001). [CrossRef]  

11. Y. Frauel and B. Javidi, “Digital three-dimensional image correlation by use of computer-reconstructed integral imaging,” Appl. Opt. 41, 5488–5496 (2002). [CrossRef]   [PubMed]  

12. S.-H. Hong, J.-S. Jang, and B. Javidi, “Three-dimensional volumetric object reconstruction using computational integral imaging,” Opt. Express 12, 483–491 (2004). [CrossRef]   [PubMed]  

13. S.-H. Hong and B. Javidi, “Improved resolution 3D object reconstruction using computational integral imaging with time multiplexing,” Opt. Express 12, 4579–4588 (2004). [CrossRef]   [PubMed]  

14. J.-S. Park, D.-C. Hwang, D.-H. Shin, and E.-S. Kim, “Resolution-enhanced computational integral imaging reconstruction using intermediate-view reconstruction technique,” Opt. Eng. 45, 117004 (2006). [CrossRef]  

15. S.-H. Hong and B. Javidi, “Three-dimensional visualization of partially occluded objects using integral imaging,” J. Display Technol. 1, 354 (2005). [CrossRef]  

16. B. Javidi, R. Ponce-Diaz, and S.-H. Hong, “Three-dimensional recognition of occluded objects using volumetric reconstruction,” Opt. Lett. 31, 1106–1108 (2006). [CrossRef]   [PubMed]  

17. W. K. Pratt, Digital Image Processing, (New York: Wiley, 1991).

18. E. Meijering, “A chronology of interpolation: From ancient astronomy to modern signal and image processing,” Proc. IEEE 90, 319–342 (2002). [CrossRef]  

19. T. Blu, P. Thevenaz, and M. Unser, “Linear interpolation revitalized,” IEEE Trans. Image Proc. 13, 710–719 (2004). [CrossRef]  

20. H. Yoo, “Closed-form least-squares technique for adaptive linear image interpolation,” Elect. Lett. 43, 210–212 (2007). [CrossRef]  

21. R. G. Keys, “Cubic convolution interpolation for digital image processing,” IEEE Trans. Acoust. Speech Signal Process. 29, 1153–1160 (1981). [CrossRef]  


