Abstract

We propose a new method for rectifying geometrical distortion in an elemental image set and extracting accurate lens lattice lines by projective image transformation. Information on the distortion in the acquired elemental image set is found by the Hough transform algorithm. With this initial distortion information, the acquired elemental image set is rectified automatically, without prior knowledge of the characteristics of the pickup system, by a stratified image transformation procedure. Computer-generated elemental image sets with intentionally introduced distortion are used to verify the proposed rectification method. Experimentally captured elemental image sets are optically reconstructed before and after rectification by the proposed method. The experimental results support the validity of the proposed method, showing high accuracy of image rectification and lattice extraction.

©2010 Optical Society of America

1. Introduction

Integral imaging is one of the autostereoscopic three-dimensional (3D) display techniques, initially formulated by Lippmann in 1908 [1]. This technique produces full-color 3D images with horizontal and vertical parallax. It also supports multiple simultaneous viewers with continuous viewing points. Integral imaging is composed of a pickup procedure for 3D information acquisition and a reconstruction procedure for 3D image display, as depicted in Fig. 1. For 3D information acquisition in the pickup procedure, an object is recorded as an elemental image set through a lens array by using an image sensor such as a charge-coupled device (CCD). In the reconstruction procedure, the elemental image set is loaded on a two-dimensional (2D) display panel and transformed into an integrated 3D image of the object through a suitable lens array [2–5].

Fig. 1 Integral imaging, which is sequentially processed through the two procedures of (a) pickup and (b) reconstruction.

In recent years, numerous methods of 3D image data processing using elemental image sets have been introduced, such as depth map calculation, object recognition, and 3D structure reconstruction [6–9]. To use the elemental image set effectively in 3D image data processing, the 3D information of the object should be acquired without distortion or loss of data. However, in the pickup procedure for a real object, the recorded elemental image set suffers spatial distortions: geometric distortions caused by translational and rotational misalignments between the lens array and the CCD plane, and barrel or pincushion distortion due to lens aberration. Another problem in the pickup procedure is a non-integer ratio between the CCD pixel pitch and the elemental lens pitch. This mismatch makes the elemental images unequal in size, which deteriorates accurate data processing of 3D images. Geometrical errors in the elemental image set also cause problems in the reconstruction procedure, producing spatial distortion in the optically reconstructed 3D image [10,11]. Recently, several studies have attempted to correct geometrical distortions in elemental image sets. Aggoun and Sgouros et al. used the Hough transform to find the tilt angle of the lens array and correct the rotational distortion [12,13]. Lee et al. attached surface markers to the lens array to find information on the geometrical distortion in the elemental image set and applied a linear transformation to correct it [14]. These two methods, however, have limitations: the former can correct only the rotational distortion, while the latter needs prior knowledge about the pickup circumstances and suffers information loss due to the screening effect of the markers attached to the lens array.

In this paper, we propose a new method for rectifying geometrical distortion in the elemental image set and extracting an accurate lens lattice with minimal prior knowledge of the characteristics of the pickup system. The proposed method assumes that the acquired, distorted elemental image set is related to an undistorted, geometrically rectified elemental image set by a projective image transformation. To initialize the projective image transformation, the Hough transform [15] is used to find initial information on the geometrical distortions in the acquired elemental image set. With this initial distortion information, the acquired elemental image set is rectified into affine and metric images sequentially [16]. After rectifying the distortions, a similarity transform is finally applied to correct the mismatch between the CCD pixel pitch and the lens pitch and to extract the lens lattice. The proposed method is verified by comparing the transformation matrices extracted by the proposed procedure from a computer-generated elemental image set with those used to generate the intentionally distorted elemental image set. In addition, to demonstrate the validity of the proposed rectification method, optically reconstructed images obtained from experimentally captured elemental image sets of two real objects are compared before and after rectification.

2. Proposed method for the projective image transformation

In the pickup procedure, an elemental image set is obtained on the focal plane in front of a lens array. When the CCD plane is misaligned with respect to this image plane, the elemental image set is mapped into a geometrically distorted image on the CCD plane. This procedure can be described using a projective transform. With points x on the CCD plane and points x′ on the focal plane of the lens array, the projective transform relation is represented by

x′ = H′x,   (1)
where H′ is the projective transformation matrix, and x and x′ are represented in homogeneous coordinates. Note that the barrel or pincushion distortion caused by the aberration of the lens array is ignored in Eq. (1). The purpose of the proposed method is to find the undistorted elemental image points x′ from the acquired distorted elemental image points x. It is known that the inverse of a projective transformation matrix is also a projective transformation matrix [17]. Therefore, with H = (H′)−1, the main objective is to find the projective transformation matrix H that satisfies

x′ = (H′)−1x = Hx.   (2)

Generally, a projective transformation matrix has 8 degrees of freedom. Hence, if four or more point correspondences between x and x′ are known, the projective transformation can be estimated directly and the acquired image rectified [17]. In this paper, however, stratification of the projective transform is used instead of direct estimation, so that length and angle information can be used in place of known point correspondences.
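For reference, the direct approach that the stratified method replaces can be sketched with the standard direct linear transform (DLT): each correspondence gives two linear equations in the entries of H, and the solution is the null vector of the stacked system. The following NumPy sketch (an illustration, not the paper's method) recovers a known homography from four exact correspondences:

```python
import numpy as np

def estimate_homography(src, dst):
    """DLT estimate of the 3x3 H mapping src points to dst points.

    src, dst: (N, 2) arrays of corresponding points, N >= 4.
    Returns H up to scale, normalized so H[2, 2] = 1.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # H is the null vector of A: right singular vector of the smallest
    # singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Example: recover a known projective warp from the four corners of a square.
H_true = np.array([[1.1, 0.05, 3.0],
                   [0.02, 0.95, -2.0],
                   [1e-4, 2e-4, 1.0]])
src = np.array([[0, 0], [100, 0], [100, 100], [0, 100]], dtype=float)
dst_h = (H_true @ np.c_[src, np.ones(4)].T).T      # homogeneous image points
dst = dst_h[:, :2] / dst_h[:, 2:]                  # dehomogenize
H_est = estimate_homography(src, dst)
```

With exact correspondences the 8×9 system has a one-dimensional null space, so the estimate matches the true matrix up to floating-point error.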

The projective image transformation can be represented as a cascade of similarity, affine, and pure-projective transforms. With Hs, Ha, and Hp denoting the similarity, affine, and pure-projective transformation matrices, respectively, the projective transformation matrix H is decomposed into

H = HsHaHp,   (3)
Hs = [ sR  t ; 0T  1 ],   Ha = [ 1/β  −α/β  0 ; 0  1  0 ; 0  0  1 ],   Hp = [ 1  0  0 ; 0  1  0 ; l1  l2  l3 ],   (4)
where s is the isotropic scaling, R the rotation matrix, t the translation vector, (α ± iβ, 1, 0) the affine-transformed circular points, and lv = (l1, l2, l3) the vanishing line in the projective-transformed image.

In this paper, the transformation matrix H is estimated by recovering the three unit transforms Hs, Ha, and Hp sequentially. First, the pure-projective distortion is removed by applying Hp. A vanishing line lv = (l1, l2, l3), which connects the vanishing points in the projective-transformed elemental image set, is detected and used to find the pure-projective transformation matrix Hp. Next, the affine distortion is removed by applying Ha. The parameters α and β in Ha are estimated using prior knowledge of the lens array shape, i.e., the length ratio and angle between two intersecting boundaries of the lens array. Finally, the similarity transform is recovered by applying Hs, which is estimated by detecting the rotation angle and the elemental image size. Recovery of the translation t is ignored in the proposed rectification algorithm. Figure 2 illustrates the proposed stratification of a projective-transformed elemental image set.
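The three factors of Eq. (4) can be written down directly once their parameters are known. The NumPy sketch below assembles them; the parameter values are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Illustrative parameter values (assumed, for demonstration only).
s, theta = 1.2, np.deg2rad(5.0)        # isotropic scale and rotation angle
alpha, beta = 0.1, 0.9                 # affine circular-point parameters
l1, l2, l3 = 2e-4, -1e-4, 1.0          # vanishing line (l1, l2, l3)

R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
t = np.array([0.0, 0.0])               # translation (ignored in the paper)

# Similarity: scaled rotation plus translation.
Hs = np.eye(3)
Hs[:2, :2] = s * R
Hs[:2, 2] = t

# Affine: shear/aspect described by the circular points (alpha, beta).
Ha = np.array([[1.0 / beta, -alpha / beta, 0.0],
               [0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0]])

# Pure projective: identity except for the vanishing line in the last row.
Hp = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 0.0],
               [l1, l2, l3]])

H = Hs @ Ha @ Hp                        # full projective transformation
```

The stratified recovery described next estimates these three factors in reverse order: Hp from the vanishing line, Ha from angle and length-ratio constraints, and Hs from the detected skew angle and lens size.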

Fig. 2 Stratification of the projective image transformation into a cascade process of pure-projective (Hp), affine (Ha), and similarity (Hs) transforms.

2.1 Preprocessing: detection of initial distortion information by Hough transform

In the proposed method, the shape of the lens array is used as prior knowledge. If the lens array consists of regular square lenses, the undistorted elemental image set has a 2D lens-boundary lattice pattern of regular squares. Through the CCD capturing process, this 2D lattice pattern is distorted into a tetragonal shape on the CCD plane according to Eq. (1). For rectification of the elemental image set, information about its geometrical distortions can be obtained by detecting the lattice pattern in the acquired distorted elemental image set. More specifically, the four side lines of the tetragonal lattice are detected, and the length ratios and angles between adjacent side lines are utilized in the later processing.

For reliable detection of the lattice pattern, preprocessing is performed on the acquired elemental image set. Figures 3(a) and 3(b) show, respectively, an example object and its distorted elemental image set, acquired purposely by an inverse projective image transformation from an undistorted, rectified elemental image set of the object. The acquired distorted elemental image set is converted into a gray-scale image and then segmented by a gray-level histogram segmentation algorithm [18], which distinguishes the object area from the background. The boundaries between the object area and the background are outlined by applying Canny edge detection [19] to the segmented image. When the object and background have similar intensity profiles, edges are not detected accurately because the histogram segmentation becomes difficult; in this paper we assume that the object and background have different intensity profiles. To find a 2D lattice pattern with two transverse and two longitudinal lattice lines in the resultant edge image, we use the Hough transform, which is widely used in image processing for detecting straight lines. For more accurate results, median filters are applied along the transverse and longitudinal directions before the Hough transform is employed. In the Hough transform results, the two maximum peak lines are selected for each of the transverse and longitudinal directions. In our framework, the Hough transform has 0.1-degree accuracy, and the skew angle due to the geometric distortion is assumed to remain within ±20 degrees.
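A minimal accumulator-based Hough transform restricted to the stated ±20° search range at 0.1° steps can be sketched as follows. This is an illustrative NumPy implementation, not the paper's MATLAB code, and the synthetic test image is an assumption:

```python
import numpy as np

def hough_peak(edges, theta_res_deg=0.1, theta_max_deg=20.0):
    """Return (rho, theta_deg) of the strongest line in a binary edge image.

    Line model: rho = x*cos(theta) + y*sin(theta). The angle search is
    limited to +/- theta_max_deg around vertical, matching the assumption
    that the skew stays within +/-20 degrees, at 0.1-degree steps.
    """
    ys, xs = np.nonzero(edges)
    thetas = np.deg2rad(np.arange(-theta_max_deg,
                                  theta_max_deg + theta_res_deg,
                                  theta_res_deg))
    diag = int(np.ceil(np.hypot(*edges.shape)))
    acc = np.zeros((2 * diag + 1, thetas.size), dtype=np.int32)
    for x, y in zip(xs, ys):
        # Each edge pixel votes for one (rho, theta) cell per theta column.
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(thetas.size)] += 1
    r_idx, t_idx = np.unravel_index(np.argmax(acc), acc.shape)
    return r_idx - diag, np.rad2deg(thetas[t_idx])

# Synthetic test: a vertical edge line at x = 12 in a 40x40 image.
img = np.zeros((40, 40), dtype=bool)
img[:, 12] = True
rho, theta_deg = hough_peak(img)
```

In practice the four lattice side lines would be taken as the two strongest peaks in each of the transverse and longitudinal accumulator halves, after the median filtering described above.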

Fig. 3 Preprocessing images of (a) an object, (b) a distorted elemental image set acquired by known projective transformation matrices, and (c) a tetragonal edge image with four peak lines of Hough transform.

Figure 3(c) shows the resultant edge image with the four peak lines of the Hough transform. The detected four side lines of the lens lattice are used to find the pure-projective distortion in the geometrically distorted elemental image set, as described in Section 2.2. In Section 2.3, the resultant affine image set with a parallelogram shape is corrected to a metric image set with a rectangular shape using the length ratio between adjacent side lines of the parallelogram.

2.2 Correction of pure-projective distortion

The pure-projective distortion correction recovers affine properties from the geometrically distorted elemental image set. Affine geometry preserves parallelism: two peak lines along the transverse or longitudinal direction that are parallel in the world coordinates remain parallel in an affine image. The pure-projective distortion shears these parallel lines so that they converge to vanishing points. Hence, by detecting these vanishing points, it is possible to recover the affine property, or equivalently to identify the pure-projective transformation matrix Hp in Eq. (4). The line passing through the vanishing points is the vanishing line lv = (l1, l2, l3) of the projective-transformed image, and it is well known that Hp can be expressed in terms of this vanishing line, as in Eq. (4). Consequently, the affine property can be recovered, and the pure-projective distortion corrected, by detecting the vanishing line in the distorted image. When the lens array consists of rectangular lenses, the lens lattice lines are mutually parallel horizontally and vertically in the world coordinates. This parallel lens lattice is distorted so as to converge at vanishing points in the acquired elemental image set. Since the lens lattice was detected in the previous step, as shown in Fig. 3(c), the converging points, i.e., the vanishing points, can be calculated, and thus the vanishing line lv = (l1, l2, l3) is obtained. Using the obtained vanishing line, the pure-projective transformation matrix Hp is calculated by Eq. (4), and the pure-projective distortion is corrected by applying Hp to the acquired elemental image set. The elemental image set of Fig. 3(c) rectified from the pure-projective distortion is shown in Fig. 4(a). It can be observed in Fig. 4(a) that the transverse and longitudinal peak lines are now parallel to each other, confirming recovery of the affine property of parallelism.
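This step is convenient in homogeneous coordinates, where both the line through two points and the intersection of two lines are cross products. The following NumPy sketch (an illustration under the assumption of two detected line pairs that are parallel in the world; the distortion matrix is synthetic) builds Hp from the vanishing line:

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two homogeneous points (cross product)."""
    return np.cross(p, q)

def rectify_projective(pairs):
    """Build Hp from two pairs of image lines that are parallel in the world.

    pairs: [(lineA1, lineA2), (lineB1, lineB2)], homogeneous line triples.
    Each pair intersects in a vanishing point; the line through the two
    vanishing points is the vanishing line l_v = (l1, l2, l3), which Hp
    maps back to the line at infinity.
    """
    v1 = np.cross(*pairs[0])           # vanishing point of the first pair
    v2 = np.cross(*pairs[1])           # vanishing point of the second pair
    lv = np.cross(v1, v2)
    lv = lv / lv[2]                    # normalize so that l3 = 1
    Hp = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [lv[0], lv[1], lv[2]]])
    return Hp, lv

# Example: a square lattice cell distorted by a known pure-projective map.
Hd = np.array([[1, 0, 0], [0, 1, 0], [1e-3, 2e-3, 1.0]])
corners = np.array([[0, 0, 1], [100, 0, 1],
                    [100, 100, 1], [0, 100, 1]], dtype=float)
img = (Hd @ corners.T).T
top = line_through(img[3], img[2])     # parallel to bottom in the world
bottom = line_through(img[0], img[1])
left = line_through(img[0], img[3])    # parallel to right in the world
right = line_through(img[1], img[2])
Hp, lv = rectify_projective([(top, bottom), (left, right)])
```

Applying Hp to the distorted points sends the vanishing line back to infinity, so world-parallel lattice lines become parallel again.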

Fig. 4 Correction procedures by rectifications (a) of the pure-projective distortion into an affine geometry elemental image set with a parallelogram shape, and (b) of the affine distortion into a metric geometry elemental image set with a rectangular shape.

2.3 Correction of affine distortion

In this step, the metric property is recovered from the obtained affine image. The two pairs of side lines remain parallel in the affine image, but the angle and length ratio between non-parallel adjacent lines are not preserved. Recovering the metric property means correcting these angles and length ratios. This is done by finding the affine transformation matrix Ha in Eq. (4) and applying it to the affine image, i.e., the image corrected for pure-projective distortion in the previous step. The matrix Ha has two degrees of freedom, described by the parameters α and β, which define the image of the circular points of the metric plane. Since Ha has two degrees of freedom, two independent constraints are required to estimate it. In this paper, a known length ratio and a known angle between two lines are used as constraints. The lens array is again assumed to be composed of square lenses, so horizontal and vertical lens boundary lines are perpendicular in the world coordinates. The length ratio between horizontal and vertical boundary lines can also be calculated from the number of lenses and the side length of each square lens.

Figure 4(a) shows the elemental image set obtained by correction of pure-projective distortion, which contains the constraint parameters for correction of the affine distortion. Since the two lines, l1 connecting the points (x1, y1) and (x3, y3) and l2 connecting (x1, y1) and (x2, y2), represent lens boundaries in this figure, the angle between them is 90° in the world coordinates. From this known angle constraint, it can be verified that the parameters α and β in Ha lie on a constraint circle with a center point

(cα, cβ) = ((d1 + d2)/2, 0),   (5)
and a radius
R = |d1 − d2| / 2,   (6)
where d1 = (x1 − x2)/(y1 − y2) = Δx1/Δy1 and d2 = (x1 − x3)/(y1 − y3) = Δx2/Δy2.

If the length ratio of the two lines l1 and l2 in the world coordinates is known to be r, the parameters α and β lie on another circle with a center point

(cα, cβ) = ((Δx2Δy2 − r²Δx1Δy1) / (Δy2² − r²Δy1²), 0),   (7)
and a radius

R = |r(Δx1Δy2 − Δx2Δy1) / (Δy2² − r²Δy1²)|.   (8)

Since the size of each square lens in the world coordinates is known, the length ratio r can be determined once the number of lenses on the detected lines l1 and l2 is estimated. To find the number of lenses, a projective profile of the elemental image set compensated for pure-projective distortion in Fig. 4(a) is calculated after Canny edge detection and median filtering. Since the lens boundaries are outlined by high edge values, this projective profile is expected to have high peaks where the lens boundaries are located. The affine property was already restored in the previous step, so the two pairs of lens boundaries can be assumed parallel with uniform spacing. The number of lenses, or the lens size, in the compensated elemental image set can then be estimated by maximizing the inner product of the projective profile with an impulse train while varying the interval and offset of the impulse train. Interpolated projective profiles can be used to improve the sub-pixel accuracy; in the proposed system, the projective profile is interpolated by a factor of 10, minimizing errors from the discrete pixel structure. The projective profile of the compensated elemental image set along the longitudinal direction, the l2 direction, is shown in Fig. 5(a). An impulse train at the detected lens pitch is shown in Fig. 5(b) along with the projective profile interpolated by a factor of 10. Using the detected lens size, the number of lenses on the lines l1 and l2 in Fig. 4(a) can be counted, and the length ratio r is obtained.
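The interval-and-offset search over an interpolated profile can be sketched as below. This is an illustrative NumPy version; the function names and the synthetic profile are assumptions, not the paper's data:

```python
import numpy as np

def estimate_pitch(profile, pitch_range, upsample=10):
    """Estimate (pitch, offset) in pixels from an edge-projection profile.

    The profile is linearly interpolated by `upsample` for sub-pixel
    accuracy; the sum of profile samples hit by an impulse train is then
    maximized while sweeping the train's interval and offset.
    """
    n = len(profile)
    xf = np.arange(0, n - 1 + 1e-9, 1.0 / upsample)
    prof = np.interp(xf, np.arange(n), profile)
    best = (-np.inf, None, None)
    for interval in range(int(pitch_range[0] * upsample),
                          int(pitch_range[1] * upsample) + 1):
        for offset in range(interval):
            score = prof[offset::interval].sum()
            if score > best[0]:
                best = (score, interval / upsample, offset / upsample)
    return best[1], best[2]

# Synthetic profile: boundary peaks every 15 pixels starting at pixel 4.
profile = np.zeros(150)
profile[4::15] = 1.0
pitch, offset = estimate_pitch(profile, pitch_range=(10, 20))
```

The offset locates the first lens boundary, and dividing a detected side-line length by the pitch gives the lens count needed for the ratio r.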

Fig. 5 (a) A projective profile of the compensated elemental image set along the longitudinal direction in units of pixels and (b) the projective profile interpolated by a factor of 10, plotted with an impulse train at the detected lens pitch.

We now have two constraint circles obtained from the known angle and length-ratio constraints. The parameters α and β are obtained by finding the intersection of these two circles, and Ha is then calculated by Eq. (4). Figure 4(b) shows the elemental image set resulting from correction of the affine distortion, obtained by applying Ha to the affine elemental image set, i.e., the elemental image set compensated for pure-projective distortion in Fig. 4(a). Figure 4(b) clearly shows that the angle information is recovered and the resultant image has metric geometry properties.
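The two constraint circles and their intersection are simple closed-form computations. The sketch below illustrates the whole step; the synthetic points and the α, β values used for testing are assumptions, constructed by shearing a known metric right angle:

```python
import math

def perp_circle(p1, p2, p3):
    """Constraint circle from the right angle between line l2 (through
    p1, p2) and line l1 (through p1, p3). Returns (c_alpha, radius);
    the center always lies on the alpha-axis (c_beta = 0)."""
    d1 = (p1[0] - p2[0]) / (p1[1] - p2[1])
    d2 = (p1[0] - p3[0]) / (p1[1] - p3[1])
    return (d1 + d2) / 2.0, abs(d1 - d2) / 2.0

def ratio_circle(p1, p2, p3, r):
    """Constraint circle from a known world length ratio r = |l1| / |l2|."""
    dx1, dy1 = p1[0] - p2[0], p1[1] - p2[1]
    dx2, dy2 = p1[0] - p3[0], p1[1] - p3[1]
    denom = dy2 ** 2 - r ** 2 * dy1 ** 2
    c = (dx2 * dy2 - r ** 2 * dx1 * dy1) / denom
    R = abs(r * (dx1 * dy2 - dx2 * dy1) / denom)
    return c, R

def intersect_circles(c1, r1, c2, r2):
    """Intersect two circles centered on the alpha-axis; return (alpha, beta > 0)."""
    alpha = (c1 ** 2 - c2 ** 2 - r1 ** 2 + r2 ** 2) / (2.0 * (c1 - c2))
    beta = math.sqrt(r1 ** 2 - (alpha - c1) ** 2)
    return alpha, beta

# Synthetic affine image of a right angle at p1 with |l1| = 2 |l2|,
# obtained by shearing metric points with alpha = 0.2, beta = 1.3.
p1, p2, p3 = (0.0, 0.0), (4.7, 4.0), (-9.2, 6.0)
c1, r1 = perp_circle(p1, p2, p3)
c2, r2 = ratio_circle(p1, p2, p3, r=2.0)
alpha, beta = intersect_circles(c1, r1, c2, r2)
```

Because both circles are centered on the α-axis, the intersection reduces to one linear equation for α followed by a square root for β; the positive root is taken.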

2.4 Correction of rotation and scale distortions along with extraction of lens lattice

The final step of the proposed method is correction of the skew angle and scale by estimating the similarity transformation matrix Hs in Eq. (4). Since the affine property was recovered in the previous step, the angle and length ratio between adjacent side lines are already corrected. The skew angle is calculated from the line direction vectors, and the rotation matrix R in Hs follows from this angle. The scaling factor s is determined from the lengths of the lines and the number of lenses on them, so that the lens pitch becomes an integer multiple of the CCD pixel pitch. Figure 6 shows the elemental image set obtained by correcting the rotation and scale distortions with the similarity transform matrix Hs.
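This final correction can be sketched as follows; the function name and the numeric example (a 3° skew and 25 lenses spanning 612.5 pixels, rescaled to a 25-pixel pitch) are illustrative assumptions:

```python
import numpy as np

def similarity_correction(skew_deg, line_len_px, n_lenses, target_pitch_px):
    """Build Hs that removes the detected skew and rescales the image so
    that each lens maps to an integer pixel pitch (target_pitch_px).

    line_len_px / n_lenses is the current, generally non-integer, lens
    pitch measured along a detected boundary line.
    """
    s = target_pitch_px / (line_len_px / n_lenses)   # isotropic scale
    th = np.deg2rad(-skew_deg)                       # derotate the lattice
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    Hs = np.eye(3)
    Hs[:2, :2] = s * R                               # translation ignored
    return Hs

# Example: lattice skewed by 3 degrees; 25 lenses span 612.5 px, so the
# pitch is 24.5 px, rescaled to exactly 25 px per lens.
Hs = similarity_correction(3.0, 612.5, 25, 25)
```

After applying Hs, lattice lines along the detected skew direction become axis-aligned and the lens pitch is an integer number of pixels, which lets the lattice be drawn directly on the rectified image as in Fig. 6.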

Fig. 6 A rectified elemental image set with extracted lens lattice lines on it, which is obtained by the proposed method of projective image transformation.

The geometric distortion of the original elemental image set shown in Fig. 3(b) has been successfully removed by the proposed projective image transformation, as shown in Fig. 6. The figure also indicates that the detected lens lattice lines are located accurately on the boundaries of the elemental images, enabling exact identification of each elemental image.

The transformation matrices extracted by the proposed method are listed in Table 1, in comparison with those used to generate the distorted elemental image set. We measured the peak signal-to-noise ratio (PSNR) between the elemental image sets transformed with the simulated and the extracted transformation matrices in Table 1. The PSNR values are 27.59 dB, 22.77 dB, and 20.98 dB for the three steps Hp, Ha, and Hs, respectively. The PSNR between the final elemental image sets rectified by the simulated and extracted transforms is 25.67 dB. These results show that the proposed method introduces some error at each step of the stratified transforms. Nevertheless, the PSNR of the final rectified elemental image sets confirms that the proposed method extracts each transformation matrix with reasonably high accuracy.
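The PSNR figure of merit used above is the standard one for 8-bit images; a minimal sketch, with a synthetic example that is an assumption rather than the paper's data:

```python
import numpy as np

def psnr(img_a, img_b, peak=255.0):
    """Peak signal-to-noise ratio in dB between two same-size images."""
    mse = np.mean((img_a.astype(float) - img_b.astype(float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Example: a uniform error of 4 gray levels gives
# 20*log10(255/4) = 36.09 dB.
a = np.zeros((8, 8), dtype=np.uint8)
b = np.full((8, 8), 4, dtype=np.uint8)
value = psnr(a, b)
```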

Table 1. Comparison of the transformation matrices between the simulated and extracted results

3. Experiments on rectified elemental and optically reconstructed images of real objects

To evaluate the proposed projective image transformation on real pickup images, we performed experiments using elemental image sets optically picked up from two different 3D objects: a textured box and a letter ‘2’ candle, of 50 mm and 30 mm size, respectively. In both experiments, we used a lens array consisting of regular square elemental lenses with a 1 mm lens pitch. Figure 7 shows a photograph and the optical arrangement of the experimental setup for optically picking up the elemental image set of an object. In this figure, dobject is the distance of the object from the lens array, and θx and θy are the rotation angles of the camera about the x-axis and y-axis, respectively. In the experiment with the textured box, dobject is 40 mm and both θx and θy are 7°, causing a mismatch between the lens array and the camera CCD plane. In the experiment with the letter ‘2’ candle, dobject is 30 mm and θx and θy are 2° and 3°, respectively. The two objects are shown in Fig. 8(a), and their elemental image sets optically captured with geometrical distortions are shown in Fig. 8(b). The sizes of the acquired elemental image sets of the letter ‘2’ candle and the textured box are 2247 × 2271 pixels and 2400 × 2300 pixels, respectively. Lens distortions such as barrel or pincushion distortion in the acquired elemental image sets were corrected with the commercial software ‘PTlens’ before the proposed rectification method was applied [20].

Fig. 7 Photograph and optical arrangement of the experimental setup for picking up the 3D object images.

Fig. 8 (a) 3D objects of a textured box (up) and a letter ‘2’ candle (down) used in the experiments, and (b) their corresponding elemental image sets optically captured with geometrical distortions on the CCD plane.

The transform results of the elemental image sets, corrected from the experimentally acquired ones in Fig. 8(b) by the projective image transformation, are presented sequentially in Fig. 9 for the textured box and in Fig. 10 for the letter ‘2’ candle. Both figures illustrate the initial detection of the distorted lens lattice using the Hough transform peaks, the sequential recovery of affine and metric geometry, and finally the rectified elemental image set with the extracted lattice structure. The transform matrices Hp, Ha, and Hs extracted by the proposed method are summarized in Table 2 for the textured box and in Table 3 for the letter ‘2’ candle. We used MATLAB 7.8 to implement the proposed rectification algorithm on a 2.40-GHz Core 2 personal computer with 4 GB of RAM. The computation times for rectification are 56.38 seconds for the letter ‘2’ candle and 62.67 seconds for the textured box. From Figs. 9 and 10, it can be observed that the geometrically distorted elemental image sets are rectified effectively and the lens boundary lattices are accurately extracted by the proposed projective image transformation.

Fig. 9 Sequential corrections of distortions in the pickup procedure for the elemental image set captured optically from the textured box object: (a) initial distortion detection with Hough transform peaks, (b) affine geometry recovered image, (c) metric geometry recovered image, and (d) extracted lens lattice lines in a close-up of the upper left corner of (c), shown as an inset.

Fig. 10 Sequential corrections of distortions in the pickup procedure for the elemental image set captured optically from the letter ‘2’ candle object: (a) initial distortion detection with Hough transform peaks, (b) affine geometry recovered image, (c) metric geometry recovered image, and (d) extracted lens lattice lines in a close-up of the upper left corner of (c), shown as an inset.

Table 2. Transform matrices extracted for the textured box images in the first experiment.

Table 3. Transform matrices extracted for the letter ‘2’ candle images in the second experiment.

To confirm the effectiveness of the proposed transformation method for correct rectification of elemental image sets of real objects, the rectified elemental image sets in Figs. 9(c) and 10(c) are optically reconstructed in the reconstruction procedure using lens arrays with the same lattice structure as extracted in Figs. 9(d) and 10(d). The resultant 3D images optically reconstructed for the two objects are shown in Fig. 11(b). For comparison, Fig. 11(a) shows the 3D images optically reconstructed in the same way from the original distorted elemental image sets in Fig. 8(b). As seen in Fig. 11, the 3D images reconstructed from the original distorted elemental image sets exhibit spatial deformation and blur. In contrast, the 3D images reconstructed from the elemental image sets rectified by the proposed method show no spatial distortion or deformation, yielding quite clear images of the two objects.

Fig. 11 Optically reconstructed 3D images (a) from the distorted elemental image sets and (b) from the elemental image sets rectified by the proposed method.

4. Conclusion

In this paper, we have proposed a new method for rectification of a geometrically distorted elemental image set and extraction of the lens boundary lattice structure. Since the distortion information in the elemental image set is found by the Hough transform algorithm, the proposed projective image transformation can rectify the geometrical distortion without prior knowledge of the characteristics of the pickup system. By way of the stratified image transformation procedure, the proposed method successfully corrects the geometrical distortions in consecutive order. The transformation matrices extracted by the proposed procedure are in good agreement with those used to generate the computer-generated elemental image sets with distortions. The experimental results for the optically captured elemental image sets of real 3D objects support the validity of the proposed method, with high accuracy of image rectification and lattice extraction as well as reasonably high quality of the reconstructed 3D images.

Acknowledgment

This research was supported by Basic Research Program through the National Research Foundation (NRF) of Korea funded by the Ministry of Education, Science and Technology (2009-0088705).

References and links

1. G. Lippmann, “La photographie intégrale,” C.R. Acad. Sci Ser. IIc: Chim. 146, 446–451 (1908).

2. B. Lee, J.-H. Park, and S.-W. Min, Digital Holography and Three-Dimensional Display, T.-C. Poon, ed. (Springer US, 2006), Chap. 12.

3. J.-H. Park, K. Hong, and B. Lee, “Recent progress in three-dimensional information processing based on integral imaging,” Appl. Opt. 48(34), H77–H94 (2009). [CrossRef]   [PubMed]  

4. M. C. Forman, N. Davies, and M. McCormick, “Continuous parallax in discrete pixilated integral three-dimensional displays,” J. Opt. Soc. Am. A 20(3), 411–420 (2003). [CrossRef]  

5. F. Okano, H. Hoshino, J. Arai, and I. Yuyama, “Real-time pickup method for a three-dimensional image based on integral photography,” Appl. Opt. 36(7), 1598–1603 (1997). [CrossRef]   [PubMed]  

6. J.-H. Park, S. Jung, H. Choi, Y. Kim, and B. Lee, “Depth extraction by use of a rectangular lens array and one-dimensional elemental image modification,” Appl. Opt. 43(25), 4882–4895 (2004). [CrossRef]   [PubMed]  

7. J.-H. Park, S. Jung, H. Choi, and B. Lee, “Detection of the longitudinal and the lateral positions of a three-dimensional object using a lens array and joint transform correlator,” Opt. Mem. Neural Networks. 11, 181–188 (2002).

8. G. Passalis, N. Sgouros, S. Athineos, and T. Theoharis, “Enhanced reconstruction of three-dimensional shape and texture from integral photography images,” Appl. Opt. 46(22), 5311–5320 (2007). [CrossRef]   [PubMed]  

9. D.-H. Shin and E.-S. Kim, “Computational integral imaging reconstruction of 3D object using a depth conversion technique,” J. Opt. Soc. Korea 12(3), 131–135 (2008). [CrossRef]  

10. M. Kawakita, H. Sasaki, J. Arai, F. Okano, K. Suehiro, Y. Haino, M. Yoshimura, and M. Sato, “Geometric analysis of spatial distortion in projection-type integral imaging,” Opt. Lett. 33(7), 684–686 (2008). [CrossRef]   [PubMed]  

11. J. Arai, M. Okui, M. Kobayashi, and F. Okano, “Geometrical effects of positional errors in integral photography,” J. Opt. Soc. Am. A 21(6), 951–958 (2004). [CrossRef]  

12. A. Aggoun, “Pre-processing of integral images for 3-D displays,” J. Display Technol. 2(4), 393–400 (2006). [CrossRef]  

13. N. P. Sgouros, S. S. Athineos, M. S. Sangriotis, P. G. Papageorgas, and N. G. Theofanous, “Accurate lattice extraction in integral images,” Opt. Express 14(22), 10403–10409 (2006). [CrossRef]   [PubMed]  

14. J.-J. Lee, D.-H. Shin, and B.-G. Lee, “Simple correction method of distorted elemental images using surface markers on lenslet array for computational integral imaging reconstruction,” Opt. Express 17(20), 18026–18037 (2009). [CrossRef]   [PubMed]  

15. R. C. Gonzalez, R. E. Woods, and S. L. Eddins, Digital Image Processing using MATLAB (Prentice Hall, 2004), Chap. 10.

16. D. Liebowitz, and A. Zisserman, “Metric Rectification for Perspective Images of Planes,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (Santa Barbara, CA, USA, June 23–25, 1998), p.482.

17. R. Hartley, and A. Zisserman, Multiple View Geometry in Computer Vision, second ed. (Cambridge University Press, Cambridge, 2000).

18. J. Delon, A. Desolneux, J.-L. Lisani, and A. B. Petro, “A nonparametric approach for histogram segmentation,” IEEE Trans. Image Process. 16(1), 253–261 (2007). [CrossRef]   [PubMed]  

19. J. F. Canny, “A computational approach to edge detection,” IEEE Trans. Pattern Anal. Mach. Intell. 8(6), 679–698 (1986).

20. http://epaperpress.com/ptlens/.

References

1. G. Lippmann, “La photographie intégrale,” C. R. Acad. Sci. 146, 446–451 (1908).

2. B. Lee, J.-H. Park, and S.-W. Min, Digital Holography and Three-Dimensional Display, T.-C. Poon, ed. (Springer US, 2006), Chap. 12.

3. J.-H. Park, K. Hong, and B. Lee, “Recent progress in three-dimensional information processing based on integral imaging,” Appl. Opt. 48(34), H77–H94 (2009). [CrossRef] [PubMed]

4. M. C. Forman, N. Davies, and M. McCormick, “Continuous parallax in discrete pixilated integral three-dimensional displays,” J. Opt. Soc. Am. A 20(3), 411–420 (2003). [CrossRef]

5. F. Okano, H. Hoshino, J. Arai, and I. Yuyama, “Real-time pickup method for a three-dimensional image based on integral photography,” Appl. Opt. 36(7), 1598–1603 (1997). [CrossRef] [PubMed]

6. J.-H. Park, S. Jung, H. Choi, Y. Kim, and B. Lee, “Depth extraction by use of a rectangular lens array and one-dimensional elemental image modification,” Appl. Opt. 43(25), 4882–4895 (2004). [CrossRef] [PubMed]

7. J.-H. Park, S. Jung, H. Choi, and B. Lee, “Detection of the longitudinal and the lateral positions of a three-dimensional object using a lens array and joint transform correlator,” Opt. Mem. Neural Networks 11, 181–188 (2002).

8. G. Passalis, N. Sgouros, S. Athineos, and T. Theoharis, “Enhanced reconstruction of three-dimensional shape and texture from integral photography images,” Appl. Opt. 46(22), 5311–5320 (2007). [CrossRef] [PubMed]

9. D.-H. Shin and E.-S. Kim, “Computational integral imaging reconstruction of 3D object using a depth conversion technique,” J. Opt. Soc. Korea 12(3), 131–135 (2008). [CrossRef]

10. M. Kawakita, H. Sasaki, J. Arai, F. Okano, K. Suehiro, Y. Haino, M. Yoshimura, and M. Sato, “Geometric analysis of spatial distortion in projection-type integral imaging,” Opt. Lett. 33(7), 684–686 (2008). [CrossRef] [PubMed]

11. J. Arai, M. Okui, M. Kobayashi, and F. Okano, “Geometrical effects of positional errors in integral photography,” J. Opt. Soc. Am. A 21(6), 951–958 (2004). [CrossRef]

12. A. Aggoun, “Pre-processing of integral images for 3-D displays,” J. Display Technol. 2(4), 393–400 (2006). [CrossRef]

13. N. P. Sgouros, S. S. Athineos, M. S. Sangriotis, P. G. Papageorgas, and N. G. Theofanous, “Accurate lattice extraction in integral images,” Opt. Express 14(22), 10403–10409 (2006). [CrossRef] [PubMed]

14. J.-J. Lee, D.-H. Shin, and B.-G. Lee, “Simple correction method of distorted elemental images using surface markers on lenslet array for computational integral imaging reconstruction,” Opt. Express 17(20), 18026–18037 (2009). [CrossRef] [PubMed]

15. R. C. Gonzalez, R. E. Woods, and S. L. Eddins, Digital Image Processing Using MATLAB (Prentice Hall, 2004), Chap. 10.

16. D. Liebowitz and A. Zisserman, “Metric rectification for perspective images of planes,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (Santa Barbara, CA, USA, June 23–25, 1998), p. 482.

17. R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, 2nd ed. (Cambridge University Press, 2003).

18. J. Delon, A. Desolneux, J.-L. Lisani, and A. B. Petro, “A nonparametric approach for histogram segmentation,” IEEE Trans. Image Process. 16(1), 253–261 (2007). [CrossRef] [PubMed]

19. J. F. Canny, “A computational approach to edge detection,” IEEE Trans. Pattern Anal. Mach. Intell. 8(6), 679–698 (1986).

20. http://epaperpress.com/ptlens/.



Figures (11)

Fig. 1 Integral imaging, processed sequentially through the two procedures of (a) pickup and (b) reconstruction.

Fig. 2 Stratification of the projective image transformation into a cascade of pure-projective (Hp), affine (Ha), and similarity (Hs) transforms.

Fig. 3 Preprocessing images: (a) an object, (b) a distorted elemental image set generated with known projective transformation matrices, and (c) a tetragonal edge image with the four peak lines of the Hough transform.

Fig. 4 Correction procedures: rectification (a) of the pure-projective distortion into an affine-geometry elemental image set with a parallelogram shape, and (b) of the affine distortion into a metric-geometry elemental image set with a rectangular shape.

Fig. 5 (a) Projective profile of the compensated elemental image set along the longitudinal direction, in units of pixels, and (b) the profile interpolated by a factor of 10, plotted with an impulse train at the detected lens pitch.

Fig. 6 Rectified elemental image set with the extracted lens lattice lines overlaid, obtained by the proposed method of projective image transformation.

Fig. 7 Photograph and optical arrangement of the experimental setup for picking up the 3D object images.

Fig. 8 (a) 3D objects, a textured box (top) and a letter ‘2’ candle (bottom), used in the experiments, and (b) their corresponding elemental image sets optically captured with geometrical distortions on the CCD plane.

Fig. 9 Sequential correction of distortions in the pickup procedure for the elemental image set captured optically from the textured box object: (a) initial distortion detection with Hough transform peaks, (b) recovered affine-geometry image, (c) recovered metric-geometry image, and (d) extracted lens lattice lines in the close-up of the upper-left corner inset in (c).

Fig. 10 Sequential correction of distortions in the pickup procedure for the elemental image set captured optically from the letter ‘2’ candle object: (a) initial distortion detection with Hough transform peaks, (b) recovered affine-geometry image, (c) recovered metric-geometry image, and (d) extracted lens lattice lines in the close-up of the upper-left corner inset in (c).

Fig. 11 Optically reconstructed 3D images (a) from the distorted elemental image sets and (b) from the elemental image sets rectified by the proposed method.

Tables (3)

Table 1 Comparison of the transformation matrices between the simulated and extracted results.

Table 2 Transform matrices extracted for the textured box images in the first experiment.

Table 3 Transform matrices extracted for the letter ‘2’ candle images in the second experiment.

Equations (8)

$$\mathbf{x}' = H\mathbf{x}, \tag{1}$$

$$\mathbf{x} = H^{-1}\mathbf{x}' = H'\mathbf{x}', \tag{2}$$

$$H = H_s H_a H_p, \tag{3}$$

$$H_s = \begin{bmatrix} sR & \mathbf{t} \\ \mathbf{0}^{T} & 1 \end{bmatrix}, \qquad H_a = \begin{bmatrix} 1/\beta & -\alpha/\beta & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad H_p = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ l_1 & l_2 & l_3 \end{bmatrix}, \tag{4}$$

$$(c_\alpha, c_\beta) = \left( \frac{d_1 + d_2}{2},\ 0 \right), \tag{5}$$

$$R = \left| \frac{d_1 - d_2}{2} \right|, \tag{6}$$

$$(c_\alpha, c_\beta) = \left( \frac{\Delta x_2 \Delta y_2 - r^2 \Delta x_1 \Delta y_1}{\Delta y_2^{2} - r^{2} \Delta y_1^{2}},\ 0 \right), \tag{7}$$

$$R = \left| \frac{r\,(\Delta x_1 \Delta y_2 - \Delta x_2 \Delta y_1)}{\Delta y_2^{2} - r^{2} \Delta y_1^{2}} \right|. \tag{8}$$
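As an illustrative sketch (not the authors' implementation), the following NumPy snippet builds the three stratified homography factors with hypothetical parameter values chosen only for demonstration, composes them into the full transformation, verifies the forward and inverse point mappings, and evaluates the circle center and radius from two hypothetical α-axis intersection values:

```python
import numpy as np

# Hypothetical parameter values for illustration; not taken from the paper.
l1, l2, l3 = 1e-4, 2e-4, 1.0        # pure-projective part (H_p)
alpha, beta = 0.1, 0.9              # affine distortion parameters (H_a)
s, theta = 1.0, np.deg2rad(2.0)     # similarity scale and rotation (H_s)
t = np.array([5.0, -3.0])           # similarity translation

# The three stratified factors of the homography.
Hp = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 0.0],
               [l1,  l2,  l3 ]])
Ha = np.array([[1.0 / beta, -alpha / beta, 0.0],
               [0.0,         1.0,          0.0],
               [0.0,         0.0,          1.0]])
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
Hs = np.eye(3)
Hs[:2, :2] = s * rot
Hs[:2, 2] = t

# The full transformation as a cascade: H = Hs Ha Hp.
H = Hs @ Ha @ Hp

def apply_homography(H, point):
    """Map an inhomogeneous 2D point through a 3x3 homography."""
    x = np.array([point[0], point[1], 1.0])
    xp = H @ x
    return xp[:2] / xp[2]

# Rectification is the inverse mapping: the round trip recovers the point.
pt = (10.0, 20.0)
distorted = apply_homography(H, pt)
recovered = apply_homography(np.linalg.inv(H), distorted)
print(np.allclose(recovered, pt))  # True

# Circle center and radius in the (alpha, beta) plane from two
# hypothetical alpha-axis intersections d1 and d2.
d1, d2 = -0.4, 1.2
c_alpha, c_beta = (d1 + d2) / 2.0, 0.0
radius = abs((d1 - d2) / 2.0)
print(c_alpha, radius)  # c_alpha ≈ 0.4, radius ≈ 0.8
```

Composing the factors in this order mirrors the stratified procedure: the pure-projective distortion is removed first, then the affine distortion, leaving only a similarity ambiguity.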
