Abstract

This paper presents a computational integral imaging reconstruction method based on the scale invariant feature transform (SIFT) and patch matching to improve the visual quality of reconstructed 3D view images. It is well known that 3D view images reconstructed from elemental images suffer from artifacts, which degrade the visual quality. To prevent this degradation, we use the correct regions obtained from view images captured directly from the original object, or use patch matching, to replace the distorted regions. However, the initial matching regions cannot meet our requirements owing to the limitations of the equipment and the inevitable shortcomings of the experimental operation. To solve these problems, we adopt SIFT descriptors and a perspective transform to obtain satisfactory correct regions. We present simulation and experimental results for the 3D view images, together with an evaluation of the corresponding image quality, to test the performance of the proposed method. The simulation and experimental results indicate that the proposed method can significantly improve the visual quality of the 3D view images and verify its feasibility and effectiveness.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Three-dimensional (3D) imaging and display techniques have received a significant amount of interest owing to their various applications in scientific research, medical treatment, industry, and so on. The authors in [1] proposed an ordered-dithering halftone algorithm to increase the gray levels of a time-multiplexing light field display. The following year, the same team introduced a weighted-average optimization into the image synthesis process [2]; this method trades off the reconstructed depth range against image sharpness for multi-projector-type light field displays. A simple and effective method for producing large-scale polymer microlens arrays using screen printing was proposed in [3]. The study in [4] utilized integral imaging to construct a flat-panel see-through 3D display. Integral imaging, proposed by Lippmann in 1908, is one of the most promising 3D display technologies because it provides full color parallax and continuous viewing points and operates with incoherent light [5–12]. There are two basic parts in a typical integral imaging system: acquisition and display. In the acquisition process, a 3D scene is picked up by a recording device such as a lenslet array with a CCD camera or a camera array. The 3D information from the different view points on the same plane or on the same spherical surface is recorded in the elemental images. In the display process, the elemental images are displayed by a projector array or on a display panel with a lenslet array to reconstruct the 3D scene. Display is the inverse process of acquisition. The display process introduced above denotes conventional optical reconstruction. This method always requires special optical equipment and provides a low-resolution 3D scene owing to the limitations of the devices. To overcome the disadvantages of optical reconstruction, computational integral imaging reconstruction (CIIR) is considered a feasible method for presenting the 3D scene more clearly. In addition, computational reconstruction can transform the elemental images so that they can be observed on a 2D display panel, which makes image processing and information extraction easier. The CIIR method rebuilds a 3D scene from the elemental images using a computer, and previous studies have presented several such methods [15–19]. The researchers in [13] proposed a monospectral integral imaging encryption approach with a monospectral camera array and realized optical 3D image encryption and reconstruction by using a geometric calibration algorithm in the monospectral synthetic aperture integral imaging system [14]. In [15], the authors utilized image interpolation algorithms to obtain resolution-enhanced elemental images and magnified them to reconstruct the 3D scene with improved visual quality. However, the interpolation process may cause extra pixel errors. A method based on rearranging the pixels of the elemental images was presented in [16]. Although this method can rapidly reconstruct a 3D image, the size of the reconstructed 3D image may be incorrect. The researchers in [18] used ray tracing and auto-focusing to rebuild the computational 3D view image. This method solved the out-of-focus blur but performed poorly when the distance between the view point and the lenslet array was large.

In this paper, we additionally utilized SIFT to adjust the view images. SIFT descriptors, proposed by Lowe, consist of location, scale, orientation and feature vectors [20]. The location and scale are determined through extreme point searching in the Gaussian image pyramid. The orientation is obtained according to the gradient distribution of the pixels in the feature point neighborhood. The feature vector is a 128-dimension vector that records the detailed gradient information of the local region at the feature point. Owing to the excellent performance in scale invariant feature searching, SIFT descriptors have been widely used in image retrieval [21,22], image registration [23–25], and object recognition [26–28].

In this paper, we propose a resolution-enhanced CIIR method based on SIFT and patch matching that can improve the visual quality of the reconstructed 3D image. This method utilizes the similarity between the view image at the reconstructed view point and the initial reconstructed 3D image, and adjusts the view image using a matching algorithm based on SIFT descriptors. After adjustment, the new view image is considered as a template for the later reconstruction process. The proposed CIIR method can provide high image quality to the reconstructed 3D scene. In the following sections, we explain the principle of the proposed method and verify its feasibility by simulation and experiments.

2. Proposed computational integral imaging reconstruction method

The proposed integral imaging pickup and reconstruction processes are shown in Fig. 1. In the pickup process, one path uses the conventional pickup method to obtain the elemental image array EA1, and the other path uses a recording device to obtain the view image I2 at (x0, y0, z0). Here, (x0, y0, z0) represents the coordinates of the view point to be reconstructed. EA1 is used to reconstruct the initial view image I1 at (x0, y0, z0). SIFT descriptors are extracted from I1 and I2 to obtain matching point pairs by comparing Euclidean distances. The matching point pairs are filtered by the random sample consensus (RANSAC) algorithm. Then, we use the final matching point pairs to calculate the perspective matrix H. We perform a perspective transformation on image I2 according to H to obtain image I2', which has the same resolution as image I1. Image I2', which we call the template image, is compared with I1 to determine the distorted regions. We segment the template image I2' according to the locations of the distorted regions. The regions segmented from I2', called correct regions, are used to construct or to guide the construction of the distorted regions. The reconstruction process is the reverse of the partial pickup process, as shown in Fig. 1. More details are introduced in Section 2.2.
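
As an illustration of the matching and warping stage described above, the following is a minimal Python/OpenCV sketch. It assumes OpenCV 4.4 or later, where SIFT is exposed as cv2.SIFT_create(); the ratio-test threshold (0.75) and the RANSAC reprojection error (5.0) are illustrative values that are not specified in the paper.

```python
import cv2
import numpy as np

def build_template(I1, I2):
    """Warp the captured view image I2 onto the initial reconstructed view I1,
    returning the template image I2' at the resolution of I1."""
    g1 = cv2.cvtColor(I1, cv2.COLOR_BGR2GRAY) if I1.ndim == 3 else I1
    g2 = cv2.cvtColor(I2, cv2.COLOR_BGR2GRAY) if I2.ndim == 3 else I2

    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(g1, None)
    k2, d2 = sift.detectAndCompute(g2, None)

    # Match SIFT descriptors by Euclidean distance and keep pairs that pass
    # Lowe's ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(d2, d1, k=2)
            if m.distance < 0.75 * n.distance]

    src = np.float32([k2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # RANSAC rejects mismatched pairs and yields the perspective matrix H.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Perspective transformation of I2 gives the template image I2'.
    h, w = g1.shape[:2]
    return cv2.warpPerspective(I2, H, (w, h))
```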

 

Fig. 1 Block diagram of the proposed CIIR method.


2.1. Optical analysis of integral imaging

From the above introduction, we consider I2' to be the template image for the reconstruction because the 3D scene S' in the display process (as shown in Fig. 2) has a spatial frequency distribution similar to that of the object S in the pickup process. We also argue that, once the reconstructed view point is decided, the view image I2 obtained from the object S at location P (the point corresponding to the decided view point) is essentially the same as the reconstructed image obtained from the 3D scene S' at the corresponding view point location P', provided the conditions (ambient light, the location, the distance from P to S and from P' to S', etc.) are the same. Figure 2 illustrates the schematic diagram of integral imaging.

 

Fig. 2 Schematic diagram of integral imaging.


As shown in Fig. 2, we analyze the spatial frequency distributions of the original object and the reconstructed 3D scene in the integral imaging system to support the above argument. In the spatial domain, the light waves scattered from the object S can be expressed as

$$f(\mathbf{r},t)=\iint \psi(\mathbf{k},\nu)\exp\left[j2\pi(\mathbf{k}\cdot\mathbf{r}+\nu t)\right]d\mathbf{k}\,d\nu, \tag{1}$$

where $\mathbf{r}$ is the location vector [29] of the 3D space, $\mathbf{r}=x\hat{\mathbf{x}}+y\hat{\mathbf{y}}+z\hat{\mathbf{z}}$ (the symbol $\hat{\ }$ represents the unit vector), and $\mathbf{k}$ is the wave vector with amplitude $2\pi/\lambda$ and direction $(\alpha,\beta,\gamma)$, so $\mathbf{k}=\frac{2\pi}{\lambda}(\alpha\hat{\mathbf{x}}+\beta\hat{\mathbf{y}}+\gamma\hat{\mathbf{z}})$, with $\gamma=\sqrt{1-\alpha^{2}-\beta^{2}}$. The light waves from S (here $z=0$) are represented by the angular spectrum in Eq. (2):

$$f(x,y,z;\lambda;t)=\iint \phi\!\left(\frac{\alpha}{\lambda},\frac{\beta}{\lambda};t\right)\exp\!\left[j2\pi\!\left(\frac{\alpha}{\lambda}x+\frac{\beta}{\lambda}y\right)\right]d\!\left(\frac{\alpha}{\lambda}\right)d\!\left(\frac{\beta}{\lambda}\right). \tag{2}$$

In Eq. (2), $\phi(\alpha/\lambda,\beta/\lambda;t)$ is the angular spectrum distribution of $f(x,y,z;\lambda;t)$, and $\alpha/\lambda$ and $\beta/\lambda$ are the continuous spectrum components. However, the information in the light waves from S recorded by the pickup device is discrete. Different elemental images represent different pickup orientations. Therefore, when we adopt an $A\times B$ lenslet array to obtain the elemental images, the light waves picked up by the pickup device can be represented by Eq. (3):

$$f_{ab}(x,y,z;\lambda;t)=\sum_{b=1}^{B}\sum_{a=1}^{A}\phi_{ab}\!\left(\frac{\alpha_{ab}}{\lambda},\frac{\beta_{ab}}{\lambda};t\right)\exp\!\left(j\frac{2\pi}{\lambda}\sqrt{1-\alpha_{ab}^{2}-\beta_{ab}^{2}}\,z\right)\exp\!\left[j2\pi\!\left(\frac{\alpha_{ab}}{\lambda}x+\frac{\beta_{ab}}{\lambda}y\right)\right], \tag{3}$$
where $\exp\!\left(j\frac{2\pi}{\lambda}\sqrt{1-\alpha_{ab}^{2}-\beta_{ab}^{2}}\,z\right)$ represents an additional phase acquired by propagation from the object over the distance z, which only changes the relative phase of each angular spectrum component. The intensities and colors of the light waves from the object S are recorded on the elemental image array in the form of a 2D image. The elemental image array, which contains part of the 3D information, is then transmitted to the reconstruction process. Because the 2D image (the elemental image array) in the reconstruction process is almost the same as the 2D image in the pickup process, the 3D image reconstructed at S' can still be represented by Eq. (3). Clearly, Eq. (3) is a sampled version of Eq. (1), which means the reconstructed 3D scene S' is a sampled version of the object S. Thus, once the view point is determined, the view image to be reconstructed can be obtained from the object S.

2.2. Computational reconstruction of integral imaging based on SIFT and patch matching

The conventional CIIR method [30], which periodically extracts pixels from the elemental image array, is shown in Fig. 3.

 

Fig. 3 Schematic of conventional method for CIIR.


The green parallel lines represent one reconstruction view, and all the green points in Fig. 3 compose the reconstructed image of the corresponding view. The resolution of the image reconstructed using the conventional method equals the number of lenslets in the lenslet array. Thus, the resolution of the reconstructed image obtained by this method is very low and cannot satisfy the demand for high visual quality. Moreover, periodically extracting pixels from the elemental image array requires that the distance between the viewer and the lenslet array be large enough that the angle between the viewer and each lenslet is approximately equal. However, this requirement is rarely satisfied in practice.
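
As a concrete illustration of this conventional one-pixel-per-lenslet extraction, the following is a minimal numpy sketch. The (B, A, h, w) array layout and the function name are our own assumptions for illustration; they are not taken from [30].

```python
import numpy as np

def conventional_ciir_view(elemental, offset):
    """Conventional CIIR [30]: take one pixel at the same (row, col) offset
    from every elemental image, so the view resolution equals the number of
    lenslets (B x A).

    elemental : ndarray of shape (B, A, h, w) or (B, A, h, w, 3)
    offset    : (row, col) pixel position selected inside each elemental image
    """
    r, c = offset
    return elemental[:, :, r, c]
```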

To overcome this disadvantage of the conventional method, we extract pixels non-periodically from the elemental image array, as shown in Fig. 4.

 

Fig. 4 Schematic of improved non-periodic pixels extracting method for CIIR.


The distance g between the display plane and the lenslet array, the distance l1 between the lenslet array and the imaging plane, and the focal length f obey the Gaussian imaging formula. Hence, the mathematical relation of g, l1, and f is

$$\frac{1}{f}=\frac{1}{g}+\frac{1}{l_{1}}. \tag{4}$$
According to the principle of optical path reversibility and lenslet imaging, the pixels on the imaging plane must originate from light beams that are emitted from the display plane and pass through the center of the corresponding lenslet. The pixels on the imaging plane compose the reconstructed image of one view point, and the light beams finally reach the viewer’s eyes. To improve the resolution, we take the obtained pixel (the red point in Fig. 4) as the center and extract a patch with lateral resolution c (the calculation for the vertical resolution is the same). In our method, the maximum value of c is calculated by Eq. (5):
$$c_{\max}=\frac{g\,l_{2}\,P_{L}\,R_{EI}}{l_{1}\,(l_{1}+l_{2}+g)\,P_{EI}},\qquad (l_{2}\gg g). \tag{5}$$
Here, l2 is the distance between the imaging plane and the viewing plane, PL is the pitch of the lenslet array, PEI is the physical length of one elemental image (in general, PL and PEI have the same value), and REI is the lateral resolution of one elemental image. However, this method cannot avoid distortions in the reconstructed image, because we extract the wrong pixels from the elemental images when the extraction position is far from the position of the view point and when the depth of the reconstructed image is not on the imaging plane. Considering the above reasons, we propose a 3D view image reconstruction method for computational integral imaging based on SIFT and patch matching to solve the distortion problem.
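
The following short sketch, in the same illustrative style, evaluates Eq. (5) and cuts a c x c patch around the traced pixel. The function names are our own, and the grouping of the denominator in c_max follows the reconstruction of Eq. (5) above.

```python
def c_max(g, l1, l2, P_L, P_EI, R_EI):
    # Maximum lateral patch size from Eq. (5); valid when l2 >> g.
    return (g * l2 * P_L * R_EI) / (l1 * (l1 + l2 + g) * P_EI)

def take_patch(elemental_image, center, c):
    """Cut a c x c patch centered on the traced pixel (the red point in Fig. 4).

    center : (row, col) of the traced pixel inside this elemental image
    c      : patch side length in pixels, c <= c_max
    """
    r, col = center
    half = c // 2
    return elemental_image[r - half:r - half + c, col - half:col - half + c]
```

With the simulation settings of Section 3.1 (c = 30), each of the 50 x 50 patches contributes 30 x 30 pixels, which yields the 1500 px x 1500 px view images reported there.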

According to the introduction in Section 2.1, the reconstructed 3D image is a sampled version of the original object, implying that if we record the original object at the same view point as in the reconstruction process, we obtain a view image (which we call the template image) with no distortion. In addition, the pickup resolution of the recording device is high; thus, the template image has high resolution. However, the view image actually captured from the original object might differ slightly from the ideal template image because of an inappropriate focal length, a slightly inaccurate distance between the camera and the original object, and unavoidable problems in real camera acquisition such as shaking. To solve these problems, we adjust the view image obtained from the original object according to the initial reconstructed image, which is obtained by the improved non-periodic pixels extracting method. The main steps of the adjustment process are introduced at the beginning of Section 2 and in Fig. 1. After adjustment, the template image becomes a high-definition ideal template. Then, we need to find the exact distorted regions in the reconstructed image. In our method, we utilize the peak signal-to-noise ratio (PSNR) to find the distorted regions. If the size of the elemental image array is A×B, the reconstructed image is composed of A×B patches. Therefore, we divide the template image into A×B patches and label the A×B patches of the reconstructed image from left to right and top to bottom as {1, 2, 3, …, A×B}; the template image is labeled in the same way. Then, we calculate the PSNR of every pair of identically labeled patches. In our method, if a patch in the reconstructed image has a PSNR less than D, we consider this patch to be distorted. The value of D in our simulation was 25, and those in the experiments were 20, 25, and 30. We then record the label of this distorted patch. All labels of the distorted regions are used to extract the corresponding patches from the template image.

In our simulation and experiments, we designed two methods for the subsequent processing. Method 1 directly transmits the patches of the template image that have the same labels as the distorted regions to the reconstruction process, together with the elemental images and the labels of the distorted regions. Method 2 uses the patches of the template image in the distorted regions to find the good regions in the corresponding elemental images. Here, we calculate the PSNR between the patch from the template image and a same-size patch in the corresponding elemental image. The moving direction of the patch from the template image is shown in Fig. 5 (the yellow arrow), and the patch moves one pixel at a time. In this process, we define the patch in the elemental image with the maximum PSNR as a good region for reconstruction. Then, we record the location of this good region in the elemental image. Finally, the elemental images, the labels of the distorted regions, and the locations of the good regions are transmitted to the reconstruction process.
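
As a simplified illustration of the distorted-region search and the Method 2 patch matching, the following numpy sketch follows the description above: both images are divided into an A x B grid, patches whose PSNR falls below D are flagged, and a template patch is then slid over the corresponding elemental image one pixel at a time. The function names and array layout are our own, and for simplicity the sliding search scans the whole elemental image rather than only the direction indicated by the yellow arrow in Fig. 5.

```python
import numpy as np

def psnr(a, b, peak=255.0):
    # Mean-squared-error based PSNR between two same-size patches.
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def find_distorted_patches(recon, template, grid, D=25.0):
    # Flag every patch of the reconstructed image whose PSNR against the
    # identically labeled template patch is below the threshold D.
    A, B = grid
    ph, pw = recon.shape[0] // B, recon.shape[1] // A
    labels = []
    for row in range(B):
        for col in range(A):
            sl = (slice(row * ph, (row + 1) * ph), slice(col * pw, (col + 1) * pw))
            if psnr(recon[sl], template[sl]) < D:
                labels.append(row * A + col)   # labels run left to right, top to bottom
    return labels

def best_match_location(template_patch, elemental):
    # Method 2: slide the template patch over the elemental image one pixel at
    # a time and keep the offset with the highest PSNR (the "good region").
    ph, pw = template_patch.shape[:2]
    best_score, best_xy = -np.inf, (0, 0)
    for y in range(elemental.shape[0] - ph + 1):
        for x in range(elemental.shape[1] - pw + 1):
            score = psnr(template_patch, elemental[y:y + ph, x:x + pw])
            if score > best_score:
                best_score, best_xy = score, (y, x)
    return best_xy
```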

 

Fig. 5 Moving direction of the patch from template image.


In the reconstruction process, the improved non-periodic pixels extracting method for integral imaging computational reconstruction is used to obtain the initial reconstructed image. Then, the decoded labeled patches are used to cover the initial reconstructed image at the corresponding regions, and the final reconstructed image is obtained. However, this method (Method 1) imposes a large transmission burden. Method 2 utilizes the recorded locations to find the good regions in the elemental images and covers the initial reconstructed image with these good regions according to the labels of the distorted regions. If the number of reconstructed images is large, Method 2 is the better choice.
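
The covering step itself is straightforward; a minimal sketch under the same assumptions as the previous code blocks is:

```python
def cover_distorted_regions(recon, replacements, labels, grid):
    # Overwrite each distorted patch of the initial reconstruction.
    # replacements maps a patch label to its replacement patch: a template
    # patch in Method 1, or a matched elemental-image patch in Method 2.
    A, B = grid
    ph, pw = recon.shape[0] // B, recon.shape[1] // A
    out = recon.copy()
    for lab in labels:
        row, col = divmod(lab, A)
        out[row * ph:(row + 1) * ph, col * pw:(col + 1) * pw] = replacements[lab]
    return out
```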

3. Simulation and experimental results

In this section, the simulation and experimental results are presented to verify the improvement achieved by the proposed CIIR method.

3.1. Simulation results

The elemental images used in the simulation were produced digitally through a virtual camera array. The camera array consisted of 50 × 50 cameras with 1.00 mm spacing and a 3.00 mm focal length. Each elemental image consisted of 200 × 200 pixels. The object used in the simulation was a combined scene of one soccer ball and two Rubik's cubes located 21 mm away from the camera array, as shown in Fig. 6(a). The elemental images are shown in Fig. 6(b), and the details in the yellow box are shown in the upper right corner. The proposed method was applied to these elemental images. The parameters used in the computational reconstruction of the simulation are listed in Table 1. The viewing angle was approximately 4.5° in this simulation.

 

Fig. 6 Object and elemental images used in simulation: (a) object (b) elemental images. Rubik’s Cube used by permission Rubik’s Brand Ltd. www.rubik’s.com



Table 1. Parameters used in the computational reconstruction of simulation

As shown in Fig. 7, each sub-figure contains three view images: the left view at (12, 25, 329), the front view at (25, 25, 329), and the right view at (38, 25, 329), from left to right. The inset coordinates in Fig. 7, which give the (x', y', z') spatial position, are in millimeters. Owing to space limitations, only three view images are shown for each simulation set. The resolution of each view image reconstructed by the conventional CIIR method [30] is 50 px × 50 px in our simulation. Here, we up-sampled each reconstructed view image to 1500 px × 1500 px for convenient comparison. As shown in Fig. 7(a), we can barely see the soccer ball, and the sharpness of the Rubik's cubes is poor. The visual quality of Figs. 7(b) and 7(c) is almost the same, and it is evident that the edge of the object away from the imaging plane shows blocking artifacts. In comparison with Figs. 7(b) and 7(c), the regions at the same positions in Figs. 7(d) and 7(e) are smooth. In our simulation, we took 30 pixels for each patch, i.e., c = 30. Therefore, the resolution of each view image in Fig. 7 is 1500 px × 1500 px. The details of the reconstructed images are presented in the red boxes.
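
For the side-by-side comparison mentioned above, the low-resolution views were simply up-sampled to a common size. A one-line sketch of that step follows; the choice of nearest-neighbour interpolation is our assumption, as the paper does not state which interpolation was used.

```python
import cv2

def upsample_view(view, size=(1500, 1500)):
    # Up-sample a low-resolution reconstructed view (e.g. the 50 x 50 px result
    # of the conventional method [30]) to a common size for visual comparison.
    return cv2.resize(view, size, interpolation=cv2.INTER_NEAREST)
```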

 

Fig. 7 Reconstructed results of left view, front view, and right view using the: (a) conventional CIIR method [30]; (b) CIIR method based on ray tracing and auto-focus [18]; (c) improved non-periodic pixels extracting CIIR method; (d) proposed CIIR method (Method 1); (e) proposed CIIR method (Method 2). Rubik’s Cube used by permission Rubik’s Brand Ltd. www.rubik’s.com


The size of the image patches of Fig. 7(d) compressed via JPEG (in Method 1) is listed in Table 2, which is acceptable when the number of reconstructed images is small. If the image patches were compressed by JPEG2000 or HEVC, the size would be smaller for the same image quality. The size of the compressed locations of the good regions in the elemental images (in Method 2) for Fig. 7(e) is also listed in Table 2. The extra data size produced by the proposed Method 2 is evidently smaller than that of the proposed Method 1, and the visual quality of Method 1 and Method 2 is almost the same. Therefore, we prefer to adopt Method 2 when the number of reconstructed images is large. The number of distorted patches is also listed in Table 2. Here, we add two sets of view data to make the simulation more convincing.
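
The per-patch overhead of Method 1 can be estimated by encoding each transmitted template patch with JPEG, for example as in the sketch below; the quality factor is our assumption, since Table 2 does not state the encoder settings.

```python
import cv2

def jpeg_size_bytes(patch, quality=90):
    # Encode one template patch with JPEG and report its compressed size,
    # as a rough estimate of the extra data transmitted in Method 1.
    ok, buf = cv2.imencode(".jpg", patch, [int(cv2.IMWRITE_JPEG_QUALITY), quality])
    assert ok
    return len(buf)
```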


Table 2. Number of the distorted patches, size of the compressed image patches via JPEG (in Method 1), and size of the compressed location of the good regions in elemental images (in Method 2)

The PSNR and structural similarity index (SSIM) of the reconstructed results at different view points are listed in Table 3. The PSNR and SSIM of the proposed method are evidently higher than those of the conventional method [30] and the method in [18]. The results obtained by the proposed method are good in terms of both subjective visual quality and objective image quality, implying that the proposed method can improve the visual quality of the reconstructed 3D view image.
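
For reference, a metric computation along the lines of the PSNR/SSIM evaluation in Table 3 can be sketched with scikit-image; here the comparison is done in grayscale, and the file paths and the choice of reference image are placeholders.

```python
import cv2
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(recon_path, reference_path):
    # Compare a reconstructed view image against a reference view in grayscale
    # and return the (PSNR, SSIM) pair used for the objective evaluation.
    recon = cv2.imread(recon_path, cv2.IMREAD_GRAYSCALE)
    ref = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)
    return (peak_signal_noise_ratio(ref, recon, data_range=255),
            structural_similarity(ref, recon, data_range=255))
```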


Table 3. PSNR and SSIM of the reconstructed results at different view points

3.2. Experimental results

We designed two systems to verify the proposed method in this section. The setups of System 1 and System 2 are shown in Figs. 8(a) and 8(b), respectively. In System 1, we used one camera to simulate the camera array by moving the camera 1 mm each time while capturing. However, the cost of a camera array is very high; therefore, some acquisition processes can only be performed with a single camera and a lenslet array. In System 2, a camera and a lenslet array were used to capture the elemental images. Although the elemental images captured through the lenslet array contained scattered light, the proposed method is still effective. Designing these two systems makes our experiments more convincing. The parameters used in the computational reconstruction are listed in Table 4 and Table 5. Figures 9(a) and 9(b) show the elemental images used in System 1 and System 2, and the details are shown in the green and red boxes, respectively. In the acquisition process, the images captured by the camera were cropped to appropriate sizes owing to the large acquisition resolution of the camera.

 

Fig. 8 Setups of experimental systems: (a) System 1 (b) System 2. Rubik’s Cube used by permission Rubik’s Brand Ltd. www.rubik’s.com



Table 4. Parameters used in the computational reconstruction of experiments (System 1)


Table 5. Parameters used in the computational reconstruction of experiments (System 2)

 

Fig. 9 Elemental images used in experiments: (a) System 1 (b) System 2.


Figure 10 shows the experimental reconstructed results of System 1 at different view points. As in the simulation, the resolution of each view image reconstructed using the conventional method [30] was 50 px × 50 px and was up-sampled to 1500 px × 1500 px. As shown in Fig. 10(a), the view images are too blurry to see the edges of the Rubik's cube. The poor visual quality of Fig. 10(a) arises because the method in [30] extracts only one pixel for each lenslet. Although up-sampling appears to improve the resolution, the view image draws insufficient effective information from the elemental images for reconstruction. As shown in Figs. 10(b) and 10(c), these two sets of view images look similar. Owing to the unavoidable errors in the experimental operation while capturing, Figs. 10(b) and 10(c) show several distortions. However, the reason for the distortions in Fig. 10(b) differs slightly from that in Fig. 10(c). The ray tracing CIIR method in [18] is not effective when the view point is far away from the lenslet array (or the camera array) and the elemental images are close to the lenslet array (or camera array); thus, many adjacent pixels in the reconstructed image are extracted from the same position in the elemental images. As for Fig. 10(c), the improved non-periodic pixels extracting CIIR method takes a patch from each elemental image, which easily causes blocking artifacts, because this method may take the wrong patch when the reconstruction position is far from the view point or the reconstructed imaging plane. Figures 10(d) and 10(e) show the reconstructed results of System 1 using the proposed reconstruction method (Method 1 and Method 2). In our experiments, the PSNR thresholds used in the distorted-region search (the value of D in Section 2.2) were 20, 25, and 30. Owing to space limitations, the threshold is set to 20 in Figs. 10(d) and 10(e). The reconstructed view images shown in Figs. 10(d) and 10(e) have fewer blocking artifacts than Fig. 10(b), and their visual quality is improved. Table 6 lists the extra amount of data that needs to be transmitted in System 1. The PSNR and SSIM of the reconstructed results of System 1 at different view points are listed in Table 7.

 

Fig. 10 Reconstructed results of left view, front view, and right view of System 1 using: (a) conventional CIIR method [30]; (b) CIIR method based on ray tracing and auto-focus [18]; (c) improved non-periodic pixels extracting CIIR method; (d) proposed CIIR method (Method 1); (e) proposed CIIR method (Method 2). Rubik’s Cube used by permission Rubik’s Brand Ltd. www.rubik’s.com



Table 6. Number of distorted patches, size of the compressed image patches via JPEG (in Method 1), and size of the compressed location of the good regions in elemental images (in Method 2)


Table 7. PSNR and SSIM of the reconstructed results at different view points

Figure 11 illustrates the experimental reconstructed results of System 2 at different view points. The resolution of each image in Fig. 11 is 742 px × 742 px. Figure 11(b) shows the reconstructed results using the method in [18], whose visual quality is worse than that of Fig. 11(c) because the method in [18] requires the elemental images to be highly accurate; in the actual operation of the experiment, however, errors are inevitable owing to human factors. The reconstructed view images in Fig. 11(d) are clearer, but the experimental results in Fig. 11(e) are worse than those in Fig. 11(c) because the lenslet array used in our experiments disperses the light propagated from the object, which makes the luminance of the elemental images inconsistent with that of the template images captured by the camera. These problems lead to mismatches when using the proposed Method 2. The extra amount of data that needs to be transmitted for System 2 is listed in Table 8. The PSNR and SSIM of the reconstructed results in our experiments at different view points are listed in Table 9.

 

Fig. 11 Reconstructed results of left view, front view, and right view of System 2 using: (a) conventional CIIR method [30]; (b) CIIR method based on ray tracing and auto-focus [18]; (c) improved non-periodic pixels extracting CIIR method; (d) proposed CIIR method (Method 1); (e) proposed CIIR method (Method 2). Rubik’s Cube used by permission Rubik’s Brand Ltd. www.rubik’s.com



Table 8. Number of distorted patches, size of the compressed image patches via JPEG (in Method 1), and size of the compressed location of the good regions in elemental images (in Method 2)


Table 9. PSNR and SSIM of the reconstructed results at different view points

As shown in Figs. 10 and 11, the reconstructed results using the proposed method are sharper than those obtained with the methods in [30] and [18], the artifacts are eliminated by the proposed method, and its color accuracy is also better.

The reconstructed results of System 1 and System 2 are shown in Figs. 12 and 13, respectively, when the PSNR threshold used in the distorted-region search (the value of D in Section 2.2) is 20, 25, or 30. For the proposed Method 1, a higher threshold means the reconstructed view image is more similar to the template image. For the proposed Method 2, a higher threshold means that more patches are judged to be distorted. However, because the good patches are obtained from the corresponding elemental images instead of the template images, the visual quality of the good patches obtained by patch matching is limited. Thus, the PSNR and SSIM of the proposed Method 2 are lower than those of Method 1, as shown in Table 7 and Table 9. Therefore, the visual quality of the view images reconstructed using Method 2 does not keep improving as the PSNR threshold used in the distorted-region search increases.

 

Fig. 12 Reconstructed results using the proposed CIIR method of System 1 when the view point is at (25, 25, 4050) and the values of PSNR in the process of distorted region searching are 20, 25, and 30: (a) using the proposed Method 1; (b) using the proposed Method 2. Rubik’s Cube used by permission Rubik’s Brand Ltd. www.rubik’s.com


 

Fig. 13 Reconstructed results using the proposed CIIR method of System 2 when the view point is at (27, 27, 580) and the values of PSNR in the process of distorted region searching are 20, 25, and 30: (a) using the proposed Method 1; (b) using the proposed Method 2. Rubik’s Cube used by permission Rubik’s Brand Ltd. www.rubik’s.com


In the proposed method, the distorted regions are repaired under the guidance of the template image. As mentioned in Section 2, the template image is adjusted according to the initial reconstructed image, which is reconstructed by the improved non-periodic pixels extracting CIIR method. This means the initial reconstructed image should be similar to the view image captured from the original object; otherwise, the template image may not be obtained owing to the large number of mismatches in the SIFT descriptor matching.

4. Conclusion

This paper presents a CIIR method based on SIFT that can reconstruct 3D view images with high visual quality. Unlike the conventional reconstruction method, the proposed method utilizes SIFT to adjust the view images of the original object, which we call template images, and the view images of the reconstructed 3D scene are improved under the guidance of these template images. The proposed method has been shown to be effective in improving the visual quality of 3D view image reconstruction in computational integral imaging, and its feasibility was verified through simulations and experiments. Owing to the limitations of our experimental conditions, we can currently provide only computational results. Our next work will verify the 3D image quality of the proposed method on an optical display platform.

Funding

National Key Research and Development Plan of 13th five-year (2017YFB0404800); National Natural Science Foundation of China (NSFC) (61631009); Fundamental Research Funds for the Central Universities (2017TD-19).

References

1. C. Su, Q. Zhong, Y. Peng, L. Xu, R. Wang, H. Li, and X. Liu, “Grayscale performance enhancement for time-multiplexing light field rendering,” Opt. Express 23(25), 32622–32632 (2015). [CrossRef]   [PubMed]  

2. Q. Zhong, Y. Peng, H. Li, and X. Liu, “Optimized image synthesis for multi-projector-type light field display,” J. Disp. Technol. 12(12), 1745–1751 (2016). [CrossRef]  

3. X. Zhou, Y. Peng, R. Peng, X. Zeng, Y. A. Zhang, and T. Guo, “Fabrication of large-scale microlens arrays based on screen printing for integral imaging 3D display,” ACS Appl. Mater. Interfaces 8(36), 24248–24255 (2016). [CrossRef]   [PubMed]  

4. Y. Takaki and Y. Yamaguchi, “Flat-panel see-through three-dimensional display based on integral imaging,” Opt. Lett. 40(8), 1873–1876 (2015). [CrossRef]   [PubMed]  

5. X. Xiao, B. Javidi, M. Martinez-Corral, and A. Stern, “Advances in three-dimensional integral imaging: sensing, display, and applications [Invited],” Appl. Opt. 52(4), 546–560 (2013). [CrossRef]   [PubMed]  

6. Y. Chen, X. Wang, J. Zhang, S. Yu, Q. Zhang, and B. Guo, “Resolution improvement of integral imaging based on time multiplexing sub-pixel coding method on common display panel,” Opt. Express 22(15), 17897–17907 (2014). [CrossRef]   [PubMed]  

7. N. Chen, J. Yeom, J. H. Jung, J. H. Park, and B. Lee, “Resolution comparison between integral-imaging-based hologram synthesis methods using rectangular and hexagonal lens arrays,” Opt. Express 19(27), 26917–26927 (2011). [CrossRef]   [PubMed]  

8. H. Deng, Q. H. Wang, L. Li, and D. H. Li, “An integral-imaging three-dimensional display with wide viewing angle,” J. Soc. Inf. Disp. 19(10), 679–684 (2011). [CrossRef]  

9. W. Li and Y. Li, “Generic camera model and its calibration for computational integral imaging and 3D reconstruction,” J. Opt. Soc. Am. A 28(3), 318–326 (2011). [CrossRef]   [PubMed]  

10. X. Xiao and B. Javidi, “3D photon counting integral imaging with unknown sensor positions,” J. Opt. Soc. Am. A 29(5), 767–771 (2012). [CrossRef]   [PubMed]  

11. J. Y. Jang, H. S. Lee, S. Cha, and S. H. Shin, “Viewing angle enhanced integral imaging display by using a high refractive index medium,” Appl. Opt. 50(7), B71–B76 (2011). [CrossRef]   [PubMed]  

12. X. Li, Y. Wang, Q. H. Wang, Y. Liu, and X. Zhou, “Modified integral imaging reconstruction and encryption using an improved SR reconstruction algorithm,” Opt. Lasers Eng. 112, 162–169 (2019). [CrossRef]  

13. X. Li, M. Zhao, Y. Xing, L. Li, S. T. Kim, X. Zhou, and Q. H. Wang, “Optical encryption via monospectral integral imaging,” Opt. Express 25(25), 31516–31527 (2017). [CrossRef]   [PubMed]  

14. X. Li, M. Zhao, Y. Xing, H. L. Zhang, L. Li, S. T. Kim, X. Zhou, and Q. H. Wang, “Designing optical 3d images encryption and reconstruction using monospectral synthetic aperture integral imaging,” Opt. Express 26(9), 11084–11099 (2018). [CrossRef]   [PubMed]  

15. D. H. Shin and H. Yoo, “Image quality enhancement in 3D computational integral imaging by use of interpolation methods,” Opt. Express 15(19), 12039–12049 (2007). [CrossRef]   [PubMed]  

16. M. Cho and B. Javidi, “Computational reconstruction of three-dimensional integral imaging by rearrangement of elemental image pixels,” J. Disp. Technol. 5(2), 61–65 (2009). [CrossRef]  

17. K. Inoue, M. Lee, B. Javidi, and M. Cho, “Improved 3D integral imaging reconstruction with elemental image pixel rearrangement,” J. Opt. 20(2), 025703 (2018). [CrossRef]  

18. Y. Yuan, S. Yu, X. Wang, and J. Zhang, “Resolution enhanced 3D image reconstruction by use of ray tracing and auto-focus in computational integral imaging,” Opt. Commun. 404, 73–79 (2017). [CrossRef]  

19. B. Cho, P. Kopycki, M. Martinez-Corral, and M. Cho, “Computational volumetric reconstruction of integral imaging with improved depth resolution considering continuously non-uniform shifting pixels,” Opt. Lasers Eng. 111, 114–121 (2018). [CrossRef]  

20. D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” Int. J. Comput. Vis. 60(2), 91–110 (2004). [CrossRef]  

21. J. Kalpana and R. Krishnamoorthi, “Color image retrieval technique with local features based on orthogonal polynomials model and SIFT,” Multimedia Tools Appl. 75(1), 49–69 (2016). [CrossRef]  

22. G. A. Montazer and D. Giveki, “Content based image retrieval system using clustered scale invariant feature transforms,” Optik (Stuttg.) 126(18), 1695–1699 (2015). [CrossRef]  

23. Y. Zhu, S. Cheng, V. Stankovic, and L. Stankovic, “Image registration using BP-SIFT,” J. Vis. Commun. Image Represent. 24(4), 448–457 (2013). [CrossRef]  

24. G. Lv, S. W. Teng, and G. Lu, “Enhancing SIFT-based image registration performance by building and selecting highly discriminating descriptors,” Pattern Recognit. Lett. 84, 156–162 (2016). [CrossRef]  

25. S. W. Teng, M. T. Hossain, and G. Lu, “Multimodal image registration technique based on improved local feature descriptors,” J. Electron. Imaging 24(1), 013013 (2015). [CrossRef]  

26. W. L. Zhao and C. W. Ngo, “Flip-invariant SIFT for copy and object detection,” IEEE Trans. Image Process. 22(3), 980–991 (2013). [CrossRef]   [PubMed]  

27. J. Yu, F. Zhang, and J. Xiong, “An innovative sift-based method for rigid video object recognition,” Math. Probl. Eng. 2014, 138927 (2014). [CrossRef]  

28. S. Luo, W. Mou, K. Althoefer, and H. Liu, “Novel tactile-sift descriptor for object shape recognition,” IEEE Sens. J. 15(9), 5001–5009 (2015). [CrossRef]  

29. J. W. Goodman, Introduction to Fourier Optics (Roberts and Company Publishers, 2005), Chap. 3.

30. H. Arimoto and B. Javidi, “Integral three-dimensional imaging with digital reconstruction,” Opt. Lett. 26(3), 157–159 (2001). [CrossRef]   [PubMed]  

[Crossref]

Figures (13)

Fig. 1 Block diagram of the proposed CIIR method.
Fig. 2 Schematic diagram of integral imaging.
Fig. 3 Schematic of the conventional method for CIIR.
Fig. 4 Schematic of the improved non-periodic pixels extracting method for CIIR.
Fig. 5 Moving direction of the patch from the template image.
Fig. 6 Object and elemental images used in the simulation: (a) object; (b) elemental images. Rubik's Cube used by permission of Rubik's Brand Ltd., www.rubiks.com.
Fig. 7 Reconstructed results of the left, front, and right views using: (a) the conventional CIIR method [30]; (b) the CIIR method based on ray tracing and auto-focus [18]; (c) the improved non-periodic pixels extracting CIIR method; (d) the proposed CIIR method (Method 1); (e) the proposed CIIR method (Method 2). Rubik's Cube used by permission of Rubik's Brand Ltd., www.rubiks.com.
Fig. 8 Setups of the experimental systems: (a) System 1; (b) System 2. Rubik's Cube used by permission of Rubik's Brand Ltd., www.rubiks.com.
Fig. 9 Elemental images used in the experiments: (a) System 1; (b) System 2.
Fig. 10 Reconstructed results of the left, front, and right views of System 1 using: (a) the conventional CIIR method [30]; (b) the CIIR method based on ray tracing and auto-focus [18]; (c) the improved non-periodic pixels extracting CIIR method; (d) the proposed CIIR method (Method 1); (e) the proposed CIIR method (Method 2). Rubik's Cube used by permission of Rubik's Brand Ltd., www.rubiks.com.
Fig. 11 Reconstructed results of the left, front, and right views of System 2 using: (a) the conventional CIIR method [30]; (b) the CIIR method based on ray tracing and auto-focus [18]; (c) the improved non-periodic pixels extracting CIIR method; (d) the proposed CIIR method (Method 1); (e) the proposed CIIR method (Method 2). Rubik's Cube used by permission of Rubik's Brand Ltd., www.rubiks.com.
Fig. 12 Reconstructed results of System 1 using the proposed CIIR method when the view point is at (25, 25, 4050) and the PSNR values used in the distorted-region search are 20, 25, and 30: (a) the proposed Method 1; (b) the proposed Method 2. Rubik's Cube used by permission of Rubik's Brand Ltd., www.rubiks.com.
Fig. 13 Reconstructed results of System 2 using the proposed CIIR method when the view point is at (27, 27, 580) and the PSNR values used in the distorted-region search are 20, 25, and 30: (a) the proposed Method 1; (b) the proposed Method 2. Rubik's Cube used by permission of Rubik's Brand Ltd., www.rubiks.com.

Tables (9)

Table 1 Parameters used in the computational reconstruction of simulation
Table 2 Number of the distorted patches, size of the compressed image patches via JPEG (in Method 1), and size of the compressed location of the good regions in elemental images (in Method 2)
Table 3 PSNR and SSIM of the reconstructed results at different view points
Table 4 Parameters used in the computational reconstruction of experiments (System 1)
Table 5 Parameters used in the computational reconstruction of experiments (System 2)
Table 6 Number of distorted patches, size of the compressed image patches via JPEG (in Method 1), and size of the compressed location of the good regions in elemental images (in Method 2)
Table 7 PSNR and SSIM of the reconstructed results at different view points
Table 8 Number of distorted patches, size of the compressed image patches via JPEG (in Method 1), and size of the compressed location of the good regions in elemental images (in Method 2)
Table 9 PSNR and SSIM of the reconstructed results at different view points

Equations (5)


$$ f(\mathbf{r},t) = \iint \psi(\mathbf{k},\nu)\,\exp\!\left[\,j2\pi\left(\mathbf{k}\cdot\mathbf{r}+\nu t\right)\right]\,d\mathbf{k}\,d\nu, \tag{1} $$

$$ f(x,y,z;\lambda;t) = \iint \phi\!\left(\frac{\alpha}{\lambda},\frac{\beta}{\lambda};t\right)\exp\!\left[\,j2\pi\left(\frac{\alpha}{\lambda}x+\frac{\beta}{\lambda}y\right)\right]d\!\left(\frac{\alpha}{\lambda}\right)d\!\left(\frac{\beta}{\lambda}\right). \tag{2} $$

$$ f_{ab}(x,y,z;\lambda;t) = \sum_{b=1}^{B}\sum_{a=1}^{A}\phi_{ab}\!\left(\frac{\alpha_{ab}}{\lambda},\frac{\beta_{ab}}{\lambda};t\right)\exp\!\left(\,j\frac{2\pi}{\lambda}\sqrt{1-\alpha_{ab}^{2}-\beta_{ab}^{2}}\,z\right)\exp\!\left[\,j2\pi\left(\frac{\alpha_{ab}}{\lambda}x+\frac{\beta_{ab}}{\lambda}y\right)\right], \tag{3} $$

$$ \frac{1}{f}=\frac{1}{g}+\frac{1}{l_{1}}. \tag{4} $$

$$ c_{\max}=\frac{g\,l_{2}\,P_{L}\,R_{EI}}{l_{1}\left(l_{1}+l_{2}+g\right)P_{EI}}, \qquad \left(l_{2}\gg g\right). \tag{5} $$
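As a quick numerical illustration of the last two relations, the following minimal Python sketch evaluates Eq. (4) for the gap g and then Eq. (5) for c_max. All parameter values, and the physical interpretations given in the comments (lenslet focal length f, object distance l1, reconstruction distance l2, lenslet pitch P_L, elemental-image pitch P_EI, and elemental-image resolution R_EI), are illustrative assumptions for this sketch only, not the system parameters listed in Tables 1, 4, and 5.

# Minimal sketch with assumed parameter values (hypothetical, for illustration only).
f = 3.3        # lenslet focal length (mm), assumed
l1 = 40.0      # object-to-lenslet-array distance (mm), assumed
l2 = 4050.0    # lenslet-array-to-view-point distance (mm), assumed
P_L = 1.0      # lenslet pitch (mm), assumed
P_EI = 1.0     # elemental-image pitch (mm), assumed
R_EI = 60.0    # elemental-image resolution (pixels per side), assumed

# Eq. (4): solve the lens law 1/f = 1/g + 1/l1 for the gap g between lenslet array and sensor.
g = 1.0 / (1.0 / f - 1.0 / l1)

# Eq. (5): evaluate c_max = g*l2*P_L*R_EI / (l1*(l1 + l2 + g)*P_EI), stated for l2 >> g.
c_max = (g * l2 * P_L * R_EI) / (l1 * (l1 + l2 + g) * P_EI)

print(f"g = {g:.3f} mm, c_max = {c_max:.2f}")

With these assumed values the sketch returns a gap of a few millimetres and a c_max of a few pixels; the point is only to show how the two relations chain together, with g from the lens law feeding directly into Eq. (5).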
