Abstract

Successful commercialization of holographic printers based on holographic stereograms requires a tool for numerically replaying the stereograms and assessing their quality before the time-consuming and expensive process of holographic recording. A holographic stereogram encodes 2D images of a 3D scene that are incoherently captured from multiple perspectives and rearranged before recording. This study presents a simulator which builds a full-parallax, full-color, white-light viewable holographic stereogram from the perspective images captured by a virtual recentering camera and numerically reconstructs it for any viewer location. By tracking all steps from acquisition to recording, the simulator allows for analysis of radial distortions caused by the optical elements used at the recording stage. Numerical experiments conducted at an increasing degree of pincushion distortion, using the peak signal-to-noise ratio and the structural similarity index as image quality metrics, proved that its influence on the reconstructed images is insignificant in all practical cases.

© 2014 Optical Society of America

1. Introduction

Holographic technology, which has undergone rapid growth since the invention of the holographic principle by Dennis Gabor [1] in 1948, can nowadays boast various sophisticated applied systems such as holographic microscopes, displays and printers. Among them, holographic stereogram printers are exemplary systems which have reached a certain degree of commercialization. The holographic stereogram (HS) as a holographic recording method was first introduced by Steven Benton's research group at MIT [2]. This approach to 3D imaging from sampled data by holographic means implies digital acquisition or computation of a set of discrete perspectives of a scene, followed by optical multiplexing of these perspectives in a holographic medium to build a stereoscopic pseudo-3D image under white-light illumination. Over the years, various laboratories have conducted studies related to printing HSs. Substantial progress, based on advances in computer-graphic modeling and spatial multiplexing of holograms, was achieved by Masahiro Yamaguchi's group at the Tokyo Institute of Technology [3, 4] in 1990. They built a one-step printer for a full-parallax white-light viewable HS. The stereogram was composed by successive recording of multiple volume-type elemental holograms from a sequence of multi-view perspective 2D images; for this purpose, the angular distribution of the light field for each elemental hologram was displayed on a spatial light modulator (SLM). Encoding multi-view perspective images in a holographic emulsion as a volume reflection hologram enables the viewer to perceive natural 3D stereopsis at various angles, similarly to a super multi-view high-resolution display panel. The method was extended to the recording of a full-color full-parallax HS [5]. Nowadays, advanced printers with CW or pulsed laser illumination for white-light viewable large-format digital color HSs are available, e.g. the printers designed by GEOLA for horizontal-parallax-only holograms [6].

Successful commercialization of HS printers requires a simulation tool for numerically replaying the stereogram before the time-consuming and costly process of holographic recording. With such a tool one would be able to check the quality of reconstruction from the designed hologram before it is fed to the actual physical device and thus avoid unnecessary recordings made for quality improvement. A desirable feature of the simulator is the ability to incorporate in the model distortions which may occur during recording. This helps to establish the critical issues for printer performance and to identify the parameter tolerances of the system. Despite the expected useful outcomes, development of such a simulator has not been properly addressed. Numerical reconstruction of a horizontal-parallax-only HS from a web camera video stream is proposed in [7], where a sequence of parallax-related images is extracted from images acquired by a rotatable camera translated along a rail. In [8] we reported an algorithm for numerical reconstruction of a monochrome full-parallax HS. The capture of perspective images was done by a virtual recentering camera, and the reconstruction was numerically composed from the parallax-related inputs with respect to the viewer location. The algorithm was verified by recording a real HS built from the same 3D computer-graphic model which was used in the simulator.

This paper is a continuation of our efforts to build a versatile simulator for HS numerical replaying. As such, it solves two tasks, the first being extension of the algorithm in [8] to model reconstruction of a full-parallax, full-color, white-light viewable HS recorded with radial distortion. The fact that the objective lens used to record parallax-related information onto the holographic emulsion is usually not a dedicated one dictates the need to consider this type of distortion at the recording stage. As a rule, this lens inevitably changes the angular distribution of the light field to be encoded into the hologram. If the changes are substantial, perceptible image deterioration occurs. That is why, as a second task, we applied the developed algorithm to numerically evaluate the distortion tolerance of the HS printing system for the pincushion model. The peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM) index were chosen as image quality metrics. The manuscript is organized as follows: in Section 2 we describe the method for acquisition of perspective images and briefly review the formation of parallax-related images. The algorithm for numerical reconstruction of the HS with radial distortion is explained in Section 3. Section 4 gives the results of the numerical experiments and evaluates the quality of reconstruction from distorted input data.

2. Holographic stereogram

To make a HS, a sequence of 2D images of a 3D object is incoherently acquired from multiple viewpoints. Acquisition of these multi-view perspective images can be done by a simple camera or by a recentering camera [9]. In the first method, the camera is forward-facing and undergoes equidistant horizontal or vertical shifts in a plane to capture multiple images of the object. Such acquisition requires a camera with a large field of view (FOV) and a large number of pixels. The main drawback is that each captured image contains a large non-informative area of unused pixels. This not only decreases the resolution of the 3D object capture but also increases computation time and storage space when the simple-camera method is used for numerical modeling. The recentering camera configuration, shown in Fig. 1, removes the shortcomings of the simple camera because it always positions the captured views of the object at the center of the acquired images. The plane of vertical and horizontal equidistant camera translation is parallel to the image plane [10]; the distance between the two planes is D. During acquisition, the image plane intersects the camera convergence point, which is usually at the center of the 3D object, and the camera optical axis is always directed towards this center. Exemplary images taken from the four camera positions in Fig. 1 are depicted in Fig. 2. In this paper we apply the recentering-camera acquisition method as the more computationally efficient one.
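As an illustration, the virtual camera poses used in this acquisition scheme can be generated as follows. This is a minimal sketch in Python/NumPy; the function name `camera_grid` and the convention that the scene center sits at the origin are our own assumptions, not part of the recording setup itself:

```python
import numpy as np

def camera_grid(K, L, pitch, D):
    """Sketch of the recentering-camera poses: a K x L equidistant grid in a
    plane at distance D from the image plane, each camera aimed at the scene
    center (taken here to be the origin). Names and conventions are illustrative."""
    xs = (np.arange(K) - (K - 1) / 2) * pitch   # horizontal shifts
    ys = (np.arange(L) - (L - 1) / 2) * pitch   # vertical shifts
    X, Y = np.meshgrid(xs, ys, indexing="ij")
    # Camera positions in the translation plane at z = -D.
    positions = np.stack([X, Y, np.full_like(X, -D)], axis=-1)  # (K, L, 3)
    # Optical axes: unit vectors from each camera toward the scene center.
    dirs = -positions / np.linalg.norm(positions, axis=-1, keepdims=True)
    return positions, dirs
```

With an odd-sized grid the central camera looks straight down the z axis, while off-center cameras converge on the origin, reproducing the geometry of Fig. 1.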

Fig. 1 Acquisition of perspective images by using the recentering camera.

Fig. 2 Perspectives acquired from multiple positions of the recentering camera.

If the perspective images were directly recorded onto a holographic emulsion in the order of their capture, the observation plane where the viewer's eyes are located would have to coincide with the hologram plane (Fig. 3) for distortion-free reconstruction to be observed. To separate these planes, the directional information carried by the perspective images is processed to form parallax-related images [9]. Each parallax-related image is displayed on the SLM and recorded in the focal plane of a lens onto a holographic photosensitive material as a volume-type elemental hologram in a two-beam recording scheme. The parallax-related images modulate the intensity of the object beam. The whole hologram is divided into elemental holograms which are sequentially exposed to the parallax-related images. Illumination of the fringe pattern recorded in the hologram for reconstruction of the 3D scene ensures spatial multiplexing of the perspective views. For brevity, we adopt the name "hogel" (holographic element) [10] for the elemental hologram as the minimal building block of the printed hologram and "hogel image" to designate the parallax-related image recorded in or reconstructed from a hogel. Due to the pixel rearrangement process, the hologram plane and the 3D object image plane coincide (Fig. 4).

Fig. 3 The viewer position when perspective images are directly recorded onto the holographic emulsion.

Fig. 4 The viewer position when the rearranged parallax-related images are recorded onto the holographic emulsion.

The process of making hogel images by rearranging the acquired perspective images is elucidated in Fig. 5. We use P_kl to denote the perspective image acquired at camera position (k, l), where k = 1…K and l = 1…L are integers which count the equidistant camera shifts along the horizontal and vertical axes respectively. The total number of perspective images is K × L, while the resolution of each perspective image is I × J, with I and J being the number of pixels along the horizontal and vertical directions. We use H_ij to denote the (i, j)-th hogel image which is recorded as the (i, j)-th hogel; i = 1…I and j = 1…J. The hogel image H_ij consists of all (i, j)-th pixels in the captured K × L perspective images, so the resolution of H_ij is given by the number, K × L, of camera positions. The relationship between the perspective and hogel images is given by the expression below:

H_ij(k, l) = P_kl(i, j),   i = 1…I, j = 1…J, k = 1…K, l = 1…L.   (1)
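In array terms, the rearrangement above is a transposition of the camera-grid axes with the pixel axes. A minimal NumPy sketch, in which the function name and the grayscale 4-D array layout are our assumptions:

```python
import numpy as np

def rearrange_to_hogels(perspectives):
    """Rearrange K*L perspective images of resolution I x J into I*J hogel
    images of resolution K x L, following H_ij(k, l) = P_kl(i, j).
    `perspectives` has shape (K, L, I, J); grayscale for simplicity."""
    # Swapping the camera-grid axes with the pixel axes performs the
    # rearrangement for all hogel images at once.
    return perspectives.transpose(2, 3, 0, 1)   # shape (I, J, K, L)

# Tiny example: a 2 x 2 camera grid of 3 x 3 perspective images.
P = np.arange(2 * 2 * 3 * 3, dtype=float).reshape(2, 2, 3, 3)
H = rearrange_to_hogels(P)
```

For color inputs the same transposition applies per channel, leaving a trailing channel axis untouched.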

3. Numerical reconstruction of a holographic stereogram with radial distortion

Under white-light illumination, the two eyes of the viewer see reconstructions of different perspectives of the object. The relationship between the HS and the viewer is shown in Fig. 6. The image perceived by the right eye in Fig. 6 is formed by the pixels indicated by '▵' in the hogel images reconstructed from all hogels, while the left eye sees the image formed by the pixels indicated by '○' in the same hogel images. The pixels ▵ and ○ in a given hogel image come from different perspective images captured from different camera positions. Thus the HS enables the viewer to observe different perspectives of the scene with both eyes and to experience a sense of depth. The images observed by the viewer can be checked by numerical reconstruction before recording.

Fig. 5 Rearrangement of the perspective images: (a) perspective images, (b) rearranged hogel images.

Fig. 6 Relationship between the holographic stereogram and the viewer.

To formulate an algorithm for numerical reconstruction of the HS, we should determine the contribution of a given hogel to the image observed from a given viewing point. The relationship between this hogel on the hologram plane and a viewer located at a distance D in front of the hologram is schematized for the one-dimensional case in Fig. 7. Generalization to the two-dimensional case is straightforward. The hogel has a finite size, but it is so small in comparison with D that we can consider it a point source which creates some angular distribution of intensities. We introduce a Cartesian coordinate system (x, z) whose origin coincides with the hogel and whose x axis runs through all hogels in the hologram plane. We also introduce a virtual observation plane which contains the reconstructed hogel image; the vertical z axis intersects this virtual plane at distance D behind the hologram. From the viewing point E(x, −D) the viewer receives the intensity which corresponds to the point P at which the straight line through the viewing point and the hogel crosses the virtual observation plane. The point P within the hogel image is located at distance ξ_P from the vertical axis; here ξ denotes the axis in the virtual plane. Each hogel has a restricted FOV characterized by the angle φ, and the width of the hogel image at a distance D is given by W = 2D tan(φ/2). For numerical reconstruction we consider the reconstructed hogel image as consisting of separate pixels with an interval between them given by Δξ = W/n, where n is the number of samples of the intensity distribution encoded in the hogel along a single row. The algorithm picks up the index, n_P, of the pixel in the hogel image that is closest to the point P from

n_P = n/2 + Θ(ξ_P/Δξ),   (2)
where the operator Θ rounds ξ_P/Δξ to the nearest positive or negative integer depending on the sign of ξ_P, and we assume that n is an even number. Similarly, at any viewer location, one easily composes an image from the intensities of the rays coming from all hogels.
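The pixel pick-up rule above can be sketched as follows. This is a minimal illustration: the sign convention ξ_P = −x (the ray through the eye at E(x, −D) and the hogel at the origin crosses the virtual plane at −x) and the use of Python's round() for the operator Θ are our assumptions.

```python
import math

def hogel_pixel_index(x_eye, D, phi_deg, n):
    """Index n_P of the hogel-image pixel seen from the viewpoint E(x_eye, -D).
    The hogel sits at the origin; the virtual observation plane lies at z = +D.
    phi_deg is the hogel FOV in degrees; n (assumed even) is the number of
    samples along a row of the hogel image."""
    W = 2 * D * math.tan(math.radians(phi_deg) / 2)  # width of the hogel image
    d_xi = W / n                                     # pixel interval in the plane
    xi_P = -x_eye   # intersection of the eye-hogel ray with the virtual plane
    return n // 2 + round(xi_P / d_xi)               # n_P = n/2 + Theta(xi_P/d_xi)
```

Looping this function over all hogels at a fixed eye position composes the image perceived by the viewer, one intensity sample per hogel.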

Fig. 7 Relationship between the hogel images and the viewer position.

The printer's optical recording engine, which focuses the hogel images onto the holographic emulsion, may introduce distortions in the recorded hogels that affect the perceived images. The most severe is the radial distortion caused by the optical lenses, because it changes the angular intensity distributions encoded in the hogels. In the numerical model we describe this distribution by a limited number of rays equal to the number of acquired perspective images. The intersection points of these rays with the virtual observation plane are uniformly distributed within the reconstructed undistorted hogel image; along the ξ axis they are separated from each other by Δξ. Radial distortion leads to a non-uniform angular distribution. To determine the coordinates of the intersection points in this case, we applied the following model of radial distortion:

ξ_u = ξ_d/(1 + κ r_d²),
η_u = η_d/(1 + κ r_d²),   (3)
r_d = √(ξ_d² + η_d²),
where (ξ_u, η_u) and (ξ_d, η_d) are the coordinates of an intersection point with a given intensity input in the undistorted and distorted hogel images; r_d is the radius of distortion, and κ is the geometric distortion factor. A minus or plus sign of κ corresponds to barrel or pincushion distortion respectively. One observes barrel or pincushion distortion [11] depending on whether the image magnification decreases or increases away from the center of distortion. Figure 8 illustrates the impact on the viewer of a non-uniform angular intensity distribution along a row located on the ξ axis for a hogel image with pincushion distortion. In this case the shifts of the rays occur only along this axis. Figure 8(a) presents the undistorted uniform distribution, and Fig. 8(b) shows the pincushion-distorted distribution. The viewer's eyes are located at positions E_L and E_R. In Fig. 8(a), the intensity inputs to both of the viewer's eyes are carried by the rays which intersect the virtual observation plane at points P_L and P_R. However, in Fig. 8(b), due to the pincushion distortion, the intersection points of these rays and the corresponding intensity information are shifted to the outer part of the reconstructed hogel image. The viewer sees intensity information encoded at the points P′_L and P′_R, which replace P_L and P_R, and perceives a distorted 3D structure formed by unintended intensity inputs. Note that the contribution from the distorted hogel images is also sampled at the interval Δξ.
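A sketch of this division model in NumPy; the function name is ours, and note that the model maps distorted coordinates to undistorted ones, so building a distorted hogel image in practice requires inverting it numerically:

```python
import numpy as np

def undistort_points(xi_d, eta_d, kappa):
    """Division model of radial distortion from the text:
    (xi_u, eta_u) = (xi_d, eta_d) / (1 + kappa * r_d**2),
    with kappa > 0 giving pincushion and kappa < 0 barrel distortion."""
    r2 = np.asarray(xi_d) ** 2 + np.asarray(eta_d) ** 2   # r_d squared
    denom = 1.0 + kappa * r2
    return xi_d / denom, eta_d / denom
```

With κ > 0 the undistorted coordinates are pulled toward the center relative to the distorted ones, i.e. the recorded content is pushed outward in the distorted hogel image, as in Fig. 8(b).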

Fig. 8 Undistorted (a) and pincushion distorted (b) angular intensity distribution.

4. Numerical experiments and results

We used the developed algorithm to evaluate how the pincushion-distorted hogel images affect the HS reconstruction. As a first step, we built computer-graphic models of two 3D scenes and successively acquired 10,000 perspective images of 100 × 100 views for each of them at equidistant locations along the vertical and horizontal axes using the recentering-camera method. The 3D models used - a KETI logo and a monkey placed in front of a background - are presented in Fig. 9. The acquired images were rearranged into undistorted hogel images by applying Eq. (1). Some of the captured perspective images and the rearranged undistorted hogel images for the model in Fig. 9(a) are shown in Fig. 10. The numbers below the images in Fig. 10(a) give the locations of the recentering camera viewpoints. The numbers below the images in Fig. 10(b) give the locations of the pixels taken from the perspective images to build the corresponding hogel image. Numerical reconstruction of the HSs for the two objects was made for a single viewer position. The simulation was performed separately for the R, G and B channels, but we assumed that the lens introduced the same pincushion distortion to all primary colors. We modeled observation of the hologram from a distance D which guarantees intensity inputs within the FOV θ_u of the undistorted hogel (Fig. 11). This was done to facilitate qualitative comparison between the undistorted and distorted hogel images. For this purpose we assume that the viewer's eye is at equal distance from the four edges of the modeled square hologram. The distance D is so chosen that the straight lines from the eye to the hogels located at the ends of both diagonals of this square lie on the borders of the FOV θ_u. Thus, although the FOV θ_d of the distorted hogels exceeds θ_u and increases with κ, the area within the distorted hogel image that contributes to the reconstruction remains within the borders of the undistorted hogel image. For the results below, D = 300 mm. The simulated hologram size was 300 mm × 300 mm, and the viewing angle for both holograms was 90 degrees.

Fig. 9 3D computer-graphic models used in the numerical experiment.

Fig. 10 Perspective images acquired from different positions of the virtual camera (a) and some of the rearranged hogel images (b) for the KETI logo model.

Fig. 11 Viewing geometry at different distances from the hologram.

We built distorted hogel images for the distortion coefficient κ increasing on a logarithmic scale; exemplary hogel images for the model in Fig. 9(a) are shown in Fig. 12. For the same model, Fig. 13(a) shows the reconstructed image produced by the undistorted hogel images, whereas Figs. 13(b)–13(e) give reconstructions from the distorted hogel images at different values of κ.

Fig. 12 (a) Undistorted reference hogel image for the KETI logo model, (b)–(e) pincushion distorted hogel images at different values of the distortion coefficient.

Fig. 13 Numerically reconstructed images for the KETI logo model at different values of the distortion coefficient.

We applied two metrics for objective quality assessment of the distorted images corresponding to the two 3D models used. As a first metric we chose the widely used peak signal-to-noise ratio (PSNR). This metric globally estimates the similarity between two images and is calculated as follows:

PSNR = 10 log₁₀(MAX_I²/MSE) = 20 log₁₀(MAX_I/√MSE),   (4)
MSE = (1/mn) Σ_{x=0}^{m−1} Σ_{y=0}^{n−1} [f(x, y) − g(x, y)]²,   (5)
where MAX_I is the dynamic range of the pixel values (255 for 8-bit gray-scale images), f(x, y) is the undistorted image, g(x, y) is the distorted image, and the mean square error (MSE) gives the average squared difference between them; the parameters m and n give the number of pixels along the x and y axes. The undistorted hogel image and the undistorted reconstruction are used as reference images; for the KETI logo model they are shown in Fig. 12(a) and Fig. 13(a) respectively. To compute the MSE value we summed the entries in the R, G, B channels of the distorted and undistorted images and divided the result by 3 to keep the maximum dynamic range of 255 for the pixel values. Because of its global character, the PSNR metric does not provide results consistent with human visual perception. That is why we applied the structural similarity (SSIM) index as a local metric to establish the structural changes in the images that occur due to pincushion distortion. According to the definitions in [12], we built SSIM maps as a function of distortion using the following expression:
SSIM(x, y) = (2μ_f μ_g + C₁)(2σ_fg + C₂) / [(μ_f² + μ_g² + C₁)(σ_f² + σ_g² + C₂)],   (6)
where the means μ_f and μ_g and the standard deviations σ_f and σ_g are calculated within an 11 × 11 window, w, which slides across the images f(x, y) and g(x, y) pixel by pixel; σ_fg is the covariance of f and g within the window, and C₁ = (0.01 × MAX_I)² and C₂ = (0.03 × MAX_I)² are constants. As in [12], we used a Gaussian window with a standard deviation of 1.5 pixels. The resulting 2D map of the SSIM index can be viewed as a quality map of the distorted image. Finally, the mean SSIM index (MSSIM), the mean value of the SSIM map, characterizes the overall image similarity globally.
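Both metrics can be sketched in a few lines of NumPy. This is a simplified illustration: the channel averaging follows the procedure described above, the Gaussian window is implemented with zero-padded separable convolution (so its edge handling differs slightly from [12]), and all function names are our own:

```python
import numpy as np

def to_plane(img):
    """Sum the R, G, B channels and divide by 3, as described in the text."""
    img = np.asarray(img, dtype=np.float64)
    return img.mean(axis=2) if img.ndim == 3 else img

def psnr(reference, distorted, max_i=255.0):
    """Global PSNR = 10 log10(MAX_I^2 / MSE)."""
    f, g = to_plane(reference), to_plane(distorted)
    mse = np.mean((f - g) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_i**2 / mse)

def gauss_blur(a, sigma=1.5, radius=5):
    """Separable Gaussian filter with an 11 x 11 support (radius 5)."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    a = np.apply_along_axis(np.convolve, 0, a, k, mode="same")
    return np.apply_along_axis(np.convolve, 1, a, k, mode="same")

def ssim_map(reference, distorted, max_i=255.0):
    """Local SSIM map with a sliding Gaussian window (sigma = 1.5 px)."""
    f, g = to_plane(reference), to_plane(distorted)
    C1, C2 = (0.01 * max_i) ** 2, (0.03 * max_i) ** 2
    mu_f, mu_g = gauss_blur(f), gauss_blur(g)
    var_f = gauss_blur(f * f) - mu_f**2
    var_g = gauss_blur(g * g) - mu_g**2
    cov = gauss_blur(f * g) - mu_f * mu_g
    return ((2 * mu_f * mu_g + C1) * (2 * cov + C2)) / \
           ((mu_f**2 + mu_g**2 + C1) * (var_f + var_g + C2))

def mssim(reference, distorted):
    """Mean SSIM index: the average of the SSIM map."""
    return ssim_map(reference, distorted).mean()
```

For identical images the map is everywhere 1, and increasing distortion lowers the map locally before the global MSSIM value reacts, which is why the map is a useful diagnostic alongside the scalar metrics.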

The PSNR results are shown in Fig. 14. They were obtained for the distorted hogel images reconstructed from the hogel (70,41) for the KETI logo model and from one of the hogels for the monkey model, as well as for the reconstructions of the two 3D models at the chosen viewing point. As expected, the PSNR degrades with an increasing distortion coefficient. We obtained rather close values for both models. According to the shown results, distortion affects the hogel images more strongly than the reconstructions they compose. For distortions less than 0.0001 the numerical reconstruction from the distorted hogel images is comparable to the reference image.

Fig. 14 PSNR for distorted hogel images and distorted numerical reconstructions as a function of distortion.

The SSIM maps for the hogel images reconstructed from the hogel (70,41) for the KETI logo model and for the reconstructions shown in Fig. 13 are presented in Fig. 15. The size of all maps is 90 × 90 pixels, and for all of them the SSIM index varies from −0.5 to 1, with the value 1 indicating full correlation. The maps in Fig. 15 were obtained by summation of the digital images in the R, G and B channels of the compared color images, as in the PSNR computation. As should be expected, distortion leads to structural changes which are more pronounced in the case of the hogel images. In addition, the SSIM maps built separately for the R, G and B channels of the hogel images may differ substantially, while the maps calculated for the reconstructed images show practically the same deterioration in the three channels (see Fig. 16). At large distortions, only the central part of the image reconstructed from the HS remains more or less intact. Simulation shows that these changes are acceptable up to κ = 0.0005. This is also confirmed by the plots of MSSIM in Fig. 17. The plots correspond to the distorted hogel images produced from the hogel (70,41) in Fig. 12 as well as to one of the hogels for the second model; the results for the reconstructed distorted images perceived at the viewing point for both models are also presented. The mean SSIM index for the distorted hogel images falls linearly with the logarithm of κ, while for the reconstruction it remains greater than 0.95 even at κ = 0.0001; above this value the MSSIM decreases drastically. As in the case of PSNR, the results obtained for both models are rather close.

Fig. 15 SSIM maps for distorted hogel images (top) and distorted numerical reconstructions (bottom) for the KETI logo model at increasing distortion.

Fig. 16 From left to right: SSIM maps for B, G and R channels of a distorted hogel image (70,41) for the KETI logo model (top) and a distorted numerical reconstruction (bottom) at a distortion coefficient 0.001.

Fig. 17 MSSIM for distorted hogel images and distorted numerical reconstructions as a function of distortion.

5. Conclusion

The paper presents a simulator for numerical replaying of a full-parallax, full-color, white-light viewable HS recorded with a holographic printer in the presence of radial distortion. The latter is caused by the lens used to collect the directional information coming from the 3D scene to a given point of the hologram or, from the point of view of technical implementation, to record an image with parallax information displayed on an SLM into an elemental hologram. The simulator builds a numerical reconstruction from the parallax-related images composed from directional inputs which are extracted from multiple perspective images. The latter are acquired by a virtual recentering camera which is translated along a virtual 2D camera track. Radial distortion is introduced into the model at the stage of picking up the intensity inputs which form the image seen by the viewer. The simulator was applied to evaluate, firstly, the change in the angular distribution of the light field caused by a non-ideal lens during the hologram recording and, secondly, the image deterioration at reconstruction resulting from this change. In general, the rectangular parallax-related images acquire a pincushion shape when reconstructed from pincushion-distorted elemental holograms. The simulation, however, corresponded to a viewing position which restricted the intensity inputs to the viewer's eye to a region coinciding with the undistorted image. This was done to facilitate comparison between the reference undistorted image and images with various degrees of distortion. The image quality metrics used - the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM) index - revealed that pincushion distortion has a much stronger effect on the parallax-related images than on the image reconstructed from the HS. As the pincushion distortion becomes more pronounced, the structural elements in the reconstructed parallax images move away from the image center. At large shifts, it may happen that these structural elements contribute nothing to the viewing zone of the viewer. As a whole, one can perceive a high-quality reconstruction if the distortion introduced by the objective lens of the holographic printer is not very severe. The synthesis of the HS from the acquired perspective images and its further reconstruction give the user an option to evaluate by computer means the quality of the 3D imaging provided by the captured 2D video content. Once the hogels are recorded, no distortion compensation of the reconstructed image is possible. However, we plan as a future task to investigate the possibility of forming predistorted hogel images which, after being recorded as hogels, will yield a reconstruction compensated for radial distortion. The predistortion should be based on an evaluation of the distortion introduced by the objective lens. Solution of such a task will certainly benefit from the methods for distortion compensation developed for integral imaging systems [13, 14].

Acknowledgments

This work was supported by the IT R&D program of MSIP [Fundamental technology development for digital holographic contents].

References and links

1. D. Gabor, “A new microscopic principle,” Nature 161, 777–778 (1948). [CrossRef]   [PubMed]  

2. S. A. Benton, “Survey of holographic stereograms,” in 26th Annual Technical Symposium (International Society for Optics and Photonics, 1983), pp. 15–19.

3. M. Yamaguchi, N. Ohyama, and T. Honda, “Holographic 3-D printer,” in OE/LASE’90, 14–19 Jan., Los Angeles, CA (International Society for Optics and Photonics, 1990), pp. 84–92.

4. M. Yamaguchi, N. Ohyama, and T. Honda, “Holographic three-dimensional printer: new method,” Appl. Opt. 31, 217–222 (1992). [CrossRef]   [PubMed]  

5. S. Maruyama, Y. Ono, and M. Yamaguchi, “High-density recording of full-color full-parallax holographic stereogram,” in Integrated Optoelectronic Devices 2008 (International Society for Optics and Photonics, 2008), 69120N.

6. H. Bjelkhagen and D. Brotherton-Ratcliffe, Ultra-realistic Imaging: Advanced Techniques in Analogue and Digital Colour Holography (CRC Press, 2013). [CrossRef]  

7. S. Zacharovas, A. Nikolskij, and J. Kuchin, “Dyi digital holography,” in SPIE OPTO (International Society for Optics and Photonics, 2011), 79570A.

8. J. Park, E. Stoykova, H. Kang, S. Hong, S. Lee, and K. Jung, “Numerical reconstruction of full parallax holographic stereograms,” 3D Res. 3, 1–6 (2012). [CrossRef]  

9. M. W. Halle, “The generalized holographic stereogram,” Ph.D. thesis, Massachusetts Institute of Technology (1993).

10. M. Halle, “Multiple viewpoint rendering,” in Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques (ACM, 1998), pp. 243–254.

11. A. J. Woods, T. Docherty, and R. Koch, “Image distortions in stereoscopic video systems,” in IS&T/SPIE’s Symposium on Electronic Imaging: Science and Technology (International Society for Optics and Photonics, 1993), pp. 36–48.

12. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13, 600–612 (2004). [CrossRef]   [PubMed]  

13. X. Xiao, B. Javidi, M. Martinez-Corral, and A. Stern, “Advances in three-dimensional integral imaging: sensing, display, and applications [invited],” Appl. Opt. 52, 546–560 (2013). [CrossRef]   [PubMed]  

14. H.-S. Kim, K.-M. Jeong, S.-I. Hong, N.-Y. Jo, and J.-H. Park, “Analysis of image distortion based on light ray field by multi-view and horizontal parallax only integral imaging display,” Opt. Express 20, 23755–23768 (2012). [CrossRef]   [PubMed]  




Figures (17)

Fig. 1. Acquisition of perspective images by using the recentering camera.
Fig. 2. Perspectives acquired from multiple positions of the recentering camera.
Fig. 3. The viewer position when perspective images are directly recorded onto the holographic emulsion.
Fig. 4. The viewer position when the rearranged parallax-related images are recorded onto the holographic emulsion.
Fig. 5. Rearrangement of the perspective images: (a) perspective images, (b) rearranged hogel images.
Fig. 6. Relationship between the holographic stereogram and the viewer.
Fig. 7. Relationship between the hogel images and the viewer position.
Fig. 8. Undistorted (a) and pincushion-distorted (b) angular intensity distributions.
Fig. 9. 3D computer graphic models used in the numerical experiment.
Fig. 10. Perspective images acquired from different positions of the virtual camera (a) and some of the rearranged hogel images (b) for the KETI logo model.
Fig. 11. Viewing geometry at different distances from the hologram.
Fig. 12. (a) Undistorted reference hogel image for the KETI logo model; (b)–(e) pincushion-distorted hogel images at different values of the distortion coefficient.
Fig. 13. Numerically reconstructed images for the KETI logo model at different values of the distortion coefficient.
Fig. 14. PSNR for distorted hogel images and distorted numerical reconstructions as a function of distortion.
Fig. 15. SSIM maps for distorted hogel images (top) and distorted numerical reconstructions (bottom) for the KETI logo model at increasing distortion.
Fig. 16. From left to right: SSIM maps for the B, G, and R channels of the distorted hogel image (70,41) for the KETI logo model (top) and the distorted numerical reconstruction (bottom) at a distortion coefficient of 0.001.
Fig. 17. MSSIM for distorted hogel images and distorted numerical reconstructions as a function of distortion.

Equations (8)


$$H_{ij}(k,l) = P_{kl}(i,j); \quad i = 1 \ldots I,\; j = 1 \ldots J,\; k = 1 \ldots K,\; l = 1 \ldots L$$

$$n_p = \frac{n}{2} + \Theta\!\left(\frac{\xi_p}{\Delta\xi}\right)$$

$$\xi_u = \frac{\xi_d}{1 + \kappa r_d^2}$$

$$\eta_u = \frac{\eta_d}{1 + \kappa r_d^2}$$

$$r_d = \sqrt{\xi_d^2 + \eta_d^2}$$

$$\mathrm{PSNR} = 10 \log_{10}\!\left(\frac{\mathrm{MAX}_I^2}{\mathrm{MSE}}\right) = 20 \log_{10}\!\left(\frac{\mathrm{MAX}_I}{\sqrt{\mathrm{MSE}}}\right)$$

$$\mathrm{MSE} = \frac{1}{mn} \sum_{i=0}^{m-1} \sum_{j=0}^{n-1} \left| f(i,j) - g(i,j) \right|^2$$

$$\mathrm{SSIM}(f,g) = \frac{\left(C_1 + 2\mu_f \mu_g\right)\left(C_2 + 2\sigma_{fg}\right)}{\left(C_1 + \mu_f^2 + \mu_g^2\right)\left(C_2 + \sigma_f^2 + \sigma_g^2\right)}$$
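The distortion model and quality metrics above can be sketched in a few lines of code. The following is a minimal pure-Python illustration, not the paper's implementation: it assumes the division form of the radial distortion model and grayscale images stored as nested lists, and it evaluates SSIM over a single global window rather than the 11×11 sliding window used by Wang et al. All names (`undistort`, `kappa`, `max_i`) are illustrative.

```python
import math

def undistort(xi_d, eta_d, kappa):
    """Map distorted to undistorted coordinates (assumed division model).

    kappa > 0 corresponds to pincushion distortion of the recorded image.
    """
    r2 = xi_d ** 2 + eta_d ** 2               # r_d^2
    return xi_d / (1 + kappa * r2), eta_d / (1 + kappa * r2)

def mse(f, g):
    """Mean squared error between two equally sized images."""
    m, n = len(f), len(f[0])
    return sum(abs(f[i][j] - g[i][j]) ** 2
               for i in range(m) for j in range(n)) / (m * n)

def psnr(f, g, max_i=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    e = mse(f, g)
    return float("inf") if e == 0 else 10 * math.log10(max_i ** 2 / e)

def ssim_global(f, g, max_i=255.0, k1=0.01, k2=0.03):
    """Single-window SSIM over the whole image (a simplification)."""
    m, n = len(f), len(f[0])
    N = m * n
    mu_f = sum(map(sum, f)) / N
    mu_g = sum(map(sum, g)) / N
    var_f = sum((f[i][j] - mu_f) ** 2 for i in range(m) for j in range(n)) / (N - 1)
    var_g = sum((g[i][j] - mu_g) ** 2 for i in range(m) for j in range(n)) / (N - 1)
    cov = sum((f[i][j] - mu_f) * (g[i][j] - mu_g)
              for i in range(m) for j in range(n)) / (N - 1)
    c1, c2 = (k1 * max_i) ** 2, (k2 * max_i) ** 2
    return ((c1 + 2 * mu_f * mu_g) * (c2 + 2 * cov)) / \
           ((c1 + mu_f ** 2 + mu_g ** 2) * (c2 + var_f + var_g))
```

For a color hogel image, the metrics would be evaluated per channel (B, G, R), as done for the SSIM maps in Fig. 16.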
