Abstract

A resolution-enhanced integral imaging microscope that uses lens array shifting is proposed in this study. The lens shift method increases the spatial density of the reconstructed orthographic view images while maintaining the same field of view. Multiple sets of the elemental images were captured with horizontal and vertical shifts of the micro lens array and combined into a single set of elemental images. From the combined elemental images, orthographic view images and depth slice images of the microscopic specimen were generated with enhanced resolution.

© 2009 OSA

1. Introduction

Since its invention, the optical microscope has been one of the most widely used instruments for observing microscopic objects in a wide range of applications [1]. However, optical microscopes have a fundamental limitation: they provide only two-dimensional (2-D) information about the object, not full three-dimensional (3-D) information. The 3-D information of the specimen is significant in that it can improve the accuracy of recognition and identification of the specimen. Many 3-D related studies have been conducted, including pickup, display, and image processing, and some of the results have recently been applied to the field of biomedical imaging [2–4].

Most 3-D biomedical imaging applications are based on stereoscopy, where two different images are displayed and separated by a filter so that each is directed to the corresponding eye [5]. However, stereoscopy suffers from the well-known cardboard and puppet-theater effects, which cause eye fatigue and image distortion [6]. Another technique for biomedical 3-D imaging is holography [7]. Holography provides the full volumetric information of the object without any loss. Its practical use, however, has been limited by the difficulties of capturing full-color images and implementing coherent optics.

Integral imaging (II), which captures and reproduces the directional ray distribution by using a lens array, provides real-time full-color images with full parallax [8–17]. Because of these advantages, there has been great interest in integral imaging for the past two decades. The first application of II to the microscope was explored by M. Levoy et al. [3]. They used a micro lens array to capture the elemental images. Similar to a confocal microscope, various view images and different depth slices of the microscopic specimen were reconstructed. A 3-D structure reconstruction was also demonstrated by using deconvolution based on point spread function analysis. However, the II microscope (IIM) has a low-resolution problem, which is inherent in II due to the limited ray sampling rate of the lens array.

Recently, many methods to enhance the resolution of II have been proposed and implemented. L. Erdmann et al. reported a scanning micro lens array for digital integral photography [18]. The resolution of the elemental image was enhanced by sequentially capturing multiple sets of the elemental images with a scanning micro lens array and a stationary pinhole array. J.-S. Jang et al. proposed a resolution-enhanced II pickup and display system that uses synchronously moving lens arrays [19]. S. Kishk et al. reported an improved-resolution 3-D object sensing and recognition method that uses time-multiplexed computational integral imaging [20]. They used a non-stationary micro lens array to overcome the Nyquist upper limit of the resolution. However, the results of these studies have not been applied to microscopy.

In this paper, a resolution-enhanced IIM that uses lens array shifting is proposed. In the proposed method, the non-stationary lens array technique is applied to the microscope, which, to the authors' knowledge, has not been done before. In Section 2, the principles of the conventional IIM and of the 3-D visualization methods based on elemental images, i.e. the orthographic view image reconstruction method and the depth slice reconstruction method, also called computational integral imaging reconstruction (CIIR), are explained. In Section 3, the resolution limitation of the IIM and its relationship with the field of view (FOV) are presented. In Section 4, the resolution enhancement of the reconstructed view images by the proposed lens array shifting is explained. Sections 5 and 6 present the experimental results of the resolution-enhanced view image reconstruction and the conclusions.

2. Principle of IIM and 3-D visualization

The structure of the IIM, which is based on an infinity-corrected optical system, is shown in Fig. 1.

 

Fig. 1 Structure of the IIM


The IIM consists of the objective lens, tube lens, micro lens array, and charge-coupled device (CCD). The light rays that emanate from the specimen located at the focal plane of the objective lens are collimated by the objective lens and focused on the intermediate plane by the tube lens. Suppose a specimen point is located at (x, y, z). The lateral coordinates (u, v) of its intermediate image point are given by

$$(u,v)=\left(\frac{f_1 f_2}{z(l-f_2)-f_1^2}\,x,\ \frac{f_1 f_2}{z(l-f_2)-f_1^2}\,y\right),\tag{1}$$

where f1 and f2 are the focal lengths of the objective lens and the tube lens, z is the axial distance of the specimen point from the focal plane of the objective lens, and l is the distance between the back focal plane of the objective lens and the tube lens. This intermediate image point is imaged by a number of elemental lenses to form elemental image points. For the [kx, ky]-th elemental lens, the coordinates of the elemental image point (s, t) on the CCD are given by

$$(s,t)=\left(\frac{k_x\phi\,g\left(z(l-f_2)-f_1^2\right)-g f_1 f_2\,x}{z\left(d(l-f_2)-f_2^2\right)-d f_1^2},\ \frac{k_y\phi\,g\left(z(l-f_2)-f_1^2\right)-g f_1 f_2\,y}{z\left(d(l-f_2)-f_2^2\right)-d f_1^2}\right),\tag{2}$$

where ϕ is the size of each elemental lens, g is the distance between the micro lens array and the elemental image plane, and d is the distance between the focal plane of the tube lens and the micro lens array. The local position difference of these elemental image points in neighboring elemental images, i.e. the disparity, can be calculated as

$$d_{IIM}=\frac{\phi\,g\left(z(l-f_2)-f_1^2\right)}{z\left(d(l-f_2)-f_2^2\right)-d f_1^2}.\tag{3}$$

Equation (3) indicates that the disparity between the elemental images is dependent only on the specimen depth z. Therefore, 3-D information of the specimen is acquired by IIM in a form of the disparity between the elemental images. Since the depth of the specimen is encoded as a disparity between the elemental images, the minimum depth deviation that causes barely detectable disparity change can be a measure of the longitudinal resolution of the IIM. Figure 2 shows some numerical values of the disparity calculated by Eq. (3). By explicitly analyzing, or implicitly using, the disparity in the elemental images, 3-D information of the specimen can be processed.
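As a sanity check on Eq. (3), the disparity can be evaluated directly. A minimal Python sketch, using the parameters of Fig. 2 (the function name and the millimeter unit convention are ours):

```python
def disparity(z, phi=0.125, g=2.6087, f1=20.0, f2=200.0, l=105.0, d=30.0):
    """Disparity d_IIM between neighboring elemental images, Eq. (3).
    All lengths are in millimeters; z is the depth of the specimen
    point measured from the focal plane of the objective lens."""
    num = phi * g * (z * (l - f2) - f1 ** 2)
    den = z * (d * (l - f2) - f2 ** 2) - d * f1 ** 2
    return num / den

# For an in-focus point (z = 0) the expression reduces to phi*g/d:
print(disparity(0.0))  # ≈ 0.01087 mm
```

For z = 0 the disparity is ϕg/d ≈ 10.9 µm here, and it varies with z as plotted in Fig. 2.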

 

Fig. 2 Disparity between the neighboring elemental images. f1 = 20 mm, f2 = 200 mm, l = 105 mm, g = 2.6087 mm, d = 30 mm.


Two useful 3-D visualization methods that use elemental images are the orthographic view image reconstruction and the depth slice reconstruction. Since the 3-D information of the specimen is encoded as a disparity between the elemental images, it is possible to reconstruct 3-D information by extracting the disparity map explicitly using image processing techniques. In this report, however, we use much simpler methods. In these methods, the disparity map is not extracted explicitly, but is used implicitly by collecting and remapping the pixels at a specific position in every elemental image. Since the disparity is in essence the local pixel position difference, the depth slice images or orthographic view images can be reconstructed by using local position dependent process without any massive image processing.

The concept of the orthographic view reconstruction method is shown in Fig. 3. The pixels at the same local position in every elemental image are collected to form an orthographic view image of a specific view direction. Since a single pixel is extracted from each elemental image, the pixel count of the generated orthographic view image is the same as the number of the elemental images, and the pixel count of each elemental image is the same as the number of the generated orthographic view images. Therefore, if there are i × j elemental images and each of them has x × y pixels, as shown in Fig. 3(a), x × y orthographic view images can be generated with i × j resolution each. Each orthographic view image corresponds to a specific view direction. Figure 3(b) shows two orthographic view images for two objects at different depths. In one orthographic view image the two objects have a 4-pixel separation, while in the other it is reduced to 2 pixels, which reflects the apparent view change with view direction. Therefore, by observing a number of orthographic view images, it is possible to obtain a better understanding of the 3-D structure of the specimen.
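Under the pixel mapping of Fig. 3(a), the reconstruction is a pure reordering of array axes. A minimal NumPy sketch (the (i, j, x, y) array layout is our assumption):

```python
import numpy as np

def orthographic_views(elemental):
    """Remap a stack of i x j elemental images (x x y pixels each) into
    x x y orthographic view images of i x j pixels each.
    `elemental` has shape (i, j, x, y): one x x y image per lens.
    Collecting the pixel at local position (p, q) from every elemental
    image yields the view image for direction (p, q), so the remapping
    is just an axis transpose."""
    return np.transpose(elemental, (2, 3, 0, 1))  # shape (x, y, i, j)

# Toy example: 4 x 4 elemental images with 3 x 3 pixels each.
E = np.arange(4 * 4 * 3 * 3).reshape(4, 4, 3, 3)
V = orthographic_views(E)
print(V.shape)  # (3, 3, 4, 4): 3 x 3 views, each of 4 x 4 pixels
```

View image V[p, q] is exactly the collection of pixel (p, q) from every elemental image, with no other processing.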

 

Fig. 3 Concept of orthographic view reconstruction method (a) pixel mapping (b) apparent view difference between reconstructed orthographic views


Depth slice reconstruction by CIIR is another 3-D visualization method in II. As shown in Fig. 4, for a given depth plane, each point in the depth plane corresponds to multiple elemental image points. CIIR reconstructs depth slice images by averaging these corresponding elemental image points for each point in the depth plane. The elemental image points have the same value only when the corresponding point in the depth plane is actually occupied by the specimen. Therefore, by CIIR, only the in-plane part of the specimen is focused while the out-of-plane part is blurred, which also helps the user understand the 3-D structure of the specimen.
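The averaging step can be sketched as follows. This is our own simplified version, assuming integer-pixel disparities and wrap-around shifting for brevity; a full implementation would pad at the borders instead:

```python
import numpy as np

def ciir_slice(elemental, shift_px):
    """Reconstruct one depth slice by averaging, for each point of the
    chosen depth plane, the elemental image points that correspond to it.
    `elemental` has shape (i, j, x, y); `shift_px` is the per-lens
    disparity in whole pixels for that depth plane."""
    i, j, x, y = elemental.shape
    acc = np.zeros((x, y), dtype=float)
    for kx in range(i):
        for ky in range(j):
            # Undo the per-lens disparity so that points lying on the
            # chosen depth plane line up across all elemental images.
            acc += np.roll(elemental[kx, ky],
                           (kx * shift_px, ky * shift_px), axis=(0, 1))
    # In-plane structures add coherently; out-of-plane parts blur out.
    return acc / (i * j)
```

Sweeping `shift_px` over the disparities of Eq. (3) for a range of z values yields a stack of depth slices.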

3. Resolution limitation of IIM

The resolution of IIM is fundamentally determined by the spatial density of the elemental lenses in the lens array. Since each elemental lens captures the angular distribution of the light rays at its principal point, high-density elemental lenses enable a high spatial sampling rate of the light rays, resulting in high resolution in the reconstruction. More specifically, in the orthographic view reconstruction, the number of elemental lenses directly determines the pixel count of the reconstructed view image, since a single pixel is extracted from each elemental image as explained before. The sampling interval in the reconstructed view image is likewise determined by the size of the elemental lens. In general, neither the pixel count nor the spatial sampling interval of the reconstructed view image is satisfactory. For example, suppose that the IIM consists of an objective lens of 10× magnification with a 0.28 numerical aperture (NA), and a lens array of 100 × 100 circular elemental lenses of 125 µm diameter. For a wavelength λ = 600 nm, the resolving power, i.e. the spatial sampling interval indicated by the NA of the objective lens, is λ/NA = 2.14 µm. Due to the elemental lens limitation, however, the spatial sampling interval of the reconstructed orthographic view image is restricted to 125 µm/10 = 12.5 µm, which is almost 6 times worse than the theoretical resolving power of the objective lens. Moreover, the 100 × 100 pixel count of the reconstructed orthographic view image is generally not sufficient.
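The arithmetic in this example can be checked directly (a trivial sketch of the numbers above):

```python
# Resolving power of the objective vs. the lens-array sampling limit
wavelength_um = 0.6          # 600 nm, expressed in micrometers
NA = 0.28
magnification = 10
lens_pitch_um = 125.0

diffraction_limit = wavelength_um / NA              # lambda / NA
sampling_interval = lens_pitch_um / magnification   # pitch / M

print(round(diffraction_limit, 2))                      # 2.14 (µm)
print(round(sampling_interval, 1))                      # 12.5 (µm)
print(round(sampling_interval / diffraction_limit, 1))  # 5.8
```

The lens-array sampling interval, not diffraction, is thus the binding constraint by a factor of almost 6.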

An immediate remedy for this resolution limitation would be to increase the number of elemental lenses by decreasing the elemental lens size, as shown in Fig. 5. In this case, however, the FOV of each elemental lens is decreased by the same factor, which limits the angular range of the orthographic view reconstruction. Therefore, simply reducing the elemental lens size is not a meaningful solution.

 

Fig. 5 Relationship between the spatial density and the FOV of the elemental images (a) FOV = a°, (b) FOV = a°/2.


The resolution of the depth slice reconstruction is determined by the number of the corresponding elemental image points that participate in the averaging process, which is also limited by the total number and FOV of elemental lenses [21]. Consequently, in order to enhance the resolution of the view images and depth slice images, it is critical to increase the spatial density of the elemental lenses while maintaining the same FOV.

4. Resolution-enhanced IIM by lens shifting

The lens shift method is a useful technique for increasing the spatial density while maintaining the same FOV. Figure 6 shows the concept of the lens shift method. Multiple sets of the elemental images are captured sequentially with lens array shifts and combined to form a single set of high spatial density elemental images. If the resolution-enhancing factor of the reconstructed view image is m, m² sets of the elemental images are captured with shift steps of ϕ/m along the horizontal and vertical directions.

 

Fig. 6 Principle of lens shift method (a) stationary micro lens array and the captured elemental images (b) lens array shifted in the horizontal direction by ϕ/m and the captured elemental images (c) high spatial density elemental images combined from the m² sets of the elemental images captured with lens array shifts (d) x × y orthographic view images reconstructed from the synthesized high spatial density elemental images.


As shown in Figs. 6(a) and (b), let E(kx, ky; px, py) denote the [px, py]-th elemental image in the set captured with a lens array shift of (ϕ/m)kx horizontally and (ϕ/m)ky vertically. After capturing the m² sets of the elemental images, they are synthesized sequentially, as shown in Fig. 6(c). Elemental images of the same index in each set are gathered so that they are located next to each other in the synthesized set of the elemental images. By this method, the spatial density is increased by a factor of m in each direction without any loss of FOV. Figure 6(d) shows a set of the orthographic view images reconstructed from Fig. 6(c). Each view image now has an m-times enhanced pixel count of mi × mj, while the number of view images remains unchanged at x × y. The resolution of the depth slice reconstruction is also enhanced owing to the increased number of elemental images.
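The synthesis step above amounts to a strided interleave of the captured sets. A minimal NumPy sketch (the array layout and the sign convention of the shift index are our assumptions):

```python
import numpy as np

def synthesize(sets, m):
    """Interleave m*m captured sets into one high-density set.
    `sets[kx][ky]` has shape (i, j, x, y) and was captured with the
    lens array shifted by (kx*phi/m, ky*phi/m).  The shift index is
    assumed to map to increasing lens position; flip the offsets if
    the scan runs in the opposite direction."""
    i, j, x, y = sets[0][0].shape
    out = np.empty((m * i, m * j, x, y), dtype=sets[0][0].dtype)
    for kx in range(m):
        for ky in range(m):
            # Elemental images with the same [px, py] index from the
            # m*m sets become neighbors in the synthesized set.
            out[kx::m, ky::m] = sets[kx][ky]
    return out
```

The output has m-times more elemental images in each direction, so the orthographic views reconstructed from it gain the same factor in pixel count while the FOV of each elemental lens is unchanged.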

Note that, in the proposed method, the resolution of the reconstructed images is still fundamentally limited by the NA of the objective lens. The contribution of the lens shifting is to make better use of the resolving power of the objective lens. As discussed in Section 3, the coarse sampling interval of the lens array is the dominant factor limiting the image resolution in the conventional IIM. In the proposed method, however, the sampling interval is decreased m times by the lens shifting technique, making it comparable to the resolving power of the objective lens. Therefore, ignoring any image degradation caused by the use of the lens array, the image quality of the proposed method can approach that of a usual 2-D microscope with an objective lens of the same NA by increasing m accordingly.

5. Experimental results

The experimental setup was composed of an objective lens, a video microscope unit, a micro lens array, a Piezo-actuator, and a CCD. The magnification of the objective lens was 10×, the NA was 0.28, and the focal length was 20 mm. The video microscope unit had a 1× tube lens and telecentric reflective illumination. The focal length and the pitch of the micro lens array were 2.4 mm and 125 µm, respectively. The Piezo-actuator was used for bi-directional scanning along the horizontal and vertical directions. The elemental images formed by the micro lens array were relayed by a 1:1 macro lens and captured by the CCD. The implemented experimental setup and its schematic are shown in Fig. 7.

 

Fig. 7 Experimental setup: (a) experimental setup and (b) its schematic.


A dayfly located around the focal plane of the objective lens was used as the object. Figure 8 shows a 2-D image of the object captured without the micro lens array. The size of the object was 1.3 mm. The micro lens array was mounted on the Piezo-actuator. To enhance the resolution of the reconstructed view image by 5 times, i.e. m = 5, the Piezo-actuator moved the micro lens array in 25 µm steps along the horizontal and vertical directions to capture 25 sets of the elemental images. The experimental results are shown in Figs. 9–12. Figure 9(a) shows one set of the captured elemental images. There are 45 × 45 elemental images, each of which has a 15 × 15 pixel count. Figure 9(b) is a set of the orthographic view images reconstructed from the elemental images shown in Fig. 9(a). Each reconstructed orthographic view image has a 45 × 45 pixel count, and the number of views is 15 × 15. The brightness variation across the reconstructed view images came from the non-uniform intensity distribution in each elemental image, which can be remedied by improving the illumination optics. Figure 10(a) shows the high spatial density elemental images synthesized from the 25 sets of captured elemental images by the proposed method. The pixel count of each elemental image is still 15 × 15. The number of the elemental images, however, increased to 225 × 225, making the total pixel count of all elemental images 3375 × 3375. The view images generated from the high spatial density elemental images are shown in Fig. 10(b). Figure 11 shows a side-by-side comparison of the magnified view images from Fig. 9(b) and Fig. 10(b). Note that the pixel count of the generated view image is increased from 45 × 45 in Fig. 11(a) to 225 × 225 in Fig. 11(b). It can be observed that the proposed method shows much enhanced resolution, with a 5 times larger pixel count along each direction. The diagonal mesh effect visible in Fig. 11(b) was caused by imperfect calibration of the lens array in the experimental setup and should be eliminated by aligning the lens array more precisely.

 

Fig. 8 2-D image of the object captured without micro lens array


 

Fig. 9 Experimental results (a) single set of the elemental images captured with stationary lens array (b) reconstructed orthographic view images. Pixel count of each orthographic view is 45 × 45 and the number of view images is 15 × 15.


 

Fig. 12 Movie of depth slice images generated from (a) (Media 3) one set of the elemental images captured by a stationary lens array (Fig. 9) and (b) (Media 4) high density elemental images obtained by using lens array shifting (Fig. 10).


 

Fig. 10 Experimental results (a) high spatial density elemental images combined from 25 sets of the elemental images that were captured with lens array shifts (b) reconstructed orthographic view images. The pixel count of each orthographic view is 225 × 225 and the number of view images is 15 × 15.


 

Fig. 11 Movie of the orthographic view images that were reconstructed by using (a) (Media 1) one set of the elemental images captured with a stationary lens array (Fig. 9) and (b) (Media 2) high density elemental images obtained by using lens array shifting (Fig. 10).


Depth slice images were also reconstructed by CIIR. Figure 12 shows the generated depth slice images from one set (675 × 675 pixel count) of the elemental images and a synthesized set (3375 × 3375 pixel count) of the elemental images. The resolution enhancement by the proposed method was also apparent.

6. Conclusion

A resolution-enhanced integral imaging microscope that uses lens array shifting was proposed. The micro lens array, mounted on the Piezo-actuator, was shifted along the horizontal and vertical directions. The CCD captured multiple sets of the elemental images in synchronization with the lens array shifts, and the captured sets were combined to form a single set of high spatial density elemental images. From these combined elemental images, resolution-enhanced orthographic view images were reconstructed without sacrificing the angular range of the view reconstruction. Depth slice images were also reconstructed with improved resolution. The experimental results clearly showed the resolution enhancement in the reconstructed view images and depth slice images, which demonstrates the feasibility of the proposed method.

Acknowledgements

This work was partly supported by the grant of the Korean Ministry of Education, Science and Technology (The Regional Core Research Program / Chungbuk BIT Research-Oriented University Consortium). This work was also partly supported by the Korea Research Foundation Grant funded by the Korean Government (KRF-2008-313-D00727).

References and links

1. S. Inoue and R. Oldenbourg, Handbook of Optics (McGraw-Hill, 1995), Chap. 17.

2. J.-S. Jang and B. Javidi, “Three-dimensional integral imaging of micro-objects,” Opt. Lett. 29(11), 1230–1232 (2004). [CrossRef]   [PubMed]  

3. M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz, “Light Field Microscopy,” ACM Trans. Graph. 25(3), 31–42 (•••).

4. H. Liao, N. Hata, S. Nakajima, M. Iwahara, I. Sakuma, and T. Dohi, “Surgical navigation by autostereoscopic image overlay of integral videography,” IEEE Trans. Inf. Technol. Biomed. 8(2), 114–121 (2004). [CrossRef]   [PubMed]  

5. R. Ostnes, V. Abbott, and S. Lavender, “Visualisation techniques: An overview - Part 1,” The Hydrographic Journal (113), 3–7 (2004).

6. H. Yamanoue, M. Okui, and F. Okano, “Geometrical analysis of puppet-theater and cardboard effects in stereoscopic HDTV images,” IEEE Trans. Circuits Syst. Video Technol. 16(6), 744–752 (2006). [CrossRef]  

7. D. Gabor, “A new microscopic principle,” Nature 161(4098), 777–778 (1948). [CrossRef]   [PubMed]  

8. G. Lippmann, “La photographie integrale,” C. R. Acad. Sci. 146, 446–451 (1908).

9. J.-H. Park, H.-R. Kim, Y. Kim, J. Kim, J. Hong, S.-D. Lee, and B. Lee, “Depth-enhanced three-dimensional-two-dimensional convertible display based on modified integral imaging,” Opt. Lett. 29(23), 2734–2736 (2004). [CrossRef]   [PubMed]  

10. Y. Kim, H. Choi, S.-W. Cho, Y. Kim, J. Kim, G. Park, and B. Lee, “Three-dimensional integral display using plastic optical fibers,” Appl. Opt. 46(29), 7149–7154 (2007). [CrossRef]   [PubMed]  

11. J. Kim, S.-W. Min, Y. Kim, and B. Lee, “Analysis on viewing characteristics of an integral floating system,” Appl. Opt. 47(19), D80–D86 (2008). [CrossRef]   [PubMed]  

12. H. Liao, M. Iwahara, N. Hata, and T. Dohi, “High-quality integral videography using a multiprojector,” Opt. Express 12(6), 1067–1076 (2004), http://www.opticsinfobase.org/abstract.cfm?URI=oe-12-6-1067. [CrossRef]   [PubMed]  

13. Y. Kim, J. Kim, J.-M. Kang, J.-H. Jung, H. Choi, and B. Lee, “Point light source integral imaging with improved resolution and viewing angle by the use of electrically movable pinhole array,” Opt. Express 15(26), 18253–18267 (2007), http://www.opticsinfobase.org/abstract.cfm?URI=oe-15-26-18253. [CrossRef]   [PubMed]  

14. H. Liao, T. Dohi, and M. Iwahara, “Improved viewing resolution of integral videography by use of rotated prism sheets,” Opt. Express 15(8), 4814–4822 (2007), http://www.opticsinfobase.org/abstract.cfm?URI=oe-15-8-4814. [CrossRef]   [PubMed]  

15. J.-H. Park, G. Baasantseren, N. Kim, G. Park, J.-M. Kang, and B. Lee, “View image generation in perspective and orthographic projection geometry based on integral imaging,” Opt. Express 16(12), 8800–8813 (2008), http://www.opticsinfobase.org/abstract.cfm?URI=oe-16-12-8800. [CrossRef]   [PubMed]  

16. J. Arai, H. Hoshino, M. Okui, and F. Okano, “Effects of focusing on the resolution characteristics of integral photography,” J. Opt. Soc. Am. A 20(6), 996–1004 (2003). [CrossRef]  

17. H. Liao, M. Iwahara, T. Koike, N. Hata, I. Sakuma, and T. Dohi, “Scalable high-resolution integral videography autostereoscopic display with a seamless multiprojection system,” Appl. Opt. 44(3), 305–315 (2005). [CrossRef]   [PubMed]  

18. L. Erdmann and K. J. Gabriel, “High-resolution digital integral photography by use of a scanning microlens array,” Appl. Opt. 40(31), 5592–5599 (2001). [CrossRef]  

19. J.-S. Jang and B. Javidi, “Improved viewing resolution of three-dimensional integral imaging by use of nonstationary micro-optics,” Opt. Lett. 27(5), 324–326 (2002). [CrossRef]  

20. S. Kishk and B. Javidi, “Improved resolution 3D object sensing and recognition using time multiplexed computational integral imaging,” Opt. Express 11(26), 3528–3541 (2003), http://www.opticsinfobase.org/abstract.cfm?URI=oe-11-26-3528. [CrossRef]   [PubMed]  

21. D.-H. Shin and E.-S. Kim, “Computational integral imaging reconstruction of 3D Object using a depth conversion technique,” J. Opt. Soc. K. 12(3), 131–135 (2008). [CrossRef]  

[CrossRef]

IEEE Trans. Inf. Technol. Biomed. (1)

H. Liao, N. Hata, S. Nakajima, M. Iwahara, I. Sakuma, and T. Dohi, “Surgical navigation by autostereoscopic image overlay of integral videography,” IEEE Trans. Inf. Technol. Biomed. 8(2), 114–121 (2004).
[CrossRef] [PubMed]

J. Opt. Soc. Am. (1)

J. Arai, H. Hoshino, M. Okui, and F. Okano, “Effects of focusing on the resolution characteristics of integral photography,” J. Opt. Soc. Am. 20(6), 996–1004 (2003).
[CrossRef]

J. Opt. Soc. K. (1)

D.-H. Shin and E.-S. Kim, “Computational integral imaging reconstruction of 3D Object using a depth conversion technique,” J. Opt. Soc. K. 12(3), 131–135 (2008).
[CrossRef]

Nature (1)

D. Gabor, “A new microscopic principle,” Nature 161(4098), 777–778 (1948).
[CrossRef] [PubMed]

Opt. Express (5)

Opt. Lett. (3)

The Hydrographic Journal (1)

R. Ostnes, V. Abbottand, and S. Lavender, “Visualisation techniques: An overview - Part 1,” The Hydrographic Journal (113), 3–7 (2004).

Other (1)

S. Inoue and R. Oldenbourg, Handbook of Optics(McGrawHill, 1995), Chap. 17.

Supplementary Material (4)

» Media 1: MOV (494 KB)     
» Media 2: MOV (544 KB)     
» Media 3: MOV (671 KB)     
» Media 4: MOV (764 KB)     



Figures (12)

Fig. 1

Structure of the IIM.

Fig. 2

Disparity between neighboring elemental images. f1 = 20 mm, f2 = 200 mm, l = 105 mm, g = 2.6087 mm, d = 30 mm.

Fig. 3

Concept of the orthographic view reconstruction method: (a) pixel mapping, (b) apparent view difference between reconstructed orthographic views.

Fig. 4

Concept of CIIR.

Fig. 5

Relationship between the spatial density and the FOV of the elemental images: (a) FOV = a°, (b) FOV = a°/2.

Fig. 6

Principle of the lens shift method: (a) stationary micro lens array and the captured elemental images, (b) lens array shifted in the horizontal direction by φ/m and the captured elemental images, (c) high-spatial-density elemental images combined from the m² sets of elemental images captured with lens array shifts, (d) x × y orthographic view images reconstructed from the synthesized high-spatial-density elemental images.

Fig. 7

Experimental setup: (a) experimental setup and (b) its schematic.

Fig. 8

2-D image of the object captured without the micro lens array.

Fig. 9

Experimental results: (a) single set of elemental images captured with the stationary lens array, (b) reconstructed orthographic view images. The pixel count of each orthographic view is 45 × 45 and the number of view images is 15 × 15.

Fig. 10

Experimental results: (a) high-spatial-density elemental images combined from 25 sets of elemental images captured with lens array shifts, (b) reconstructed orthographic view images. The pixel count of each orthographic view is 225 × 225 and the number of view images is 15 × 15.

Fig. 11

Movie of the orthographic view images reconstructed from (a) (Media 1) one set of elemental images captured with the stationary lens array (Fig. 9) and (b) (Media 2) the high-density elemental images obtained by lens array shifting (Fig. 10).

Fig. 12

Movie of the depth slice images generated from (a) (Media 3) one set of elemental images captured with the stationary lens array (Fig. 9) and (b) (Media 4) the high-density elemental images obtained by lens array shifting (Fig. 10).
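The combination step described in the captions of Figs. 6 and 10 — merging the m² elemental-image sets captured with φ/m lens array shifts into one high-spatial-density set — can be sketched as a pixel interleave. This is a minimal illustration only: the grid layout, shift ordering, and per-pixel interleaving below are assumptions for demonstration, not the authors' implementation.

```python
import numpy as np

def combine_shifted_sets(sets):
    """Interleave m*m elemental-image sets, each captured after a phi/m
    lens-array shift, into one set with m-times higher spatial density.

    Assumes `sets` is ordered row-major over the (vertical, horizontal)
    shift grid and that all images share the same shape and dtype.
    """
    m = int(np.sqrt(len(sets)))          # m^2 shifted captures
    h, w = sets[0].shape
    out = np.zeros((h * m, w * m), dtype=sets[0].dtype)
    for idx, img in enumerate(sets):
        i, j = divmod(idx, m)            # shift offsets within one pitch
        out[i::m, j::m] = img            # interleave on the dense grid
    return out
```

For m = 5 (as in Fig. 10, 25 sets), a 45 × 45 grid per view becomes 225 × 225, matching the reported pixel counts.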

Equations (3)


$$(u,v)=\left(\frac{f_1 f_2\,x}{z(l-f_2)-f_1^2},\ \frac{f_1 f_2\,y}{z(l-f_2)-f_1^2}\right),$$

$$(s,t)=\left(\frac{k_x\varphi\,g\!\left(z(l-f_2)-f_1^2\right)-g f_1 f_2\,x}{z\!\left(d(l-f_2)-f_2^2\right)-d f_1^2},\ \frac{k_y\varphi\,g\!\left(z(l-f_2)-f_1^2\right)-g f_1 f_2\,y}{z\!\left(d(l-f_2)-f_2^2\right)-d f_1^2}\right),$$

$$d_{\mathrm{IIM}}=\frac{\varphi\,g\!\left(z(l-f_2)-f_1^2\right)}{z\!\left(d(l-f_2)-f_2^2\right)-d f_1^2}.$$
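The disparity formula for d_IIM can be evaluated numerically with the parameter values listed in the caption of Fig. 2 (f1 = 20 mm, f2 = 200 mm, l = 105 mm, g = 2.6087 mm, d = 30 mm). A minimal sketch, with the lens pitch φ left as an argument since Fig. 2 does not specify it:

```python
def disparity(z, phi, f1=20.0, f2=200.0, l=105.0, g=2.6087, d=30.0):
    """Disparity d_IIM (mm) between neighboring elemental images
    for an object point at depth z, per Eq. (3). Units: mm.
    Parameter defaults are the Fig. 2 values; phi is the lens pitch."""
    num = phi * g * (z * (l - f2) - f1**2)
    den = z * (d * (l - f2) - f2**2) - d * f1**2
    return num / den
```

At z = 0 the expression reduces to d_IIM = φg/d, a quick sanity check on the reconstruction.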
