Abstract

A large open aperture in an optical system can capture high-resolution images but yields a shallow depth of field. To overcome this issue, this study investigated a low-cost, readily available method for retrofitting microscopy imaging systems to achieve 3D focus scanning. Specifically, a procedure for fabricating variable focus spinners with dissimilar plates is introduced, and a sequence of 12 images was captured in different focal planes. The image scale and phase were corrected, and the in-focus pixels were extracted by employing the Laplacian operator. Finally, an all-in-focus sharp image was generated, and a depth map was obtained.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Lenses are essential elements in optical systems, as they are commonly used to transmit and/or bend light beams. Traditionally, lenses are made of solid materials, such as glass and plastic, and two or more lenses are moved mechanically over specific distances to realize focusing or defocusing [1]. The back-and-forth movement can be driven by the rack-and-pinion motion of a motor. However, continual forward and reverse rotation consumes considerable power, because positive and negative currents are applied alternately to control the motor, which quickly triggers a safety stop to prevent overheating. Moreover, it is quite difficult to achieve high-speed responses under such conditions. A crank mechanism converts the circular motion of a motor into a reciprocating motion, but problems such as vibration, noise, and the dead center remain unsolved. Such mechanisms are used to control the distance between the solid lenses in a system for focal length adjustment. Recently, variable focus lenses, in which a single lens cell can realize different focal lengths by itself, have been developed as alternatives [2–9]. Different types of variable focus lenses as well as some compact optical systems employing them have been reported [10–15]. However, considerable experience is necessary to develop variable focus lens systems, and the lens devices are not easy to handle [5–9].

In addition, although high-resolution images can be captured by using a large open aperture, the region outside the shallow depth of field (DoF) will be blurry and information will be lost. Using a small open aperture can increase the DoF and keep more of the object in focus but will decrease the image resolution [1]. Hence, there is a trade-off between the DoF and the resolution. A series of images focused in different focal planes can be captured and combined to generate a high-resolution image [10,16], but such capturing systems are difficult to realize. For example, a liquid lens must be built to obtain such an image sequence.

To overcome these issues, a low-cost, readily available variable focus system is proposed and its fabrication method is introduced in this paper. High-speed focal scanning along the optical axis proved feasible with a prototype of this system, so a series of images focused in different focal planes could be acquired. Furthermore, an all-in-focus image and a depth map were generated. In this case, a target (an Arduino Mini chip) was placed at 45° with respect to the optical axis and observed by using a microscopy lens kit while mounting transparent plates with various thicknesses (ranging from 0 mm to 11 mm) on a circular spinner. The change in the focal length was confirmed, and an all-in-focus image as well as a corresponding 3D depth map were obtained. The image processing algorithm enabled robust correction of the errors arising from the rough precision of the proposed system.

The remainder of this paper is organized as follows. Section 2 explains the design principle of a variable focus unit. Section 3 outlines the fabrication of the prototype (Section 3.1), confirms the focal length variation by analyzing the pixel gradient changes (Section 3.2), and explains the image processing method used to generate an all-in-focus image and a depth map. Section 4 discusses advantages and disadvantages. Section 5 concludes this paper.

2. Variable focus system design

When a beam passes through two materials with different refractive indices, the optical path is refracted by a certain angle. There is a triangular relationship between the observed target and the last lens in the imaging system. Thus, when a transparent plate is placed in front of a camera, the focusing plane of the original system is shifted along the optical axis. In this section, the design principle of such a system is introduced and a simulation that was conducted using this design is described.

2.1. Design principle

According to the law of refraction (known as Snell’s law), which is used to describe the relationship between the angles of incidence and refraction when a light beam encounters a boundary between two different isotropic media, the ratio between the sines of the angles of incidence and refraction is equal to the ratio between the indices of refraction of the two media:

\[ \frac{\sin\theta_1}{\sin\theta_2} = \frac{N_2}{N_1}, \tag{1} \]
where θ1 is the angle of incidence, θ2 is the angle of refraction, and N1 and N2 are the refractive indices of the materials. This scenario is illustrated in Fig. 1(a).

According to the law of refraction, when a light beam passes through a transparent plate with a certain index of refraction, the beam is refracted twice: when it hits the upper boundary and when it exits the lower boundary. The final light beam is parallel to the incident beam but shifted away from it by a certain degree, given that the refractive index N2 of Medium 2 is greater than the refractive index N1 of Medium 1. This situation is illustrated in Fig. 1(b).

In Fig. 1(c), a lens cell in Medium 1 has a focal length of f1. An isosceles triangle is formed by the diameter of the lens and the focal point, and the height of this triangle is considered to be the back focal length from the last surface to the focal point. Now suppose that a transparent plate with refractive index N2 > N1 is placed between the lens and the original focal point. The angle of incidence is greater than the angle of refraction at point A. At point B, the angle of incidence is equal to the former angle of refraction and the angle of refraction is equal to the former angle of incidence. Thus, the hypotenuse is shifted parallel to the original one, resulting in a movement of the apex, i.e., the focal point. The following calculations can be performed to determine the extent of this movement.

As shown in Fig. 1(c), the length \(\overline{AB}\) in the right triangle ΔABO′ can be calculated by using

\[ \overline{AB} = \frac{t}{\cos\theta_2}. \tag{2} \]

Fig. 1 Illustration of the law of refraction. (a) Basic refraction scenario, where the ratio between the sines of the angles of incidence and refraction is equal to the ratio between the indices of refraction of the two media. (b) A light beam passes through a transparent plate with a certain index of refraction and is refracted twice: when it hits the upper boundary and when it passes through the lower boundary. The final light beam is parallel to the incident beam but is shifted by a certain amount. (c) Extension of the focal length from f1 to f2 by placing Medium 2 in front of a lens, given that N2 > N1.

Similarly, in the right triangle ΔABO, the parallel shift σ can be expressed in terms of AB¯ as follows:

\[ \sigma = \overline{AB} \times \sin(\theta_1 - \theta_2). \tag{3} \]
Combining Eqs. (1), (2), and (3) yields

\[ \sigma = \left(1 - \frac{N_1}{N_2} \times \frac{\cos\theta_1}{\cos\theta_2}\right) \times t \times \sin\theta_1. \tag{4} \]
The movement of the focus d = |f1 − f2| can be calculated by using the right triangle ΔCDE as follows:

\[ d = \left(1 - \frac{N_1}{N_2} \times \frac{\cos\theta_1}{\cos\theta_2}\right) \times t. \tag{5} \]
According to Eq. (5), five parameters, namely θ1, θ2, N1, N2, and the plate thickness t, define the focal point movement. Therefore, the focal length of the system becomes dynamically controllable when one of these parameters is tunable. Normally, when a window is added in air, N1 < N2. Hence, the focal length is extended compared to its original value.

2.2. Simulation

In this study, we used Zemax (optical CAD software) to simulate an imaging system including an ideal (paraxial) lens with a focal length of 100 mm and a BK7 glass plate (refractive index 1.5168) placed 37 mm away from the lens, as shown in Fig. 2. When the glass plate was 0 mm thick, the total axial length remained at its original value of 100 mm. When the glass plate was 5 mm thick, the total axial length became 101.70359 mm. Finally, the total axial length could be extended to 103.74789 mm when the plate was 11 mm thick.

The results of the CAD simulation confirmed the relationship shown in Eq. (5). The glass plate must be set between the lens and the focal point, but its exact axial position does not affect the focal point movement. In general, when an experiment is conducted, Medium 1 in Eq. (5) can be considered to be air. Hence, the main factors are the refractive index and thickness of Medium 2.
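As a sanity check, Eq. (5) in the paraxial limit (cos θ1/cos θ2 ≈ 1, with Medium 1 taken as air) reproduces the simulated values. The short Python sketch below is illustrative only; the helper name focal_shift is ours and is not part of the paper's code.

```python
# Paraxial form of Eq. (5): for near-axis rays cos(theta1)/cos(theta2) ~ 1,
# so the focus moves by d = (1 - N1/N2) * t when a plate of thickness t is added.
def focal_shift(t_mm, n_plate, n_medium=1.0):
    """Axial focal point displacement (mm) caused by a plane-parallel plate."""
    return (1.0 - n_medium / n_plate) * t_mm

N_BK7 = 1.5168  # BK7 refractive index for green light, as in the simulation
for t in (0.0, 5.0, 11.0):
    print(f"t = {t:4.1f} mm -> total axial length = {100.0 + focal_shift(t, N_BK7):.5f} mm")
# Matches the Zemax results: 100.00000, 101.70359, and 103.74789 mm.
```

The agreement with the Zemax values confirms that, in the paraxial regime, only the plate's refractive index and thickness matter.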

Fig. 2 Three screenshots of the Zemax simulation. An ideal (paraxial) lens with a focal length of 100 mm was employed as the first surface. Only green light was considered, for which the refractive index of BK7 glass is n = 1.5168. (a) When the thickness of the glass plate was 0 mm, the original total axial length was 100 mm. (b) When the thickness of the glass plate was 5 mm, the total axial length became 101.70359 mm. (c) When the thickness of the plate was 11 mm, the total axial length was extended to 103.74789 mm.

3. Experiment

3.1. Experimental setup

A sketch of the experimental setup is provided in Fig. 3(a), and an experimental image is shown in Fig. 3(b). According to the calculations and simulation results, the focal length changed on the order of millimetres, so a microscopy vision lens was employed. The microlens (EO-55-359, Edmund Optics) was mounted on a high-speed camera (acA1300-200uc, Basler), and a DoF 5-15 target (Edmund Optics, EO-54-440) was utilized. The distance from the last lens to the bottom of the DoF target was 127 mm, and the surface of the target was inclined at 45° with respect to the optical axis to confirm the change of the focal plane.

Fig. 3 Experimental setup. (a) Sketch of the setup of the variable focus imaging system. (b) Photograph of the experimental setup. The variable focus spinner had 12 apertures that were mounted with different transparent plates.

A circular disk (diameter = 150 mm) with 12 apertures in a clock-like arrangement was prepared. Transparent plates (refractive index 1.5170) with dimensions of 20 mm × 20 mm and thicknesses ranging from 1 mm to 11 mm were fabricated and sequentially fixed in the holes of the disk; one aperture was left empty to provide the 0 mm condition. The disk was mounted on a DC motor (Orient Motor, BMU230A-A-1), and the axis of the motor was parallel to the axis of the imaging system, so the round disk was perpendicular to the optical axis. The center of one of the 12 holes was aligned with the imaging axis. A photoelectric laser sensor (Panasonic, EX-L211-P) was set at the hole diametrically opposite the imaging axis. The output signal was connected to the camera trigger, so that an acquisition command could be generated while the disk was running. One rotation of the disk could thus produce 12 frames focused in 12 different focal planes.

3.2. Experimental results

3.2.1. Confirmation of changing focal length

Twelve images were captured while changing the plate thickness from 0 mm to 11 mm. According to Eq. (5), the focal length of the system should change gradually. Six of the images obtained are presented in Fig. 4. The imaged black and white bars can be found on the left side of each photo. Each photo had a resolution of 1024 × 1280 pixels. When an image is treated as a matrix, its gradient can be calculated by using

\[ \nabla F = \frac{\partial F}{\partial x}\hat{i} + \frac{\partial F}{\partial y}\hat{j}. \]

This equation returns the x and y components of the 2D numerical gradient. As indicated in Fig. 4, the 1024 vertical pixels along the marked blue lines were used to calculate the gradient. The gradients were computed by using Matlab, and all of the resulting plots are provided in Fig. 4(b). The Matlab code and the raw images can be found in Code 1 [17]. The higher peak-to-valley (P–V) values correspond to the in-focus area. When the thickness is 0 mm, the focus is on the far right in the plot. Since the horizontal axis represents the pixel location along the blue line, this finding simply confirms that the focal point was originally close to the bottom of the image. With increasing thickness of the transparent plate, the focal point dynamically moves from right to left in the plots, reflecting the motion of the focal point from the bottom to the top of the image. Thus, the focal length of the imaging system could be extended by adding the transparent plate.
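The per-column gradient analysis can also be reproduced outside Matlab. The following NumPy sketch is illustrative (the helper column_gradient_pv and the synthetic column are ours, not from Code 1): it computes the 1D gradient along one image column together with a sliding peak-to-valley measure whose maxima mark the in-focus rows, analogous to the plots in Fig. 4(b).

```python
import numpy as np

def column_gradient_pv(image, col, window=64):
    """Gradient along one image column plus a sliding peak-to-valley (P-V)
    measure; larger P-V values mark the in-focus rows, as in Fig. 4(b)."""
    line = image[:, col].astype(float)      # the 1024 pixels on the marked line
    grad = np.gradient(line)                # 1D counterpart of the 2D gradient
    pv = np.array([np.ptp(grad[i:i + window])
                   for i in range(len(grad) - window)])
    return grad, pv

# Synthetic check: a column that carries sharp bars only in its middle third
# should yield its largest P-V values in that region.
rows = np.arange(1024)
col = np.where((rows > 340) & (rows < 680), 255 * (rows % 8 < 4), 128)
grad, pv = column_gradient_pv(col[:, None], 0)
print("sharp region detected near row", int(np.argmax(pv)))
```

On real frames the same routine, applied along the blue line of each of the 12 images, would trace the P–V peak moving from the bottom to the top of the target as the plate thickness grows.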

Fig. 4 Results obtained by using transparent plates of various thicknesses. (a) Six images acquired by using transparent plates with thicknesses of 0, 2, 4, 6, 8, and 10 mm. The movement of the focus point is clearly observable. The 1024 pixels along the blue line marked on the photographs were used to determine the focus. (b) Gradients calculated by using all 12 of the obtained images. The region in each plot with higher P–V values corresponds to the in-focus area, so the shift of this region from right to left confirms that the focus changed gradually. The Matlab code and the raw images can be found in Code 1 [17].

3.2.2. All-in-focus and depth images

Twelve images corresponding to different focal lengths were captured sequentially during one rotation of the motor and were utilized to generate a depth map. The portion of the target in the focal plane was in focus in each case, while the part outside the focal plane was out of focus. Because the focal length was scanned along the optical axis, the focal plane was changed dynamically. The in-focus pixels in these images were extracted, and an all-in-focus image and a depth map were generated by image processing.

A related method called shape from focus was previously investigated [16, 18]. However, it was necessary to align the processed image sequences beforehand. When the phases of the images were not aligned, the pixels of the in-focus region in each image could be detected, but dislocation occurred when stacking the pixels from different layers to obtain the final all-in-focus images.

Two main factors could account for the phase disorder. Firstly, the focal length of the system was changed to perform 3D focal scanning, so the field of view, and consequently the scale of the captured images, changed simultaneously. Secondly, the flat surface of each transparent plate should be perpendicular to the optical axis; however, a slight tilt was inevitable due to hand assembly. Thus, prism-like image shifting occurred, resulting in image phase shifting.

To improve this situation, the series of images should be phase-aligned first to make the system more robust against image phase disorder. After phase correction, the in-focus pixels could be analyzed and extracted for combination into an all-in-focus image. Finally, a depth map image could be generated according to the specific focal length information.
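The phase-alignment step relies on phase-only correlation [20–22]: the normalized cross-spectrum of two images inverse-transforms to a sharp peak at their relative translation. A minimal NumPy sketch of this idea follows (the function name phase_correlation_shift is ours; the paper's implementation is in Matlab, Code 2 [19]):

```python
import numpy as np

def phase_correlation_shift(ref, img):
    """Estimate the integer translation of img relative to ref by
    phase-only correlation: normalize the cross-spectrum to unit
    magnitude and locate the peak of its inverse FFT [20-22]."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(img)
    cross = F2 * np.conj(F1)
    cross /= np.abs(cross) + 1e-12          # keep only the phase
    corr = np.abs(np.fft.ifft2(cross))
    tv, tu = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peaks in the upper half-range to negative shifts.
    if tu > ref.shape[1] // 2:
        tu -= ref.shape[1]
    if tv > ref.shape[0] // 2:
        tv -= ref.shape[0]
    return int(tu), int(tv)

# Check with a known circular shift of a random texture.
rng = np.random.default_rng(0)
ref = rng.random((128, 128))
img = np.roll(ref, shift=(5, -3), axis=(0, 1))   # shifted copy of ref
print(phase_correlation_shift(ref, img))          # -> (-3, 5)
```

The detected (Tu, Tv) pair then feeds the translation matrix used in the warping step; subpixel refinements of the peak location are described in [20].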

In this study, an Arduino Mini chip was placed at 45° with respect to the optical axis and served as the imaging target, because it has rich texture and varied depth information. The Matlab code and the raw images can be found in Code 2 [19].

  • - The capture motion of the camera was synchronized with the motor rotation by using a photoelectric laser sensor. The sensor was set at the hole diametrically opposite the imaging axis, so it was triggered whenever a plate passed through its beam. The sensor output was connected to the camera trigger, so that capture occurred exactly when a plate was located on the optical axis. Twelve images were captured during each turn of the motor. The captured raw images are presented in Fig. 5(a), demonstrating that the focal range was at the bottom when the plate thickness was 0 mm and moved upward with increasing plate thickness.
  • - The 12 images were taken at different times and with different transparent plate thicknesses. The brightness of these images varied slightly due to the different transmission levels of the plates. Therefore, the brightness of each image was normalized so that the pixels in the final fused all-in-focus image would have uniform brightness. The brightness of the first image was used as the reference, and the brightness of each of the following images was adjusted to match it.
  • - The imaging scale differed between frames because the field of view changed with the focal length. Therefore, the sizes of the images were rescaled [16]. Specifically, all of the pixels were scaled by applying the scale factors Su and Sv to the u and v coordinates, respectively. The affine transformation matrix T was applied as follows, with Su and Sv calculated from Eq. (5):
    \[ [x \;\; y \;\; 1] = [u \;\; v \;\; 1] \begin{bmatrix} S_u & 0 & 0 \\ 0 & S_v & 0 \\ 0 & 0 & 1 \end{bmatrix}, \]
    where \( S_u = S_v = \frac{100}{100 + (1 - 1/N)\,t} \), N is the refractive index of the inserted plate, and t is the thickness of the plate in millimetres. The 12 images were warped and rescaled by performing a 2D affine geometric transformation using the aforementioned factors, and the processed images are presented in Fig. 5(b).
  • - Because of the tilt of the flat plates, pixel translation occurred during imaging. Hence, it was necessary to determine the offset factors to conduct phase correction. Here, the phase-only correlation method [20–22] was employed for image shift correction. The fast Fourier transforms of two images were calculated, and the normalized cross-spectrum was computed. As in the image rescaling process described above, image warping was conducted:
    \[ [x \;\; y \;\; 1] = [u \;\; v \;\; 1] \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ T_u & T_v & 1 \end{bmatrix}, \]
    where Tu and Tv were detected by using the phase-only correlation algorithm. At this point, the phase correction of the 12-image sequence was finished. The image size gradually decreased with increasing focal length, as shown in Fig. 5(b).
  • - The in-focus pixels had clear edges and could be detected by using an edge detector. In this case, a Laplacian edge detector was employed to find the edges in the target images [23]. The Laplacian operator is defined by
    \[ \mathrm{Laplace}(f) = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}. \]
    Edge detection was performed by applying the Laplacian operator to each image, i.e., by calculating the second spatial derivatives. The 12 processed images obtained after edge detection are shown in Fig. 5(b), where the white areas correspond to the in-focus pixels.
  • - An image matrix was generated to produce the all-in-focus image, and an index table of the well-focused pixels in the corresponding images was compiled. By using this method, the in-focus pixels from each image were marked and the corresponding image numbers were recorded. The all-in-focus sharp image was generated and is shown in Fig. 5(c). The image index table contained the focal length information, from which the depth information was obtained, and a median blur filter was used to smooth the discrete mapping, yielding the map shown in Fig. 5(d).
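The last steps above (Laplacian focus measure, per-pixel index table, median smoothing) can be sketched compactly. The paper's implementation is the Matlab program in Code 2 [19]; the following is an illustrative NumPy/SciPy sketch on synthetic data, with our own helper name all_in_focus, and it omits the rescaling and phase-correction stages, i.e., it assumes a pre-aligned stack.

```python
import numpy as np
from scipy.ndimage import laplace, median_filter

def all_in_focus(stack):
    """Fuse a pre-aligned focal stack: per pixel, keep the layer with the
    strongest Laplacian response, then median-filter the layer-index map
    to smooth the discrete depth levels."""
    stack = np.asarray(stack, dtype=float)
    sharp = np.array([np.abs(laplace(layer)) for layer in stack])  # focus measure
    index = np.argmax(sharp, axis=0)                # best-focused layer per pixel
    depth = median_filter(index, size=5)            # smoothed discrete depth map
    fused = np.take_along_axis(stack, index[None, ...], axis=0)[0]
    return fused, depth

# Synthetic two-layer stack: layer 0 is sharp on the left half of the frame,
# layer 1 on the right half (median filtering stands in for defocus blur).
rng = np.random.default_rng(1)
texture = rng.random((64, 64))
blurred = median_filter(texture, size=7)
layer0 = np.where(np.arange(64) < 32, texture, blurred)
layer1 = np.where(np.arange(64) < 32, blurred, texture)
fused, depth = all_in_focus([layer0, layer1])
print("left mean index:", depth[:, :16].mean(), "right mean index:", depth[:, -16:].mean())
```

The index map plays the role of the image index table: its values are layer numbers, each of which corresponds to a known plate thickness and hence a known focal plane, so the map converts directly into depth.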

4. Discussion

The original imaging system included a microlens (EO-55-359, Edmund Optics) whose optical performance was designed to minimize optical aberrations. However, when the dissimilar windows were added to retrofit the original system, these elements together formed a new imaging system with altered optical aberrations.

The added plates were plane-parallel elements, which introduce chromatic aberration because different wavelengths are refracted by different amounts. Thus, while the original system was designed to minimize chromatic aberration, the addition of such plates could increase it. Furthermore, although the added components should be extremely flat and perpendicular to the optical axis, the precision might not be high due to mechanical error. In particular, spherical aberration and prism-like eccentricity error could be introduced. Consequently, the optical performance of the developed system with extended focal lengths could be degraded.

To confirm the optical performance of the proposed system, a USAF 1951 resolution chart was employed as the imaging target. Images taken without a plate (thickness = 0 mm) and with an 11-mm-thick plate added are presented in Figs. 6(a) and 6(b), respectively. Compared to the original system, the system with the extended focal length produced strong chromatic aberration and a lower resolution.

As discussed above, the image scale and phase correction procedure is very important. If the alignment is not conducted appropriately, dislocation occurs when stacking the images. A comparison of the processed all-in-focus images is shown in Figs. 7(a) and 7(b). Without scale and phase correction, dislocation can be observed when stacking the pixels from different layers, such as in the area marked by the yellow frame in (a). In contrast, the pixels are well aligned in (b), where scale and phase correction were conducted.

Fig. 5 Images obtained by changing the plate thickness from 0 mm to 11 mm. (a) Raw image sequence. (b) Images obtained after rescaling, phase correction, and Laplacian edge detection. (c) All-in-focus sharp image generated by merging the in-focus pixels. (d) Depth map produced by using the index numbers of the images, which contained depth information. The Matlab code and the raw images can be found in Code 2 [19].

Fig. 6 Images of a USAF 1951 resolution chart. (a) Image obtained with no plate (thickness = 0 mm). (b) Image obtained with an 11-mm-thick plate inserted.

Fig. 7 Comparison of the processed all-in-focus result (a) without scale and phase correction and (b) with correction. As shown in (a), when the phases of the images were not aligned, dislocation occurred when stacking the pixels from different layers, such as the area marked in the yellow frame.

5. Conclusion

Although a large open aperture can provide high-resolution images, the DoF is shallow; meanwhile, a small open aperture can increase the DoF, while decreasing the image resolution. Therefore, we proposed a low-cost, readily available method for retrofitting imaging systems that can enable 3D focus scanning and provide all-in-focus, high-resolution images as well as corresponding depth maps. The developed method is based on the law of refraction, according to which the back focal length (working distance) of an imaging system can be adjusted by adding a transparent plate. The focal length extension changes with the plate thickness according to Eq. (5).

To test the proposed method, a variable focus spinner was prepared with 12 apertures that were arranged in a clock-like manner and mounted with transparent plates of 12 different thicknesses ranging from 0 mm to 11 mm. The spinner was driven directly by a motor, and its rotation was synchronized with the capture motion of a camera. A target was placed at 45° with respect to the optical axis so that its DoF information could be utilized to confirm the imaging performance. Variable focal length scanning along the depth axis was conducted during one rotation of the variable focus spinner. In the image processing stage, image scale and phase correction were performed according to this particular system. After the phase correction, the in-focus pixels in the images were extracted by employing the Laplacian edge detector. Finally, an all-in-focus sharp image was generated by merging these pixels, and a depth map was built. The results indicated that the thickness distribution and the number of plates should be re-assigned according to the specific application. Moreover, even though the assembly and fabrication of the prototype were not highly precise, for instance due to the tilt of the plates, the proposed image processing method could correct the phase error and provide acceptable results.

Nonetheless, the proposed system could be improved in several ways. For instance, a finer thickness distribution could enable more precise focal plane adjustment, which could be effective in high-resolution microscopy due to the shallow DoF. In addition, the update frequency could be adjusted. The maximum rotation speed of the motor in this study was 6,000 rpm (rotations per minute), or 100 rps (rotations per second), making the maximum update frequency for a full 3D scan 100 fps, while the high-speed camera could provide a sampling frequency of 1,200 fps. A higher update frequency could be achieved easily by dividing the spinner into smaller increments for real-time in vivo imaging, although this approach would decrease the exposure time, resulting in darker images, so more intense illumination would be necessary. Finally, some cost reduction is possible. The transparent plates that were combined to fabricate the different thicknesses each cost less than 1 USD, and the spinner disk was processed in our workshop, but the motor and sensor cost about 200 USD each. Thus, the total cost of the spinner was less than 500 USD, and we plan to conduct further studies on cost reduction by using a cheaper motor and sensor. We are also studying how to move the image processing onto a graphics processing unit (GPU) to accelerate the computation. Another important future challenge is miniaturization; we believe that this device could be integrated into a microscopy system.

The basic principle of the developed system is useful as a reference, and users could easily build their own spinners and modify and redesign the parameters for specific purposes. This design has a wide variety of potential applications, including in biomedical imaging, chip inspection, 3D sensing, and 3D printing, among other areas.

Funding

Accelerated Innovation Research Initiative Turning Top Science and Ideas into High-Impact Values Grant (JST) (JPMJAC15F1); KAKENHI Grant-in-Aid for Young Scientists by Japan Society for the Promotion of Science (JSPS) (15K16035).

Acknowledgments

This work was partially supported by Konica Minolta, Inc.

References

1. J. E. Greivenkamp, Field guide to geometrical optics vol. 1 (SPIE Publications, Washington, 2004). [CrossRef]  

2. H. Ren and S.-T. Wu, Introduction to adaptive lenses vol. 75 (John Wiley & Sons, New York City, 2012). [CrossRef]  

3. L. Wang, H. Oku, and M. Ishikawa, “Variable-focus lens with 30 mm optical aperture based on liquid–membrane–liquid structure,” Appl. Phys. Lett. 102, 131111 (2013). [CrossRef]  

4. L. Wang, H. Oku, and M. Ishikawa, “Paraxial ray solution for liquid-filled variable focus lenses,” Jpn. J. Appl. Phys. 56, 122501 (2017). [CrossRef]  

5. H. Oku and M. Ishikawa, “High-speed liquid lens with 2 ms response and 80.3 nm root-mean-square wavefront error,” Appl. Phys. Lett. 94, 221108 (2009). [CrossRef]  

6. B. Berge and J. Peseux, “Variable focal lens controlled by an external voltage: An application of electrowetting,” Eur. Phys. J. E 3, 159–163 (2000). [CrossRef]  

7. H. Ren, S. Xu, and S.-T. Wu, “Liquid crystal pump,” Lab Chip. 13, 100–105 (2013). [CrossRef]  

8. T. Zhan, Y.-H. Lee, and S.-T. Wu, “High-resolution additive light field near-eye display by switchable pancharatnam–berry phase lenses,” Opt. Express 26, 4863–4872 (2018). [CrossRef]   [PubMed]  

9. Y.-H. Lee, G. Tan, T. Zhan, Y. Weng, G. Liu, F. Gou, F. Peng, N. V. Tabiryan, S. Gauza, and S.-T. Wu, “Recent progress in pancharatnam–berry phase optical elements and the applications for virtual/augmented realities,” Opt. Data Process. Storage 3, 79–88 (2017). [CrossRef]  

10. H. Oku and M. Ishikawa, “High-speed liquid lens for computer vision,” in Proceedings of IEEE International Conference on Robotics and Automation, (IEEE, 2010), pp. 2643–2648.

11. L. Wang, H. Oku, and M. Ishikawa, “An improved low-optical-power variable focus lens with a large aperture,” Opt. Express 22, 19448–19456 (2014). [CrossRef]   [PubMed]  

12. L. Wang, T. Hayakawa, and M. Ishikawa, “Dielectric-elastomer-based fabrication method for varifocal microlens array,” Opt. express 25, 31708–31717 (2017). [CrossRef]   [PubMed]  

13. S. Xu, Y. Liu, H. Ren, and S.-T. Wu, “A novel adaptive mechanical-wetting lens for visible and near infrared imaging,” Opt. express 18, 12430–12435 (2010). [CrossRef]   [PubMed]  

14. G. Li, D. L. Mathine, P. Valley, P. Äyräs, J. N. Haddock, M. Giridhar, G. Williby, J. Schwiegerling, G. R. Meredith, B. Kippelen, S. Honkanen, and N. Peyghambarian, “Switchable electro-optic diffractive lens with high efficiency for ophthalmic applications,” Proc. Natl. Acad. Sci. U.S.A. 103, 6100–6104 (2006). [CrossRef]   [PubMed]  

15. L. Dong, A. K. Agarwal, D. J. Beebe, and H. Jiang, “Adaptive liquid microlenses activated by stimuli-responsive hydrogels,” Nature 442, 551 (2006). [CrossRef]   [PubMed]  

16. S. K. Nayar and Y. Nakagawa, “Shape from focus,” IEEE Trans. Pattern Anal. Mach. Intell. 16, 824–831 (1994). [CrossRef]  

17. L. Wang, “Matlab code for computing image gradient,” Code 1 https://figshare.com/s/b2d180383f6658a18195.

18. G. Wolberg, Digital Image Warping vol. 1 (Wiley-IEEE Computer Society Press, Los Alamitos, 1990).

19. L. Wang, “Matlab code for computing all-in-focus and depth map,” Code 2 https://figshare.com/s/6de82ebdd94d73469697.

20. K. Takita, T. Aoki, Y. Sasaki, T. Higuchi, and K. Kobayashi, “High-accuracy subpixel image registration based on phase-only correlation,” IEICE Trans. Fundamentals 86, 1925–1934 (2003).

21. M. A. Muquit, T. Shibahara, and T. Aoki, “A high-accuracy passive 3d measurement system using phase-based image matching,” IEICE Trans. Fundamentals 89, 686–697 (2006). [CrossRef]  

22. B. S. Reddy and B. N. Chatterji, “An fft-based technique for translation, rotation, and scale-invariant image registration,” IEEE Trans. Image Process. 5, 1266–1271 (1996). [CrossRef]   [PubMed]  

23. OpenCV Documentation, “Laplace operator,” https://docs.opencv.org/2.4/doc/tutorials/imgproc/imgtrans/laplace_operator/laplace_operator.html.

Supplementary Material (2)

Code 1: Raw images and the Matlab code for computing the image gradient.
Code 2: Raw images and the Matlab code for computing the all-in-focus image and depth map.


Figures (7)

Fig. 1. Illustration of the law of refraction. (a) Basic refraction scenario: the ratio between the sines of the angles of incidence and refraction equals the ratio between the refractive indices of the two media. (b) A light beam passing through a transparent plate with a certain refractive index is refracted twice, once at the upper boundary and once at the lower boundary; the exiting beam is parallel to the incident beam but laterally shifted by a certain amount. (c) Extension of the focal length from f1 to f2 by placing Medium 2 in front of a lens, given that N2 > N1.

Fig. 2. Three screenshots of the Zemax simulation. The first surface was set as an ideal (paraxial) lens with a focal length of 100 mm. Only green light was considered, for which the refractive index of BK7 glass is n = 1.5168. (a) With no glass plate (thickness 0 mm), the total axial length was the original 100 mm. (b) With a 5-mm-thick glass plate, the total axial length became 101.70359 mm. (c) With an 11-mm-thick plate, the total axial length was extended to 103.74789 mm.

Fig. 3. Experimental setup. (a) Sketch of the variable focus imaging system. (b) Photograph of the experimental setup. The variable focus spinner had 12 apertures mounted with different transparent plates.

Fig. 4. Results obtained using transparent plates of various thicknesses. (a) Six images acquired with plate thicknesses of 0, 2, 4, 6, 8, and 10 mm. The movement of the focus point is clearly observable. The 1024 pixels along the blue line marked on the photographs were used to compute the focus measure. (b) Gradients calculated from all 12 of the obtained images. The region in each plot with higher P–V values corresponds to the in-focus area, so the shift of this region from right to left confirms that the focus changed gradually. The Matlab code and raw images can be found in Code 1 [17].

Fig. 5. Images obtained by changing the plate thickness from 0 mm to 11 mm. (a) Raw image sequence. (b) Images after rescaling, phase correction, and Laplacian edge detection. (c) All-in-focus sharp image generated by merging the in-focus pixels. (d) Depth map produced from the index numbers of the images, which carry the depth information. The Matlab code and raw images can be found in Code 2 [19].

Fig. 6. Images of a USAF 1951 resolution chart. (a) Image obtained with no plate (thickness 0 mm). (b) Image obtained with an 11-mm-thick plate inserted.

Fig. 7. Comparison of the processed all-in-focus result (a) without scale and phase correction and (b) with correction. As shown in (a), when the phases of the images were not aligned, dislocations occurred when stacking pixels from different layers, e.g., in the area marked by the yellow frame.
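The fusion step described in the captions of Figs. 4 and 5 — compute a per-pixel Laplacian focus measure for each registered image in the stack, keep the sharpest layer at each pixel, and record the winning layer index as the depth map — can be sketched compactly. The paper's implementation is in Matlab (Code 1 and Code 2); the following NumPy/SciPy version is an illustrative reimplementation under that description, not the authors' code, and the 9-pixel smoothing window is an assumed choice.

```python
import numpy as np
from scipy import ndimage

def all_in_focus(stack):
    """stack: array-like of N grayscale images, shape (N, H, W),
    already scale- and phase-corrected so all layers are registered.
    Returns (fused image, depth map of winning layer indices)."""
    stack = np.asarray(stack, dtype=float)
    # Focus measure: absolute Laplacian response, lightly averaged over a
    # local window so isolated noisy pixels do not win the comparison.
    focus = np.stack([
        ndimage.uniform_filter(np.abs(ndimage.laplace(img)), size=9)
        for img in stack
    ])
    depth = np.argmax(focus, axis=0)                 # sharpest layer per pixel
    fused = np.take_along_axis(stack, depth[None], axis=0)[0]
    return fused, depth
```

With 12 plates of known thickness, the per-pixel layer index maps directly to a focal-plane distance, which is how the depth map in Fig. 5(d) carries metric depth information.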

Equations (9)

Equations on this page are rendered with MathJax.

$$\frac{\sin\theta_1}{\sin\theta_2} = \frac{N_2}{N_1},\tag{1}$$

$$\overline{AB} = \frac{t}{\cos\theta_2}.\tag{2}$$

$$\sigma = \overline{AB}\,\sin(\theta_1-\theta_2).\tag{3}$$

$$\sigma = \left(1-\frac{N_1}{N_2}\cdot\frac{\cos\theta_1}{\cos\theta_2}\right) t \sin\theta_1.\tag{4}$$

$$d = \left(1-\frac{N_1}{N_2}\cdot\frac{\cos\theta_1}{\cos\theta_2}\right) t.\tag{5}$$

$$\mathbf{F} = F_x\,\hat{i} + F_y\,\hat{j}.\tag{6}$$

$$\begin{bmatrix} x & y & 1 \end{bmatrix} = \begin{bmatrix} u & v & 1 \end{bmatrix}\begin{bmatrix} S_u & 0 & 0 \\ 0 & S_v & 0 \\ 0 & 0 & 1 \end{bmatrix},\tag{7}$$

$$\begin{bmatrix} x & y & 1 \end{bmatrix} = \begin{bmatrix} u & v & 1 \end{bmatrix}\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ T_u & T_v & 1 \end{bmatrix},\tag{8}$$

$$\operatorname{Laplace}(f) = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}.\tag{9}$$
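Equation (5) can be checked numerically against the Zemax values quoted in the caption of Fig. 2: at normal incidence, cos θ1/cos θ2 → 1 and the axial focal shift reduces to d = (1 − N1/N2) t. The short Python check below (an illustration, not part of the paper's code) uses N1 = 1 for air and N2 = 1.5168 for BK7 glass at the green wavelength.

```python
import math

def focal_shift(t_mm, n1=1.0, n2=1.5168, theta1_deg=0.0):
    """Axial focal shift d from Eq. (5) for a plate of thickness t_mm."""
    theta1 = math.radians(theta1_deg)
    theta2 = math.asin(math.sin(theta1) * n1 / n2)   # Snell's law, Eq. (1)
    return (1 - (n1 / n2) * (math.cos(theta1) / math.cos(theta2))) * t_mm

print(focal_shift(5))   # ~1.70359 mm, i.e. total axial length 101.70359 mm
print(focal_shift(11))  # ~3.74789 mm, i.e. total axial length 103.74789 mm
```

Both values reproduce the Zemax simulation results of Figs. 2(b) and 2(c) to five decimal places, confirming the paraxial form of Eq. (5).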
