In this paper we present a new approach for obtaining all-optical axial super-resolving imaging using a non-diffractive binary phase mask inserted at the entrance pupil of an imaging lens. The designed element is tested numerically and experimentally on several practical test benches and is eventually inserted into the lens of a cellular phone camera.
©2006 Optical Society of America
Extending the depth of focus of imaging systems is an important core technology that can be incorporated into various camera-aided and ophthalmic applications. Several previous approaches have been suggested, involving digital post-processing [1–5], aperture apodization by an absorptive mask [6–10], or diffractive optical phase elements such as multifocal lenses or spatially dense distributions that suffer from significant divergence of energy into regions that are not of interest [11–13]. Additional interesting technologies involve the tailoring of modulation transfer functions with high focal depth and the use of logarithmic asphere lenses.
The technology described in this paper proposes an extended depth of focus (EDOF) element configured as a phase-affecting binary optical element defining a spatially low-frequency phase transition that codes the lens aperture. This element exhibits several important features: since it contains only low spatial frequencies, it is not sensitive to wavelength and dispersion (unlike other diffractive optical elements) and it does not scatter energy towards the outer regions of the field of view. In addition, its fabrication is simple and cheap. The optical element is phase-only, so it does not cause apodization and its energetic efficiency is very high, close to 100%, because the absence of high spatial frequencies means that no energy is wasted in higher diffraction orders. Since the optical element does not require digital post-processing, it is adequate for ophthalmic applications.
The optical element is a mask constructed of transparent areas and binary phase lines (e.g., a grid) and/or one or more binary phase circles that modulate the entrance pupil of the imaging lens. Under spatially incoherent illumination, the out-of-focus effect is expressed as a quadratic phase distortion added to the OTF (Optical Transfer Function). The positions of the binary phase transitions are selected to generate invariance to these quadratic phase distortions. The developed element is also insensitive to its transverse as well as longitudinal position, and is thus very suitable for placement on eyeglasses. To obtain transverse insensitivity, a low-spatial-frequency periodic replication of the mask contours (the phase-transition regions) is generated.
The positions of the binary phase transitions are computed using an iterative algorithm in which M candidate positions are examined, and those that provide maximal contrast of the OTF over a set of out-of-focus locations are eventually chosen. Optimizing the OTF's contrast actually means keeping the out-of-focus OTF bounded as far as possible away from zero.
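The search described above can be sketched numerically. The following is an illustrative 1-D sketch, not the authors' code: the number of candidate slots, the defocus set, and the π/2 phase depth are assumptions, and a brute-force search stands in for whatever iterative scheme the paper actually used.

```python
import numpy as np
from itertools import combinations

N = 256                                   # samples across the pupil
x = np.linspace(-1, 1, N)                 # normalized pupil coordinate x/b
psis = [0.0, 4.0, 8.0]                    # defocus phases to be tolerated (assumed)

def otf(mask_phase, psi):
    """Normalized 1-D OTF: autocorrelation of the generalized pupil."""
    ctf = np.exp(1j * (psi * x**2 + mask_phase))
    h = np.correlate(ctf, ctf, mode="full")   # numpy conjugates the 2nd argument
    return np.abs(h) / np.abs(h).max()

def score(mask_phase):
    """Worst-case OTF contrast over the defocus set and a mid-frequency band."""
    band = slice(N, N + N // 2)           # positive frequencies up to half cutoff
    return min(otf(mask_phase, p)[band].min() for p in psis)

# brute-force stand-in for the iterative search: place k phase segments
# among M candidate slots and keep the layout with the best worst case
M, k, dphi = 8, 2, np.pi / 2
edges = np.linspace(-1, 1, M + 1)

def mask(slots):
    """Binary phase mask with value dphi inside the chosen slots."""
    return sum(dphi * ((x >= edges[s]) & (x < edges[s + 1])) for s in slots)

best = max(combinations(range(M), k), key=lambda s: score(mask(s)))
print("best segment indices:", best)
```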
This technology was tested experimentally and showed operation under severe defocusing conditions (see the mathematical definition later on).
Please note that the difference between the presented approach and other EDOF technologies lies not in the optical configuration but rather in the design of an all-optical approach based on a binary, spatially low-frequency optical element.
Section 2 presents the theoretical background and the mathematical derivation. Section 3 presents numerical simulations, and the experimental results are discussed in Section 4. The paper is concluded in Section 5.
2. Theoretical derivation
The OTF of an imaging system can be expressed as an autocorrelation of the pupil function of the lens:
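The equation itself was lost in extraction; a standard normalized form of this autocorrelation (following Goodman's notation, so the paper's exact Eq. (1) may differ in normalization) is:

```latex
H(\mu,\nu)=\frac{\iint P^{*}(x,y)\,P\!\left(x+\lambda Z_i\mu,\; y+\lambda Z_i\nu\right)dx\,dy}
{\iint \left|P(x,y)\right|^{2}dx\,dy}
```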
In focus, P(x,y) is the binary circular pupil function, which is “1” within the pupil and “0” outside. When aberrations are introduced, the generalized pupil function can be described as:
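The missing equation is the standard generalized pupil function:

```latex
P_{g}(x,y)=P(x,y)\,e^{\,ikW(x,y)}
```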
where W(x,y) is the wave aberration and k=2π/λ, with λ the optical wavelength. If the aberrations are caused only by defocusing, W(x,y) has the form:
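For pure defocus the wave aberration takes the standard quadratic form (a reconstruction consistent with the definitions of b and Wm that follow):

```latex
W(x,y)=W_{m}\,\frac{x^{2}+y^{2}}{b^{2}}
```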
where b is the radius of the aperture P. The coefficient Wm determines the severity of the error. The coefficient Wm is also denoted as:
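The missing relation, reconstructed so that the phase error at the aperture edge, kWm, equals the defocus phase ψ defined next, is:

```latex
W_{m}=\frac{\lambda\,\psi}{2\pi}
```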
where ψ is a phase factor representing the severity of the defocus:
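A standard expression for this defocus phase, consistent with the variable definitions that follow, is:

```latex
\psi=\frac{\pi b^{2}}{\lambda}\left(\frac{1}{Z_{o}}+\frac{1}{Z_{i}}-\frac{1}{F}\right)
```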
where 2b is the diameter of the lens, λ is the wavelength, Zo is the distance between the imaging lens and the object, Zi is the distance between the imaging lens and the sensor, and F is the focal length. When the imaging condition is fulfilled:
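The imaging condition is the classical lens law:

```latex
\frac{1}{Z_{o}}+\frac{1}{Z_{i}}=\frac{1}{F}
```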
and thus the distortion phase ψ equals zero. For the sake of simplicity we perform a 1-D analysis. We assume that a phase mask consisting of a set of phase shifts is attached to the entrance pupil of the lens and thus multiplies the generalized pupil function (CTF plane). The OTF, which is the autocorrelation of the CTF [see Eq. (1)], will thus be:
where an are binary coefficients equal to either zero or a certain phase modulation depth, an = (0, Δϕ), of the phase-only element that we design. Δϕ is the phase depth of modulation and Δx represents the spatial segments of the element. Since we do not want to create a diffractive optical element, i.e., one with high-spatial-frequency periodicity (so that there will be no wavelength dependence), we force Δx≫λ. The mathematical formulation of the optimization criterion is as follows: compute a phase-only element that maximizes the minimal value of the OTF within the desired spectral region of interest, where the OTF is composed of two terms. The first is the OTF with a strong defocusing deformation with parameter Wm, and the second is the in-focus OTF, i.e.:
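The criterion stated above can be formalized (a paraphrase of the lost Eq. (8), not its verbatim form) as:

```latex
\{a_{n}\}_{\mathrm{opt}}
=\arg\max_{\{a_{n}\}}\;
\min_{\mu\in\Omega}\;
\min\Bigl\{\,\bigl|H(\mu;W_{m})\bigr|,\;\bigl|H(\mu;0)\bigr|\Bigr\}
```

where Ω denotes the desired spectral region of interest.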
In the Appendix we perform an approximate derivation. The final outcome, obtained from Eq. (A6), is:
Using the trigonometric relation of:
The first term is the regular term obtained due to defocus and the second term is related to the influence of the binary phase-only element. If our phase mask contains several phase transitions rather than the single one derived in the mathematical analysis, a summation appears in front of the second term of Eq. (11). The last expression allows one to easily extract the derivative of H with respect to Δx or Δϕ, as well as to see the maximum of the minimum of H, as required by the optimization criterion of Eq. (8).
which equals:
Observing Eqs. (11) and (13) makes the optimization according to the criterion of Eq. (8) very simple. One may differentiate the expressions with respect to Δϕ, or simply plot them versus a range of values of Δϕ and see where the criterion of the maximized minimum is fulfilled [Eq. (8)]. Plotting such a graph reveals that an optimum is obtained for Δϕ≈π/2. As previously mentioned, the value of Δx is chosen to be much larger than the optical wavelength in order to avoid chromatic distortions and dispersion. In the performed simulations we chose Δx to be 1/8 of the lens aperture.
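The Δϕ scan can also be reproduced numerically rather than from the closed-form expressions. The sketch below is illustrative only: the two-segment layout (each segment 1/8 of the aperture, per the text), the defocus value, and the frequency band are assumptions, so the located optimum need not coincide exactly with the analytical one.

```python
import numpy as np

N = 512
x = np.linspace(-1, 1, N)                  # normalized pupil coordinate
# two phase segments, each 1/8 of the aperture (layout is an assumption)
seg = (np.abs(x + 0.5) < 0.125) | (np.abs(x - 0.5) < 0.125)

def min_contrast(dphi, psi):
    """Minimum |OTF| over a mid-frequency band for defocus phase psi."""
    ctf = np.exp(1j * (psi * x**2 + dphi * seg))
    h = np.abs(np.correlate(ctf, ctf, mode="full"))
    h /= h.max()
    return h[N:N + N // 2].min()

# worst case over the in-focus and a strongly defocused state, scanned over dphi
dphis = np.linspace(0, np.pi, 64)
worst = [min(min_contrast(d, 0.0), min_contrast(d, 8.0)) for d in dphis]
print("best phase depth ~", dphis[int(np.argmax(worst))], "rad")
```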
Although the mathematical derivation presents a single phase transition, the cross section of the element that we used for the simulations and the experiments included two transitions. A cross section of the element is depicted in Fig. 1(a).
3. Numerical simulations
For the numerical simulation of the suggested EDOF element we used the Code-V lens design software. We designed an imaging module consisting of three lenses and added the EDOF element at the entrance pupil. The simulation is performed under polychromatic illumination conditions and for a field of view of ±25 degrees. To obtain sufficient simulation accuracy we applied more than 60 rays across the diameter of the module. In Fig. 1(b) one may see the OTF of the lens without the EDOF element versus spatial frequency in cycles/mm. The dashed line indicates the diffraction limit. The chart of Fig. 1(b) is obtained for an object positioned at a distance of 50cm from the lens, which is the in-focus plane. When this lens images an object at a distance of 15cm, the image is strongly defocused. Let us now see the effect of the EDOF element. In Figs. 1(c)–1(e) one may see the OTF versus spatial frequency for an object positioned at 50cm, 15cm and infinity, respectively. As one may see, in all cases the OTF does not drop below a contrast of 15% for frequencies of up to 80 cycles/mm. Thus, due to the addition of the EDOF element the lens can perform focused imaging of objects from 15cm to infinity, whereas before its focusing depth extended only from 50cm to infinity. In Figs. 1(f)–1(h) one may see the through-focus OTF charts. Figure 1(f) shows the through-focus OTF chart of the lens without the EDOF element for a spatial frequency of 80 cycles/mm. Figures 1(g) and 1(h) show the through-focus OTF charts of the lens with the EDOF element for a spatial frequency of 80 cycles/mm, with the object positioned at 15cm and at infinity, respectively. In those charts the detector is positioned at the zero-mm coordinate. From the charts one may see that the shift of the object position from 15cm to infinity shifts the OTF by 135 microns. The width of the OTF with the EDOF element is around 175 microns, while without the element it is less than 100 microns.
Thus, without the EDOF element the object cannot be in focus both at a position of 15cm and at infinity, while with the EDOF element it can. As shown in the next sections, the numerical results were validated by experimental verification.
4. Experimental verification
Several experiments involving various configurations were performed in order to verify our EDOF approach. All of them were conducted under polychromatic and spatially incoherent illumination.
In the first experiment a regular 4-F imaging system was constructed. Two lenses with focal lengths of 90mm were used. In order to demonstrate the EDOF effect, the distance from the object to the first lens was modified to obtain a defocus equivalent to ψ=17.
The aperture of the lenses was D=16mm. In the results presented in Fig. 2 we imaged a colored object containing features as well as letters. In Fig. 2(a) the images were captured with the object in focus. The left part is without the EDOF element and the right part is with it. Figure 2(b) shows the results when the object is positioned in an out-of-focus plane in which ψ=17 (a shift of +2mm away from the in-focus plane).
In Fig. 3 we used the former setup with the EDOF element to observe a 2-D test grating positioned at out-of-focus planes of ψ=±13 (a shift of ±1.5mm away from the in-focus plane). Figures 3(a) and 3(b) show the images captured without the element for defocus equal to ψ=13 and ψ=−13, respectively. Figures 3(c) and 3(d) show the images obtained with the EDOF element inserted into the entrance pupil plane of the imaging system.
Figure 4 shows another experiment performed with another 2-D grating target. In Fig. 4(a) the 2-D test grating is positioned at an out-of-focus plane of ψ=15 and the imaging is performed without the EDOF element. In Fig. 4(b) the same image is taken, but this time with the element; one may see the improvement. In Figs. 4(c) and 4(d) the object was positioned in focus and the images were taken without and with the EDOF element, respectively.
Note that all the presented experiments were all-optical and no digital post-processing was applied to the images.
In the following experiment we used our EDOF element with the imaging module of a cellular phone camera. The EDOF element was placed in the entrance pupil of the imaging lens. The camera has a focal length of around 4.8mm and an F-number of 3. The device operates such that it produces a relatively focused image from 50cm up to infinity. By utilizing the EDOF element we tried to reduce the minimal imaging distance down to 15cm while still obtaining a well-focused image at large distances. The optical setup can be seen in Fig. 5(a). Figures 5(b)–5(g) show the results obtained in such an all-optical experiment, where a common resolution chart was used as a target. In Fig. 5(b) we present the image seen when the resolution target is placed at 15cm and no EDOF element is in use. Figure 5(c) shows the central part of Fig. 5(b). One may see that the maximal resolution obtained is less than 300 television lines. Figure 5(d) is the result obtained when the EDOF element is inserted into the entrance pupil of the imaging module of the cellular phone camera; the object is still at 15cm. Figure 5(e) is a magnified view of Fig. 5(d). One may see that now the finest resolved feature is 800 television lines. In Figs. 5(f) and 5(g) we show the images obtained without and with the EDOF element, respectively, while the resolution chart was positioned 150cm away. One may see the sharpening of the writing on the blackboard as well as of the features in the television resolution target.
We performed another experiment with the same cellular camera system. This time a business card was positioned 15cm away from the camera while various high-resolution features were placed at a distance of 150cm. The aim of this experiment was to demonstrate that focused images of the near field (the card) and the far field (the background shapes) are achieved simultaneously. Figure 6(a) presents the results obtained without the EDOF element. As one may see, the business card cannot be read. When the EDOF element was added to the system, we obtained the image shown in Fig. 6(b): the business card is now readable. In order to sharpen the results we applied sharpening and de-noising algorithms to the image. After such post-processing, the image shown in Fig. 6(c) was obtained (which is no longer all-optical). However, the processing algorithms that we used were of low computational complexity. The sharpening algorithm was the following: we denote by f(x) the captured EDOF image. The procedure for the sharpening filter is as follows:
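The filter equation was lost in extraction; one common unsharp-masking form consistent with the description that follows (the paper's exact expression may differ) is:

```latex
g(x)=f(x)+a\,\bigl[f(x)-(K\otimes f)(x)\bigr]
```

where ⊗ denotes convolution.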
where a is a coefficient and K(x) is the kernel used for the sharpening algorithm. The kernel K depends on the point spread function of the optics. The coefficient a is approximately 0.6 and the size of the kernel is 7 by 7 pixels; the kernel is angularly symmetric and separable.
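A minimal sketch of such a separable sharpening step, assuming a Gaussian-shaped 7×7 kernel as a stand-in for the paper's PSF-dependent kernel (the kernel shape and sigma are assumptions; a = 0.6 is from the text):

```python
import numpy as np

def sharpen(f, a=0.6, size=7, sigma=1.5):
    """Unsharp masking with a separable, symmetric kernel.
    Illustrative stand-in for the paper's PSF-dependent 7x7 kernel."""
    t = np.arange(size) - size // 2
    k = np.exp(-t**2 / (2 * sigma**2))
    k /= k.sum()                               # normalized 1-D kernel
    # separable blur: filter rows, then columns
    blur = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, f)
    blur = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blur)
    return f + a * (f - blur)                  # boost the high-pass residual

img = np.random.rand(32, 32)
out = sharpen(img)
```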
After that, a de-noising algorithm was applied, using the separable bilateral filtering operation. Basically the operation is similar to the following: a window is run over the entire image and the standard deviation and other higher-order moments within the window are checked. When the moments exceed a certain range of values, the central pixel of the window is replaced. The bilateral filter is similar but performs a soft rather than a hard decision on whether to replace the central pixel. The operation is basically as follows:
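The standard 1-D bilateral-filter form consistent with this description (the paper's exact notation may differ) is:

```latex
h(x)=\frac{1}{W(x)}\sum_{x'} f(x')\,
\exp\!\left(-\frac{(x-x')^{2}}{2\sigma_{D}^{2}}\right)
\exp\!\left(-\frac{\bigl(f(x)-f(x')\bigr)^{2}}{2\sigma_{R}^{2}}\right)
```

with W(x) the sum of the weights (normalization).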
where x′ denotes the positions within the window and x is the central position whose value is replaced after the computation. The kernel is multiplied by the image values within the window and the result is summed; if a certain level is exceeded, the central value of the window is replaced. The sliding window scans the entire image. The size of the window can be 7 by 7. Note that this operation smooths only the uniform regions, which is exactly what is wanted in a de-noising operation. σD and σR are the standard deviations of the two Gaussian functions.
Note that the signal processing was not tailored to the point spread function of the defocusing blur. It basically consisted of a high-pass filter, i.e., the subtraction of the local average from the image in order to enhance its contrast. The approach is thus basically all-optical and no point-spread-function de-convolution is performed.
In this paper we have demonstrated a novel approach providing a significantly increased depth of focus, i.e., axial super-resolving imaging, using a binary phase-only element with low spatial frequency placed at the entrance pupil of the imaging lens. The proposed approach not only increases the energetic efficiency but also reduces the sensitivity to wavelength. The element was investigated theoretically as well as experimentally, and a real cellular-phone-based imaging system was constructed and tested. The results provided by the element are obtained in an all-optical manner and thus the approach may suit ophthalmic applications as well.
Without any mask on the lens aperture, the one-dimensional OTF equals:
where A(μ) is the lens aperture size. This can be approximated as:
The approximation is valid for μ values that are not too large in comparison to b/λZi, so that A(μ) can be approximated as A(0), or for large Wm values causing the argument of the exponent to oscillate rapidly. Next we assume that the phase element carries only one stripe of non-zero phase. When such an element is attached to the aperture, one obtains:
For μ large enough (but not too large in comparison to b/λZi, otherwise the approximation of Eq. (A2) will not be valid), one may approximate:
Thus the expression for the OTF becomes:
which yields the following expression:
where ⊗ represents the convolution operation.
References and links
1. W. T. Cathey and E. R. Dowski, “Apparatus and method for extending depth of field in image projection system,” US patent 6069738 (May 2000).
2. W. T. Cathey and E. R. Dowski, “Extended depth of field optical systems,” PCT publication WO 99/57599 (November 1999).
3. W. T. Cathey, “Extended depth field optics for human vision,” PCT publication WO 03/052492 (June 2003).
6. C. M. Hammond, “Apparatus and method for reducing imaging errors in imaging systems having an extended depth of field,” US patent 6097856 (August 2000).
7. D. Miller and E. Blanco, “System and method for increasing the depth of focus of the human eye,” US patent 6554424 (April 2003).
8. N. Atebara and D. Miller, “Masked intraocular lens and method for treating a patient with cataracts,” US patent 4955904 (September 1990).
9. J. Ojeda-Castaneda, E. Tepichin, and A. Diaz, “Arbitrarily high focal depth with a quasi-optimum real and positive transmittance apodizer,” Appl. Opt. 28, 2666–2669 (1989). [CrossRef]
10. J. Ojeda-Castaneda and L. R. Berriel-Valdos, “Zone plate for arbitrarily high focal depth,” Appl. Opt. 29, 994–997 (1990). [CrossRef]
11. E. Ben-Eliezer, Z. Zalevsky, E. Marom, N. Konforti, and D. Mendlovic, “All optical extended depth of field imaging system,” PCT publication WO 03/076984 (September 2003).
12. E. Ben-Eliezer, Z. Zalevsky, E. Marom, and N. Konforti, “All-optical extended depth of field imaging system,” J. Opt. A: Pure Appl. Opt. 5, S164–S169 (2003). [CrossRef]
15. W. Chi and N. George, “Electronic imaging using a logarithmic asphere,” Opt. Lett. 26, 875–877 (2001). [CrossRef]
16. Z. Zalevsky, “Optical method and system for extended depth of focus,” US patent application 10/97494 (August 2004).
17. J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill, New York, 1996), pp. 126–151.
18. T. Q. Pham and L. J. van Vliet, “Separable bilateral filtering for fast video processing,” http://www.qi.tnw.tudelft.nl/~lucas.