Depth from defocus (DFD) based on optical methods is an effective approach for depth reconstruction from 2D optical images. However, optical diffraction causes optical path deviation, which results in blurred imaging. This blurring, in turn, leads to inaccurate depth reconstructions with DFD. In this paper, a nanoscale depth reconstruction method using defocus with optical diffraction is proposed. A blurring model that accounts for optical diffraction is developed, leading to much higher accuracy in depth reconstruction. Firstly, Fresnel diffraction in an optical system is analyzed, and a relationship between the intensity distribution and depth information is developed. Secondly, a blurred imaging model with relative blurring and heat diffusion is developed through curve fitting of a numerical model. In this way, a new DFD method with optical diffraction is proposed. Finally, experimental results show that the new algorithm is more effective for depth reconstruction at the nanoscale.
© 2014 Optical Society of America
1. Introduction
Depth from defocus (DFD), as introduced by Pentland, is known to be effective for depth reconstruction from 2D optical images. It is widely used in fields such as remote sensing, robotics, and materials science [2, 3].
Traditional DFD methods calculate depth information from a blurring-degree measurement of two blurred images based on geometrical optics [4, 5], where light is assumed to travel in straight lines. However, geometrical optics is inaccurate for high resolution depth reconstruction for several reasons. 1) Depth calculation in traditional DFD assumes that optical diffraction can be ignored during the imaging process. Diffraction, however, is a fundamental property of all wave phenomena. In most optical systems the imaging beam is restricted to a round hole by a diaphragm, so optical diffraction is always present. To observe a small object, an optical system with a high magnification factor is needed; when the size of the object and of some elements in the optical system approach the imaging wavelength, optical diffraction becomes more pronounced. 2) Depth reconstruction precision is strongly related to defocus measurements. In traditional DFD, if the camera parameters of the imaging system are fixed, the defocus phenomenon is assumed to result solely from depth variation. In fact, even when the focused image-forming conditions are fulfilled, the intensity distribution of a point on the image plane does not converge to a point because of optical diffraction, as shown in Fig. 1. In this way, optical diffraction itself causes blurred imaging. Therefore, to improve depth reconstruction precision at the nanoscale, a method that relates optical diffraction to depth variation is necessary.
The problem of DFD and optical diffraction has been addressed in a variety of contexts. FitzGerrell et al. presented a two-dimensional function illustrating the effects of defocus on the optical transfer function (OTF) associated with a circularly symmetric pupil function, but their OTF did not consider optical diffraction. Stokseth [7] analyzed the optical properties of an aberration-free defocused optical system and compared the exact diffraction OTF and point spread function (PSF) to their geometrical counterparts. These properties of the defocus transfer function make it useful for analyzing optical systems with circularly symmetric pupils; however, the relationship between depth information and the OTF or PSF was not addressed in his work. In reference [8], a DFD formula for a diffraction-limited imaging system was derived, and the result shows that a correction factor is needed to compensate the traditional DFD reconstruction formula when an imaging system with a large sensor displacement is used. However, lacking a theoretical analysis of optical diffraction, that work focused on a path-length error resulting from optical diffraction rather than on a model relating depth information to the intensity distribution, which is closely tied to blurred imaging. Until now, modeling depth information in the presence of optical diffraction has rarely been researched. In our previous work, a DFD method with fixed camera parameters was proposed [9, 10], but optical diffraction was not considered.
In this paper, a high resolution DFD method with optical diffraction is proposed. Our approach is novel in several ways and provides a mathematical relation between optical diffraction and depth information. Firstly, the basic principle of Fresnel diffraction in an optical system is analyzed, and the relationship between Fresnel diffraction and depth information is developed. Secondly, a defocus imaging model with optical diffraction is developed through curve fitting of a numerical model, taking into account relative blurring and heat diffusion. Heat diffusion equations combined with optical diffraction are developed, and their solution is transformed into a dynamic optimization problem. Finally, experiments with static and dynamic samples are conducted, and the results show that the proposed method obtains more accurate depth information at the nanoscale.
2. Defocus with optical diffraction
There are two typical types of diffraction: Fraunhofer diffraction and Fresnel diffraction [11–13]. In an optical imaging system, the diffraction that normally occurs is convergent-wave Fresnel diffraction; a diagrammatic sketch of the optical path is shown in Fig. 2. The amplitude of an arbitrary point P on the imaging plane can be described as [14]:
On the imaging plane, a coordinate system OXYZ is constructed, where X is the optical axis, O is the origin, and YZ is the imaging plane. On the XY plane, X = -R + b. Since the XY plane and the XZ plane are almost axisymmetric, their imaging properties are assumed to be the same. Due to this symmetry, the parameter ρ is replaced by the parameter y. Here, we focus on the intensity distribution near the ideal imaging point. Therefore, Eq. (1) can be transformed into:
Then, Eq. (2) can be derived as:
Then, the normalized intensity distribution of P is:
From Eq. (2) and Eq. (3), we can see that the intensity distribution of an arbitrary point P on the imaging plane is a function of x and y. To observe its optical properties, we suppose that λ = 600 nm and sinu = 0.5; the intensity distribution along the y axis for different x is shown in Fig. 3. If the source point is on the optical axis, the intensity of P is maximal when P is the intersection of the imaging plane and the optical axis. At positions around this intersection point, the intensity decreases with distance from the maximum.
Since, in a camera, the scale factor between the object distance and the imaging distance is the axial magnification m, the imaging distance variation x in Fig. 3 can be transformed into the object distance variation l with:
If all camera parameters are fixed, the intensity distribution resulting from the variation of x can equivalently be produced by the variation of l, and the distribution of Ip(l, y) is almost the same as that of Ip(x, y), since l is a linear function of x. When l = 0, Ip(l, y) is not concentrated at a single point as expected; instead, the image of a source point is a blurred round spot.
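The qualitative behavior above can be reproduced numerically. The sketch below assumes the classical Debye/Lommel form of the converging-wave Fresnel integral for a circular pupil, a standard model that is not necessarily identical to the paper's Eq. (1):

```python
import numpy as np
from scipy.special import j0

# Near-focus field of a converging wave truncated by a circular pupil
# (Debye/Lommel integral):
#   U(u, v) = 2 * integral_0^1 J0(v*rho) * exp(-i*u*rho^2/2) * rho d(rho),
# with dimensionless defocus u = (2*pi/lam)*x*sin^2(alpha) and transverse
# coordinate v = (2*pi/lam)*y*sin(alpha).  Parameters follow Section 2.
lam, sin_alpha = 600e-9, 0.5            # wavelength 600 nm, sin u = 0.5

def intensity(x, y, n=4000):
    """Normalized intensity at axial defocus x and transverse offset y (meters)."""
    u = 2.0 * np.pi / lam * x * sin_alpha**2
    v = 2.0 * np.pi / lam * y * sin_alpha
    rho = (np.arange(n) + 0.5) / n      # midpoint rule on [0, 1]
    U = 2.0 * np.sum(j0(v * rho) * np.exp(-1j * u * rho**2 / 2.0) * rho) / n
    return float(abs(U) ** 2)           # equals 1 at the ideal focus x = y = 0

# Even in the focal plane (x = 0) the intensity is spread into a blurred spot:
# maximal on the optical axis and decaying with |y|, as in Fig. 3.
profile = [intensity(0.0, y) for y in (0.0, 0.5e-6, 1.0e-6)]
```

At x = 0 this reduces to the Airy pattern, which is why the image of an in-focus source point is a blurred round spot rather than a point.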
3. Blurring model comparison
In geometrical optics, when the focal length f, the distance u of the object from the principal plane, and the distance v of the focused image from the lens plane fulfill Eq. (6), researchers assume the image of a source point is a focused point and the imaging process is in focus. Otherwise, the image is a round spot and a blurred image appears.
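As a concrete illustration, a minimal sketch of these geometrical-optics relations, assuming the standard thin-lens formula and blur-circle expression (illustrative, not necessarily the paper's exact Eqs. (6)-(9)):

```python
# Geometrical-optics defocus: the thin-lens condition 1/f = 1/u + 1/v gives a
# focused point; otherwise a point at object distance u images to a blur circle.
# Standard textbook relations, used here only to illustrate the section above.
def focused_image_distance(f, u):
    """Sensor distance v that brings an object at distance u into focus."""
    return 1.0 / (1.0 / f - 1.0 / u)

def blur_radius(f, u, v, D):
    """Blur-circle radius for aperture diameter D, sensor at v, object at u."""
    return (D * v / 2.0) * abs(1.0 / f - 1.0 / u - 1.0 / v)

def depth_from_blur_radius(rg, f, v, D):
    """Invert blur_radius for u on the far-from-focus branch (u beyond focus)."""
    return 1.0 / (1.0 / f - 1.0 / v - 2.0 * rg / (D * v))

f, D = 0.357, 0.357 / 2.0               # mm, values from Section 5
v = focused_image_distance(f, 3.4)      # focus the camera at s0 = 3.4 mm
rg = blur_radius(f, 3.5, v, D)          # defocus blur of a point at 3.5 mm
```

Inverting the blur radius recovers the object distance exactly in this model, which is the core of geometric-optics DFD; note that rg vanishes identically at the focused depth, which is precisely what optical diffraction contradicts.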
If the camera parameters are fixed during a defocus imaging process, depth can be reconstructed from the blurring degree of the blurred images. The radius rg of the blurred spot can be denoted as:
If the point spread function is a Gaussian function, the blurring kernel σ is:
Therefore, the reconstructed depth s in geometrical optics is:
When optical diffraction is considered, with fixed camera parameters, blurring results from both depth variation and optical diffraction, and the blurring kernel can be denoted as:
From Fig. 3, we can see that Ip(l, y) is close to a Gaussian function of y for a fixed l, so each Ip(l, y) can easily be fitted with a Gaussian curve, as shown in Fig. 4, where the dots are the calculated values and the solid line is the fitted Gaussian curve. From each fitted curve, we obtain the Gaussian kernel σ for different l. With λ = 600 nm, sinu = 0.5, f = 0.357 mm, u = 3.4 mm, a = 0.18 mm, and γ = 300, the relationship between l and σ with and without optical diffraction is compared in Fig. 5, where the solid line is the result without optical diffraction and the dashed line is the result with optical diffraction. From Fig. 5, we can see that when the depth variation l is zero, σ is zero in geometrical optics, where diffraction is not considered, while it is 1.57e-4 with optical diffraction. As l increases, the two values of σ converge. This is why optical diffraction is difficult to observe in macroscale optical systems, whereas at the nanoscale its influence cannot be ignored.
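The Gaussian fitting step can be sketched as follows; the profile here is a synthetic diffraction-like curve, not the paper's computed Ip(l, y):

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit a transverse intensity profile with a Gaussian to extract the blurring
# kernel sigma, as done for each depth variation l in Fig. 4.  The profile is
# synthetic (a sinc^2 central lobe standing in for the diffraction pattern).
def gaussian(y, A, sigma):
    return A * np.exp(-(y ** 2) / (2.0 * sigma ** 2))

y = np.linspace(-3.0, 3.0, 121)                   # arbitrary transverse units
profile = np.sinc(y / 1.8) ** 2                   # central lobe, first zero at 1.8
mask = profile > 0.05                             # fit the main lobe only
(A_fit, sigma_fit), _ = curve_fit(gaussian, y[mask], profile[mask], p0=(1.0, 1.0))
```

Repeating this fit for each l produces the σ(l) curve of Fig. 5; restricting the fit to the main lobe is an assumption made here because the side lobes of a diffraction pattern are not Gaussian.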
4. Depth reconstruction with optical diffraction
To calculate depth information from the blurring degree of blurred images, a numerical model relating l and σ is needed. From Fig. 5, the relationship between σ and l can be fitted with a quadratic curve, and the fitted curve in our paper is:
The solution of Eq. (12) is:
The final depth can be calculated with Eq. (14). In Eq. (14), a, b, and c are known after the curve fitting, so to calculate depth information we only need to obtain the blurring kernel σ. In this paper, we calculate depth information from the basic principle of blurred imaging.
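With the fitted coefficients in hand, recovering l from a measured σ is a quadratic root-finding step; the coefficients below are toy values (only c is taken from the Fig. 5 offset at l = 0):

```python
import math

# Invert the fitted quadratic blur model sigma = a*l^2 + b*l + c (Eq. (12))
# for the depth variation l, in the spirit of Eqs. (13)-(14).
def depth_from_sigma(sigma, a, b, c):
    """Both roots of a*l^2 + b*l + (c - sigma) = 0."""
    disc = b * b - 4.0 * a * (c - sigma)
    if disc < 0.0:
        raise ValueError("sigma lies below the minimum of the fitted curve")
    r = math.sqrt(disc)
    return (-b + r) / (2.0 * a), (-b - r) / (2.0 * a)

a, b, c = 2.0, 0.0, 1.57e-4             # toy fit; c is sigma at l = 0 from Fig. 5
l_pos, l_neg = depth_from_sigma(2.57e-4, a, b, c)
# The sign ambiguity (l_pos vs. l_neg) is resolved by the known direction of
# the depth variation in the experiment.
```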
If we use a real-aperture camera, the blurred image E measured on the imaging plane, with the blurring setting of the optics and the radius rd of the blurred spot, can be approximated by the following equation:
An important case that we will consider is that of a scene made of an equifocal plane, that is, a plane parallel to the image plane. In this case, the depth map satisfies s(y, z) = s, the PSF h is shift-invariant, that is, h(y, z, rd) = h(y-z, rd), and rd is a constant. Hence, the image formation model becomes the following simple convolution:
From Fig. 4, it can be seen that the intensity distribution of a random point on the imaging plane can be approximated with a Gaussian function. When the PSF is approximated by a shift-invariant Gaussian function, the imaging model in Eq. (16) can be formulated in terms of the isotropic heat equation in physics:
It is also easy to verify that the variance σ is related to the diffusion coefficient ε via:
When the distance map s is not an equifocal plane, the PSF is in general shift varying. The equivalence with the isotropic heat equation does not hold, and the diffusion process can be formulated in terms of the inhomogeneous diffusion equation as:
By assuming the surface s is smooth, we can relate again the diffusion coefficient ε(y, z) to the space-varying variance σ via:
Here, we have modeled the image E(y, z) via diffusion equations starting from the radiance image I(y, z), which is unknown. Without prior knowledge or an accurate restoration model, it is very complicated to calculate I(y, z), and even when it can be calculated, the resolution is always very low. Rather than estimating it from two or more images, in this paper we introduce a model of the relative blurring between two blurred images.
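The diffusion model above can be checked numerically: evolving the isotropic heat equation for time t reproduces a Gaussian blur with σ = sqrt(2εt). This is a sketch on a synthetic image, and the constant in the σ–ε relation is our assumed convention (the paper's Eq. (18) may differ):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

# Isotropic heat equation dE/dt = eps * Laplacian(E), integrated by explicit
# Euler steps, versus a direct Gaussian blur with sigma = sqrt(2*eps*t).
rng = np.random.default_rng(0)
img = gaussian_filter(rng.random((64, 64)), 2.0)   # smooth synthetic radiance

eps, dt, steps = 1.0, 0.01, 200                    # stable: eps*dt <= 0.25
u = img.copy()
for _ in range(steps):
    u = u + dt * eps * laplace(u)                  # explicit Euler time step

sigma = np.sqrt(2.0 * eps * dt * steps)            # equivalent Gaussian kernel
v = gaussian_filter(img, sigma)
max_err = float(np.abs(u - v).max())               # small discretization error
```

The two results agree up to spatial and temporal discretization error, which is the equivalence exploited by the diffusion formulation.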
Suppose there are two images E1(y, z) and E2(y, z) with two different blurring settings σ1 and σ2, with σ1 < σ2 (that is, E1(y, z) is less blurred than E2(y, z)). From Eq. (14), E2(y, z) can then be written as:
Suppose E1(y, z), whose depth map is s1(y, z), is the blurred image obtained before a depth variation along the optical axis; E2(y, z), with depth map s2(y, z), is the blurred image obtained after the depth variation; s0 is the focus depth; and s1(y, z) - s2(y, z) = Δs(y, z). If the depth variation Δs is known, the initial depth s1(y, z) can be calculated by the following method.
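Under the Gaussian-PSF assumption, the relative blurring between the two observations can be verified directly: diffusing the sharper image E1 by Δσ = sqrt(σ2² - σ1²) reproduces E2. This is a sketch with synthetic images and illustrative σ values:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Gaussian PSFs compose as sigma2^2 = sigma1^2 + delta^2, so the less blurred
# image E1 diffused by delta matches E2 -- no estimate of the sharp radiance
# I(y, z) is ever required.  I below exists only to synthesize E1 and E2.
rng = np.random.default_rng(1)
I = rng.random((64, 64))
sigma1, sigma2 = 1.0, 2.5                      # blur before / after depth change
E1 = gaussian_filter(I, sigma1)
E2 = gaussian_filter(I, sigma2)

delta = np.sqrt(sigma2 ** 2 - sigma1 ** 2)     # relative blurring kernel
E2_hat = gaussian_filter(E1, delta)
rel_err = float(np.abs(E2_hat - E2).max())     # small discretization error
```

This semigroup property of Gaussian kernels is what allows the method to work directly on two blurred images.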
Because of the imaging relationship between the two blurred images in Eq. (16), the blurring process between E1(y, z) and E2(y, z) can be denoted by the following heat diffusion equations:
The relative blurring between E1(y, z) and E2(y, z) is:
One can view the time Δt as the variable encoding the global amount of blurring, and the diffusion coefficient ε as the variable encoding the depth map s via:
Through simplification, Eq. (25) can be denoted as:
Then we get:
As a global algorithm, we construct the following optimization problem to calculate the solutions of the diffusion equations.
However, the optimization above is ill posed: the minimum may not exist, and even if it exists, it may not be stable with respect to data noise. A common way to regularize the problem is to add a Tikhonov penalty [15]:
Thus the solution process is equivalent to the following:
Finally, it is easy to attain the new depth value with Eq. (19). The algorithm can be divided into the following steps:
- Give the camera parameters f, D, γ, v, s0; two blurred images E1, E2; a threshold τ; a regularization weight α; and a step size β;
- Initialize the depth map with a plane s; for simplicity, the initial plane can be assumed to be an equifocal plane;
- Compute Eq. (25) and attain the relative blurring;
- Compute Eq. (24) and attain the solution of diffusion equations;
- Compute Eq. (32) with the solution of step (4). If the cost energy is below the threshold τ, the algorithm stops; otherwise, compute the following equation with step size β:
- Compute Eq. (29), update the depth map, and return to step (3).
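The steps above can be sketched for the simplest (equifocal, scalar) case as follows. All coefficients, the σ-to-pixel scale, and the images are synthetic assumptions, and the Tikhonov term drops out because only a single scalar depth is estimated; the global search here stands in for the gradient descent of steps (5)-(6):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Scalar sketch of steps (1)-(6): given two blurred images separated by a known
# depth step dl, search for the depth l1 whose relative blurring best maps E1
# onto E2.
a, b, c = 2.0, 0.0, 1.57e-4              # toy fit of sigma(l) (cf. Eq. (12))

def sigma_of_l(l):
    return a * l * l + b * l + c

def relative_blur(l1, l2):
    s1, s2 = sigma_of_l(l1), sigma_of_l(l2)
    return np.sqrt(max(s2 * s2 - s1 * s1, 0.0))

rng = np.random.default_rng(2)
I = rng.random((48, 48))                 # synthetic radiance, simulation only
l1_true, dl = 0.004, 0.002               # true depth variation, known step
scale = 1e4                              # assumed sigma(mm)-to-pixel conversion
E1 = gaussian_filter(I, scale * sigma_of_l(l1_true))
E2 = gaussian_filter(I, scale * sigma_of_l(l1_true + dl))

def cost(l1):                            # data term of the optimization problem
    E2_hat = gaussian_filter(E1, scale * relative_blur(l1, l1 + dl))
    return float(np.sum((E2_hat - E2) ** 2))

candidates = np.linspace(0.0, 0.01, 101)
l1_est = float(min(candidates, key=cost))
```

In the full algorithm the depth map varies spatially, the diffusion coefficient is space-varying, and the Tikhonov penalty with weight α stabilizes the per-pixel estimate.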
5. Experiments
To validate the new algorithm, we use it to reconstruct the depth information of a static nano standard grid and a dynamic AFM cantilever, and then compare the results with those of our previous DFD without optical diffraction. The height of the nano grid is 500 nm, with a product accuracy of 3%. In the dynamic experiment, a Physik Instrumente (PI) nano platform acts on the tip of the AFM cantilever, rising 100 nm at each step. In the experiments we use a HIROX-7700 microscope with a magnification factor of 7000. The remaining parameters are as follows: f = 0.357 mm, s0 = 3.4 mm, F-number = 2, D = f/2.
5.1 Static experiment
First, the standard grid is scanned with a Veeco Dimension 3100 atomic force microscope (AFM), and the 3D image of the nanoscale grid is shown in Fig. 6. Then, an experiment on a 120 × 110 pixel grid region is conducted. The results are shown in Figs. 7(a)-7(b) to Fig. 10. Figures 7(a) and 7(b) are the two blurred images, where Fig. 7(a) is the blurred image before the depth variation and Fig. 7(b) is that after the depth variation. The reconstructed depth of the nano grid is shown in Figs. 8(a)-8(b), where Fig. 8(a) is the depth reconstructed with the new DFD of this paper and Fig. 8(b) is the result of our previous DFD without optical diffraction. The unit of the depth axis is mm.
To investigate the precision of the new algorithm, we first construct the error map Φ between the true shape s in Fig. 6 and the estimated shape, with the computation formulas shown in Eq. (35). The error maps with and without optical diffraction are shown in Figs. 9(a)-9(b), where Fig. 9(a) is the error map with optical diffraction and Fig. 9(b) is that without optical diffraction. We then calculate the average estimation error of 500 points with Eq. (36), the known accurate height of the standard grid being 500 nm. To compare their precision, a section of the 3D depth is shown in Fig. 10, where the solid line is the result of the new method in this paper and the dashed line is the result of traditional DFD.
From Figs. 9(a)-9(b) and Fig. 10, we can see that the reconstructed depth error of the proposed algorithm with optical diffraction is smaller than that of the algorithm without optical diffraction. Eave of the DFD without diffraction is 91 nm, while Eave of the new algorithm is 46 nm. This means that with the new algorithm, the reconstruction error of the previous DFD in geometrical optics is decreased by 49.5%. Furthermore, the depth reconstructed with the new algorithm is smaller than that of the previous DFD. This result is consistent with our preceding analysis that blurred images are the combined effect of optical diffraction and depth variation.
5.2 Dynamic experiment
We capture a blurred image of the AFM cantilever; as the PI nano platform rises in steps of 100 nm, we capture another blurred image after each step. We then reconstruct the depth information of the bent cantilever, and the results are shown in Figs. 11(a)-(b) to Fig. 13. Figures 11(a)-11(b) are the two blurred images, where Fig. 11(a) is the blurred image before the depth variation and Fig. 11(b) is that after the depth variation. The reconstructed depth of the AFM cantilever is shown in Figs. 12(a)-12(b), where Fig. 12(a) is the depth reconstructed with optical diffraction and Fig. 12(b) is that without optical diffraction. Figure 13 is a depth section of the bent cantilever, where the solid line is the result of the new method in this paper and the dashed line is the result of traditional DFD. In Fig. 13, to compare the depth differences at the nanoscale, we subtract 3.4 mm on the vertical axis; the unit of the depth axis is mm.
- The cantilever end with the tip bends noticeably: the height difference between the maximum bend and the bottom of the hollow is 97 nm with our new method, while with the previous DFD without optical diffraction it is 145 nm.
- For the platform depth reconstruction, the platform is lower than the bent cantilever with our method, which is consistent with the experimental setup; with the previous method, the platform is higher than the cantilever at some points, which is far from the expected result.
6. Conclusion
In this paper, a global nanoscale depth reconstruction method from defocus with optical diffraction is proposed and validated with static and dynamic samples. Our primary contribution is a relationship between Fresnel diffraction and blurred imaging. Our second contribution is an imaging model for defocus with optical diffraction obtained through curve fitting, based on relative blurring and heat diffusion. In this way, we have constructed a new DFD method that considers optical diffraction. Finally, a static standard nano grid and a dynamic AFM cantilever are used to validate the proposed DFD method at the nanoscale. The results show that the proposed algorithm is a more effective method for reconstructing depth information from blurred images at the nanoscale.
The authors thank the funding support from the Natural Science Foundation of China (No. 61305025) and the Fundamental Research Funds for the Central Universities (N13050411).
References and links
2. V. P. Namboodiri and S. Chaudhuri, “On defocus, diffusion and depth estimation,” Pattern Recognit. Lett. 28(3), 311–319 (2007). [CrossRef]
3. S. K. Nayar, M. Watanabe, and M. Noguchi, “Real-time focus range sensor,” IEEE Trans. Pattern Anal. Mach. Intell. 18(12), 1186–1198 (1996). [CrossRef]
5. P. Favaro, A. Mennucci, and S. Soatto, “Observing shape from blurred images,” Int. J. Comput. Vis. 52(1), 25–43 (2003). [CrossRef]
7. P. A. Stokseth, “Properties of a defocused optical system,” J. Opt. Soc. Am. 59(10), 1314–1321 (1969). [CrossRef]
8. C. Mair and C. J. Goodman, “Diffraction-limited depth-from-defocus,” Electron. Lett. 36(24), 2012–2013 (2000). [CrossRef]
9. Y. J. Wei, Z. L. Dong, and C. D. Wu, “Depth measurement using single camera with fixed camera parameters,” IET Computer Vision 6(1), 29–39 (2012). [CrossRef]
10. Y. J. Wei, C. D. Wu, and Z. L. Dong, “Global depth reconstruction of nano grid with singly fixed camera,” Sci. China Technol. Sci. 54(4), 1044–1052 (2011).
11. R. C. Word, J. P. S. Fitzgerald, and R. Konenkamp, “Direct imaging of optical diffraction in photoemission electron microscopy,” Appl. Phys. Lett. 103(2), 021118 (2013). [CrossRef]
12. I. Kantor, V. Prakapenka, A. Kantor, P. Dera, A. Kurnosov, S. Sinogeikin, N. Dubrovinskaia, and L. Dubrovinsky, “A new diamond anvil cell design for X-ray diffraction and optical measurements,” Rev. Sci. Instrum. 83(12), 125102 (2012). [CrossRef] [PubMed]
14. P. Wang, Y. G. Xu, W. Wang, and Z. J. Wang, “Analytic expression for Fresnel diffraction,” J. Opt. Soc. Am. A 15(3), 684–688 (1998). [CrossRef]
15. R. Lagnado and S. Osher, “A technique for calibrating derivative security pricing models: numerical solution of an inverse problem,” Journal of Computational Finance 1(1), 13–26 (1997).